Abstract
The inferior alveolar nerve (IAN) is the major sensory nerve innervating the mandibular region, and its automatic segmentation is crucial for surgical planning. On computed tomography (CT) and cone-beam computed tomography (CBCT), the nerve has been identified only indirectly, via the mandibular canal that surrounds it. Magnetic resonance neurography (MRN) is an imaging technique designed for nerve visualization, facilitating discrimination of small peripheral nerves from surrounding soft tissues. To our knowledge, this study is the first to perform semi-automatic segmentation of the IAN on MRN images. We developed a deep learning model using 6,027 coronal MRN images and evaluated its performance with four quantitative metrics, comparing it against six state-of-the-art small-structure segmentation models that were retrained and tested on the same dataset. Our model achieved a Dice similarity coefficient (DSC) of 0.712 ± 0.254, significantly outperforming the six comparator models. In addition, in an analysis of segmentation failure rates across DSC thresholds, our model showed the lowest failure rate. In conclusion, unlike many previous studies that focused on bony boundaries using CBCT or CT, this study demonstrates the feasibility and potential clinical utility of MRN-based IAN segmentation.
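The exact evaluation code is available in the linked repository; as a minimal sketch, the two headline metrics above (per-case DSC and the failure rate at a given DSC threshold) are typically computed as follows. Function names here are illustrative, not taken from the authors' implementation.

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, gt: np.ndarray, eps: float = 1e-8) -> float:
    """Dice similarity coefficient between two binary segmentation masks:
    DSC = 2|P ∩ G| / (|P| + |G|). The small eps avoids division by zero
    when both masks are empty."""
    pred = pred.astype(bool)
    gt = gt.astype(bool)
    intersection = np.logical_and(pred, gt).sum()
    return float(2.0 * intersection / (pred.sum() + gt.sum() + eps))

def failure_rate(dsc_scores, threshold: float) -> float:
    """Fraction of test cases whose DSC falls below the given threshold,
    i.e. cases counted as segmentation failures."""
    scores = np.asarray(dsc_scores, dtype=float)
    return float((scores < threshold).mean())
```

For example, `failure_rate(per_case_dsc, 0.5)` gives the proportion of cases the model essentially missed; sweeping `threshold` over a grid reproduces the threshold-based failure analysis described in the abstract.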
Data availability
The code for model training and evaluation is publicly available at https://github.com/HaeSung-Oh/IAN-MRN-Segmentation. The datasets used in this study are available from the corresponding author upon reasonable request, subject to approval by the institutional review board.
Funding
This study was supported by the Yonsei University College of Dentistry (6-2024-0007).
Author information
Authors and Affiliations
Contributions
C.L. proposed the ideas; Y.J.C., K.J.J., and C.L. collected the data; Sujeong H. and H.O. designed the deep learning model and contributed to the data analysis; Y.J.C. and Sujeong H. drafted the manuscript; and Sang-sun H. and J.L. critically revised the manuscript. All authors gave their final approval and agreed to be accountable for all aspects of the work.
Corresponding authors
Ethics declarations
Competing interests
The authors declare no competing interests.
Ethical approval
Approval was obtained from the ethics committee of Yonsei Dental College Hospital IRB (No. 2-2024-0010). The procedures used in this study adhere to the tenets of the Declaration of Helsinki.
Additional information
Publisher’s note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Supplementary Information
Below is the link to the electronic supplementary material.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
About this article
Cite this article
Choi, Y.J., Han, S., Lee, C. et al. A feasibility study of deep learning-based segmentation of the inferior alveolar nerve on magnetic resonance neurography. Sci Rep (2026). https://doi.org/10.1038/s41598-026-45392-6
Received:
Accepted:
Published:
DOI: https://doi.org/10.1038/s41598-026-45392-6