A feasibility study of deep learning-based segmentation of the inferior alveolar nerve on magnetic resonance neurography
  • Article
  • Open access
  • Published: 01 April 2026

  • Yoon Joo Choi1,
  • Sujeong Han2,
  • Chena Lee1,
  • Kug Jin Jeon1,
  • Haesung Oh2,
  • Sang-Sun Han1,3 &
  • Jaesung Lee2,4

Scientific Reports (2026)

We are providing an unedited version of this manuscript to give early access to its findings. Before final publication, the manuscript will undergo further editing. Please note that errors affecting the content may be present, and all legal disclaimers apply.

Subjects

  • Computational biology and bioinformatics
  • Diseases
  • Engineering
  • Health care
  • Mathematics and computing
  • Medical research

Abstract

The inferior alveolar nerve (IAN) is the major sensory nerve innervating the mandibular region, and its automatic segmentation is clinically important. To date, the IAN has been identified only indirectly, by segmenting the mandibular canal (the bony channel that surrounds the nerve) on computed tomography (CT) and cone-beam computed tomography (CBCT). Magnetic resonance neurography (MRN) is an imaging technique designed for nerve visualization that facilitates discrimination of small peripheral nerves from surrounding soft tissue. To our knowledge, this study is the first to perform semi-automatic segmentation of the IAN on MRN images. We developed a deep learning model based on 6,027 coronal MRN images and evaluated it with four quantitative metrics against six state-of-the-art small-structure segmentation models, each retrained and tested on the same dataset. Our model achieved a Dice similarity coefficient (DSC) of 0.712 ± 0.254, significantly outperforming all six comparators, and showed the lowest failure rate in an analysis of segmentation failures across DSC thresholds. In conclusion, unlike many previous studies that targeted the bony canal boundaries on CBCT or CT, this study demonstrates the feasibility and potential clinical utility of MRN-based IAN segmentation.
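The Dice similarity coefficient and the threshold-based failure-rate analysis described above can be sketched as follows. This is an illustrative NumPy implementation, not the authors' released code (which is linked under Data availability); the function names and the epsilon smoothing term are assumptions made for this sketch.

```python
import numpy as np

def dice_similarity_coefficient(pred, truth, eps=1e-7):
    """DSC = 2|P ∩ T| / (|P| + |T|) for binary segmentation masks."""
    pred = np.asarray(pred, dtype=bool)
    truth = np.asarray(truth, dtype=bool)
    intersection = np.logical_and(pred, truth).sum()
    # eps guards against division by zero when both masks are empty
    return float(2.0 * intersection / (pred.sum() + truth.sum() + eps))

def failure_rate(dsc_scores, threshold):
    """Fraction of cases whose per-image DSC falls below `threshold`."""
    scores = np.asarray(dsc_scores, dtype=float)
    return float((scores < threshold).mean())
```

With this convention, a case where both prediction and ground truth are empty returns a DSC of 0.0 rather than raising an error; other implementations instead define that case as 1.0, so the choice should be stated when reporting results.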

Data availability

The code for model training and evaluation is publicly available at https://github.com/HaeSung-Oh/IAN-MRN-Segmentation. Datasets in the study are available from the corresponding author upon reasonable request, subject to approval by the institutional review board.


Funding

This study was supported by the Yonsei University College of Dentistry (6-2024-0007).

Author information

Author notes
  1. Yoon Joo Choi and Sujeong Han have contributed equally to this work.

Authors and Affiliations

  1. Department of Oral and Maxillofacial Radiology, Yonsei University College of Dentistry, 50-1 Yonsei-ro, Seodaemun-gu, Seoul 03722, Republic of Korea

    Yoon Joo Choi, Chena Lee, Kug Jin Jeon & Sang-Sun Han

  2. Department of Artificial Intelligence, Chung-Ang University, Seoul, Republic of Korea

    Sujeong Han, Haesung Oh & Jaesung Lee

  3. Institute for Innovation in Digital Healthcare, Yonsei University Health System, Seoul, Republic of Korea

    Sang-Sun Han

  4. AI/ML Innovation Research Center, Chung-Ang University, Seoul, Republic of Korea

    Jaesung Lee


Contributions

All authors gave their final approval and agreed to be accountable for all aspects of the work. C.L. proposed the ideas; Y.J.C., K.J.J., and C.L. collected data; Sujeong H. and H.O. designed the deep learning model and contributed to data analysis; Y.J.C. and Sujeong H. drafted the manuscript; and Sang-Sun H. and J.L. critically revised the manuscript.

Corresponding authors

Correspondence to Sang-Sun Han or Jaesung Lee.

Ethics declarations

Competing interests

The authors declare no competing interests.

Ethical approval

Approval was obtained from the ethics committee of Yonsei Dental College Hospital IRB (No. 2-2024-0010). The procedure used in this study adheres to the tenets of the Declaration of Helsinki.

Additional information

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary Information

Below are the links to the electronic supplementary material.

Supplementary Material 1 (DOCX)

Supplementary Material 2 (DOCX)

Supplementary Material 3 (TIF)

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.


About this article


Cite this article

Choi, Y.J., Han, S., Lee, C. et al. A feasibility study of deep learning-based segmentation of the inferior alveolar nerve on magnetic resonance neurography. Sci Rep (2026). https://doi.org/10.1038/s41598-026-45392-6

Download citation

  • Received: 07 November 2025

  • Accepted: 18 March 2026

  • Published: 01 April 2026

  • DOI: https://doi.org/10.1038/s41598-026-45392-6


Keywords

  • Artificial intelligence
  • Deep learning
  • Magnetic resonance neurography
  • Image segmentation
  • Inferior alveolar nerve

Scientific Reports (Sci Rep)

ISSN 2045-2322 (online)
