Scientific Reports
A clinically applicable and generalizable deep learning model for anterior mediastinal tumors in CT images across multiple institutions
  • Article
  • Open access
  • Published: 30 January 2026

  • Chihiro Takemura1,
  • Mototaka Miyake2,
  • Kazuma Kobayashi3,4,
  • Hiromi Matsumoto3,5,
  • Ryota Shibaki3,5,
  • Atsushi Urikura2,6,
  • Yasushi Goto7,
  • Yasushi Yatabe8,
  • Shun-ichi Watanabe1,
  • Miyuki Sone2,
  • Masahiko Kusumoto2,
  • Ryuji Hamamoto3,5 &
  • Hirokazu Watanabe2 

Scientific Reports, Article number:  (2026)

We are providing an unedited version of this manuscript to give early access to its findings. Before final publication, the manuscript will undergo further editing. Please note there may be errors present which affect the content, and all legal disclaimers apply.

Subjects

  • Cancer imaging
  • Information technology

Abstract

Rare diseases are often difficult to diagnose, and their scarcity also makes it challenging to develop deep learning models for them due to limited large-scale datasets. Anterior mediastinal tumors—including thymoma and thymic carcinoma—represent such rare entities. A few diagnostic support systems for these tumors have been proposed; however, no prior studies have tested them across multiple institutions, and clinically applicable and generalizable models remain lacking. A total of 711 computed tomography (CT) images were collected from 136 hospitals, each from a different patient with pathologically proven anterior mediastinal tumors (339 males, 372 females). Of these, 485 images were used for training, 62 for tuning, and 164 for external testing. The external testing dataset comprised CT images from 121 unique institutions not involved in the other datasets. A 3D U-Net-based model was trained on the training dataset, and the model with the best performance on the tuning dataset was selected. This model was then evaluated on the external testing dataset for its segmentation and detection performance across different institutions. Based on the reference standards provided by board-certified diagnostic radiologists, the trained model achieved average Dice scores of 0.82, Intersection over Union (IoU) of 0.72, Precision of 0.85, and Recall of 0.82 for tumor segmentation at the CT-image level. The free-response receiver operating characteristic curve—derived from lesion-wise IoU thresholds—demonstrated high sensitivity and a low false-positive rate for tumor detection. Even under a stricter IoU threshold of 0.50, the model maintained a sensitivity of 0.87 with only 0.61 false positives per scan. Our model achieved clinically applicable segmentation and detection performance for anterior mediastinal tumors, demonstrating broad generalizability across 121 institutions and overcoming the data-scarcity challenges inherent to such rare diseases.
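For readers unfamiliar with the overlap metrics reported above (Dice, IoU, precision, recall), the following is a minimal sketch of how they relate when binary tumor masks are represented as sets of voxel coordinates. This is an illustration only: the `overlap_metrics` helper and the toy masks are hypothetical, not the study's evaluation code.

```python
def overlap_metrics(reference: set, prediction: set) -> dict:
    """Dice, IoU, precision, and recall between two binary masks,
    each given as a set of voxel coordinates."""
    inter = len(reference & prediction)   # true-positive voxels
    union = len(reference | prediction)
    total = len(reference) + len(prediction)
    return {
        "dice": 2 * inter / total if total else 1.0,
        "iou": inter / union if union else 1.0,
        "precision": inter / len(prediction) if prediction else 0.0,
        "recall": inter / len(reference) if reference else 0.0,
    }

# Toy example: 3 of 4 reference voxels recovered, plus 1 false-positive voxel.
ref = {(0, 0, 0), (0, 0, 1), (0, 1, 0), (0, 1, 1)}
pred = {(0, 0, 0), (0, 0, 1), (0, 1, 0), (1, 0, 0)}
m = overlap_metrics(ref, pred)
# Here Dice = 2*3/8 = 0.75 and IoU = 3/5 = 0.60, illustrating that
# Dice is always at least as large as IoU for the same pair of masks.
```

Under this convention, the paper's lesion-wise FROC analysis would count a detection as a hit when a predicted lesion's IoU against a reference lesion exceeds a threshold such as 0.50.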


Data availability

The datasets generated and/or analyzed during the current study are not publicly available due to patient privacy concerns but are available from the corresponding author on reasonable request. The trained model weights are available at https://huggingface.co/hirwatan/FFSCS.


Acknowledgements

This work was supported by the National Cancer Center Research and Development Fund (grant number: 2023-A-19). We would like to express our sincere gratitude to Hironori Matsumasa, Keigo Nakamura, and Hokuto Yonezawa of the Medical System Research & Development Center, FUJIFILM Corporation, Tokyo, Japan, for their invaluable assistance with the data analysis for this study.

Funding

This work was supported by the National Cancer Center Research and Development Fund (grant number: 2023-A-19).

Author information

Authors and Affiliations

  1. Department of Thoracic Surgery, National Cancer Center Hospital, 5-1-1 Tsukiji, Chuo-ku, 104-0045, Tokyo, Japan

    Chihiro Takemura & Shun-ichi Watanabe

  2. Department of Diagnostic Radiology, National Cancer Center Hospital, 5-1-1 Tsukiji, Chuo-ku, 104-0045, Tokyo, Japan

    Mototaka Miyake, Atsushi Urikura, Miyuki Sone, Masahiko Kusumoto & Hirokazu Watanabe

  3. Division of Medical AI Research and Development, National Cancer Center Research Institute, 5-1-1 Tsukiji, Chuo-ku, 104-0045, Tokyo, Japan

    Kazuma Kobayashi, Hiromi Matsumoto, Ryota Shibaki & Ryuji Hamamoto

  4. Digital Content and Media Sciences Research Division, National Institute of Informatics, 2-1-2 Hitotsubashi, Chiyoda-ku, 101-8430, Tokyo, Japan

    Kazuma Kobayashi

  5. AI Medical Engineering Team, RIKEN Center for Advanced Intelligence Project, 1-4-1 Nihonbashi, Chuo-ku, 103-0027, Tokyo, Japan

    Hiromi Matsumoto, Ryota Shibaki & Ryuji Hamamoto

  6. Division of Radiological Sciences, Graduate School of Health Sciences, Ibaraki Prefectural University of Health Sciences, 4669-2, Ami-machi, Inashiki-gun, 300-0394, Ibaraki, Japan

    Atsushi Urikura

  7. Department of Thoracic Oncology, National Cancer Center Hospital, 5-1-1 Tsukiji, Chuo-ku, 104-0045, Tokyo, Japan

    Yasushi Goto

  8. Department of Diagnostic Pathology, National Cancer Center Hospital, 5-1-1 Tsukiji, Chuo-ku, 104-0045, Tokyo, Japan

    Yasushi Yatabe


Contributions

All authors contributed to the conceptualization and/or design of this study. C.T. performed data curation, formal analysis, investigation, software operation, validation, visualization, and drafted the original manuscript. M.M. performed conceptualization, data curation, formal analysis, investigation, methodology, project administration, software operation, supervision, validation, visualization, and manuscript writing (review and editing). K.K. contributed to conceptualization, investigation, software operation, supervision, validation, visualization, and writing (review and editing). H.M. contributed to investigation, software operation, validation, visualization, and writing (review and editing). R.S. contributed to investigation, validation, and writing (review and editing). A.U. performed data curation, formal analysis, investigation, software operation, supervision, validation, visualization, and manuscript writing (review and editing). Y.G., Y.Y., S.W., M.K., and R.H. participated in manuscript writing (review and editing). M.S. contributed to visualization, investigation, and manuscript writing (review and editing). H.W. was responsible for data curation, conceptualization, funding acquisition, investigation, methodology, project administration, resources, supervision, visualization, and manuscript writing (review and editing). All authors approved the final version and take responsibility for the decision to submit for publication.

Corresponding author

Correspondence to Hirokazu Watanabe.

Ethics declarations

Competing interests

K.K. has received research funding from FUJIFILM Corporation. A.U. has received honoraria for lectures on CT imaging from Canon Medical Systems, Japan; GE Healthcare Pharma, Japan; and Fuji Pharma, Japan. Furthermore, A.U. serves as the Vice President of the Japanese Society of CT Technology and as a Delegate for the Japanese Society of Radiological Technology. Y.G. has received grants or contracts paid to his institution from AIQIVA Services Japan, MSD, Astellas Pharma, AstraZeneca, AbbVie, Amgen, Syneos Health, Sysmex Corporation, CMIC, Novartis Pharma, Bayer Pharmaceuticals, Bristol-Myers Squibb, MedPace Japan, Janssen Pharma, Clinical Research Support Center Kyushu, SATOMI, Ono Pharmaceutical, Daiichi Sankyo, Takeda Pharmaceutical, Chugai Pharmaceutical, NPO Thoracic Oncology Research Group, Eli Lilly Japan, and Preferred Network. He has also received grants or contracts paid to himself from AstraZeneca, AbbVie, Eli Lilly, Pfizer, Bristol Myers Squibb, Ono, Novartis, Kyorin, and Daiichi Sankyo. In addition, Y.G. has received payment or honoraria for lectures, presentations, speakers bureaus, manuscript writing, or educational events from Eli Lilly, Chugai, Taiho, Boehringer Ingelheim, Ono, Bristol Myers Squibb, Pfizer, MSD, Novartis, Merck, and Thermo Fisher. He has served on the monitoring or advisory boards for AstraZeneca, Chugai, Boehringer Ingelheim, Eli Lilly, GlaxoSmithKline, Taiho, Pfizer, Novartis, Kyorin, Guardant Health Inc., Illumina, Daiichi-Sankyo, Merck, MSD, Ono, and Janssen. He also holds a leadership or fiduciary role in Cancer Net Japan and JAMT. M.K. has received grants or contracts from Canon Medical Systems Corporation. M.K. has also received consulting fees from Daiichi-Sankyo Co., Ltd. R.H. has received research funds under contract from Fujifilm Corporation since the initial planning of this work. All other authors declare that they have no competing interests.

Additional information

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.


About this article


Cite this article

Takemura, C., Miyake, M., Kobayashi, K. et al. A clinically applicable and generalizable deep learning model for anterior mediastinal tumors in CT images across multiple institutions. Sci Rep (2026). https://doi.org/10.1038/s41598-026-37504-z

Download citation

  • Received: 19 May 2025

  • Accepted: 22 January 2026

  • Published: 30 January 2026

  • DOI: https://doi.org/10.1038/s41598-026-37504-z


Keywords

  • Anterior mediastinal tumors
  • Deep learning
  • No-code AI platform
  • Segmentation
  • Detection
