CT-to-MRI translation of medical volume data based on an enhanced diffusion model
  • Article
  • Open access
  • Published: 23 March 2026


  • Ji Ma1,
  • Jinjin Chen2 &
  • Aoxiang Liang3

Scientific Reports (2026)


We are providing an unedited version of this manuscript to give early access to its findings. Before final publication, the manuscript will undergo further editing. Please note there may be errors present which affect the content, and all legal disclaimers apply.

Subjects

  • Biomedical engineering
  • Computer science
  • Information technology

Abstract

In clinical practice, imaging results from different modalities provide complementary information and can help doctors make better-informed decisions. Traditionally, obtaining these results requires scanning the patient with multiple devices, which is time-consuming, costly, and potentially harmful to the patient. Motivated by these limitations, we propose an alternative method that converts volumetric CT into volumetric MRI. The method is based on a diffusion model and incorporates a post-processing step that enhances the model’s output. To validate the approach, we conduct experiments on brain and pelvic datasets obtained from clinical practice and achieve good results, even though approximately 6% of the slices are incompletely paired. We also compare our method with state-of-the-art techniques, both qualitatively and quantitatively. Using the ground truth as a reference, our method outperforms MedSynthesisV1, CycleGAN, Pix2Pix and Diffusion. Finally, we conduct an experiment to select the optimal hyperparameters, including the number of training epochs and the parameters \(cutoffPercentage\_left\) and \(cutoffPercentage\_right\).
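The abstract names \(cutoffPercentage\_left\) and \(cutoffPercentage\_right\) as post-processing hyperparameters but does not show the procedure itself. The sketch below illustrates one common reading of such parameters: a percentile-based intensity cutoff that clips outlier voxels and rescales the volume. The function name, the rescaling to [0, 1], and the default percentiles are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def percentile_cutoff(volume, cutoff_left=1.0, cutoff_right=99.0):
    """Hypothetical post-processing sketch: clip voxel intensities to the
    [cutoff_left, cutoff_right] percentile range, then rescale to [0, 1].
    Parameter names mirror the paper's cutoffPercentage_left / _right."""
    lo = np.percentile(volume, cutoff_left)
    hi = np.percentile(volume, cutoff_right)
    clipped = np.clip(volume, lo, hi)           # suppress intensity outliers
    return (clipped - lo) / (hi - lo + 1e-8)    # normalize; eps avoids /0
```

In this reading, the two cutoffs trade off outlier suppression against loss of genuine high- or low-intensity detail, which is consistent with the paper treating them as tunable hyperparameters.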


Data availability

The data that support the findings of this study are openly available in Grand Challenge repository at https://synthrad2023.grand-challenge.org/.


Funding

This work was funded by the Humanities and Social Sciences Foundation of Ministry of Education of China (Grant No. 23YJC760011).

Author information

Author notes
  1. Ji Ma and Jinjin Chen contributed equally to this work and share first authorship.

Authors and Affiliations

  1. School of Ocean Information Engineering, Jimei University, Xiamen, China

    Ji Ma

  2. School of Design and Art, Communication University of Zhejiang, Hangzhou, China

    Jinjin Chen

  3. School of Computer Science and Technology, Zhejiang University of Technology, Hangzhou, China

    Aoxiang Liang



Corresponding author

Correspondence to Jinjin Chen.

Ethics declarations

Competing interests

The authors declare no competing interests.

Additional information

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.


About this article


Cite this article

Ma, J., Chen, J. & Liang, A. CT-to-MRI translation of medical volume data based on an enhanced diffusion model. Sci Rep (2026). https://doi.org/10.1038/s41598-026-45181-1


  • Received: 24 February 2025

  • Accepted: 17 March 2026

  • Published: 23 March 2026

  • DOI: https://doi.org/10.1038/s41598-026-45181-1

