Scientific Reports
ProSeg: multi-scale context fusion for high-precision prostate segmentation in MRI

  • Article
  • Open access
  • Published: 16 March 2026

  • Jiangwei Qin1
  • Yanli Yang2

Scientific Reports (2026)


We are providing an unedited version of this manuscript to give early access to its findings. Before final publication, the manuscript will undergo further editing. Please note there may be errors present which affect the content, and all legal disclaimers apply.

Subjects

  • Computational biology and bioinformatics
  • Medical research

Abstract

Prostate MRI segmentation is critical for accurate diagnosis and treatment planning but remains challenging due to the complex interplay between the peripheral zone’s thin, irregular boundaries and the central gland’s homogeneous textures, compounded by variability across imaging protocols. To address these challenges, we propose ProSeg, a novel deep learning framework featuring a specialized ProSeg block that integrates two complementary processes: (1) anisotropic convolutions for precise peripheral zone boundary delineation and (2) cross-slice attention mechanisms for robust central gland texture modeling. Extensive evaluations on the Promise12 and Prostate158 datasets demonstrate ProSeg’s state-of-the-art performance, achieving Dice scores of 84.31% (peripheral zone) and 57.92% (central gland) on Promise12, and 83.15% (peripheral zone) and 56.38% (central gland) on Prostate158, significantly outperforming existing methods. ProSeg’s consistent accuracy across diverse protocols highlights its clinical potential for reliable prostate zonal segmentation in real-world settings.
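The cross-slice attention branch described in the abstract can be understood as scaled dot-product attention taken across the slice axis of an MRI volume, letting each slice borrow texture context from its neighbours. The toy NumPy sketch below illustrates that mechanism only; the function name, shapes, and pooling to one feature vector per slice are our own assumptions, not the authors' implementation:

```python
import numpy as np

def cross_slice_attention(feats: np.ndarray) -> np.ndarray:
    """Toy cross-slice attention: each slice's (pooled) feature vector
    attends to every slice in the stack, so homogeneous central-gland
    texture on one slice is reinforced by context from adjacent slices.

    feats: (S, C) array -- S slices, C feature channels per slice.
    Returns an (S, C) array of context-fused slice features.
    """
    S, C = feats.shape
    scores = feats @ feats.T / np.sqrt(C)        # (S, S) slice affinities
    scores -= scores.max(axis=1, keepdims=True)  # stabilise the softmax
    weights = np.exp(scores)
    weights /= weights.sum(axis=1, keepdims=True)  # rows sum to 1
    return weights @ feats                       # weighted mix over slices

# Sanity check: if every slice is identical, attention is a no-op,
# because the weighted average of identical rows is the row itself.
x = np.tile(np.array([1.0, 2.0, 3.0]), (4, 1))   # 4 slices, 3 channels
assert np.allclose(cross_slice_attention(x), x)
```

A real implementation would use learned query/key/value projections and operate on full feature maps rather than pooled vectors, but the slice-to-slice softmax weighting is the core idea.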


Data availability

The data supporting the findings of this study are publicly available. The Promise12 dataset is available at https://promise12.grand-challenge.org/. The Prostate158 dataset is available at https://github.com/kbressem/prostate158.
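The per-zone Dice scores reported for these datasets are conventionally computed as twice the overlap between predicted and ground-truth masks divided by their combined size. A minimal sketch (our own helper for illustration, not the paper's evaluation code):

```python
import numpy as np

def dice_score(pred: np.ndarray, gt: np.ndarray) -> float:
    """Dice similarity coefficient between two binary masks:
    Dice = 2*|P & G| / (|P| + |G|), ranging over [0, 1]."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    denom = pred.sum() + gt.sum()
    if denom == 0:  # both masks empty: conventionally perfect agreement
        return 1.0
    return 2.0 * np.logical_and(pred, gt).sum() / denom

pred = np.array([[1, 1], [0, 0]])
gt   = np.array([[1, 0], [1, 0]])
print(dice_score(pred, gt))  # -> 0.5  (overlap 1; mask sizes 2 + 2)
```

For zonal segmentation, this is applied separately to the peripheral-zone and central-gland label maps and averaged over the test cases.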


Author information

Authors and Affiliations

  1. Guangdong Pharmaceutical University, Guangzhou, China

    Jiangwei Qin

  2. Department of Pain Medicine, Shanxi Bethune Hospital, Shanxi Academy of Medical Sciences, Third Hospital of Shanxi Medical University, Tongji Shanxi Hospital, Shanxi, China

    Yanli Yang


Contributions

J.Q. and Y.Y. conceived the study. J.Q. developed the methodology, implemented the software, performed validation and formal analysis, conducted the investigation, curated the data, and prepared the original draft. Y.Y. supervised the project, administered the research, and edited the manuscript. All authors reviewed and approved the final manuscript.

Corresponding author

Correspondence to Jiangwei Qin.

Ethics declarations

Competing interests

The authors declare no competing interests.

Additional information

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License, which permits any non-commercial use, sharing, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if you modified the licensed material. You do not have permission under this licence to share adapted material derived from this article or parts of it. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by-nc-nd/4.0/.


About this article


Cite this article

Qin, J. & Yang, Y. ProSeg: multi-scale context fusion for high-precision prostate segmentation in MRI. Sci Rep (2026). https://doi.org/10.1038/s41598-026-43589-3


  • Received: 20 July 2025

  • Accepted: 05 March 2026

  • Published: 16 March 2026

  • DOI: https://doi.org/10.1038/s41598-026-43589-3


Keywords

  • Medical image analysis
  • Prostate segmentation
  • Peripheral zone
  • Central gland