SwinCup-DiscNet: A fusion transformer framework for glaucoma diagnosis using optic disc and cup features
  • Article
  • Open access
  • Published: 09 February 2026

  • Rajitha Chilukuri1,
  • P. Praveen1,
  • Ranjith Kumar Gatla2 &
  • Reem A. Almenweer3

Scientific Reports, Article number: (2026)

  • 352 Accesses


We are providing an unedited version of this manuscript to give early access to its findings. Before final publication, the manuscript will undergo further editing. Please note there may be errors present which affect the content, and all legal disclaimers apply.

Subjects

  • Biomarkers
  • Computational biology and bioinformatics
  • Diseases
  • Mathematics and computing
  • Medical research

Abstract

Glaucoma remains a leading cause of permanent visual disability worldwide and is caused by progressive damage to the optic nerve head (ONH). Early detection is critically important for preventing vision loss. In this paper, we propose a new fusion transformer pipeline that integrates optic disc/cup segmentation with feature-based classification for effective glaucoma screening. The proposed approach uses a U-Net with an attention mechanism to segment the Optic Disc (OD) and Optic Cup (OC); post-processing with spectral shape descriptors then estimates the Vertical Cup-to-Disc Ratio (CDR). In parallel, a Swin Transformer encoder extracts fundus image descriptors to detect glaucoma at the image level. We employ a probabilistic fusion method to merge the structural biomarker (CDR) with the deep-learning features and obtain the final glaucoma classification. The framework was evaluated in detail on three widely used, publicly available datasets: LAG, ACRIMA, and DRISHTI-GS. Experimental results show that SwinCup-DiscNet consistently outperforms traditional CNN-based models and segmentation-only methods across all datasets. Judged by evaluation metrics including DSC, IoU, accuracy, F1-score, and Cup-to-Disc Ratio Mean Absolute Error (CDR MAE), the framework proves robust, reliable, and clinically interpretable. These findings indicate that SwinCup-DiscNet is a highly effective tool for early glaucoma detection in real-world clinical settings.
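To make the decision-level fusion described above concrete, the sketch below shows one plausible way to combine the Swin Transformer probability with the structural vCDR biomarker. The exact fusion rule is not reproduced in this excerpt, so the standardization of vCDR, the sigmoid squashing, and the default values of `alpha` and `tau` are illustrative assumptions, and `fuse_decision` is a hypothetical helper rather than the authors' implementation.

```python
import numpy as np

def sigmoid(x):
    """Logistic sigmoid used to squash scores into (0, 1)."""
    return 1.0 / (1.0 + np.exp(-x))

def fuse_decision(p_g, vcdr, mu, sigma_vcdr, alpha=0.6, tau=0.5):
    """Hypothetical decision-level fusion of the Swin Transformer
    probability (p_g) and the vertical cup-to-disc ratio (vcdr).

    mu, sigma_vcdr : mean and std of vCDR on the training set,
                     used to standardize the structural biomarker
    alpha          : weight balancing deep features vs. vCDR
    tau            : decision threshold (glaucoma vs. normal)
    """
    vcdr_score = sigmoid((vcdr - mu) / sigma_vcdr)   # map vCDR to (0, 1)
    psi = alpha * p_g + (1.0 - alpha) * vcdr_score   # fused decision score
    return int(psi > tau), psi                       # 1 = glaucoma, 0 = normal

# Example: a high Swin probability and an enlarged cup both push the
# fused score above the threshold.
label, score = fuse_decision(p_g=0.82, vcdr=0.71, mu=0.45, sigma_vcdr=0.12)
print(label, round(score, 3))
```

Standardizing vCDR with training-set statistics before blending keeps the two inputs on a comparable scale, which matches the role the mean and standard deviation play in the notation list below.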


Data availability

Data are available from the corresponding author upon reasonable request.

Abbreviations

\(I(x,y)\):

Original input fundus image at pixel coordinates \((x,y)\)

\(I_{p}\):

Pre-processed image after resizing, normalization, enhancement, and denoising

\(R(\cdot)\):

Resizing operator applied to the image

\(N(\cdot)\):

Normalization operator applied to the image

\(\mathrm{CLAHE}(\cdot)\):

Contrast Limited Adaptive Histogram Equalization operator for illumination correction

\(M_{c}\):

Segmented optic cup mask

\(M_{d}\):

Segmented optic disc mask

\(S_{\theta}(\cdot)\):

Cup segmentation function of the Attention U-Net with parameters \(\theta\)

\(D_{\theta}(\cdot)\):

Disc segmentation function of the Attention U-Net with parameters \(\theta\)

\(\Phi(\cdot)\):

Post-processing operator for contour smoothing and ellipse fitting

\(E_{c}\):

Elliptical boundary fitted to the optic cup

\(E_{d}\):

Elliptical boundary fitted to the optic disc

\(H_{c}\):

Vertical diameter of the optic cup

\(H_{d}\):

Vertical diameter of the optic disc

vCDR:

Vertical Cup-to-Disc Ratio, defined as \(H_{c}/H_{d}\)

OD:

Optic Disc

OC:

Optic Cup

\(z_{0}\):

Initial tokenized representation of the pre-processed image

\(z_{l}^{\prime}\):

Intermediate representation at Swin Transformer stage \(l\) after window-based self-attention

\(z_{l}\):

Updated representation at stage \(l\) after the feed-forward MLP

LN(\(\cdot\)):

Layer Normalization operator

SW-MSA(\(\cdot\)):

Shifted Window Multi-Head Self-Attention mechanism of the Swin Transformer

MLP(\(\cdot\)):

Multi-Layer Perceptron transformation

\(z_{L}\):

Final feature representation at the last Swin Transformer stage \(L\)

GAP(\(\cdot\)):

Global Average Pooling operation

\(P_{g}\):

Probability of glaucoma predicted by the Swin Transformer branch

\(W, b\):

Learnable weight matrix and bias vector in the classification head

\(\sigma(\cdot)\):

Sigmoid activation function

\(\Psi\):

Fused decision score combining the Swin Transformer probability and vCDR

\(\alpha\):

Fusion weight factor balancing \(P_{g}\) and vCDR

\(\mu\):

Mean vCDR value of the training set

\(\sigma\) (in fusion):

Standard deviation of vCDR values in the training set

\(\tau\):

Threshold for the binary decision (glaucoma vs. normal)

\(\widehat{y}\):

Final binary decision: \(1\) = glaucoma, \(0\) = normal

\(N\):

Number of test samples used for evaluation

\(CDR_{i}\):

Ground-truth cup-to-disc ratio for sample \(i\)

\(\widehat{CDR}_{i}\):

Predicted cup-to-disc ratio for sample \(i\)

\(MAE_{CDR}\):

Mean Absolute Error of CDR estimation

IoU:

Intersection over Union

DSC:

Dice Similarity Coefficient
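
For readability, the symbols above can be assembled into a compact summary of the pipeline. The display below is a sketch reconstructed from the definitions in this list, not equations quoted from the paper: the denoising operator is omitted because no symbol is given for it, the Swin block follows the standard residual form implied by \(z_{l}^{\prime}\) and \(z_{l}\), and the fusion and thresholding rule is one plausible formulation consistent with \(\alpha\), \(\mu\), \(\sigma\), and \(\tau\); here \(\sigma_{\mathrm{vCDR}}\) denotes the training-set standard deviation written simply as \(\sigma\) in the list.

```latex
% Sketch assembled from the symbol list above; the fusion and
% normalization in the second-to-last line are an assumed form.
\begin{align*}
I_p &\approx \text{CLAHE}\bigl(N(R(I(x,y)))\bigr)
    && \text{pre-processing (denoising operator not named above)} \\
M_c &= S_\theta(I_p),\quad M_d = D_\theta(I_p),\quad (E_c,E_d)=\Phi(M_c,M_d)
    && \text{segmentation and ellipse fitting} \\
\text{vCDR} &= H_c / H_d
    && \text{structural biomarker} \\
z_l' &= \text{SW-MSA}\bigl(\text{LN}(z_{l-1})\bigr) + z_{l-1},\qquad
  z_l = \text{MLP}\bigl(\text{LN}(z_l')\bigr) + z_l'
    && \text{Swin stage } l \text{ (standard residual form)} \\
P_g &= \sigma\bigl(W\,\text{GAP}(z_L) + b\bigr)
    && \text{image-level glaucoma probability} \\
\Psi &= \alpha\,P_g + (1-\alpha)\,\sigma\!\left(\frac{\text{vCDR}-\mu}{\sigma_{\text{vCDR}}}\right),\qquad
  \hat{y} = 1 \text{ if } \Psi > \tau, \text{ else } 0
    && \text{fusion and decision (assumed form)} \\
\text{MAE}_{\text{CDR}} &= \frac{1}{N}\sum_{i=1}^{N}\bigl|\text{CDR}_i - \widehat{\text{CDR}}_i\bigr|
    && \text{CDR estimation error}
\end{align*}
```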


Author information

Authors and Affiliations

  1. School of Computer Science and Artificial Intelligence, SR University, Warangal, Telangana, 506371, India

    Rajitha Chilukuri & P. Praveen

  2. Department of Computer Science and Engineering (Data Science), Institute of Aeronautical Engineering, Hyderabad, Telangana, 500043, India

    Ranjith Kumar Gatla

  3. Faculty of Mechanical and Electrical Engineering, Damascus University, Damascus, Syrian Arab Republic

    Reem A. Almenweer

Contributions

Praveen P and Ranjith Kumar Gatla: supervision, validation, project administration, resources, writing, review & editing. Reem A. Almenweer: investigation, result interpretation, critical revision of the manuscript, writing, review & editing.

Corresponding authors

Correspondence to P. Praveen or Reem A. Almenweer.

Ethics declarations

Competing interests

The authors declare no competing interests.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License, which permits any non-commercial use, sharing, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if you modified the licensed material. You do not have permission under this licence to share adapted material derived from this article or parts of it. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by-nc-nd/4.0/.

Reprints and permissions

About this article


Cite this article

Chilukuri, R., Praveen, P., Gatla, R.K. et al. SwinCup-DiscNet: A fusion transformer framework for glaucoma diagnosis using optic disc and cup features. Sci Rep (2026). https://doi.org/10.1038/s41598-026-39065-7

Download citation

  • Received: 02 October 2025

  • Accepted: 02 February 2026

  • Published: 09 February 2026

  • DOI: https://doi.org/10.1038/s41598-026-39065-7

Keywords

  • Cup-to-disc ratio
  • Swin transformer
  • Optic cup
  • Glaucoma
  • Attention U-Net
  • Deep learning