Abstract
Adaptive optics scanning laser ophthalmoscope (AOSLO) imaging enables the cone photoreceptor mosaic to be visualised in the living human eye. Quantitative analysis of these images requires identification of individual photoreceptors. This is typically performed by manual labelling, which is subjective, time-consuming and not feasible on a large scale. Automated algorithms to replace manual labelling are therefore required, and deep learning-based methods provide an effective way of achieving this. However, this approach requires large volumes of annotated training data that are difficult to acquire; synthetic data may help to bridge this gap. A U-Net configuration was trained using a large synthetic dataset of confocal AOSLO images generated using ERICA, alongside a smaller dataset of real confocal AOSLO images (Milwaukee dataset). Model performance was assessed by calculating the Dice coefficient, a metric quantifying segmentation overlap, on both a real held-out test set and an independent real dataset (Oxford dataset). Results from this evaluation were benchmarked against expert labelling and two automated cone detection methods: a confocal convolutional neural network (CNN) (1) and a combined graph-theory and dynamic programming approach (2). The mean Dice coefficient compared to manual labelling was 0.989 (U-Net), 0.989 (confocal CNN) and 0.985 (graph-theory and dynamic programming) on the held-out test set. On the independent Oxford dataset, the U-Net achieved a mean Dice coefficient of 0.962 compared to manual labelling. These results show performance comparable to the gold standard of manual labelling and to two automated cone detection methods. Furthermore, we demonstrate the generalisability of this approach on an independent real dataset with images from higher retinal eccentricities.
This approach may be useful for quantitative analysis of the photoreceptor mosaic in patients with retinal disease to provide cell-specific imaging biomarkers from AOSLO images.
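As context for the Dice coefficients reported above: once detected cones have been matched against manual labels, the Dice coefficient can be expressed in terms of true positives (TP), false positives (FP) and false negatives (FN). The sketch below is purely illustrative (it is not the authors' evaluation code, and the cone-matching step that produces the TP/FP/FN counts is assumed to have been performed already):

```python
def dice_coefficient(tp: int, fp: int, fn: int) -> float:
    """Dice = 2*TP / (2*TP + FP + FN): twice the overlap between the two
    label sets, divided by their combined size."""
    denominator = 2 * tp + fp + fn
    if denominator == 0:
        return 1.0  # both label sets empty: perfect agreement by convention
    return 2 * tp / denominator

# e.g. 198 correctly matched cones, 2 spurious detections, 2 missed cones
print(dice_coefficient(198, 2, 2))  # → 0.99
```

A Dice coefficient of 1.0 would indicate perfect agreement with manual labelling, so values near 0.99 on the held-out test set indicate only a handful of disagreements per hundred cones.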
Data availability
The datasets supporting the conclusions of this article are available in the following GitHub repositories: https://github.com/DavidCunefare/CNN-Cone-Detection/tree/master/Images and Results/Confocal, as cited by Cunefare et al. (ref. 1), and https://github.com/LauraKateYoung/ERICA, as cited by Young et al. (ref. 11).
Abbreviations
- AOSLO: Adaptive optics scanning laser ophthalmoscope
- CNN: Convolutional neural network
- C-CNN: Confocal CNN
- ERICA: Emulated Retinal Image CApture
- FN: False negatives
- FP: False positives
- GTDP: Graph-theory and dynamic programming
- nW: Nanowatts
- SNR: Signal-to-noise ratio
- TP: True positives
References
Cunefare, D. et al. Open source software for automatic detection of cone photoreceptors in adaptive optics ophthalmoscopy using convolutional neural networks. Sci Rep. 7(1), 1–11 (2017).
Chiu, S. J. et al. Automatic cone photoreceptor segmentation using graph theory and dynamic programming. Biomed. Opt Express. 4(6), 924 (2013).
Li, K. Y. & Roorda, A. Automated identification of cone photoreceptors in adaptive optics retinal images. J. Opt. Soc. Am. A. 24(5), 1358 (2007).
Xue, B., Choi, S. S., Doble, N. & Werner, J. S. Photoreceptor counting and montaging of en-face retinal images from an adaptive optics fundus camera. J. Opt. Soc. Am. A. 24(5), 1364 (2007).
Bukowska, D. M. et al. Semi-automated identification of cones in the human retina using circle Hough transform. Biomed Opt. Express. 6(12), 4676 (2015).
Cooper, R. F., Langlo, C. S., Dubra, A. & Carroll, J. Automatic detection of modal spacing (Yellott’s ring) in adaptive optics scanning light ophthalmoscope images. Ophthal. Physiol Opt. 33(4), 540–549 (2013).
Cunefare, D. et al. Automatic detection of cone photoreceptors in split detector adaptive optics scanning light ophthalmoscope images. Biomed. Opt Express. 7(5), 2036 (2016).
Bergeles, C. et al. Unsupervised identification of cone photoreceptors in non-confocal adaptive optics scanning light ophthalmoscope images. Biomed. Opt Express. 8(6), 3081 (2017).
Mariotti, L. & Devaney, N. Performance analysis of cone detection algorithms. J. Opt. Soc. Am. A Opt. Image Sci. Vis. 32(4), 497–506 (2015).
Ronneberger, O., Fischer, P. & Brox, T. U-Net: Convolutional networks for biomedical image segmentation. Lect. Notes Comput. Sci. 9351, 234–241 (2015).
Young, L. K. & Smithson, H. E. Emulated retinal image capture (ERICA) to test, train and validate processing of retinal images. Sci. Rep. 11(1), 11225 (2021).
Young, L. K., Morris, T. J., Saunter, C. D. & Smithson, H. E. Compact, modular and in-plane AOSLO for high-resolution retinal imaging. Biomed. Opt. Express. 9(9), 4275 (2018).
Jarosz, J. et al. High temporal resolution aberrometry in a 50-eye population and implications for adaptive optics error budget. Biomed. Opt. Express. 8(4), 2088 (2017).
Chollet, F. et al. Keras (2015).
Abadi, M. et al. TensorFlow: Large-scale machine learning on heterogeneous distributed systems (2016).
Szegedy, C. et al. Going deeper with convolutions. Proc. IEEE Conf. Comput. Vis. Pattern Recognit. 1–9 (2015).
He, K., Zhang, X., Ren, S. & Sun, J. Deep residual learning for image recognition. Proc. IEEE Conf. Comput. Vis. Pattern Recognit. 770–778 (2016).
Kingma, D. P. & Ba, J. L. Adam: A method for stochastic optimization. Proc. 3rd Int. Conf. Learn. Represent. (ICLR) (2015).
Hamwood, J., Alonso-Caneiro, D., Sampson, D. M., Collins, M. J. & Chen, F. K. Automatic detection of cone photoreceptors with fully convolutional networks. Transl. Vis. Sci. Technol. 8(6), 1–8 (2019).
Tanna, P. et al. Reliability and repeatability of cone density measurements in patients with stargardt disease and RPGR-associated retinopathy. Investig. Ophthalmol. Vis. Sci. 58(9), 3608–3615 (2017).
Liu, B. S. et al. The reliability of parafoveal cone density measurements. Br. J. Ophthalmol. 98(8), 1126–1131 (2014).
Funding
This research was funded in whole, or in part, by the Wellcome Trust 105605/Z/14/Z, Fight For Sight 1467/8, the University of Oxford Medical Research Fund MRF/LSV2015/2161, the EPA Cephalosporin Fund CF 277, and the John Fell Oxford University Press Research Fund 103/786 and 151/139. L.K.Y is supported by a UKRI Future Leaders Fellowship (MR/T042192/1) and by a Reece Foundation Fellowship in Translational Systems Neuroscience (Newcastle University). For the purpose of Open Access, the author has applied a CC BY public copyright licence to any Author Accepted Manuscript version arising from this submission.
Author information
Authors and Affiliations
Contributions
M.S. and L.K.Y.: conceptualisation and acquisition of data. M.S.: creation of new software used in the work and drafting the manuscript. M.S., L.K.Y., H.E.S. and A.I.L.N.: analysis and interpretation. L.K.Y., S.M.D., H.E.S. and A.I.L.N.: supervision. All authors substantively revised, read and approved the final manuscript.
Corresponding author
Ethics declarations
Competing interests
The authors declare no competing interests.
Additional information
Publisher’s note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
About this article
Cite this article
Shah, M., Young, L.K., Downes, S.M. et al. Automated cone photoreceptor detection using synthetic data and deep learning in confocal adaptive optics scanning laser ophthalmoscope images. Sci Rep (2026). https://doi.org/10.1038/s41598-026-39570-9