Abstract
CorneAI for iOS is an artificial intelligence (AI) application that classifies conditions of the cornea and cataract into nine categories: normal, infectious keratitis, non-infectious keratitis, scar, tumor, deposit, acute primary angle closure, lens opacity, and bullous keratopathy. We evaluated its performance in classifying multiple conditions of the cornea and cataract across various races using images published in the Cornea journal. The positive predictive value (PPV) of the top classification with the highest predictive score was 0.75, and the PPV for the top three classifications exceeded 0.80. For individual diseases, the PPVs were 0.91 for infectious keratitis, 0.73 for normal, 0.42 for non-infectious keratitis, 0.72 for scar, 0.77 for tumor, and 0.55 for deposit. CorneAI for iOS achieved an area under the receiver operating characteristic curve of 0.78 (95% confidence interval [CI] 0.5–1.0) for normal, 0.76 (95% CI 0.67–0.85) for infectious keratitis, 0.81 (95% CI 0.64–0.97) for non-infectious keratitis, 0.55 (95% CI 0.41–0.69) for scar, 0.62 (95% CI 0.27–0.97) for tumor, and 0.71 (95% CI 0.53–0.89) for deposit. CorneAI performed well in classifying various conditions of the cornea and cataract when used to diagnose journal images, including those with variable imaging conditions, diverse ethnicities, and rare cases.
Introduction
The cornea and crystalline lens are crucial for focusing light onto the retina and maintaining optimal vision. Pathological conditions of the ocular media, such as corneal opacity, infectious keratitis, and cataracts, are the leading causes of vision impairment, affecting 75 million people worldwide (15 million with blindness (presenting visual acuity of < 3/60 in the better eye) and 60 million with moderate-to-severe vision impairment)1,2. Corneal diseases and cataracts are considered avoidable causes of vision loss given early detection and timely medical intervention1,2,3. However, real-world diagnosis and treatment depend on the availability of skilled ophthalmologists. Despite recent medical progress, the number of patients with avoidable blindness due to corneal diseases and cataracts continues to increase as the global population grows and ages, owing to the limited number of experienced ophthalmologists3,4.
In recent years, the integration of artificial intelligence (AI) into healthcare has emerged as a transformative force, revolutionizing diagnostic processes and patient care5,6. AI applications are now used in various medical fields, including ophthalmology7,8. Advancements in AI have significantly benefited ophthalmology, enabling the use of fundus images to identify retinal diseases such as retinal detachment9, age-related macular degeneration10, diabetic retinopathy11,12, and glaucoma13,14. For anterior segment diseases, studies have reported the classification of bacterial and fungal keratitis using anterior segment slit-lamp images15,16. Diagnosing corneal diseases requires a skilled ophthalmologist to examine the patient's cornea using a slit-lamp microscope or slit-lamp imaging. Although these AI techniques have achieved promising results, their ability to differentiate between multiple corneal diseases remains limited17.
Furthermore, previous studies on AI applications in ophthalmology often rely on specialized medical equipment, such as slit-lamp microscopes, fundus cameras, and anterior segment optical coherence tomography (OCT) devices. The limited availability of such equipment in medically underserved areas leads to delays in diagnosis and treatment, and AI that works across different races is also necessary. One notable breakthrough is the integration of AI-powered diagnostic tools into everyday devices, such as smartphones18,19. Recently, we developed an AI-powered smartphone application called “CorneAI” that can categorize anterior eye images20. However, its performance in classifying diverse, real-world images captured under various conditions remains unclear; in particular, because CorneAI was created using a Japanese image dataset, we had not evaluated whether it can be applied to a variety of races. Ophthalmology journals, such as the Cornea journal (Lippincott, Philadelphia, PA, USA), contain anterior eye images covering various diseases, races, and imaging conditions. Therefore, this study evaluated the performance of CorneAI in classifying anterior eye diseases published in the Cornea journal over 12 years, from 2011 to 2022.
Results
A total of 357 anterior segment images met the inclusion criteria for this study. The demographic characteristics of patients categorised by disease are summarised in Table 1. Infectious keratitis was the most prevalent category (122 eyes), followed by scar (73 eyes) and tumor (72 eyes). Notably, no images related to acute primary angle closure (APAC), lens opacity, or bullous keratopathy were found in the Cornea journal between 2011 and 2022 (Table 1).
Performance of CorneAI
The total PPV for the highest-ranking predictive score was 0.75, and it exceeded 0.92 when the top three classification candidates were included. Infectious keratitis demonstrated the highest individual disease PPV at 0.91. The PPVs for the other categories were normal eye (0.73), non-infectious keratitis (0.42), scar (0.72), tumor (0.77), and deposit (0.55) (Table 2). When the top three classification candidates were included, the classification performance for each disease exceeded 0.80.
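Here, the top-1 and top-3 PPVs are the fractions of images whose ground-truth category matches the highest-scoring candidate or appears among the three recorded candidates, respectively. A minimal sketch of this computation is shown below; the label lists are illustrative placeholders, not the study data.

```python
# Hedged sketch of the top-1 / top-3 "total PPV" computation; the recorded
# candidate lists below are illustrative, not the study data.
def top_k_ppv(truths, candidates, k):
    """Fraction of images whose truth is among the top-k recorded candidates."""
    hits = sum(t in c[:k] for t, c in zip(truths, candidates))
    return hits / len(truths)

truths = ["infection", "scar", "deposit"]
candidates = [["infection", "scar", "non-infection"],  # top-3 per image
              ["deposit", "scar", "tumor"],
              ["infection", "deposit", "scar"]]
print(top_k_ppv(truths, candidates, 1))  # top-1 PPV
print(top_k_ppv(truths, candidates, 3))  # top-3 PPV
```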
Figure 1 shows the confusion matrices for the nine diseases. Element (i, j) of each confusion matrix represents the empirical probability of predicting class j given that the ground truth is class i. In some cases, CorneAI for iOS misclassified infectious keratitis as non-infectious keratitis, scar, or deposit.
Confusion matrices describing the nine corneal disease/cataract categories. Element (i, j) of each confusion matrix represents the empirical probability of predicting class j given that the ground truth was class i.
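For reference, the row-normalized matrix described in this caption can be computed with scikit-learn as sketched below; the nine labels follow the paper's categories, while the y_true/y_pred arrays are illustrative placeholders rather than the study data.

```python
# Hedged sketch: row-normalized confusion matrix as described in the caption.
from sklearn.metrics import confusion_matrix

labels = ["normal", "infection", "non-infection", "scar", "tumor",
          "deposit", "APAC", "lens opacity", "bullous keratopathy"]
y_true = ["infection", "scar", "infection", "deposit"]  # ground-truth classes
y_pred = ["infection", "infection", "scar", "deposit"]  # top-1 predictions

# normalize="true" divides each row by its total, so entry (i, j) is the
# empirical probability of predicting class j given ground truth class i.
cm = confusion_matrix(y_true, y_pred, labels=labels, normalize="true")
```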
The AUCs of the proposed deep learning algorithm were 0.78 (95% CI 0.53–1.00) for normal eye, 0.76 (95% CI 0.67–0.85) for infectious keratitis, 0.81 (95% CI 0.64–0.97) for non-infectious keratitis, 0.55 (95% CI 0.41–0.69) for scar, 0.62 (95% CI 0.27–0.97) for tumor, and 0.71 (95% CI 0.53–0.89) for deposit (Fig. 2). The sensitivity and specificity were, respectively, 1.00 (95% CI 0.741–1.00) and 0.500 (95% CI 0.187–0.812) for normal eye, 0.607 (95% CI 0.514–0.692) and 0.793 (95% CI 0.616–0.901) for infectious keratitis, 0.727 (95% CI 0.434–0.902) and 0.823 (95% CI 0.589–0.938) for non-infectious keratitis, 0.370 (95% CI 0.254–0.503) and 0.739 (95% CI 0.535–0.874) for scar, 0.981 (95% CI 0.903–0.999) and 0.333 (95% CI 0.171–0.881) for tumor, 1.00 (95% CI 0.866–1.00) and 0.428 (95% CI 0.212–0.674) for deposit, and 1.00 (95% CI 0.177–1.00) and 1.00 (95% CI 0.177–1.00) for bullous keratopathy (Table 3).
The total PPV for the highest-ranking predictive score was 0.75 when classifying PNG images in the photographic mode and 0.73 when classifying journal images from Ophthalmology in the real-time mode (Supplementary Fig. 1). The PPV for APAC was 0.81 when classifying APAC images from Japanese textbooks.
Classification errors
Figure 3 shows representative images of correctly classified infectious keratitis (A, B), non-infectious keratitis (C, D), scar (E), and tumor (F). CorneAI effectively categorised typical infectious keratitis cases and identified rarer instances, such as Mycobacterium keratitis and Acanthamoeba keratitis.
Representative examples of images correctly classified by CorneAI. (A) Image of Mycobacterium keratitis classified as “infection.” (B) Image of bacterial keratitis classified as “infection.” (C) Image of phlyctenular keratitis classified as “non-infection.” (D) Image of Mooren ulcer classified as “non-infection.” (E) Image of corneal scar classified as “scar.” (F) Image of squamous cell carcinoma classified as “tumor.”
Figure 4 illustrates examples of misclassified images. In Schnyder corneal dystrophy (A–D), some lesions were correctly classified as deposits, whereas others (50%) were misclassified as scars. Similarly, most gelatinous drop-like corneal dystrophy (GDLD) cases (E–H) were misclassified as infectious keratitis, potentially owing to the small number of GDLD examples in the CorneAI training dataset.
Since CorneAI was trained using images of brown-eyed Japanese individuals, its performance in other races must be validated. We tested it on images published in the Cornea journal featuring Caucasian individuals with blue, gray, and hazel eyes. CorneAI effectively identified abnormalities in blue-eyed individuals, accurately classifying cases such as granular corneal dystrophy and Schnyder corneal dystrophy as deposits (Fig. 5A, B). However, normal eyes of blue-eyed individuals were prone to being misclassified as scars or tumors (Fig. 5C, D).
Classification of images with blue irises. (A) Image of granular corneal dystrophy with blue iris classified as “deposit.” (B) Image of Schnyder corneal dystrophy with blue iris classified as “deposit.” (C) Image of normal eye with blue iris misclassified as “scar.” (D) Image of normal eye with blue iris misclassified as “scar.”
Discussion
We evaluated the classification performance of CorneAI using typical anterior segment images published in the Cornea journal, encompassing different races and diseases. The total PPVs for the top-1 and top-3 classifications were 0.75 and 0.92, respectively, comparing favourably with previously reported smartphone-based classifications23. CorneAI's performance for infectious keratitis was particularly high (PPV = 0.91), whereas the PPVs for non-infectious keratitis and deposit were lower (0.42 and 0.55, respectively). This disparity may be attributed to racial differences and limited training data for rare diseases. Classification performance was maintained under various conditions using the real-time mode, which is easy for anyone to use. Smartphone-based real-time classification holds the potential to revolutionize the diagnosis of corneal conditions and cataracts.
Gu et al.17 recently reported deep learning systems for detecting infectious keratitis, non-infectious keratitis, corneal dystrophy or degeneration, and corneal neoplasms using slit-lamp images. The area under the ROC curve of their algorithm for each type of corneal disease was > 0.91, and the PPV for the correct classification of infectious keratitis was 0.88. Li et al.19 reported an algorithm for classifying infectious keratitis, other corneal diseases, and normal eyes using a smartphone (all AUCs > 0.96); there, the PPV for the correct classification of infectious keratitis was 0.94. Notably, the classification performance of our algorithm for infectious keratitis was comparable with that of these previous studies. Infectious keratitis is characterized by hyperemia, ulceration, and infiltration, features that may be inherently easier for AI to identify. However, some images of late-stage infectious keratitis were classified as scarring, suggesting that images with minimal hyperemia, ulceration, and infiltration (which may represent healing stages with scarring) could be prone to misclassification.
Compared with Gu et al.17, our study recorded lower PPVs for non-infectious keratitis (0.42 vs. 0.85), deposit (0.55 vs. 0.84), and tumor (0.77 vs. 0.89). One-third of the non-infectious keratitis images were incorrectly classified as infectious keratitis. However, we postulate that the low PPV in the current study was due to the rare non-infectious keratitis cases published in Cornea, and not to incorrect classification of typical images from different races and imaging conditions. For example, CorneAI accurately diagnosed typical peripheral ulcerative keratitis, phlyctenular keratitis, and neurotrophic keratitis. In contrast, rare non-infectious keratitis due to conditions such as Takayasu disease21, Henoch-Schönlein purpura21, or Buerger disease21 was incorrectly classified as scar. Corneal melt following SARS-CoV-2 vaccination was classified as infectious keratitis; in that case, the original authors suspected an infection despite culture-negative results before concluding that the condition was non-infectious keratitis22. In actual clinical practice, even corneal specialists tend to make mistakes in rare non-infectious keratitis cases. Therefore, the performance of CorneAI should not be viewed as inferior to human diagnosis.
Approximately half of the deposit images were misclassified as infectious keratitis or scar (Fig. 4). The crystalline-like round deposits of typical Schnyder corneal dystrophy were correctly classified as deposit (Fig. 4A, B). In contrast, Schnyder dystrophy cases with irregular asymmetric opacities (Fig. 4C) or blurred white opacities (Fig. 4D) were incorrectly classified as scar, suggesting that such presentations are difficult for CorneAI to classify. Similarly, approximately one-third of the GDLD cases were incorrectly classified as infectious keratitis (Fig. 4E–H). Most of them were of the typical mulberry type, and the AI may have mistaken the glossy amyloid on the ocular surface for an infectious infiltrate23. These two categories reduced the overall classification performance. The poor classification performance for Schnyder corneal dystrophy and GDLD may be due to the limited training data available at the time the algorithm was created, and increasing the number of cases and the variation in disease findings will be necessary to improve the algorithm in the future. This underscores the importance of recognizing specific diseases where AI may not excel, necessitating caution and clinical judgement when utilizing AI-based classification tools.
This study used images from an international journal, including some with blue irises, whereas our AI was initially trained on anterior segment images that primarily featured Japanese individuals with brown irises. CorneAI's performance varied when analyzing blue irises. While some normal eyes with blue irises were incorrectly classified as having scar (Fig. 5C, D), CorneAI accurately classified cases with deposits and blue irises (Fig. 5A, B). This suggests that the AI can correctly classify conditions when noticeable lesions exist, regardless of iris color. However, in normal eyes with blue irises, the distinct color of the iris itself seems to distract the AI, leading to false classifications. When we collected and classified images of 12 normal eyes with blue, gray, and hazel irises, the PPV was only 0.16 (Supplementary Fig. 2). CorneAI needs to be retrained with a dataset encompassing blue iris images, and its performance should then be re-evaluated to improve its accuracy.
Infectious keratitis is an emergency disease, and because of its rapid progression, early diagnosis determines the prognosis24,25. APAC is also an emergency disease, as it causes blindness if not treated early. Although there were no APAC images in the Cornea journal, the classification of APAC images from textbooks was highly accurate, with a PPV of 0.81. As an automated screening tool, this system could be applied in developing countries or areas without access to medical resources to identify keratitis in its early stages and provide timely referrals for positive cases, potentially preventing corneal visual impairment. Furthermore, ocular surface tumors, such as conjunctival squamous cell carcinoma, may metastasize and cause death26, and non-infectious keratitis also requires early treatment for a good visual prognosis. CorneAI, potentially available to all smartphone users in the future, aims to assist people or non-ophthalmology doctors unfamiliar with eye diseases. CorneAI not only adheres to the necessary regulations but also places a high priority on user privacy, ensuring that personal health data are handled with the utmost confidentiality.

We compared the classification performance of the real-time and photographic modes, both of which are available in CorneAI for iOS; the PPVs of the two methods were similar (Supplementary Fig. 1). The real-time mode would be useful for patients' friends or family members, whereas the photographic mode would be useful for remote AI-assisted diagnosis, in which anterior segment images captured using smartphones are sent for review. CorneAI simplifies disease classification into urgent, needs further examination, or normal; conditions such as infectious keratitis and acute glaucoma, classified as urgent, encourage prompt hospital visits and ensure timely treatment20 (a triage sketch follows below). However, using CorneAI on individual smartphones introduces potential limitations. Images captured through scratched camera lenses may contain artifacts that affect classification accuracy, and performance may depend on individual device capabilities. Therefore, testing CorneAI across various smartphone software versions and models is necessary.
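As an illustration of this triage concept, a minimal sketch of such a category-to-triage mapping is shown below. Only the assignments of infectious keratitis/APAC to "urgent" are stated in the text20; the remaining assignments, including the default, are assumptions for illustration.

```python
# Hedged sketch of a nine-category-to-triage mapping. Only "urgent" for
# infection and APAC is stated in the paper; the other assignments,
# including the default, are illustrative assumptions.
TRIAGE = {
    "infection": "urgent",
    "acute primary angle closure": "urgent",
    "normal": "normal",
}

def triage(category: str) -> str:
    # Default assumption: anything not explicitly urgent or normal
    # needs further examination.
    return TRIAGE.get(category, "needs further examination")

print(triage("non-infection"))  # -> needs further examination
```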
We converted the PyTorch model to Core ML to run the “You Only Look Once” version 5 (YOLO V.5) model on the Apple Neural Engine in iOS. This conversion can affect accuracy; however, Ahremark et al. report that Core ML greatly reduces the latency of a machine learning model while performing only around 1% worse on average27. Thus, programmatic problems did not cause the accuracy loss; as discussed above, rare cases and cases with blue irises were the main causes of low accuracy.
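For readers unfamiliar with this workflow, the sketch below shows a typical PyTorch-to-Core ML conversion using coremltools on a traced model. The YOLOv5 variant, input resolution, and file name are assumptions for illustration; the actual CorneAI conversion pipeline is not detailed in this paper.

```python
# Hedged sketch of a PyTorch -> Core ML conversion with coremltools.
# Model variant, input shape, and output path are illustrative assumptions.
import torch
import coremltools as ct

# Load a raw (non-AutoShape) YOLOv5 model so it can be traced.
model = torch.hub.load("ultralytics/yolov5", "yolov5s", autoshape=False)
model.eval()

example = torch.rand(1, 3, 640, 640)                    # assumed input shape
traced = torch.jit.trace(model, example, strict=False)  # TorchScript via tracing

mlmodel = ct.convert(
    traced,
    inputs=[ct.ImageType(name="image", shape=example.shape)],
    compute_units=ct.ComputeUnit.ALL,  # allow the Apple Neural Engine
    convert_to="mlprogram",
)
mlmodel.save("CorneAI_sketch.mlpackage")
```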
Our study had several limitations. First, while selecting images for classification, poor-quality images and decentered images of the peripheral cornea were excluded. This selection bias could lead to an overestimation of the AI's performance relative to real-world settings. In clinical practice, acquiring clear images is essential; however, because our study conducted classifications in the real-time mode, issues related to variations in image-acquisition conditions may have been mitigated. Second, our current algorithm cannot display heatmaps, which hampers our ability to pinpoint the specific features or regions that influenced the AI's incorrect classifications. We are actively working on introducing a heatmap visualization feature to address this limitation; such a feature would highlight abnormal corneal regions in images, aiding the clinical review and verification of the AI's classifications.
In conclusion, we evaluated the classification performance of CorneAI, a smartphone-based AI model installed on an iPhone 13 Pro, using anterior segment images from diverse sources published in the Cornea journal. This app can be installed on portable devices, such as smartphones, and could be helpful for triaging conditions of the cornea and cataract in developing countries and areas with limited access to medical resources.
Materials and methods
This study was approved by the Institutional Review Board of the Japanese Ophthalmological Society (Protocol number: 15000133-20001). All procedures conformed to the tenets of the Declaration of Helsinki and the Japanese Guidelines for Life Science and Medical Research.
CorneAI and image selection
We developed an AI model named CorneAI using the YOLO V.5 architecture to classify the condition of the cornea and cataract into nine categories: normal, infectious keratitis, non-infectious keratitis, scar, tumor, deposit, acute primary angle closure (APAC), lens opacity, and bullous keratopathy20. We retrieved anterior segment images from the international professional journal Cornea between 2011 (volume 30, issue 1) and 2023 (volume 42, issue 12). The exclusion criteria were: (1) images with slit light or fluorescein staining; (2) monochrome images; (3) images obtained after keratoplasty; and (4) low-quality images that were decentered or had inadequate light exposure.
Anterior eye images were classified using the real-time mode of CorneAI installed on an iPhone 13 Pro smartphone (Apple Inc., Cupertino, CA, USA). Images were captured using the super macro mode of the iPhone 13 Pro under the following conditions: (1) standard room illumination (300 lux); (2) a distance of approximately 3–5 cm between the printed image and the iPhone camera; and (3) clear focus on the image (Videos 1 and 2). Images in the paper journal were captured directly with the smartphone. In the real-time mode, the top three classification candidates for each image were displayed. Images were captured at the same location in the hospital, and the top three disease candidates with the highest predictive scores were recorded. We also extracted some images in PNG format and classified them using the photographic mode of CorneAI. Furthermore, to confirm whether the accuracy generalizes to other journals, we retrieved images from the international professional journal Ophthalmology; we also classified APAC images from several textbooks.
Predictive score calculation
During AI model testing, the estimated categories and corresponding predictive scores of the predicted bounding boxes were calculated, and the category with the highest predictive score was selected as the final classification. The predictive scores were calculated using the sigmoid function: in the final (output) layer of the deep learning (DL)-based AI model, the sigmoid function is applied to the feature value provided to that layer,

$$s_{b,c} = \frac{1}{1 + \exp(-x_{b,c})},$$

where $s_{b,c}$ is the predictive score, $x_{b,c}$ is the feature value provided to the final layer, $b$ is the index of the estimated bounding boxes, and $c = 1, \ldots, 9$ is the index of the categories.
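A minimal numeric sketch of this computation is shown below; the feature values are illustrative stand-ins for the network's final-layer outputs.

```python
# Hedged sketch of the predictive-score computation described above;
# x[b, c] holds illustrative final-layer feature values (2 boxes x 9 categories).
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

x = np.array([[2.1, -0.3, 0.5, -1.2, 0.0, -0.8, -2.0, -1.5, -3.0],
              [0.4,  1.7, -0.6, 0.2, -1.1, -0.5, -2.2, -0.9, -2.8]])
s = sigmoid(x)                                  # s[b, c]: predictive scores
b, c = np.unravel_index(np.argmax(s), s.shape)  # highest-scoring box/category
print(b, c, s[b, c])                            # final classification and score
```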
Data analysis
The classification performance for each disease was evaluated by calculating the positive predictive value (PPV), sensitivity, and specificity of each category. The PPV computed over all nine categories was defined as the “total PPV.” We determined ROC curves for the predictive scores of the nine categories and calculated the areas under the curves (AUCs) with their 95% confidence intervals (CIs). Curves closer to the top-left corner indicate better performance than those closer to the baseline. Statistical analyses were performed using Prism version 6.04 for Windows (GraphPad Software, Inc., San Diego, CA, USA).
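These analyses were performed in Prism; an equivalent open-source formulation of the per-category one-vs-rest metrics is sketched below, with illustrative inputs rather than the study data.

```python
# Hedged sketch of per-category one-vs-rest metrics (PPV, sensitivity,
# specificity, AUC); inputs are illustrative, not the study data.
import numpy as np
from sklearn.metrics import roc_auc_score

def category_metrics(y_true, y_pred, scores, category):
    t = np.asarray(y_true) == category   # ground truth, one-vs-rest
    p = np.asarray(y_pred) == category   # top-1 prediction, one-vs-rest
    tp = np.sum(p & t); fp = np.sum(p & ~t)
    fn = np.sum(~p & t); tn = np.sum(~p & ~t)
    ppv = tp / (tp + fp) if (tp + fp) else float("nan")
    sensitivity = tp / (tp + fn) if (tp + fn) else float("nan")
    specificity = tn / (tn + fp) if (tn + fp) else float("nan")
    auc = roc_auc_score(t, scores)       # scores: per-image predictive score
    return ppv, sensitivity, specificity, auc
```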
Informed consent
The review committee stated that patient consent was not required for the retrospective study of slit-lamp microscopy images, because all slit-lamp images used in the study were published previously and deidentified.
Ethical approval
This study was approved by the Institutional Review Board of the Japanese Ophthalmological Society (Protocol number: 15000133-20001). All procedures conformed to the tenets of the Declaration of Helsinki and the Japanese Guidelines for Life Science and Medical Research.
Meeting presentation
We presented this study at the 129th meeting of the Japanese Society of Clinical Ophthalmology.
Data availability
The data that support the findings of this study are available on request from the corresponding author (T.Y.). The data are not publicly available due to them containing information that could compromise research participant privacy/consent.
References
Flaxman, S. R. et al. Global causes of blindness and distance vision impairment 1990–2020: A systematic review and meta-analysis. Lancet Glob. Health 5, e1221–e1234 (2017).
Pascolini, D. & Mariotti, S. P. Global estimates of visual impairment: 2010. Br. J. Ophthalmol. 96, 614–618 (2012).
Bourne, R. R. A. et al. Trends in prevalence of blindness and distance and near vision impairment over 30 years: An analysis for the Global Burden of Disease Study. Lancet Glob. Health 9, e130–e143 (2021).
Gupta, N., Tandon, R., Gupta, S., Sreenivas, V. & Vashist, P. Burden of corneal blindness in India. Indian J. Commun. Med. 38, 198–206 (2013).
Litjens, G. et al. A survey on deep learning in medical image analysis. Med. Image Anal. 42, 60–88 (2017).
Matheny, M. E., Whicher, D. & Thadaney Israni, S. Artificial intelligence in health care: A report from the National Academy of Medicine. JAMA J. Am. Med. Assoc. 323, 509–510 (2020).
Rashidi, P. & Bihorac, A. Artificial intelligence approaches to improve kidney care. Nat. Rev. Nephrol. 16, 71–72 (2020).
Chilamkurthy, S. et al. Deep learning algorithms for detection of critical findings in head CT scans: A retrospective study. Lancet 392, 2388–2396 (2018).
Li, Z. et al. Deep learning for detecting retinal detachment and discerning macular status using ultra-widefield fundus images. Commun. Biol. 3, 1–10 (2020).
Burlina, P. M. et al. Automated grading of age-related macular degeneration from color fundus images using deep convolutional neural networks. JAMA Ophthalmol. 135, 1170–1176 (2017).
Gulshan, V. et al. Development and validation of a deep learning algorithm for detection of diabetic retinopathy in retinal fundus photographs. JAMA J. Am. Med. Assoc. 316, 2402–2410 (2016).
Gargeya, R. & Leng, T. Automated identification of diabetic retinopathy using deep learning. Ophthalmology 124, 962–969 (2017).
Li, Z. et al. Efficacy of a deep learning system for detecting glaucomatous optic neuropathy based on color fundus photographs. Ophthalmology 125, 1199–1206 (2018).
Li, Z. et al. Deep learning for automated glaucomatous optic neuropathy detection from ultra-widefield fundus images. Br. J. Ophthalmol. 105, 1548–1554 (2021).
Ghosh, A. K., Thammasudjarit, R., Jongkhajornpong, P., Attia, J. & Thakkinstian, A. Deep learning for discrimination between fungal keratitis and bacterial keratitis: DeepKeratitis. Cornea 41, 616–622 (2022).
Hung, N. et al. Using slit-lamp images for deep learning-based identification of bacterial and fungal keratitis: Model development and validation with different convolutional neural networks. Diagnostics 11(7), 1246 (2021).
Gu, H. et al. Deep learning for identifying corneal diseases from ocular surface slit-lamp photographs. Sci. Rep. 10, 1–11 (2020).
Freeman, K. et al. Algorithm based smartphone apps to assess risk of skin cancer in adults: systematic review of diagnostic accuracy studies. BMJ 368, m127 (2020).
Li, Z. et al. Preventing corneal blindness caused by keratitis using artificial intelligence. Nat. Commun. https://doi.org/10.1038/s41467-021-24116-6 (2021).
Ueno, Y. et al. Deep learning model for extensive smartphone-based diagnosis and triage of cataracts and multiple corneal diseases. Br. J. Ophthalmol. https://doi.org/10.1136/bjo-2023-324488 (2024).
Levitt, A. E. et al. Ocular inflammation in the setting of concomitant systemic autoimmune conditions in an older male population. Cornea 34, 762–767 (2015).
Khan, T. A. et al. Bilateral immune-mediated keratolysis after immunization with SARS-CoV-2 recombinant viral vector vaccine. Cornea 40, 1629–1632 (2021).
Ide, T. et al. A spectrum of clinical manifestations of gelatinous drop-like corneal dystrophy in Japan. Am. J. Ophthalmol. 137, 1081–1084 (2004).
Watson, S., Cabrera-Aguas, M. & Khoo, P. Common eye infections. Aust. Prescr. 41, 67–72 (2018).
Sharma, A. & Taniguchi, J. Review: Emerging strategies for antimicrobial drug delivery to the ocular surface: Implications for infectious keratitis. Ocul. Surf. 15, 670–679 (2017).
Santoni, A. et al. Management of invasive squamous cell carcinomas of the conjunctiva. Am. J. Ophthalmol. 200, 1–9 (2019).
Ahremark, J. et al. Benchmarking a machine learning model in the transformation from PyTorch to CoreML. LiU Electronic Press 33 (2022).
Acknowledgements
We thank Editage for the English language editing.
Funding
This study was supported by the Japan Agency for Medical Research and Development (Y.U., JP22hma322004). The funders had no role in the design and conduct of the study; collection, management, analysis, and interpretation of the data; preparation, review, or approval of the manuscript; and decision to submit the manuscript for publication.
Author information
Authors and Affiliations
Contributions
Concept and design: Y. T., Y. U., M. O., and T. Y. Acquisition of photograph: Y. T., and T. Y. Development of the network architectures: M. O. Software engineering: Y. K. Critical revision of the manuscript for important intellectual content: Y. U., M. O., N. A., O. I., and Y. K. Management of this project: Y. U., and T. Y. Obtained funding: Y. U. Administrative, technical, or material support: Y. U., and T. Y.. Supervision: Y. U., and T. Y.
Corresponding author
Ethics declarations
Competing interests
Takefumi Yamaguchi: Grants (Nortis Pharma); honoraria for lectures (Alcon Japan, HOYA, Novartis Pharma, AMO Japan, Santen Pharmaceuticals, Senju Pharmaceutical, Johnson & Johnson).
Additional information
Publisher's note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Supplementary Information
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
About this article
Cite this article
Taki, Y., Ueno, Y., Oda, M. et al. Analysis of the performance of the CorneAI for iOS in the classification of corneal diseases and cataracts based on journal photographs. Sci Rep 14, 15517 (2024). https://doi.org/10.1038/s41598-024-66296-3