Abstract
Accurate endoscopy reports are crucial for the diagnosis and management of patients with upper gastrointestinal (UGI) diseases, yet errors and omissions are common. Preparing routine reports for common diseases is labor-intensive and time-consuming. To address this, we developed Report-Angel, an integrated AI system that combines a multi-modal large language model (MLLM) with conventional deep learning models, and trained it on 20,617 image-text pairs to automatically generate detailed draft reports for UGI endoscopy. Report-Angel achieved a clinically acceptable report rate of 79.3% (95% CI: 74.4–83.5%) in the prospective internal cohort and 83.3% (95% CI: 78.7–87.3%) in the prospective external cohort. At the case level, Report-Angel achieved a report completeness of 88.51% (95% CI: 84.64–92.38%) and a report accuracy of 78.93% (95% CI: 73.98–83.88%), with an average processing time of 1.5 s per lesion in the internal prospective video dataset. Lesion-level reporting accuracies were 91.92% (95% CI: 90.58–93.25%), 89.07% (95% CI: 87.57–90.57%), and 83.94% (95% CI: 81.58–86.31%) on the retrospective image dataset and the prospective single- and multi-center video datasets, respectively. Report-Angel generates expert-level draft endoscopy reports and demonstrates robust generalizability. By providing reliable draft reports as a foundation, this system has the potential to standardize reporting and reduce endoscopists' workloads.
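The proportions above are reported with 95% confidence intervals, but this excerpt does not state how the intervals were computed. The minimal Python sketch below shows one conventional choice, the Wilson score interval; both the interval method and the illustrative counts (301 of 380) are assumptions for demonstration, not figures from the study.

import math

def wilson_ci(successes: int, n: int, z: float = 1.96) -> tuple[float, float]:
    # Wilson score interval for a binomial proportion; z = 1.96 gives ~95% coverage.
    p = successes / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return center - half, center + half

# Hypothetical example: 301 of 380 draft reports judged clinically acceptable
# (the actual cohort sizes are not given in this excerpt).
low, high = wilson_ci(301, 380)
print(f"rate = {301/380:.1%}, 95% CI: {low:.1%} to {high:.1%}")

Exact (Clopper-Pearson) intervals are another common choice and are somewhat more conservative; the study's actual method may differ.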
Data availability
Individual de-identified participant data underlying the results reported in this article can be shared with investigators for research purposes. Access can be requested from the first corresponding author, yuhonggang1968@163.com, and will be granted after signing a data access agreement. The pretrained model, software, and source code used in the paper, together with the associated test data and parameters, are available at https://github.com/endo-angel/MLLM-for-Automatically-Reporting-Lesions-of-Upper-GI-Endoscopy.
Acknowledgements
This work was supported by the National Key Research and Development Program of China (grant no. 2022YFC2505105, to Lianlian Wu, W.Z.); the Natural Science Foundation of Wuhan (grant no. 2025040601020197, to Z.H.D.); the Hubei Provincial Key Laboratory Open Project (grant no. 2024KFZ005, to Z.H.D.); the Key Research and Development Program of Hubei Province (grant no. 2023BCB153, to H.G.Y.); and the National Natural Science Foundation of China-Youth Science Fund (grant no. 82202257, to Lianlian Wu). The funders had no role in the study design, data collection, data analysis, interpretation, or manuscript preparation.
Author information
Authors and Affiliations
Contributions
Conceptualization: R.Q.J., B.R.C., Z.H.D. Methodology: R.Q.J., B.R.C., Z.H.D., X.Q.Z., H.Y. Investigation: R.Q.J., B.R.C., Z.H.D., X.Q.Z., H.Y., Y.X.L., Y.C.D., G.G.M., J.W., L.H., J.L., D.C., W.Z. Visualization: R.Q.J., B.R.C., Z.H.D., X.Q.Z. Funding acquisition: H.G.Y., W.Z., Z.H.D. Project administration: R.Q.J., B.R.C., Z.H.D., X.Q.Z. Supervision: H.G.Y., W.Z. Writing—original draft: R.Q.J., B.R.C., Z.H.D. Writing—review and editing: H.G.Y., W.Z., Z.H.D.
Corresponding authors
Ethics declarations
Competing interests
Wuhan EndoAngel Co., Ltd. provided equipment for this study. The sponsor had no role in the design or conduct of the study; data collection, management, analysis, and interpretation; manuscript preparation; or the decision to submit the manuscript for publication. The other authors declare no competing financial or non-financial interests.
Additional information
Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License, which permits any non-commercial use, sharing, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if you modified the licensed material. You do not have permission under this licence to share adapted material derived from this article or parts of it. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by-nc-nd/4.0/.
About this article
Cite this article
Jiang, R., Chen, B., Dong, Z. et al. Domain specific multimodal large language model for automated endoscopy reporting with multicenter prospective validation. npj Digit. Med. (2026). https://doi.org/10.1038/s41746-026-02569-7
Received:
Accepted:
Published:
DOI: https://doi.org/10.1038/s41746-026-02569-7