We read with great interest the article by Mullan et al.1 on the Multiple Observed Standardised Long Examination Record (MOSLER) format, which combines the realism of long cases with the reliability of objective structured clinical examinations (OSCEs). This approach is a significant step forward in assessing complex clinical reasoning, communication, and professionalism. While we commend the authors' work, we propose incorporating artificial intelligence (AI) technologies to further enhance the framework.

AI offers promising solutions to the limitations outlined in the article, particularly around consistency, scalability, and holistic performance evaluation. For instance, AI-driven natural language processing (NLP) tools could objectively analyse candidate-patient interactions, capturing subtle aspects such as tone and context to quantify communication skills, empathy, and clarity.2,3 Such insights are difficult to capture with traditional rubrics. Machine learning models could also support the evaluation of diagnostic reasoning by mapping candidates' responses onto established diagnostic pathways, reducing subjectivity and providing detailed feedback for reflective learning.4,5 In addition, AI could enhance longitudinal monitoring through predictive analytics applied to electronic portfolios.6 By analysing attendance records, procedural logs, and assessment results, such tools could identify students who need additional support, enabling a personalised, proactive approach to education.
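To make the first of these suggestions concrete, the minimal Python sketch below illustrates one possible shape of NLP-based communication scoring: the candidate's turns in a transcribed encounter are passed through an off-the-shelf sentiment classifier from the open-source transformers library, standing in for a purpose-built, validated empathy or communication model, and the labels are aggregated into a simple tone signal. The transcript format, the model choice, and the aggregation rule are all our own illustrative assumptions, not a description of any existing assessment instrument.

    # Illustrative sketch only: scoring the candidate's turns in a transcribed
    # encounter. The generic sentiment classifier stands in for a purpose-built,
    # validated communication/empathy model.
    from transformers import pipeline

    classifier = pipeline("sentiment-analysis")  # loads a default English model

    transcript = [
        ("patient", "I've been getting chest pain when I climb the stairs."),
        ("candidate", "That sounds frightening. When did you first notice it?"),
        ("patient", "About two weeks ago. It settles when I rest."),
        ("candidate", "Thank you, that is helpful. May I ask a few more questions?"),
    ]

    # Score only the candidate's utterances.
    candidate_turns = [text for speaker, text in transcript if speaker == "candidate"]
    results = classifier(candidate_turns)

    # Toy aggregation: the fraction of candidate turns classified as positive in tone.
    positive = sum(r["label"] == "POSITIVE" for r in results)
    print(f"Positive-tone turns: {positive}/{len(results)}")

A deployed system would, of course, also require speech-to-text transcription, speaker diarisation and, above all, validation against expert examiner judgements.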

Furthermore, AI-driven simulations could complement MOSLER encounters through advanced virtual patients that adapt dynamically to candidates' decisions, offering diverse clinical scenarios beyond the constraints of standardised patients.3,5,7 Such an approach would address concerns about variability and resources while enhancing the realism of assessments. However, the integration of AI must be thoughtful and evidence-based, ensuring that it augments rather than replaces human judgement. Ethical considerations, transparency, and equitable implementation are essential.5,7
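Finally, to illustrate what "adapting dynamically to candidates' decisions" might mean in practice, the short sketch below models a virtual patient as a small branching state machine: each candidate action unlocks a different disclosure, so different questioning strategies produce different encounters. The scenario content, action names, and transitions are invented purely for illustration and carry no clinical authority.

    # Illustrative sketch of an adaptive virtual patient: the next disclosure
    # depends on the candidate's action, so different strategies yield
    # different encounters. All scenario content is invented for illustration.
    SCENARIO = {
        "start": {
            "ask_open_question": ("The pain is central and crushing, worse on exertion.", "history"),
            "order_ecg_immediately": ("The ECG shows ST depression in the lateral leads.", "investigations"),
        },
        "history": {
            "ask_about_risk_factors": ("I smoke, and my father had a heart attack at 50.", "investigations"),
            "ask_open_question": ("It sometimes spreads to my left arm.", "history"),
        },
        "investigations": {
            "explain_and_reassure": ("Thank you, doctor. What happens next?", "end"),
        },
    }

    def run_encounter(actions):
        """Play candidate actions through the scenario and collect the virtual
        patient's responses; unrecognised actions leave the state unchanged."""
        state, responses = "start", []
        for action in actions:
            if action in SCENARIO.get(state, {}):
                reply, state = SCENARIO[state][action]
                responses.append(reply)
        return responses

    # A candidate who takes a focused history before explaining the plan:
    for reply in run_encounter(["ask_open_question", "ask_about_risk_factors", "explain_and_reassure"]):
        print(reply)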