
Comment in 2025

  • Ambient AI “digital scribes” are rapidly moving into routine practice, easing documentation burden and physician burnout. Early evidence suggests these tools can increase billing and risk-adjustment coding intensity, prompting payer responses such as downcoding and risk-score recalibration. This Policy Brief contrasts their implications in fee-for-service and Medicare Advantage models, notes relevance for systems blending encounter-based and capitated payment, and outlines steps to preserve value without fueling a coding arms race.

    • Tinglong Dai
    • Joseph C. Kvedar
    • Daniel Polsky
    CommentOpen Access
  • Large language models (LLMs) are increasingly used for mental health interactions, often mimicking therapeutic behaviour without regulatory oversight. Documented harms, including suicides, highlight the urgent need for stronger safeguards. This manuscript argues that LLMs providing therapy-like functions should be regulated as medical devices, with standards ensuring safety, transparency and accountability. Pragmatic regulation is essential to protect vulnerable users and maintain the credibility of digital health interventions.

    • Max Ostermann
    • Oscar Freyer
    • Stephen Gilbert
    CommentOpen Access
  • Prescription Digital Therapeutics (PDTs) deliver evidence-based treatments through FDA-cleared software. Despite demonstrated impact internationally, particularly in Germany, outdated U.S. reimbursement structures restrict access. Modernizing benefit pathways to align with the clinical rigor of PDTs can expand patient access, reduce healthcare costs, and improve outcomes. Timely action is critical to bridge care gaps, ensuring that patients receive safe, effective digital interventions supported by evidence and regulatory oversight.

    • Lani Reilly
    • Andrew Molnar
    CommentOpen Access
  • Medicaid serves over 70 million Americans, yet barriers to consistent, high-quality care endure due to workforce shortages, fragmented service delivery, and administrative burden. Artificial intelligence (AI) offers not just operational efficiency but the potential to transform the Medicaid care experience. AI-powered digital assistants can deliver 24/7 multilingual voice or text support, expanding access to personalized, emotionally intelligent assistance. Under existing workforce supervision, these agents can bridge critical gaps in behavioral health and community coordination through tools like therapy chatbots that reduce loneliness and improve engagement. As “embedded staff” in provider offices and community organizations, digital assistants can create a unified infrastructure for whole-person care. We introduce the concept of Precision Benefits: delivering the right support to the right person at the right time to prevent avoidable health and social deterioration. This aligns with administrative and eligibility reforms in H.R.1, which require states to improve efficiency and verification while fostering innovation and preserving state authority over AI regulation. Realizing this vision demands responsible AI development that addresses safety, bias, privacy, and trust, along with modernization of infrastructure and payment models. Yet the opportunity is clear: AI can power a smarter and more equitable Medicaid system, one that puts everyone on an upward life trajectory.

    • Nathan Favini
    • Neil Batlivala
    • Roxana Daneshjou
    CommentOpen Access
  • Clinical trials face persistent challenges in cost, enrollment, and generalizability. This perspective examines how artificial intelligence (AI), large language models (LLMs), adaptive trial designs, and digital twins (DTs) can modernize trial design and execution. We detail AI-driven eligibility optimization, reinforcement learning for real-time adaptation, and in silico DT modeling. Methodological, regulatory, and ethical hurdles are addressed, emphasizing the need for validated, scalable frameworks to enable responsible and widespread integration.

    • Aarav Badani
    • Fabio Ynoe de Moraes
    • Alireza Mansouri
    CommentOpen Access
  • Silent brain infarctions (SBIs), affecting 20% of adults and increasing stroke risk, evade routine MRI screening. While retinal scans offer a “window to the brain,” prior AI failed to simultaneously detect SBIs and predict strokes. DeepRETStroke overcomes this by analysing eye scans. Trained on ~900,000 images, it uses deep learning combining self-supervised pattern recognition from unlabeled images, semi-supervised SBI detection with limited MRI, and knowledge transfer refinement, transforming eye exams into affordable stroke screenings.

    • Minyan Ge
    • Yuchun Wang
    • Shumao Xu
    CommentOpen Access
  • With a growing number of studies applying generative artificial intelligence (GAI) models for health purposes, reporting standards are being developed to guide authors in this space. We describe the currently available reporting guidelines that apply to GAI models and provide an overview of upcoming reporting standards. Investigators must remain up-to-date with the most applicable tools to guide the comprehensive reporting of their research as we integrate GAI in healthcare.

    • Bright Huo
    • Gary S. Collins
    • Gordon Guyatt
    CommentOpen Access
  • Biased and poorly documented dermatology datasets pose risks to the development of safe and generalizable artificial intelligence (AI) tools. We created a Dataset Nutrition Label (DNL) for multiple dermatology datasets to support transparent and responsible data use. The DNL offers a structured, digestible summary of key attributes, including metadata, limitations, and risks, enabling data users to better assess suitability and proactively address potential sources of bias in datasets.

    • Yingjoy Li
    • Matthew Taylor
    • Veronica Rotemberg
    CommentOpen Access
  • The npj Digital Medicine Editorial Fellowship (https://www.nature.com/npjdigitalmed/editorial-fellowship) is a year-long program that provides trainees and early career researchers with direct exposure to peer review, editorial writing, and journal operations with npj Digital Medicine. Since 2021, the program has graduated 4 fellows, who remain active with the journal as reviewers, editorial board members, and guest editors. As the 2024–25 Editorial Fellow, I discuss the fellowship’s structure, outcomes, and learning experiences.

    • Ben Li
    CommentOpen Access
  • Despite its rapid advancement, digital health has given little consideration to climate change or environmental degradation. As the digital health community begins to engage with this critical issue, scholars have started mapping progress in the field, typically focusing on how digital health applies to climate and/or environmental mitigation or climate adaptation. In this Comment, we argue that climate and environment learning for mitigation and adaptation constitutes a critical yet overlooked dimension intersecting mitigation and adaptation strategies, warranting deliberate attention. This learning category is a systematic and transparent approach that applies structured, replicable methods to identify, appraise, and use evidence from data analytics across decision-making processes related to mitigation and adaptation, including implementation, and that informs the exchange of new best practices in a post-climate era. The WHO’s Digital Health Classification framework offers a good option for ultimately formalising learning into practice. As a foundational step, however, learning needs to be conceptualised and developed into its own research agenda, organised around a shared language of metrics and evidence. We call on actors in the digital health field to develop this concrete strategy and initiate this process.

    • Maeghan Orton
    • Gabrielle Samuel
    • Peter Drury
    CommentOpen Access
  • Artificial intelligence (AI) is transforming traditional medicine, particularly in radiology. Its integration across patient care stages has made it increasingly ubiquitous. The European Union’s (EU) AI Act will additionally regulate AI-enabled solutions within the EU. However, without standardized guidelines, the Act’s flexibility poses practical challenges for providers and deployers, leading to inconsistencies in meeting requirements for high-risk systems like radiology AI, potentially impacting patients’ fundamental rights and safety.

    • Jaka Potočnik
    • Damjan Fujs
    CommentOpen Access
  • While large language models (LLMs) hold promise for transforming clinical healthcare, current comparisons and benchmark evaluations of LLMs in medicine often fail to capture real-world efficacy. Specifically, we highlight how key discrepancies arising from choices of data, tasks, and metrics can limit meaningful assessment of translational impact and lead to misleading conclusions. We therefore advocate for rigorous, context-aware evaluations and experimental transparency across both research and deployment.

    • Monica Agrawal
    • Irene Y. Chen
    • Shalmali Joshi
    CommentOpen Access
  • Artificial intelligence (AI) scribes have been rapidly adopted across health systems, driven by their promise to ease the documentation burden and reduce clinician burnout. While early evidence shows efficiency gains, this commentary cautions that adoption is outpacing validation and oversight. Without greater scrutiny, the rush to deploy AI scribes may compromise patient safety, clinical integrity, and provider autonomy.

    • Maxim Topaz
    • Laura Maria Peltonen
    • Zhihong Zhang
    CommentOpen Access
  • The rise of biomedical foundation models creates new hurdles in model testing and authorization, given their broad capabilities and susceptibility to complex distribution shifts. We suggest tailoring robustness tests according to task-dependent priorities and propose to integrate granular notions of robustness in a predefined specification to guide implementation. Our approach facilitates the standardization of robustness assessments in the model lifecycle and connects abstract AI regulatory frameworks with concrete testing procedures.

    • R. Patrick Xian
    • Noah R. Baker
    • Reza Abbasi-Asl
    CommentOpen Access
  • Foundation models are being rapidly integrated into medicine, offering both opportunities and ethical challenges. Unlike traditional medical technologies, they often enter real-world use without rigorous testing or oversight. We argue that their use constitutes a social experiment. This perspective highlights the unpredictable and partly uncontrollable nature of foundation models. We propose an ethical framework to guide responsible implementation, focusing on conditions for responsible experimentation rather than unattainable full predictability.

    • Robert Ranisch
    • Joschka Haltaufderheide
    CommentOpen Access
  • The use of synthetic data to augment real-world data in healthcare can help AI models perform more accurately and fairly across subgroups. By examining the parallel case of NHS England’s care.data platform, this paper explores why care.data failed and offers recommendations for future synthetic data initiatives, centring on confidentiality, consent, and transparency as the key areas of focus needed to encourage successful adoption.

    • Sahar Abdulrahman
    • Markus Trengove
    CommentOpen Access
  • Large language models (LLMs), such as ChatGPT-o1, display subtle blind spots in complex reasoning tasks. We illustrate these pitfalls with lateral thinking puzzles and medical ethics scenarios. Our observations indicate that patterns in training data may contribute to cognitive biases, limiting the models’ ability to navigate nuanced ethical situations. Recognizing these tendencies is crucial for responsible AI deployment in clinical contexts.

    • Shelly Soffer
    • Vera Sorin
    • Eyal Klang
    CommentOpen Access
  • Artificial intelligence (AI) has primarily enhanced individual primary care visits, yet its potential for population health management remains untapped. Effective AI should integrate longitudinal patient data, automate proactive outreach, and mitigate disparities by addressing barriers such as transportation and language. Properly deployed, AI can significantly reduce administrative burden, facilitate early intervention, and improve equity in primary care, necessitating rigorous evaluation and adaptive design to realize sustained population-level benefits.

    • Sanjay Basu
    • Pablo Bermudez-Canete
    • Pranav Rajpurkar
    CommentOpen Access
  • Generative artificial intelligence can fulfil the criteria to be the ‘more knowledgeable other’ in a social constructivist framework. By scaffolding learning and providing a unique and augmented zone of proximal development for learners, it can simulate social interactions and contribute to the human-AI co-construction of knowledge. The presence of generative artificial intelligence in medical education prompts a re-imagining and re-interpretation of traditional roles within established pedagogy.

    • Michael Tran
    • Chinthaka Balasooriya
    • Joel Rhee
    CommentOpen Access
  • Systemic integration and equitable adoption of Digital Health Technologies (DHTs) require timely, comprehensive, harmonised policies. This paper presents five complementary key enablers: defining DHTs in the scope of fit-for-purpose policy interventions, implementing AI-ready regulatory approaches, adopting dynamic assessment criteria, establishing dedicated reimbursement models, and promoting evidence generation, clinical guidelines, interoperability, and education. Cross-border and multistakeholder collaboration are also crucial to reducing fragmentation, addressing inequities, and driving scalable, systemic value.

    • Alberta M. C. Spreafico
    • Rosanna Tarricone
    • Ariel D. Stern
    CommentOpen Access
