Showing 1–20 of 20 results
Advanced filters: Author: Roxana Daneshjou
  • MedHELM, an extensible evaluation framework comprising a new taxonomy for classifying medical tasks and a benchmark of datasets spanning these categories, enables the evaluation of large language models on real-world clinical tasks.

    • Suhana Bedi
    • Hejie Cui
    • Nigam H. Shah
    Research
    Nature Medicine
    P: 1-9
  • Biased and poorly documented dermatology datasets pose risks to the development of safe and generalizable artificial intelligence (AI) tools. We created a Dataset Nutrition Label (DNL) for multiple dermatology datasets to support transparent and responsible data use. The DNL offers a structured, digestible summary of key attributes, including metadata, limitations, and risks, enabling data users to better assess suitability and proactively address potential sources of bias in datasets.

    • Yingjoy Li
    • Matthew Taylor
    • Veronica Rotemberg
    Comments & Opinion, Open Access
    npj Digital Medicine
    Volume: 8, P: 1-4
  • By learning to pair dermatological images and related concepts in a self-supervised manner, a visual-language foundation model is shown to have comparable performance to supervised models for concept annotation and is used to scrutinize model decisions for enhanced interpretability and accountability of medical imaging applications.

    • Chanwoo Kim
    • Soham U. Gadgil
    • Su-In Lee
    Research
    Nature Medicine
    Volume: 30, P: 1154-1165
  • As medical AI development gathers momentum, a new study reveals that much work still needs to be done before the public will willingly embrace AI-based technologies in healthcare.

    • Aaron Fanous
    • Kirsten Steffner
    • Roxana Daneshjou
    News & Views
    Nature Medicine
    Volume: 30, P: 3057-3058
  • TRIPOD-LLM (transparent reporting of a multivariable model for individual prognosis or diagnosis–large language model) is a checklist of items considered essential for good reporting of studies that are developing or evaluating an LLM for use in healthcare settings. It is a ‘living guideline’ that emphasizes transparency, human oversight and task-specific performance reporting.

    • Jack Gallifant
    • Majid Afshar
    • Danielle S. Bitterman
    Reviews
    Nature Medicine
    Volume: 31, P: 60-69
  • In a large-scale study involving 389 board-certified dermatologists and 459 primary-care physicians from 39 countries, the impact of a deep learning-aided decision support system on physicians’ diagnostic accuracy was tested across 46 skin diseases and for both light and dark skin tones.

    • Matthew Groh
    • Omar Badri
    • Rosalind Picard
    Research, Open Access
    Nature Medicine
    Volume: 30, P: 573-583
  • Medicaid serves over 70 million Americans, yet barriers to consistent, high-quality care persist due to workforce shortages, fragmented service delivery, and administrative burden. Artificial intelligence (AI) offers not just operational efficiency but the potential to transform the Medicaid care experience. AI-powered digital assistants can deliver 24/7 multilingual voice or text support, expanding access to personalized, emotionally intelligent assistance. Under existing workforce supervision, these agents can bridge critical gaps in behavioral health and community coordination through tools like therapy chatbots that reduce loneliness and improve engagement. As “embedded staff” in provider offices and community organizations, digital assistants can create a unified infrastructure for whole-person care. We introduce the concept of Precision Benefits: delivering the right support to the right person at the right time to prevent avoidable health and social deterioration. This aligns with administrative and eligibility reforms in H.R.1, which require states to improve efficiency and verification while fostering innovation and preserving state authority over AI regulation. Realizing this vision demands responsible AI development that addresses safety, bias, privacy, and trust, along with modernization of infrastructure and payment models. Yet the opportunity is clear: AI can power a smarter and more equitable Medicaid system, one that puts everyone on an upward life trajectory.

    • Nathan Favini
    • Neil Batlivala
    • Roxana Daneshjou
    Comments & Opinion, Open Access
    npj Digital Medicine
    Volume: 8, P: 1-3
  • Cognitive bias accounts for a significant portion of preventable errors in healthcare, contributing to substantial patient morbidity and mortality each year. As large language models (LLMs) are introduced into healthcare and clinical decision-making, these systems risk inheriting, and even amplifying, these existing biases. This article explores both the cognitive biases affecting LLM-assisted medicine and the countervailing strengths these technologies bring to addressing those limitations.

    • Arjun Mahajan
    • Ziad Obermeyer
    • Dylan Powell
    News & Views, Open Access
    npj Digital Medicine
    Volume: 8, P: 1-4