Showing 1–10 of 10 results
Advanced filters: Author: Weidi Xie
  • Open-source, multilingual medical LLMs can benefit a wide audience across regions. Here, the authors openly release a large-scale corpus, a benchmark, and a series of LLMs to promote development in this field.

    • Pengcheng Qiu
    • Chaoyi Wu
    • Weidi Xie
    Research · Open Access
    Nature Communications, Volume 15, pp. 1–15
  • Medical imaging has transformed clinical diagnostics. Here, the authors present RadDiag, a foundation model for comprehensive disease diagnosis from multi-modal inputs, demonstrating superior zero-shot performance on external datasets compared to other foundation models and broad applicability across medical conditions.

    • Qiaoyu Zheng
    • Weike Zhao
    • Weidi Xie
    Research · Open Access
    Nature Communications, Volume 15, pp. 1–16
  • Evaluation of medical LLMs’ reasoning processes in professional medicine remains underexplored. Here, the authors present MedR Bench, which evaluates LLMs’ medical reasoning across examination recommendation, diagnosis, and treatment. They find that models excel at diagnosis but struggle with examination recommendation and treatment.

    • Pengcheng Qiu
    • Chaoyi Wu
    • Weidi Xie
    Research · Open Access
    Nature Communications, Volume 16, pp. 1–14
  • Developing generalist medical foundation models is challenging but crucial. This study presents a Radiology Foundation Model designed to integrate text with 2D and 3D medical scans for diverse radiologic tasks, as well as a large-scale multimodal dataset and a comprehensive benchmark.

    • Chaoyi Wu
    • Xiaoman Zhang
    • Weidi Xie
    Research · Open Access
    Nature Communications, Volume 16, pp. 1–22
  • Despite the success of multi-modal foundation models in natural language and vision tasks, their use in medical domains is limited. Here, the authors propose to train a foundation model for chest X-ray diagnosis that combines medical domain knowledge with vision-language representation learning.

    • Xiaoman Zhang
    • Chaoyi Wu
    • Yanfeng Wang
    Research · Open Access
    Nature Communications, Volume 14, pp. 1–12
  • Zhang et al. investigate how the large body of publicly available biomedical images can be used to generate a new medical visual question-answering dataset. Along with the resulting benchmark dataset, the authors propose a novel visual-language model and compare its performance against existing approaches.

    • Xiaoman Zhang
    • Chaoyi Wu
    • Weidi Xie
    Research · Open Access
    Communications Medicine, Volume 4, pp. 1–13