Table 4 Comparison of Me-LLaMA models with existing open-source medical LLMs
From: Medical foundation large language models for comprehensive text analysis and beyond
| Model | Backbone | Model size | Biomedical literature | Clinical notes | Continual pre-training (# of tokens) | Instruction tuning (# of instructions) | Evaluation tasks | Release date |
|---|---|---|---|---|---|---|---|---|
| MedAlpaca [3] | LLaMA | 7/13B | ✓ | ✗ | - | 160K | QA | 04/14/2023 |
| ChatDoctor [12] | LLaMA2 | 7B | ✓ | ✗ | - | 100K | QA | 05/24/2023 |
| AlpaCare [28] | LLaMA | 7/13B | ✓ | ✗ | - | 52K | QA, Summarization | 10/23/2023 |
| Clinical LLaMA [11] | LLaMA | 7B | ✗ | ✓ | - | - | Classification | 07/06/2023 |
| Meditron [10] | LLaMA2 | 7/70B | ✓ | ✗ | 48B | - | QA | 11/27/2023 |
| PMC-LLaMA [2] | LLaMA | 7/13B | ✓ | ✗ | 79B | 514K | QA | 04/27/2023 |
| Me-LLaMA | LLaMA2 | 13/70B | ✓ | ✓ | 129B | 214K | QA, NER, RE, Classification, Summarization, NLI, Medical Diagnosis | 06/05/2024 |