Fig. 3: Predictive performance of fine-tuning strategies across all outcomes.

From: The foundational capabilities of large language models in predicting postoperative risks using clinical notes

Predictive performance across the fine-tuning strategies evaluated, for all outcomes. Each panel depicts predictive performance for one outcome: a 30-day mortality, b Acute Kidney Injury (AKI), c Pulmonary Embolism (PE), d Pneumonia, e Deep Vein Thrombosis (DVT), and f Delirium. Bars represent means and error bars the corresponding standard errors from 5-fold cross-validation. Fine-tuning the models with their self-supervised training objectives improved predictive performance over using the pretrained models alone, and incorporating labels into the fine-tuning objective enhanced performance further. The foundation fine-tuning strategy, in which the model was fine-tuned with a multi-task learning objective across all outcomes in the dataset, performed best.
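For readers unfamiliar with the multi-task setup referenced in the caption, the sketch below shows one common way to fine-tune a shared pretrained encoder with a separate binary head per postoperative outcome and a summed loss. The encoder name, head design, and equal loss weighting are illustrative assumptions, not the authors' implementation.

import torch
import torch.nn as nn
from transformers import AutoModel

# Outcomes from the figure panels; label keys are hypothetical.
OUTCOMES = ["mortality_30d", "aki", "pe", "pneumonia", "dvt", "delirium"]

class MultiTaskClassifier(nn.Module):
    def __init__(self, encoder_name: str = "bert-base-uncased"):  # placeholder encoder
        super().__init__()
        self.encoder = AutoModel.from_pretrained(encoder_name)  # shared backbone
        hidden = self.encoder.config.hidden_size
        # One binary logit per outcome, all sharing the encoder representation.
        self.heads = nn.ModuleDict({o: nn.Linear(hidden, 1) for o in OUTCOMES})

    def forward(self, input_ids, attention_mask):
        # Pool the [CLS] token as the note-level representation.
        cls = self.encoder(input_ids=input_ids,
                           attention_mask=attention_mask).last_hidden_state[:, 0]
        return {o: head(cls).squeeze(-1) for o, head in self.heads.items()}

def multitask_loss(logits: dict, labels: dict) -> torch.Tensor:
    # Sum of per-outcome binary cross-entropy losses (equal weighting assumed).
    bce = nn.BCEWithLogitsLoss()
    return sum(bce(logits[o], labels[o].float()) for o in OUTCOMES)

Optimizing this summed loss updates the shared encoder with signal from every outcome at once, which is the intuition behind the foundation fine-tuning strategy outperforming single-outcome fine-tuning.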
