Fig. 1: Comparison of distinct fine-tuning strategies.

From: The foundational capabilities of large language models in predicting postoperative risks using clinical notes

An illustration comparing the architectures of the distinct fine-tuning strategies examined in our study. a Compares the use of pretrained models alone with fine-tuning them on their self-supervised objectives; self-supervised fine-tuning refines the pretrained model's weights through its objective loss function(s) using the provided clinical notes. b Contrasts semi-supervised fine-tuning with foundation fine-tuning: semi-supervised fine-tuning optimizes the model for a single outcome of interest, whereas foundation fine-tuning employs a multi-task learning (MTL) objective that incorporates all available postoperative labels in the dataset.
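To make the contrast concrete, the minimal PyTorch sketch below (not the authors' code) shows how a single-outcome head differs from an MTL setup: one binary head per postoperative outcome shares a pretrained encoder, and the summed per-task loss drives the update. The encoder, layer sizes, task count, and choice of binary cross-entropy loss are all illustrative assumptions.

```python
# Illustrative sketch only: single-outcome vs. multi-task fine-tuning.
# The encoder stands in for any pretrained clinical language model;
# all dimensions and the loss choice are assumptions, not the paper's.
import torch
import torch.nn as nn

class OutcomeClassifier(nn.Module):
    def __init__(self, encoder: nn.Module, hidden_dim: int, n_tasks: int):
        super().__init__()
        self.encoder = encoder
        # One binary head per postoperative outcome; n_tasks=1 recovers
        # conventional single-outcome (semi-supervised) fine-tuning.
        self.heads = nn.ModuleList(
            nn.Linear(hidden_dim, 1) for _ in range(n_tasks)
        )

    def forward(self, note_embedding: torch.Tensor) -> torch.Tensor:
        h = self.encoder(note_embedding)
        # Concatenate per-task logits: shape (batch, n_tasks).
        return torch.cat([head(h) for head in self.heads], dim=-1)

def mtl_loss(logits: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
    # Foundation-style MTL objective: average the per-task losses so
    # gradients from every available label update the shared encoder.
    return nn.functional.binary_cross_entropy_with_logits(
        logits, labels, reduction="mean"
    )

# Toy usage: a stand-in encoder and six hypothetical outcome labels.
encoder = nn.Sequential(nn.Linear(768, 768), nn.Tanh())
model = OutcomeClassifier(encoder, hidden_dim=768, n_tasks=6)
x = torch.randn(4, 768)                   # pooled note embeddings
y = torch.randint(0, 2, (4, 6)).float()   # postoperative labels
loss = mtl_loss(model(x), y)
loss.backward()
```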
