Fig. 3: Performance of fine-tuned embedding models.

From: Forecasting adverse surgical events using self-supervised transfer learning for physiological signals

a Convergence of the fine-tuned models. The top eight plots fix OR0 as the target dataset (we plot eight of the fifteen signals). Dark green lines show the convergence of a randomly initialized LSTM trained on OR0; light green lines show the convergence of an LSTM trained on OR0 but initialized with the weights of the best OR1 model (fine-tuning). The bottom two rows show the analogous plots with OR1 as the target dataset. Because deep models are trained iteratively with some variant of stochastic gradient descent, convergence is assessed by tracking the loss on a held-out validation set as a function of the number of training epochs. b Performance of GBT models trained on embeddings from standard embedding models (next), transferred embedding models (next'), and fine-tuned embedding models (nextft; the best models from the light green curves in (a)). We report the average precision of the raw model in parentheses on the x-axis.
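
As a concrete illustration of the fine-tuning setup in (a), the following PyTorch sketch warm-starts an LSTM from a pretrained checkpoint and records the validation loss each epoch, which traces out a convergence curve like those plotted in the figure. All names here (SignalLSTM, or1_best.pt, the synthetic data) are illustrative assumptions, not the authors' actual code.

```python
# Minimal sketch of the fine-tuning comparison in panel (a), assuming PyTorch.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

class SignalLSTM(nn.Module):
    """Illustrative LSTM over a single physiological signal."""
    def __init__(self, n_features: int = 1, hidden: int = 128):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):
        out, _ = self.lstm(x)          # (batch, time, hidden)
        return self.head(out[:, -1])   # predict from the last time step

def train(model, train_loader, val_loader, epochs=5, lr=1e-3):
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.BCEWithLogitsLoss()
    val_curve = []                      # one point per epoch -> convergence plot
    for _ in range(epochs):
        model.train()
        for x, y in train_loader:
            opt.zero_grad()
            loss_fn(model(x).squeeze(-1), y).backward()
            opt.step()
        model.eval()
        with torch.no_grad():
            val_loss = sum(loss_fn(model(x).squeeze(-1), y).item()
                           for x, y in val_loader) / len(val_loader)
        val_curve.append(val_loss)      # held-out validation loss per epoch
    return val_curve

# Synthetic stand-in for one signal from the target dataset (e.g. OR0).
X = torch.randn(256, 60, 1)
y = (X.mean(dim=(1, 2)) > 0).float()
train_loader = DataLoader(TensorDataset(X[:192], y[:192]), batch_size=32)
val_loader = DataLoader(TensorDataset(X[192:], y[192:]), batch_size=32)

# Dark green curve: random initialization trained on the target dataset.
scratch = SignalLSTM()
scratch_curve = train(scratch, train_loader, val_loader)

# Light green curve: warm-start from the best source-dataset (OR1) model,
# then fine-tune on the target dataset (hypothetical checkpoint path):
# finetuned = SignalLSTM()
# finetuned.load_state_dict(torch.load("or1_best.pt"))
# finetuned_curve = train(finetuned, train_loader, val_loader)
```

Similarly, the evaluation in panel (b) can be sketched with scikit-learn, assuming the embedding models are used as fixed feature extractors: a gradient boosted tree classifier is fit on the embedding vectors and scored with average precision. The random embeddings and labels below are placeholders for the outputs of the next, next', or nextft models.

```python
# Minimal sketch of panel (b): GBT on precomputed embeddings, assuming scikit-learn.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import average_precision_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
emb = rng.normal(size=(1000, 64))      # placeholder embedding vectors
labels = (emb[:, 0] + 0.5 * rng.normal(size=1000) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(emb, labels, random_state=0)
gbt = GradientBoostingClassifier().fit(X_tr, y_tr)
scores = gbt.predict_proba(X_te)[:, 1]
print("average precision:", average_precision_score(y_te, scores))
```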
