Figure 3

Loss curves for the best-performing training condition. (a) Training loss (binary cross-entropy) and (b) validation loss (binary cross-entropy) plotted against the number of epochs for each model trained on the dataset of condition 4 (see Table 1) with Monte Carlo cross-validation. Both losses decrease over training and begin to converge around epoch 120. Early stopping was applied when the validation loss did not improve for 10 epochs. The loss function includes L2 regularization.
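The early-stopping rule in the caption can be sketched as follows. This is an illustrative example, not the authors' code; the function name and the representation of validation losses as a plain list are assumptions.

```python
# Sketch of early stopping with a patience of 10 epochs, as described
# in the caption: training halts once validation loss has failed to
# improve for `patience` consecutive epochs.

def stopping_epoch(val_losses, patience=10):
    """Return the (1-indexed) epoch at which training would stop,
    given a sequence of per-epoch validation losses."""
    best = float("inf")
    epochs_without_improvement = 0
    for epoch, loss in enumerate(val_losses, start=1):
        if loss < best:
            best = loss
            epochs_without_improvement = 0
        else:
            epochs_without_improvement += 1
            if epochs_without_improvement >= patience:
                return epoch
    # Validation loss kept improving (or patience never ran out):
    # training runs for the full schedule.
    return len(val_losses)
```

With losses that plateau, the rule fires 10 epochs after the last improvement, matching the caption's criterion.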