Fig. 6: Comparison of optimization results for the large-scale cancer model, for different hyperparameters.
From: Mini-batch optimization enables training of ODE models on large-scale datasets

a–c Boxplots of the ten best optimization runs out of 20, started from the same random initial parameters, for Ipopt (at three different stages of the optimization process, to compare performance over computation time) and for mini-batch optimization with different learning rates (LR), with the rescue interceptor only (rescue) and with additional line-search (LS). Boxes extend from the 25th to the 75th percentile, whiskers show the full range of the data, and thick lines indicate medians. a Final objective function values. b Correlation of model simulation with measurement data (training set). c Correlation of model simulation with measurement data (test set). d Total computation time for all 20 optimization runs (lower panel). e–g Boxplots of the ten best optimization runs out of 20, started from the same random initial parameters, for Ipopt (at four different stages of the optimization process, to compare performance over computation time) and for mini-batch optimization with different mini-batch sizes (10, 100, 1000, and full-batch, i.e., 13,000). Specifications as described for subfigures a–c. e Final objective function values. f Correlation of model simulation with measurement data (training set). g Correlation of model simulation with measurement data (test set). h Total computation time for all 20 optimization runs (lower panel).
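
The boxplot convention used in panels a–c and e–g (select the ten best of 20 runs per optimizer setting, draw boxes over the 25th–75th percentiles, and extend whiskers to the full data range) can be reproduced with a short plotting script. The sketch below is illustrative only and does not use the paper's data: the setting names and the synthetic objective values are placeholder assumptions.

```python
# Minimal sketch of the panel-a style comparison: boxplots of the ten best
# of 20 optimization runs per setting. Values are synthetic placeholders.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)

# Hypothetical final objective values for 20 runs per optimizer setting.
settings = ["Ipopt", "LR 0.1 (rescue)", "LR 0.1 (rescue + LS)", "LR 0.01 (rescue)"]
runs = {name: rng.normal(loc=1e4 + 500 * i, scale=300, size=20)
        for i, name in enumerate(settings)}

# Keep only the ten best (lowest final objective) runs per setting.
best10 = [np.sort(vals)[:10] for vals in runs.values()]

fig, ax = plt.subplots(figsize=(6, 4))
# whis=(0, 100): whiskers span the full range; boxes cover the 25th-75th percentiles.
ax.boxplot(best10, labels=settings, whis=(0, 100), medianprops={"linewidth": 2})
ax.set_ylabel("Final objective function value")
ax.tick_params(axis="x", rotation=30)
fig.tight_layout()
plt.show()
```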