Fig. 2: Benchmarking AMNs with different training sets and mechanistic layers.
From: A neural-mechanistic hybrid approach improving the predictive power of genome-scale metabolic models

All results were computed on 5-fold cross-validation sets. Plotted are the mean and standard error (95% confidence interval) over the five validation sets of the cross-validation. Top panels show the custom mechanistic loss values, and bottom panels plot the Q² values for the growth rate, over learning epochs (Q² is the coefficient of determination computed on cross-validation datapoints not seen during training). All AMNs have the architecture given in Fig. 1c, with Vin as input and a neural layer composed of one hidden layer of size 500. For all models, dropout = 0.25, batch size = 5, the optimizer is Adam, and the learning rate is 10⁻³. The architecture of the ANN (a classical dense network) is given in the “Methods” section; it takes as input the uptake flux bounds Vin and produces a vector Vout composed of all fluxes, with which the loss is computed. Panels a–c show results for different training sets: a and b use 1000-simulation training sets generated with the E. coli core model, with UB and EB as inputs respectively, whereas c uses a 1000-simulation training set generated with the iML1515 model, with UB as input (for details on training set generation, see “Methods”). As mentioned in subsection “Alternative mechanistic models to surrogate FBA”, AMN-Wt cannot make predictions when exact bounds (EB) are used and is therefore not plotted in (b). Source data are provided as a Source Data file (cf. “Data availability”).
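To make the evaluation protocol concrete, the sketch below shows how Q² could be computed on a 5-fold cross-validation with the hyperparameters stated in the legend (one hidden layer of size 500, dropout = 0.25, Adam optimizer, learning rate 10⁻³, batch size 5). This is a minimal illustration under stated assumptions, not the authors' code: the input/output dimensions, epoch count, and random data are hypothetical placeholders, and plain mean-squared error stands in for the custom mechanistic loss defined in “Methods”.

```python
# Minimal sketch (not the authors' implementation): Q² on 5-fold
# cross-validation for a dense network with the legend's hyperparameters.
import numpy as np
import tensorflow as tf
from sklearn.model_selection import KFold

def build_model(n_in, n_out):
    # One hidden layer of size 500 with dropout = 0.25, as in the legend.
    model = tf.keras.Sequential([
        tf.keras.layers.Input(shape=(n_in,)),
        tf.keras.layers.Dense(500, activation="relu"),
        tf.keras.layers.Dropout(0.25),
        tf.keras.layers.Dense(n_out),
    ])
    # Adam with learning rate 1e-3, as in the legend; plain MSE
    # stands in here for the paper's mechanistic loss.
    model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-3),
                  loss="mse")
    return model

def q_squared(y_true, y_pred):
    # Q² = 1 - SS_res / SS_tot, evaluated only on datapoints
    # not seen during training (the validation fold).
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - np.mean(y_true)) ** 2)
    return 1.0 - ss_res / ss_tot

# Hypothetical placeholder data standing in for the uptake flux
# bounds Vin (inputs) and the growth rate (target).
rng = np.random.default_rng(0)
X = rng.random((1000, 20))   # placeholder Vin, 20 uptake fluxes
y = rng.random((1000, 1))    # placeholder growth rates

scores = []
for train_idx, val_idx in KFold(n_splits=5, shuffle=True,
                                random_state=0).split(X):
    model = build_model(X.shape[1], y.shape[1])
    # epochs=10 is a placeholder; the figure tracks loss and Q²
    # over learning epochs.
    model.fit(X[train_idx], y[train_idx],
              batch_size=5, epochs=10, verbose=0)
    scores.append(q_squared(y[val_idx],
                            model.predict(X[val_idx], verbose=0)))

print(f"mean Q2 over the five validation folds: {np.mean(scores):.3f}")
```

The mean and spread of the per-fold scores collected here correspond to the quantities plotted in the bottom panels of the figure.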