Fig. 1: Computing and learning frameworks for FBA, alternative mechanistic models, AMN, and AMN-Reservoir.
From: A neural-mechanistic hybrid approach improving the predictive power of genome-scale metabolic models

a Computing framework for classical FBA. The process is repeated for each medium, computing the corresponding steady-state fluxes. Blue circles represent different bounds on metabolite uptake fluxes, and each red circle represents a flux value at steady state (a minimal sketch of this computation is given after the legend).

b Computing framework for MM methods surrogating FBA. These methods can handle multiple growth media at once. Regardless of the solver (Wt, LP, or QP), the MM layer takes as input an arbitrary initial flux vector, V0, respecting the uptake flux bounds for the different media, and computes all steady-state flux values (Vout) through an iterative process.

c Learning framework for AMN hybrid models. The input (for multiple growth media) can be either a set of bounds on uptake fluxes (Vin), when using simulation data (generated as in a), or a set of media compositions, Cmed, when using experimental data. The input is passed to a trainable neural layer, which predicts an initial vector, V0, for the mechanistic layer (an MM method from b). In turn, the mechanistic layer computes the final output of the model, Vout. Training relies on a custom loss function (cf. "Methods") ensuring both that the reference fluxes are fitted (i.e., Vout matches simulated or measured fluxes) and that the mechanistic constraints (on flux bounds and stoichiometry) are respected (a hybrid-layer sketch is given after the legend).

d Learning framework for an AMN-Reservoir. The first step is to train an AMN on FBA-simulated data (as in c), after which the parameters of this AMN are frozen. This AMN model, whose purpose is to surrogate FBA, is called the non-trainable AMN-Reservoir. In the second step, a neural layer is added prior to Vin; it takes media compositions, Cmed, as input and learns the relationship between these compositions and the bounds on uptake fluxes.
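For readers unfamiliar with the computation in a, the snippet below is a minimal, hedged sketch of a single FBA solve posed as a linear program with scipy.optimize.linprog. The toy stoichiometric matrix, the flux bounds, and the biomass index are illustrative assumptions, not the genome-scale model used in the paper.

```python
# Minimal FBA sketch (panel a): maximize a biomass flux subject to S @ v = 0
# and bounds on the uptake flux. Toy 3-metabolite / 4-reaction network; the
# network, bounds, and objective index are assumptions for illustration only.
import numpy as np
from scipy.optimize import linprog

# Stoichiometric matrix S (metabolites x reactions): S @ v = 0 at steady state.
S = np.array([
    [1, -1,  0,  0],   # metabolite A: produced by uptake r1, consumed by r2
    [0,  1, -1,  0],   # metabolite B: produced by r2, consumed by r3
    [0,  0,  1, -1],   # metabolite C: produced by r3, consumed by biomass drain
], dtype=float)

biomass_idx = 3                                      # flux to maximize
c = np.zeros(S.shape[1]); c[biomass_idx] = -1.0      # linprog minimizes, so negate

def fba(uptake_upper_bound):
    """Solve one FBA problem for one bound on the uptake flux (a blue circle)."""
    bounds = [(0.0, uptake_upper_bound)] + [(0.0, 1000.0)] * (S.shape[1] - 1)
    res = linprog(c, A_eq=S, b_eq=np.zeros(S.shape[0]), bounds=bounds, method="highs")
    return res.x                                     # steady-state fluxes (red circles)

# As in panel a, the process is repeated for each medium, i.e. each set of bounds.
for ub in (5.0, 10.0):
    print(ub, fba(ub))
```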
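The hybrid scheme of panel c can likewise be sketched in a few lines. The PyTorch code below is not the authors' implementation: the layer sizes, the simplified gradient-descent mechanistic layer standing in for the MM methods of b, the iteration count, and the loss weights (w_fit, w_stoich, w_bound) are all assumptions, chosen only to show how a trainable layer predicting V0, a constraint-enforcing mechanistic layer, and a composite loss fit together.

```python
# Hedged AMN sketch (panel c): trainable neural layer Vin -> V0, followed by a
# non-trainable mechanistic layer pushing V toward the steady-state constraint
# S @ V = 0, trained with a composite fit + stoichiometry + bound loss.
import torch
import torch.nn as nn

class AMNSketch(nn.Module):
    def __init__(self, S, n_inputs, n_iter=10):
        super().__init__()
        self.register_buffer("S", S)                 # stoichiometric matrix (m x n)
        self.n_iter = n_iter
        self.neural = nn.Sequential(                 # trainable layer: Vin -> V0
            nn.Linear(n_inputs, 64), nn.ReLU(),
            nn.Linear(64, S.shape[1]),
        )

    def forward(self, v_in):
        v = self.neural(v_in)                        # initial flux vector V0
        for _ in range(self.n_iter):                 # simplified mechanistic layer:
            v = v - 0.1 * (self.S.T @ (self.S @ v.T)).T   # gradient steps toward S @ V = 0
        return v                                     # Vout

def amn_loss(v_out, v_ref, S, upper_bounds, w_fit=1.0, w_stoich=1.0, w_bound=1.0):
    fit = ((v_out - v_ref) ** 2).mean()              # Vout matches reference fluxes
    stoich = ((v_out @ S.T) ** 2).mean()             # steady-state (stoichiometry) residual
    bound = (torch.relu(-v_out) + torch.relu(v_out - upper_bounds)).mean()  # flux bounds
    return w_fit * fit + w_stoich * stoich + w_bound * bound
```

For the AMN-Reservoir of panel d, a module like this would be frozen (e.g. requires_grad_(False)) and an additional trainable layer would be placed in front of it to map media compositions Cmed to the uptake-flux bounds Vin.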