Fig. 2: Impact of loss-function choice on model complexity. | Nature Communications

From: Designing accurate emulators for scientific processes using calibration-driven deep models

Comparing the performance of emulators designed using conventional deep neural networks (DNN) with MSE as the optimization objective against the proposed approach, which uses a calibration objective: a airfoil self-noise dataset, b reservoir model dataset. We find that regardless of model complexity (varying depth), the proposed approach produces improved emulators. Though Learn-by-Calibrating (LbC) uses an additional network to estimate intervals during training, at inference time the predictions are obtained using only the network f, whose number of parameters is exactly the same as that of the DNN baseline.
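As a rough illustration of the two-network training versus single-network inference setup described in the caption, the sketch below builds a predictor f and an auxiliary interval-estimation network g, then shows that only f is needed at inference and that its parameter count matches a DNN baseline of the same architecture. All network shapes and names here are hypothetical placeholders, not the paper's actual implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def init_mlp(sizes, rng):
    """Initialize a small MLP as a list of (W, b) layer pairs."""
    return [(rng.standard_normal((m, n)) * 0.1, np.zeros(n))
            for m, n in zip(sizes[:-1], sizes[1:])]

def forward(params, x):
    """Plain tanh MLP with a linear output layer."""
    for i, (W, b) in enumerate(params):
        x = x @ W + b
        if i < len(params) - 1:
            x = np.tanh(x)
    return x

def n_params(params):
    return sum(W.size + b.size for W, b in params)

# f: the emulator network; g: the auxiliary interval network used only
# during training (sizes are illustrative assumptions).
f = init_mlp([5, 32, 1], rng)      # predicts the scalar target
g = init_mlp([5, 32, 2], rng)      # predicts (lower, upper) interval bounds

x = rng.standard_normal((4, 5))
y_hat = forward(f, x)              # training would use both f(x) and g(x)
intervals = forward(g, x)

# At inference only f is kept, so its parameter count equals that of a
# conventional DNN baseline with the same architecture.
dnn_baseline = init_mlp([5, 32, 1], rng)
assert n_params(f) == n_params(dnn_baseline)
```

The auxiliary network g adds training-time cost only; discarding it after training is why the inference footprint of LbC matches the baseline exactly.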