Fig. 4: Illustration of the uncertainty quantification (UQ) methods used.

From: Single-model uncertainty quantification in neural network potentials does not consistently outperform model ensembles

x denotes the input to the neural networks (NNs) and y the predicted property. For NNIPs, x generally represents the positions and atomic numbers of the input structure, while y is the energy and/or forces of that structure. Red text indicates the variables used as uncertainty estimates. a Multiple NNs trained as an ensemble each predict the desired property of the same structure; the mean and variance of the predictions are then calculated, and a larger variance across the ensemble implies higher uncertainty [34, 60]. b In the mean-variance estimation (MVE) method, the target is assumed to follow a Gaussian distribution, and the NN predicts the mean and variance of that distribution; a higher predicted variance indicates higher uncertainty [42, 60]. c In the deep evidential regression method, an evidential prior is placed over the parameters of the Gaussian likelihood, and the NN predicts the desired property together with the prior parameters, from which both the aleatoric and epistemic uncertainties are obtained [43, 44]. d In the Gaussian mixture model (GMM) method, the learned feature vectors ξ_x of the structures are assumed to be drawn from a mixture of Gaussian distributions. The negative log-likelihood (NLL) of a structure's features under a GMM fitted to the training set serves as the uncertainty estimate, with a higher NLL denoting higher uncertainty [41].
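Once the model outputs are in hand, each of the four uncertainty estimates reduces to a few lines of arithmetic. The following is a minimal Python sketch, using NumPy and scikit-learn, of how the estimates in panels a–d could be computed; the model outputs, array shapes, and mixture settings are illustrative assumptions, not the article's implementation.

# Minimal sketch (not the article's code): hypothetical model outputs and
# feature vectors stand in for real NN predictions.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(seed=0)

# (a) Ensemble: M models predict the same property; the spread across
# the predictions is the uncertainty estimate.
ensemble_preds = rng.normal(loc=-3.2, scale=0.05, size=5)  # M = 5 energies
ens_mean = ensemble_preds.mean()
ens_uncertainty = ensemble_preds.var()

# (b) MVE: a single NN head outputs (mu, sigma^2); the predicted variance
# itself is the uncertainty estimate.
mu, var = -3.18, 0.04  # hypothetical MVE outputs
mve_uncertainty = var

# (c) Deep evidential regression: the NN outputs the parameters
# (gamma, nu, alpha, beta) of a normal-inverse-gamma evidential prior.
gamma, nu, alpha, beta = -3.2, 2.0, 1.5, 0.1  # hypothetical outputs
aleatoric = beta / (alpha - 1.0)         # expected noise variance E[sigma^2]
epistemic = beta / (nu * (alpha - 1.0))  # variance of the mean Var[mu]

# (d) GMM: fit a mixture to the learned feature vectors xi_x of the
# training structures, then score a new structure's features.
train_features = rng.normal(size=(200, 8))  # xi_x for the training set
gmm = GaussianMixture(n_components=3, random_state=0).fit(train_features)
new_feature = rng.normal(size=(1, 8))       # xi_x for a new structure
gmm_uncertainty = -gmm.score_samples(new_feature)[0]  # NLL: higher = more uncertain

print(ens_uncertainty, mve_uncertainty, aleatoric, epistemic, gmm_uncertainty)

Note that only panel a requires training and evaluating multiple models; the single-model estimates in b–d reuse one network, which is the cost trade-off the article examines.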
