Fig. 1: Schematic architecture of NEP4 model and multi-loss evolutionary training algorithm. | Nature Communications

From: General-purpose machine-learned potential for 16 elemental metals and their alloys

a Schematic illustration of the architecture of the neuroevolution potential version 4 (NEP4) model with distinct sets of neural network (NN) parameters for different atom types, A (yellow), B (green), and C (blue). For a central atom of type A, the descriptor qA involves the cAJ parameters (J can be of any type), while the weight and bias parameters wA are specific to type A. The hidden layer in each NN is represented by x. Similar rules apply to central atoms of the other types. The total energy U is the sum of the site energies of all atoms in a given structure. By contrast, in neuroevolution potential version 3 (NEP3) all atom types share a common set of NN parameters w, which restricts the regression capacity.

b Schematic illustration of the multi-loss evolutionary training algorithm. For example, in a 3-component system, the optimization of the parameters related to atom type A (including wA, cAA, cAB, and cAC) is driven only by a loss function defined using the structures with the chemical compositions A, AB, and AC. In the conventional evolutionary algorithm used in NEP3, a single loss function is used to optimize all parameters, which is less effective for training general-purpose models for many-component systems.
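The two ideas in the caption — one NN parameter set per atom type with the total energy summed over per-atom site energies, and a per-type loss built only from structures containing that type — can be sketched in a toy NumPy example. This is a minimal illustration under assumed simplifications (random descriptors, a single hidden layer, made-up helper names such as `site_energy` and `per_type_loss`), not the actual NEP4 implementation in GPUMD:

```python
# Toy sketch of the NEP4 caption: each atom type t has its own NN parameter
# set w_t, and the total energy U is the sum of per-atom site energies, each
# routed to the NN of that atom's type. Descriptors here are random vectors;
# in NEP4 they are physical descriptors built from the c parameters.
import numpy as np

rng = np.random.default_rng(0)
N_DESC, N_HIDDEN = 4, 8  # descriptor size and hidden-layer width (toy values)

def init_params(n_types):
    """One independent parameter set (W1, b1, W2) per atom type."""
    return [
        {
            "W1": rng.normal(size=(N_HIDDEN, N_DESC)),
            "b1": rng.normal(size=N_HIDDEN),
            "W2": rng.normal(size=N_HIDDEN),
        }
        for _ in range(n_types)
    ]

def site_energy(q, p):
    """Single-hidden-layer NN: U_i = W2 . tanh(W1 q + b1)."""
    return p["W2"] @ np.tanh(p["W1"] @ q + p["b1"])

def total_energy(types, descriptors, params):
    """U = sum of site energies; each atom uses its own type's NN."""
    return sum(site_energy(q, params[t]) for t, q in zip(types, descriptors))

def per_type_loss(structures, params, t):
    """Multi-loss idea: the loss driving type-t parameters is computed
    only over the structures that actually contain atoms of type t."""
    relevant = [s for s in structures if t in s["types"]]
    return sum(
        (total_energy(s["types"], s["descriptors"], params) - s["U_ref"]) ** 2
        for s in relevant
    ) / max(len(relevant), 1)

# A toy 3-component structure: types 0, 1, 2 stand for A, B, C.
types = [0, 1, 2, 0]
descriptors = [rng.normal(size=N_DESC) for _ in types]
params = init_params(n_types=3)
U = total_energy(types, descriptors, params)
```

Because the site energies are routed per type, the energy of a structure without type C is independent of the type-C parameters, which is exactly what lets each loss function in panel b drive only its own parameter set.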
