Table 3 Summary of accuracy for all trained models (cutoff 0.6 nm) on the OE62 dataset compared to other long-range modeling approaches

From: Learning non-local molecular interactions via equivariant local representations and charge equilibration

| Model | U MAE [meV] | U RMSE [meV] | # Params. [millions] |
|---|---|---|---|
| **Allegro (–)** | | | |
| Baseline S | 63.4 | 123.7 | 0.17 |
| Baseline S+ | 60.0 | 114.5 | 0.20 |
| Baseline L | 61.1 | 120.9 | 0.19 |
| Baseline L+ | 61.8 | 116.6 | 0.22 |
| CELLI S | 55.3 | 116.7 | 0.21 |
| CELLI L | 55.1 | 114.3 | 0.29 |
| **MACE (2)** | | | |
| Baseline | 48.1 | 90.1 | 2.37 |
| CELLI | 48.0 | **88.3** | 2.52 |
| **DimeNet++ (3)** | | | |
| Baseline | 42.1 (53.8*) | 108.4 | 2.78 |
| Ewald (ref. 12) | 48.1 | – | 4.8 |
| Neural P3M (ref. 21) | **41.5** | – | – |
| **PaiNN (4)** | | | |
| Ewald (ref. 12) | 59.7 | – | 15.7 |
| Neural P3M (ref. 21) | 52.9 | – | – |

  1. The Small (S) versions of CELLI and Allegro, which were used to compute the benchmarks, use fewer irreps, a lower rotational order for the spherical harmonics, and a smaller hidden size for the charge-embedding networks than the Large (L) versions. The Allegro variants S+ and L+ contain one additional interaction layer compared with the corresponding CELLI versions. The number of message-passing steps of each model, where applicable, is given in parentheses after the model name; "(–)" indicates that message passing does not apply. Results for Ewald summation and Neural P3M on DimeNet++ and PaiNN were taken directly from refs. 12 and 21, respectively. The lowest errors are shown in bold. (*) As reported by Kosmala et al.
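The error columns in the table are the standard mean absolute error (MAE) and root-mean-square error (RMSE) of the predicted energy U, reported in meV. As an illustration only (the function name and the eV-to-meV unit handling are assumptions here, not part of the paper), the two metrics can be computed as:

```python
import math

def mae_rmse_mev(pred_ev, ref_ev):
    """MAE and RMSE of predicted vs. reference energies.

    Inputs are in eV; errors are converted to meV to match the
    units used in the table.
    """
    errors = [1000.0 * (p - r) for p, r in zip(pred_ev, ref_ev)]
    mae = sum(abs(e) for e in errors) / len(errors)
    rmse = math.sqrt(sum(e * e for e in errors) / len(errors))
    return mae, rmse

# Example: two predictions, one off by 0.1 eV (100 meV)
mae, rmse = mae_rmse_mev([0.0, 0.1], [0.0, 0.0])
# mae = 50.0 meV, rmse ≈ 70.7 meV
```

Note that RMSE is always at least as large as MAE and penalizes outliers more strongly, which is why the two columns can rank models differently.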