
Fig. 8: Comparison of model performance on the Jarvis-DFT formation energy dataset.

From: DenseGNN: universal and scalable deeper graph neural networks for high-performance property prediction in crystals and molecules


This figure compares the performance of the baseline models, DenseGNN, and DenseGNN-Lite on the Jarvis-DFT formation energy dataset. It lists the crystal graph statistics (total edges, average edges per graph, total nodes, average nodes per graph) for each model under its optimal edge-selection method, along with total model parameters, trainable parameters, MAE on the test set, and training and inference time per epoch. The nested graph networks have parameter counts that exceed those of DenseGNN and DenseGNN-Lite. All tests were conducted on an NVIDIA RTX 4090 GPU with a consistent batch size, learning rate, and other settings; the reported time per epoch for training and inference is the mean over 20 epochs.
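The quantities reported in the figure follow a simple measurement protocol. As a minimal sketch (not the authors' code, whose framework may differ), the following Python snippet shows how total and trainable parameter counts and the mean training time over 20 epochs could be computed; `model` and `loader` are hypothetical stand-ins for any trained GNN and its data loader.

```python
# A minimal sketch of the figure's measurement protocol, assuming PyTorch;
# the paper's own implementation may use a different framework.
import time
import torch
import torch.nn as nn

# Hypothetical stand-ins for the GNN under test and its data loader.
model = nn.Sequential(nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, 1))
loader = [(torch.randn(32, 64), torch.randn(32, 1)) for _ in range(10)]
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.L1Loss()  # MAE, matching the reported test metric

# Total vs. trainable parameter counts, as listed in the figure.
total_params = sum(p.numel() for p in model.parameters())
trainable_params = sum(p.numel() for p in model.parameters() if p.requires_grad)
print(f"total: {total_params}, trainable: {trainable_params}")

# Mean training time per epoch over 20 epochs, mirroring the caption's
# timing protocol (fixed batch size, learning rate, and other settings).
epoch_times = []
for _ in range(20):
    start = time.perf_counter()
    for x, y in loader:
        optimizer.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        optimizer.step()
    epoch_times.append(time.perf_counter() - start)
print(f"mean epoch time: {sum(epoch_times) / len(epoch_times):.3f} s")
```

Inference time per epoch would be measured the same way, but with the forward pass run under `torch.no_grad()` and no optimizer step.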
