Fig. 5: Pretraining & fine-tuning. | npj Computational Materials

From: De novo design of polymer electrolytes using GPT-based and diffusion-based generative models

a Performance of models pretrained on the PI1M and HTP-MD datasets. b Loss curves comparing pretraining + fine-tuning against training from scratch. c Model performance with varying amounts of training data (hypervolume computed with respect to validity & uniqueness). d Model performance with varying amounts of training data (hypervolume computed with respect to similarity & diversity). Each point in the plot corresponds to a different set of hyperparameters. e Pareto front comparing pretraining + fine-tuning against training from scratch (validity & uniqueness). f Pareto front comparing pretraining + fine-tuning against training from scratch (similarity & diversity).
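Panels c–f summarize each model configuration by the hypervolume its scores dominate in a two-metric space (e.g. validity & uniqueness). As a minimal sketch, assuming both metrics are maximized on [0, 1] with reference point (0, 0) (the paper's exact reference point and normalization are not stated here), the 2D hypervolume of a set of (metric1, metric2) points can be computed with a simple sweep; the function name is illustrative:

```python
def hypervolume_2d(points, ref=(0.0, 0.0)):
    """Area dominated by a set of 2D points (both objectives maximized),
    measured relative to a reference point `ref`. Dominated points
    contribute nothing, so passing the full set or just its Pareto
    front gives the same result."""
    # Sweep from the largest first coordinate to the smallest,
    # accumulating the rectangle each non-dominated point adds.
    pts = sorted(points, key=lambda p: p[0], reverse=True)
    hv = 0.0
    best_y = ref[1]
    for x, y in pts:
        if y > best_y:  # point extends the dominated region upward
            hv += (x - ref[0]) * (y - best_y)
            best_y = y
    return hv

# Example: two non-dominated hyperparameter settings
# (validity, uniqueness) = (1.0, 0.5) and (0.5, 1.0)
print(hypervolume_2d([(1.0, 0.5), (0.5, 1.0)]))  # -> 0.75
```

Comparing this scalar across hyperparameter sets (each plotted point in c and d) is what allows the pretrained + fine-tuned and from-scratch models to be ranked on two generation metrics at once.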
