Fig. 4: The effect of the size of the self-distillation data set in training.

From: NuFold: end-to-end approach for RNA tertiary structure prediction with flexible nucleobase center representation

Training curves are shown for NuFold trained with different amounts of self-distillation data: 100% (blue), 50% (orange), 33% (green), and 0% (red). In each batch, the ratio of PDB entries to distillation data was kept at 1:3, as in the baseline NuFold training. All values were computed on the validation data set at each training step. a FAPE loss (the lower, the better); b RMSD (Å) (the lower, the better); c lDDT Cα (the higher, the better); d GDT-TS (the higher, the better).
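The 1:3 in-batch mixing of PDB and self-distillation entries described above can be pictured with a short sampler. The following is a minimal sketch assuming a simple random-sampling scheme; the function name, batch size, and entry lists are hypothetical illustrations, not the NuFold training code.

```python
import random

def sample_batch(pdb_entries, distill_entries, batch_size=4, distill_fraction=0.75):
    """Draw one training batch keeping PDB:distillation at 1:3 (25% : 75%)."""
    n_distill = round(batch_size * distill_fraction)  # e.g. 3 of 4 samples
    n_pdb = batch_size - n_distill                    # e.g. 1 of 4 samples
    batch = random.sample(pdb_entries, n_pdb) + random.sample(distill_entries, n_distill)
    random.shuffle(batch)  # avoid a fixed PDB/distillation ordering within the batch
    return batch

# Illustrative data: shrinking the distillation pool to 50% (or 33%) of its
# full size, as in the experiment, changes how often each distilled structure
# recurs across epochs, while the in-batch 1:3 ratio stays fixed.
pdb = [f"pdb_{i}" for i in range(100)]
distill_full = [f"distill_{i}" for i in range(300)]
distill_half = distill_full[: len(distill_full) // 2]  # the 50% setting
print(sample_batch(pdb, distill_half))
```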