Table 1: Notations for the different scratch (SC) and transfer learning (TL) modeling configurations used in this work; an illustrative sketch of the TL configurations follows the table.
Notation | Description |
---|---|
Base | Naive model that simply uses the average property value of the training data as the predicted value |
SC : ML(EF) | ML model trained from scratch using elemental fractions (EF) as input |
SC : ML(PA) | ML model trained from scratch using physical attributes (PA) as input |
SC : DL(EF) | DL model trained from scratch using EF as input |
SC : DL(PA) | DL model trained from scratch using PA as input |
TL : ML(FeatExtr) | ML model trained on the activations extracted from the source model (excluding the last layer) |
TL : DL(FeatExtr) | DL model trained on the activations extracted from the source model (excluding the last layer) |
TL : FineTune | Fine-tuning of the same DL framework using the pre-trained weights of the source model |
TL : ModFineTune | Fine-tuning of the same DL framework using the pre-trained weights of the source model, except for the last layer, whose weights are randomly initialized |
TL : Freezing | DL model trained on the activations extracted from the last layer of the frozen source model |
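The TL configurations in Table 1 differ mainly in which pre-trained weights of the source model are reused and which parameters remain trainable on the target property. The following is a minimal, illustrative PyTorch-style sketch of the four TL variants; the framework choice, layer sizes, and names such as `source_model`, `feature_extractor`, and `target_head` are assumptions made purely for illustration and do not correspond to the actual models used in this work.

```python
import copy

import torch
import torch.nn as nn

# Hypothetical fully connected source model; the layer sizes (86 -> 1024 -> 512 -> 1)
# are placeholders, not the architecture used in this work. The final Linear layer
# outputs the predicted property value.
source_model = nn.Sequential(
    nn.Linear(86, 1024), nn.ReLU(),
    nn.Linear(1024, 512), nn.ReLU(),
    nn.Linear(512, 1),
)
# In practice, the pre-trained source weights would be loaded here, e.g.
# source_model.load_state_dict(torch.load("source_weights.pt")).

x = torch.randn(32, 86)  # dummy batch standing in for EF/PA inputs

# TL : ML(FeatExtr) / TL : DL(FeatExtr) -- run the source model without its last
# layer and use the resulting activations as fixed features for a new ML or DL model.
feature_extractor = nn.Sequential(*list(source_model.children())[:-1])
with torch.no_grad():
    features = feature_extractor(x)  # shape (32, 512); fed to e.g. a random forest

# TL : FineTune -- reuse all pre-trained weights and update every parameter on the
# target property.
finetune_model = copy.deepcopy(source_model)
finetune_optimizer = torch.optim.Adam(finetune_model.parameters(), lr=1e-4)

# TL : ModFineTune -- reuse all pre-trained weights except those of the last layer,
# which is replaced by a freshly (randomly) initialized layer before fine-tuning.
mod_finetune_model = copy.deepcopy(source_model)
mod_finetune_model[-1] = nn.Linear(512, 1)

# TL : Freezing -- freeze the source model entirely and train only a new network
# on the activations of its last layer.
frozen_source = copy.deepcopy(source_model)
for p in frozen_source.parameters():
    p.requires_grad = False
with torch.no_grad():
    last_layer_activations = frozen_source(x)
target_head = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1))
prediction = target_head(last_layer_activations)
```

In this reading, FeatExtr and Freezing keep the source network fixed and fit a new learner on its activations, whereas FineTune and ModFineTune continue to update the source network itself on the target-property data.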