Fig. 6: A diagram to explain how the experiments are carried out.
From: Disruption prediction for future tokamaks using parameter-based transfer learning

The bottom layers, which are closer to the inputs (the ParallelConv1D blocks in the diagram), are frozen: their parameters stay unchanged during further tuning of the model. The unfrozen layers (the upper layers closer to the output, i.e. the long short-term memory (LSTM) layer and the classifier made up of fully connected layers in the diagram) are further trained with the 20 EAST discharges. In cases 1-b, 2-b, and 3-b, the unfrozen layers are first replaced with new, untrained layers and then trained with the 20 discharges.
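The freeze-and-fine-tune procedure described above can be sketched as follows. This is a minimal illustration, not the paper's actual implementation: the module names, layer sizes, and the `DisruptionPredictor` class are hypothetical stand-ins for the architecture in the diagram, written here in PyTorch.

```python
import torch
import torch.nn as nn

class DisruptionPredictor(nn.Module):
    """Simplified stand-in for the paper's architecture (sizes are illustrative)."""
    def __init__(self, n_channels=4, hidden=32):
        super().__init__()
        # Bottom feature extractors: the "ParallelConv1D blocks" in the diagram,
        # one 1-D convolution per diagnostic channel.
        self.conv_blocks = nn.ModuleList(
            nn.Conv1d(1, 8, kernel_size=3, padding=1) for _ in range(n_channels)
        )
        # Upper layers closer to the output: LSTM plus fully connected classifier.
        self.lstm = nn.LSTM(input_size=8 * n_channels, hidden_size=hidden,
                            batch_first=True)
        self.classifier = nn.Sequential(
            nn.Linear(hidden, 16), nn.ReLU(), nn.Linear(16, 1)
        )

    def forward(self, x):  # x: (batch, channels, time)
        feats = [conv(x[:, i:i + 1, :]) for i, conv in enumerate(self.conv_blocks)]
        h = torch.cat(feats, dim=1).transpose(1, 2)  # (batch, time, features)
        out, _ = self.lstm(h)
        return self.classifier(out[:, -1, :])  # predict from the last time step

model = DisruptionPredictor()

# Freeze the bottom conv layers: their parameters stay fixed during fine-tuning.
for p in model.conv_blocks.parameters():
    p.requires_grad_(False)

# For cases 1-b, 2-b, and 3-b, the unfrozen layers would instead be replaced
# with freshly initialized (untrained) ones before fine-tuning, e.g.:
#   model.lstm = nn.LSTM(input_size=32, hidden_size=32, batch_first=True)
#   model.classifier = nn.Sequential(nn.Linear(32, 16), nn.ReLU(), nn.Linear(16, 1))

# Only the unfrozen parameters are handed to the optimizer, so the
# subsequent training on the 20 EAST discharges updates just the LSTM
# and the classifier.
trainable = [p for p in model.parameters() if p.requires_grad]
optimizer = torch.optim.Adam(trainable, lr=1e-4)
```

Which layers to freeze, replace, or retrain is exactly what distinguishes the experimental cases compared in the figure.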