Fig. 5: Architecture of the Predicting Abnormality with GCRN and LSTM (PAGL) framework.

From: Learning to predict rare events: the case of abnormal grain growth

a An example of voxelized grains represented as a sequence of time-varying microstructures. Each grain is assigned a unique color and number. b Microstructures are converted to graph representations that describe adjacency relations between grains together with their dynamic features (rectangles at right). c Graphs are input into three graph convolutional network (GCN) layers, which learn the local topology from one-hop, two-hop, and three-hop neighbors and update the individual node (grain) features. d The long short-term memory (LSTM) module, with multiple cells, learns the topological and feature variation across time steps. e The learned node (grain) representations are fed into a multilayer perceptron (MLP) classifier with a sigmoid function to predict the probability of becoming abnormal. f The framework outputs the prediction of whether a grain will become abnormal. Predicting Abnormality with LSTM (PAL) is a simplification of this architecture: it does not build the dynamic graphs in (b) but instead passes the dynamic features directly to the LSTM module (d) to learn grain representations, which are fed into an MLP classifier to predict abnormality.
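The panels describe a pipeline of per-time-step graph convolutions, a temporal LSTM, and an MLP classifier. Below is a minimal sketch of such a pipeline, assuming plain PyTorch; the class names (`SimpleGCNLayer`, `PAGLSketch`), hidden sizes, adjacency normalization, and use of the last LSTM output are illustrative assumptions, not details taken from the paper.

```python
import torch
import torch.nn as nn


class SimpleGCNLayer(nn.Module):
    """One graph-convolution step: aggregate one-hop neighbor features."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)

    def forward(self, x, adj_norm):
        # x: node (grain) features, shape (N, in_dim)
        # adj_norm: normalized adjacency with self-loops, shape (N, N)
        return torch.relu(self.linear(adj_norm @ x))


class PAGLSketch(nn.Module):
    def __init__(self, feat_dim, hidden_dim=64):
        super().__init__()
        # (c) three stacked GCN layers -> one-, two-, and three-hop context
        self.gcn = nn.ModuleList(
            [SimpleGCNLayer(feat_dim if i == 0 else hidden_dim, hidden_dim)
             for i in range(3)]
        )
        # (d) LSTM over the per-time-step node embeddings
        self.lstm = nn.LSTM(hidden_dim, hidden_dim, batch_first=True)
        # (e) MLP classifier with sigmoid output
        self.mlp = nn.Sequential(
            nn.Linear(hidden_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, 1), nn.Sigmoid(),
        )

    def forward(self, feats, adjs):
        # feats: (T, N, feat_dim) dynamic grain features over T time steps
        # adjs:  (T, N, N) normalized adjacency matrix for each time step
        embeddings = []
        for t in range(feats.shape[0]):
            h = feats[t]
            for layer in self.gcn:
                h = layer(h, adjs[t])
            embeddings.append(h)
        seq = torch.stack(embeddings, dim=1)        # (N, T, hidden_dim)
        out, _ = self.lstm(seq)
        # (e, f) probability of each grain becoming abnormal
        return self.mlp(out[:, -1, :]).squeeze(-1)  # shape (N,)
```

A PAL-style simplification, in this sketch, would skip the `self.gcn` stack and feed `feats` (reshaped to node-major sequences) directly into the LSTM before the MLP classifier.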