Extended Data Fig. 1: Processing flow of the fingerprinting graph block (FGNN). | Nature Machine Intelligence

From: Geometric deep learning reveals the spatiotemporal features of microscopic motion

FGNN, like other flavours of GNN layers, comprises three fundamental steps: edge feature update, edge feature aggregation, and node update. a, Input graph structure. Nodes contain features encoding the object’s position and relevant descriptors. Edges encode relational features between neighbouring nodes. In this example, the node of interest, labelled with the subscript i, receives information from connected nodes, labelled with the subscript j. b, Each edge in the graph is updated by applying a multilayer perceptron (MLP) to the concatenation of the features of the two nodes and the edge connecting them (equation (1)). c, During the aggregation of edge features to a node, the contribution of each edge is weighted according to the distance between the linked nodes using a function with free parameters, fw (equation (2)). d, fw is a super-Gaussian and defines a learnable local receptive field that allows the FGNN to adapt to heterogeneous dynamics. e, The current state of the nodes and the aggregate of the weighted edge features are concatenated and linearly transformed to obtain a local representation for each neighbourhood (equation (3)). Furthermore, the FGNN prepends a learnable node embedding \(\bf{U}\) to the local representation matrix, whose features provide global system-level insights. f, The nodes are updated using gated self-attention layers. The matrix resulting from the concatenation of \(\bf{U}\) with the local features is transformed by the trainable linear transformation matrices Q(z), K(z), P(z) to obtain queries, keys, and values, respectively; z denotes the index of the attention head. The self-attention weights are calculated as the dot product of the queries with the key matrix, and softmax normalizes them to be positive and to sum to 1 (equation (4)). Finally, the weighted values are multiplied by the gating values and passed through an MLP, which accounts for nonlinear interactions between nodes, to obtain the updated node features.
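The pipeline in panels a–f can be sketched as a single forward pass. The sketch below is illustrative only and makes several simplifying assumptions not taken from the figure: random untrained weights, a dense adjacency matrix, single-layer ReLU MLPs, one attention head, a sigmoid gate, and fixed super-Gaussian parameters; the names (`fgnn_block`, `super_gaussian`) and all dimensions are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def super_gaussian(r, sigma=1.0, p=2.0):
    # Panel d: learnable local receptive field f_w; sigma and p stand in
    # for the free parameters (fixed here for illustration).
    return np.exp(-((r / sigma) ** 2) ** p)

def fgnn_block(x, e, r, adj, d=8, rng=rng):
    """One illustrative FGNN pass over a dense graph.
    x: (N, Fn) node features; e: (N, N, Fe) edge features;
    r: (N, N) pairwise distances; adj: (N, N) boolean adjacency."""
    N, Fn = x.shape
    Fe = e.shape[-1]

    # Panel b: edge update, an MLP on the concatenation [x_i, x_j, e_ij]
    # (equation (1)); a one-layer ReLU network stands in for the MLP.
    W_e = rng.normal(size=(2 * Fn + Fe, Fe)) / np.sqrt(2 * Fn + Fe)
    pairs = np.concatenate(
        [np.repeat(x[:, None], N, 1), np.repeat(x[None], N, 0), e], axis=-1)
    e_new = np.maximum(pairs @ W_e, 0)

    # Panels c-d: distance-weighted aggregation with f_w (equation (2)).
    w = super_gaussian(r) * adj
    agg = (w[..., None] * e_new).sum(axis=1)          # (N, Fe)

    # Panel e: local representation plus the prepended global
    # embedding U (equation (3)).
    W_h = rng.normal(size=(Fn + Fe, d)) / np.sqrt(Fn + Fe)
    h = np.concatenate([x, agg], axis=-1) @ W_h       # (N, d)
    U = rng.normal(size=(1, d))                       # learnable global token
    z = np.concatenate([U, h], axis=0)                # (N + 1, d)

    # Panel f: gated self-attention node update (equation (4)),
    # single head, sigmoid gating assumed.
    Wq, Wk, Wv, Wg = (rng.normal(size=(d, d)) / np.sqrt(d) for _ in range(4))
    q, k, v = z @ Wq, z @ Wk, z @ Wv
    attn = softmax(q @ k.T / np.sqrt(d))              # rows sum to 1
    gate = 1.0 / (1.0 + np.exp(-(z @ Wg)))
    return np.maximum(gate * (attn @ v), 0)           # (N + 1, d): [U'; nodes]
```

With N = 5 nodes, the output has N + 1 rows because the updated global embedding is returned alongside the node features.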