Fig. 6: MAGIK attention mechanism.
From: Geometric deep learning reveals the spatiotemporal features of microscopic motion

a, Exemplary frame from a simulated video of particles moving with different diffusion coefficients and their corresponding trajectories (segmented lines). Edges and nodes corresponding to the detections of the two particles at the current frame (large circles) and at the two previous frames (small circles) are also shown. Scale bar, 20 px. b, The message-passing steps propagate information to a target node (\(\mathbf{v}_{i}^{t}\)) only from a limited number of previous frames (two in this example). c, The gated self-attention mechanism further encodes information from all nodes. The different attention heads \(z_i\) spatiotemporally cluster the other nodes and weigh their influence on the target node differently. d, Example of a ground-truth graph. The edges depict the network of associations used to infer dynamic properties without direct linking. The nodes are colour-coded according to the value of the target feature, that is, the displacement scaling factor \(\sqrt{2D}\). The green circle highlights the target node with respect to which the attention values shown in e and g are calculated. e, Attention maps (heads 1–6) for the graph in d, calculated with respect to the target node. f, Zoom-in on the rectangular region in d. g, Ground-truth trajectories (symbols and lines) corresponding to the graph in f. The symbols are colour-coded according to the value of attention head 6 calculated with respect to the target node. Independently of the spatial distance, nodes from the same trajectory have a larger influence on the reference node.
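To make the mechanism in panel c concrete, the following is a minimal NumPy sketch of one gated multi-head self-attention update over node embeddings, producing per-head node-to-node attention maps like those visualised in panel e. The shapes, the scaled-softmax attention, the sigmoid gate, and all variable names (`gated_self_attention`, `Wq`, `Wk`, `Wv`, `Wg`) are illustrative assumptions for this sketch, not the authors' MAGIK implementation.

```python
# Illustrative sketch of gated multi-head self-attention over graph node
# embeddings (assumed form, not the published MAGIK code).
import numpy as np

def gated_self_attention(X, Wq, Wk, Wv, Wg, n_heads):
    """X: (N, d) node embeddings; Wq/Wk/Wv/Wg: (d, d) projection matrices."""
    N, d = X.shape
    dh = d // n_heads                       # per-head dimension
    Q = (X @ Wq).reshape(N, n_heads, dh)    # queries
    K = (X @ Wk).reshape(N, n_heads, dh)    # keys
    V = (X @ Wv).reshape(N, n_heads, dh)    # values

    out = np.empty_like(Q)
    attn = np.empty((n_heads, N, N))
    for h in range(n_heads):
        scores = Q[:, h] @ K[:, h].T / np.sqrt(dh)      # pairwise similarities
        scores -= scores.max(axis=-1, keepdims=True)    # numerical stability
        A = np.exp(scores)
        A /= A.sum(axis=-1, keepdims=True)              # softmax over all nodes
        attn[h] = A                                     # per-head attention map
        out[:, h] = A @ V[:, h]                         # attended node features

    out = out.reshape(N, d)
    gate = 1.0 / (1.0 + np.exp(-(X @ Wg)))              # sigmoid gate
    return gate * out + (1.0 - gate) * X, attn          # gated residual update

# Tiny usage example: 12 detections, 6 heads (mirroring the six heads in panel e).
rng = np.random.default_rng(0)
N, d = 12, 24
X = rng.normal(size=(N, d))
Ws = [rng.normal(scale=d**-0.5, size=(d, d)) for _ in range(4)]
updated, attn = gated_self_attention(X, *Ws, n_heads=6)
print(updated.shape, attn.shape)   # (12, 24) (6, 12, 12)
```

The key point the sketch illustrates is that, unlike the message passing in panel b, the attention step lets every node contribute to the target node's update, with the per-head attention weights playing the role of the maps shown in panel e.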