Fig. 3

From: GTAT: empowering graph neural networks with cross attention

The structure of the GCA layer. The inputs are a set of node feature representations, \(H_l \in \mathbb{R}^{N \times F_1}\), and a set of node topology representations, \(T_l \in \mathbb{R}^{N \times F_2}\), where N is the number of nodes at layer l. After computing two attention matrices, denoted \(\alpha\) and \(\beta\), we employ a message-passing (M.P.) mechanism to obtain the new representations \(H_{l+1} \in \mathbb{R}^{N \times F_3}\) and \(T_{l+1} \in \mathbb{R}^{N \times F_2}\).
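The caption does not spell out how \(\alpha\), \(\beta\), and the M.P. step are computed, so the following is a minimal PyTorch sketch under stated assumptions: scaled dot-product cross attention between the feature channel \(H\) and the topology channel \(T\), with scores masked by the adjacency matrix so that the subsequent weighted sum acts as the message-passing step. The class name GCALayer, the linear projections, and the ELU nonlinearity are illustrative choices, not taken from the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class GCALayer(nn.Module):
    """Sketch of a graph cross-attention (GCA) layer (assumed design).

    alpha: feature channel H attends to topology channel T.
    beta:  topology channel T attends to feature channel H.
    Adjacency masking restricts attention to graph neighbours,
    which plays the role of the message-passing (M.P.) step.
    """

    def __init__(self, f1: int, f2: int, f3: int):
        super().__init__()
        # Projections for alpha (H -> T direction); output dim F3.
        self.q_h = nn.Linear(f1, f3)
        self.k_t = nn.Linear(f2, f3)
        self.v_t = nn.Linear(f2, f3)
        # Projections for beta (T -> H direction); output dim stays F2.
        self.q_t = nn.Linear(f2, f2)
        self.k_h = nn.Linear(f1, f2)
        self.v_h = nn.Linear(f1, f2)

    @staticmethod
    def _masked_attention(q, k, adj):
        # Scaled dot-product scores, kept only on edges of the graph.
        scores = q @ k.transpose(-2, -1) / q.size(-1) ** 0.5  # (N, N)
        scores = scores.masked_fill(adj == 0, float("-inf"))
        return torch.softmax(scores, dim=-1)

    def forward(self, h, t, adj):
        # h: (N, F1) features; t: (N, F2) topology encodings;
        # adj: (N, N) adjacency with self-loops so each row attends somewhere.
        alpha = self._masked_attention(self.q_h(h), self.k_t(t), adj)
        beta = self._masked_attention(self.q_t(t), self.k_h(h), adj)
        h_next = alpha @ self.v_t(t)  # H_{l+1}: (N, F3)
        t_next = beta @ self.v_h(h)   # T_{l+1}: (N, F2)
        return F.elu(h_next), F.elu(t_next)


# Toy usage: N = 5 nodes, F1 = 8, F2 = 4, F3 = 16.
N = 5
adj = (torch.rand(N, N) > 0.5).float()
adj = ((adj + adj.T + torch.eye(N)) > 0).float()  # symmetric + self-loops
layer = GCALayer(f1=8, f2=4, f3=16)
h_next, t_next = layer(torch.randn(N, 8), torch.randn(N, 4), adj)
print(h_next.shape, t_next.shape)  # torch.Size([5, 16]) torch.Size([5, 4])
```

Note that only the feature channel changes width (\(F_1 \to F_3\)) while the topology channel keeps \(F_2\), matching the shapes stated in the caption.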
