Fig. 3
From: Enhancing cross-view geo-localization through global-local quadrant interaction network

Comparison of our Integrated Global-Local Attention Module (IGLAM) with existing interaction mechanisms. (a) Self-attention concatenates local and global features before passing them through a self-attention block. (b) Cross-attention fuses the two feature streams via a cross-attention layer. (c) Co-attention applies a cross-attention layer followed by a self-attention block. (d) Our merged attention (IGLAM) first concatenates global and local features, then processes them through a single cross-attention block, enabling effective cross-view interaction.
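To make the variant in panel (d) concrete, the following is a minimal NumPy sketch of single-head merged attention. It assumes one plausible reading of the caption: the global and local token sequences are concatenated, and the fused sequence then serves as keys/values for the global queries in a single scaled dot-product cross-attention pass. The function names, shapes, and the choice of queries are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(queries, keys_values, dim):
    # single-head scaled dot-product attention:
    # each query token attends over all key/value tokens
    scores = queries @ keys_values.T / np.sqrt(dim)
    return softmax(scores, axis=-1) @ keys_values

def merged_attention(global_feats, local_feats):
    # panel (d), as sketched here: concatenate global and local
    # tokens, then run ONE cross-attention block where the fused
    # sequence supplies keys/values for the global queries
    # (assumed query choice -- the caption does not specify it)
    fused = np.concatenate([global_feats, local_feats], axis=0)
    dim = fused.shape[-1]
    return cross_attention(global_feats, fused, dim)

# toy usage: 2 global tokens and 3 local tokens of width 4
g = np.ones((2, 4))
l = np.ones((3, 4))
out = merged_attention(g, l)  # shape (2, 4)
```

By contrast, variant (a) would run self-attention over `fused` against itself, and variants (b)/(c) would cross-attend the two streams without concatenating them first; (d) collapses the fusion and interaction into one attention block.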