Table 1 Distribution of attention changes after attacks on ImageNet.

From: Universal attention guided adversarial defense using feature pyramid and non-local mechanisms

| Attack algorithm | \(\sigma_t > {\bar{\sigma}}_t\) | \(\sigma_t \le {\bar{\sigma}}_t\) |
|---|---|---|
| FGSM | 46.96% | 53.04% |
| I-FGSM | 69.99% | 30.01% |
| PGD | 71.59% | 28.41% |
| MI-FGSM | 69.60% | 30.40% |
| \(\hbox{DI}^2\)-FGSM | 77.86% | 22.14% |
| TI-FGSM | 73.62% | 26.38% |
| DeepFool | 29.18% | 70.82% |
| C&W | 26.52% | 73.48% |
| Square | 35.15% | 64.85% |

  1. \(\sigma _t > {\bar{\sigma }}_t\): Corresponds to Attention-shifting Attacks, where adversarial perturbations cause significant dispersion in the model’s attention.
  2. \(\sigma _t \le {\bar{\sigma }}_t\): Corresponds to Attention-attenuation Attacks, where adversarial perturbations reduce attention intensity in key regions without significantly shifting the attention position.
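The classification criterion behind the two columns can be illustrated with a short sketch. This is a minimal example, assuming \(\sigma_t\) is measured as the spatial standard deviation (dispersion) of a 2-D attention map (e.g., a Grad-CAM heatmap) computed on the adversarial example and \({\bar{\sigma}}_t\) is the corresponding value on the clean example; the function names `attention_spread` and `classify_attack` are illustrative and not taken from the paper.

```python
# Minimal sketch (assumed definitions): compare the spatial dispersion of the
# attention map on the adversarial example (sigma_t) against the dispersion on
# the clean example (sigma_bar_t) to label the attack type.
import numpy as np

def attention_spread(attn: np.ndarray) -> float:
    """Spatial std. of a 2-D attention map, treated as a probability mass over pixels."""
    attn = np.clip(attn, 0.0, None)
    p = attn / (attn.sum() + 1e-12)                      # normalize to a distribution
    ys, xs = np.indices(attn.shape)
    mean_y, mean_x = (p * ys).sum(), (p * xs).sum()      # attention centroid
    var = (p * ((ys - mean_y) ** 2 + (xs - mean_x) ** 2)).sum()
    return float(np.sqrt(var))                           # dispersion around the centroid

def classify_attack(attn_adv: np.ndarray, attn_clean: np.ndarray) -> str:
    """sigma_t > sigma_bar_t -> attention-shifting; otherwise attention-attenuation."""
    sigma_t = attention_spread(attn_adv)        # dispersion on the adversarial example
    sigma_bar_t = attention_spread(attn_clean)  # dispersion on the clean example
    return "attention-shifting" if sigma_t > sigma_bar_t else "attention-attenuation"
```

Under this reading, the gradient-based attacks (I-FGSM, PGD, MI-FGSM, \(\hbox{DI}^2\)-FGSM, TI-FGSM) mostly disperse attention, while the optimization- and query-based attacks (DeepFool, C&W, Square) mostly attenuate it, which is what the table reports.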