Table 6 Comparison of the grammar error counts of adversarial texts. In each row, the minimum value among the soft-label attacks (TextFooler, GA-A, PSO-A, LSHA) is shown in italics, and the minimum among the hard-label attacks (HLBB, TextHoaxer, LeapAttack, SSPAttack, QEAttack) is shown in bold.

From: Hard label adversarial attack with high query efficiency against NLP models

| Data | Model | TextFooler | GA-A | PSO-A | LSHA | HLBB | TextHoaxer | LeapAttack | SSPAttack | QEAttack |
|------|-------|------------|------|-------|------|------|------------|------------|-----------|----------|
| AG | BERT | 0.28 | 0.42 | 0.17 | *0.09* | 0.39 | 0.37 | 0.50 | **0.29** | 0.35 |
| | WordCNN | 0.35 | 0.51 | 0.19 | *-0.19* | 0.51 | 0.43 | 0.51 | 0.41 | **0.40** |
| | WordLSTM | 0.46 | 0.48 | 0.21 | *-0.23* | 0.50 | 0.39 | 0.46 | 0.38 | **0.37** |
| MR | BERT | 0.15 | 0.22 | *0.04* | 0.16 | 0.26 | **0.22** | 0.31 | 0.24 | 0.23 |
| | WordCNN | 0.16 | 0.22 | *0.06* | 0.19 | 0.31 | 0.27 | 0.28 | 0.26 | **0.25** |
| | WordLSTM | 0.13 | 0.24 | *0.03* | 0.15 | 0.29 | 0.19 | 0.28 | 0.21 | **0.18** |
| Yelp | BERT | 0.48 | 0.77 | *0.13* | 0.38 | **0.75** | 1.21 | 1.48 | 0.97 | 0.94 |
| | WordCNN | 0.61 | 0.78 | 0.45 | *0.40* | 0.97 | 0.91 | 1.28 | 0.90 | **0.85** |
| | WordLSTM | 0.74 | 0.72 | *0.35* | 0.39 | 0.89 | 1.06 | 1.21 | **0.78** | 0.83 |
| Yahoo | BERT | 0.34 | 0.37 | *0.08* | 0.17 | 0.71 | 0.72 | 0.87 | 0.79 | **0.58** |
| | WordCNN | 0.50 | 0.45 | 0.24 | *0.01* | 0.77 | 0.83 | 1.06 | 0.74 | **0.68** |
| | WordLSTM | 0.74 | 0.45 | 0.20 | *0.06* | 0.85 | 0.99 | 1.14 | 0.84 | **0.78** |
| IMDB | BERT | *0.00* | 0.78 | 0.16 | 0.34 | 0.95 | 0.99 | 1.48 | **0.76** | 0.86 |
| | WordCNN | *0.41* | 0.79 | 0.70 | *0.41* | 0.88 | 0.91 | 1.19 | 0.87 | **0.83** |
| | WordLSTM | 0.32 | 0.63 | *0.25* | 0.27 | 0.84 | 0.77 | 1.08 | 0.64 | **0.59** |
| Average | | 0.38 | 0.52 | 0.22 | *0.17* | 0.66 | 0.68 | 0.88 | 0.61 | **0.58** |
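
As a quick sanity check, the Average row and the italic/bold minima can be reproduced mechanically from the table values. The sketch below assumes the grouping stated in the caption (the first four columns are the soft-label attacks, the remaining five the hard-label attacks); the variable names are illustrative, not from the paper.

```python
# Minimal sketch: reproduce the Average row and the row-wise minima of
# Table 6 from the values above. The soft/hard split of the attack
# columns is an assumption based on the caption.

ATTACKS = ["TextFooler", "GA-A", "PSO-A", "LSHA",
           "HLBB", "TextHoaxer", "LeapAttack", "SSPAttack", "QEAttack"]
N_SOFT = 4  # assumed: first four columns are the soft-label attacks

# (dataset, model) -> grammar error counts, in ATTACKS column order.
ROWS = {
    ("AG", "BERT"):        [0.28, 0.42, 0.17, 0.09, 0.39, 0.37, 0.50, 0.29, 0.35],
    ("AG", "WordCNN"):     [0.35, 0.51, 0.19, -0.19, 0.51, 0.43, 0.51, 0.41, 0.40],
    ("AG", "WordLSTM"):    [0.46, 0.48, 0.21, -0.23, 0.50, 0.39, 0.46, 0.38, 0.37],
    ("MR", "BERT"):        [0.15, 0.22, 0.04, 0.16, 0.26, 0.22, 0.31, 0.24, 0.23],
    ("MR", "WordCNN"):     [0.16, 0.22, 0.06, 0.19, 0.31, 0.27, 0.28, 0.26, 0.25],
    ("MR", "WordLSTM"):    [0.13, 0.24, 0.03, 0.15, 0.29, 0.19, 0.28, 0.21, 0.18],
    ("Yelp", "BERT"):      [0.48, 0.77, 0.13, 0.38, 0.75, 1.21, 1.48, 0.97, 0.94],
    ("Yelp", "WordCNN"):   [0.61, 0.78, 0.45, 0.40, 0.97, 0.91, 1.28, 0.90, 0.85],
    ("Yelp", "WordLSTM"):  [0.74, 0.72, 0.35, 0.39, 0.89, 1.06, 1.21, 0.78, 0.83],
    ("Yahoo", "BERT"):     [0.34, 0.37, 0.08, 0.17, 0.71, 0.72, 0.87, 0.79, 0.58],
    ("Yahoo", "WordCNN"):  [0.50, 0.45, 0.24, 0.01, 0.77, 0.83, 1.06, 0.74, 0.68],
    ("Yahoo", "WordLSTM"): [0.74, 0.45, 0.20, 0.06, 0.85, 0.99, 1.14, 0.84, 0.78],
    ("IMDB", "BERT"):      [0.00, 0.78, 0.16, 0.34, 0.95, 0.99, 1.48, 0.76, 0.86],
    ("IMDB", "WordCNN"):   [0.41, 0.79, 0.70, 0.41, 0.88, 0.91, 1.19, 0.87, 0.83],
    ("IMDB", "WordLSTM"):  [0.32, 0.63, 0.25, 0.27, 0.84, 0.77, 1.08, 0.64, 0.59],
}

# Column means should match the Average row (0.38, 0.52, ..., 0.58).
for j, attack in enumerate(ATTACKS):
    mean = sum(row[j] for row in ROWS.values()) / len(ROWS)
    print(f"{attack:>10}: {mean:.2f}")

# Row-wise minima within each group reproduce the italic/bold markings
# (ties, e.g. IMDB/WordCNN at 0.41, are reported together).
for (data, model), row in ROWS.items():
    soft, hard = row[:N_SOFT], row[N_SOFT:]
    soft_best = [a for a, v in zip(ATTACKS[:N_SOFT], soft) if v == min(soft)]
    hard_best = [a for a, v in zip(ATTACKS[N_SOFT:], hard) if v == min(hard)]
    print(f"{data}/{model}: soft min {min(soft):.2f} {soft_best}, "
          f"hard min {min(hard):.2f} {hard_best}")
```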