Table 7 Comparison of the perplexity of adversarial texts. In each row, the minimum value among the soft-label attacks (TextFooler, GA-A, PSO-A, LSHA) is underlined, and the minimum value among the hard-label attacks (HLBB, TextHoaxer, LeapAttack, SSPAttack, QEAttack) is shown in bold.

From: Hard label adversarial attack with high query efficiency against NLP models

| Data | Model | TextFooler | GA-A | PSO-A | LSHA | HLBB | TextHoaxer | LeapAttack | SSPAttack | QEAttack |
|------|-------|------------|------|-------|------|------|------------|------------|-----------|----------|
| AG | BERT | 44.64 | <u>32.08</u> | 46.91 | 51.50 | 251.20 | 352.06 | 438.77 | **234.46** | 247.96 |
| AG | WordCNN | 46.66 | <u>35.50</u> | 46.87 | 52.66 | 206.12 | 266.23 | 307.54 | 182.65 | **176.05** |
| AG | WordLSTM | 48.01 | <u>31.73</u> | 42.67 | 48.15 | 254.12 | 364.78 | 416.77 | 234.76 | **230.50** |
| MR | BERT | <u>3.70</u> | 36.45 | 59.78 | 60.19 | 2719.32 | 3068.97 | 1856.51 | 1575.14 | **1128.96** |
| MR | WordCNN | 23.67 | 56.74 | 67.86 | <u>6.84</u> | 504.77 | 722.04 | 481.62 | 478.45 | **191.70** |
| MR | WordLSTM | <u>22.68</u> | 63.56 | 86.78 | 40.86 | 729.04 | 957.83 | 755.15 | 372.46 | **147.09** |
| Yelp | BERT | <u>20.65</u> | 21.36 | 20.89 | 24.23 | 110.49 | 124.75 | 173.90 | 97.62 | **85.85** |
| Yelp | WordCNN | 17.39 | <u>16.49</u> | 23.07 | 16.66 | 122.68 | 130.14 | 135.89 | **86.55** | 94.44 |
| Yelp | WordLSTM | <u>15.39</u> | 20.29 | 19.60 | 16.51 | **68.87** | 101.88 | 130.09 | 79.82 | 98.72 |
| Yahoo | BERT | 23.18 | 25.57 | <u>17.31</u> | 29.17 | 118.54 | 116.31 | 162.63 | 110.75 | **108.03** |
| Yahoo | WordCNN | 30.77 | 23.16 | <u>16.27</u> | 27.53 | 103.57 | 94.42 | 138.49 | 92.38 | **90.23** |
| Yahoo | WordLSTM | 36.00 | 23.05 | <u>22.74</u> | 25.99 | 121.06 | 129.23 | 164.33 | **111.97** | 130.77 |
| IMDB | BERT | <u>8.43</u> | 11.53 | 10.62 | 9.76 | 24.45 | 36.32 | 44.24 | 20.45 | **19.35** |
| IMDB | WordCNN | <u>6.76</u> | 12.72 | 9.48 | 8.43 | 21.93 | 27.05 | 31.90 | 17.86 | **15.95** |
| IMDB | WordLSTM | <u>5.85</u> | 11.32 | 9.20 | 8.44 | 22.00 | 25.23 | 33.15 | 16.97 | **15.40** |
| Average | | <u>23.59</u> | 28.10 | 33.34 | 28.46 | 358.54 | 434.48 | 351.40 | 247.49 | **185.40** |
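Perplexity here serves as a fluency proxy: lower values indicate adversarial texts that a language model finds more natural. As a minimal sketch of how such per-sentence scores are commonly computed, the snippet below uses a pretrained GPT-2 via Hugging Face Transformers; the table does not state which language model or scoring setup the authors used, so the model choice and details here are illustrative assumptions.

```python
# Sketch: sentence-level perplexity under a causal LM (GPT-2 assumed;
# the paper's exact scoring model is not specified in this table).
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

def perplexity(text: str) -> float:
    """Return exp(mean next-token negative log-likelihood) of `text`."""
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        # Passing labels=input_ids makes the model return the mean
        # cross-entropy over the shifted next-token predictions.
        loss = model(ids, labels=ids).loss
    return float(torch.exp(loss))

# Lower perplexity suggests a more fluent adversarial example.
print(perplexity("The movie was surprisingly good."))
```

Scoring the original and adversarial version of each test example this way, then averaging per attack, would yield numbers comparable in kind to those in Table 7.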