Table 6 Image classification performance of re-trained ShuffleNet and ViT with varying training epochs

From: Large language models driven neural architecture search for universal and lightweight disease diagnosis on histopathology slide images

ShuffleNet

| Training epochs | BreakHis Prec@1 (%)↑ | BreakHis FLOPs (M)↓ | BreakHis Params (M)↓ | Diabetic Prec@1 (%)↑ | Diabetic FLOPs (M)↓ | Diabetic Params (M)↓ |
|---|---|---|---|---|---|---|
| 10 | 94.50 | 326.09 | 2.82 | 66.94 | 328.00 | 2.81 |
| 20 | **99.98** | **213.30** | **1.80** | **73.22** | **240.25** | **2.10** |
| 30 | 96.37 | 305.47 | 2.76 | 68.86 | 250.79 | 2.23 |
| 40 | 95.32 | 327.45 | 2.82 | 63.93 | 258.03 | 2.40 |

ViT

| Training epochs | BreakHis Prec@1 (%)↑ | BreakHis FLOPs (G)↓ | BreakHis Params (M)↓ | Diabetic Prec@1 (%)↑ | Diabetic FLOPs (G)↓ | Diabetic Params (M)↓ |
|---|---|---|---|---|---|---|
| 10 | 97.63 | 5.35 | 27.19 | 48.63 | 4.95 | 25.12 |
| 20 | **98.08** | 4.95 | 25.12 | **70.38** | **4.13** | **20.99** |
| 30 | 96.78 | **4.77** | **24.24** | 66.12 | **4.13** | **20.99** |
| 40 | 96.90 | 5.00 | 25.42 | 64.75 | 4.95 | 25.12 |

  1. We vary the fine-tuning epochs from 10 to 40. The best metrics are highlighted in bold. For both the ShuffleNet and ViT searches, Pathology-NAS generally achieves the best re-training performance when fine-tuning for 20 epochs per search iteration.
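For reference, the sketch below shows one way the Params (M), FLOPs, and Prec@1 columns of this table could be measured for a re-trained candidate network. The fvcore FLOP counter, the torchvision `shufflenet_v2_x1_0` stand-in, the 224×224 input size, and the two-class head are assumptions for illustration; they are not the paper's exact searched architectures or evaluation pipeline.

```python
# Minimal sketch (assumed: torchvision stand-in model, fvcore counter,
# 224x224 inputs, 2-class head) of how Params (M), FLOPs (M), and Prec@1
# could be measured for a re-trained candidate network.
import torch
from torchvision.models import shufflenet_v2_x1_0
from fvcore.nn import FlopCountAnalysis

model = shufflenet_v2_x1_0(num_classes=2).eval()  # e.g. BreakHis: benign vs. malignant

# Params (M): total parameter count, in millions.
params_m = sum(p.numel() for p in model.parameters()) / 1e6

# FLOPs (M): fvcore counts one fused multiply-add as one FLOP.
dummy = torch.randn(1, 3, 224, 224)
flops_m = FlopCountAnalysis(model, dummy).total() / 1e6

@torch.no_grad()
def prec_at_1(model: torch.nn.Module, loader) -> float:
    """Top-1 precision (%) over a labelled evaluation DataLoader."""
    correct = total = 0
    for images, labels in loader:
        preds = model(images).argmax(dim=1)
        correct += (preds == labels).sum().item()
        total += labels.numel()
    return 100.0 * correct / total

print(f"Params: {params_m:.2f} M | FLOPs: {flops_m:.2f} M")
```

Note that fvcore counts one fused multiply-add as a single FLOP, so figures from counters that count multiplies and adds separately will be roughly twice as large; the convention should be fixed before comparing against the numbers in this table.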