Table 1 Image classification performance compared with SOTA models

From: Large language models driven neural architecture search for universal and lightweight disease diagnosis on histopathology slide images

| Models                          | BreakHis Prec@1 (%)↑ | BreakHis FLOPs↓ | BreakHis Params (M)↓ | Diabetic Prec@1 (%)↑ | Diabetic FLOPs↓ | Diabetic Params (M)↓ |
|---------------------------------|----------------------|-----------------|----------------------|----------------------|-----------------|----------------------|
| EfficientNet                    | 88.63                | 384.60M         | 3.97                 | 69.52                | 384.61M         | 3.97                 |
| ResNet                          | 95.10                | 4.13G           | 23.51                | 70.48                | 4.13G           | 23.52                |
| Pathology-NAS ShuffleNet (ours) | **99.98**            | 213.30M         | 1.80                 | **73.22**            | 240.25M         | 2.10                 |
| ViT-small                       | 87.33                | 4.25G           | 25.19                | 67.71                | 4.25G           | 21.59                |
| Swin-Transformer                | 83.59                | 15.17G          | 86.68                | 54.69                | 15.17G          | 86.68                |
| Pathology-NAS ViT (ours)        | **98.08**            | 4.95G           | 25.12                | **70.38**            | 4.13G           | 20.99                |

  1. For the ShuffleNet backbone, Pathology-NAS is compared with EfficientNet and ResNet in terms of Top-1 accuracy (Prec@1), FLOPs, and parameter count; for the ViT backbone, it is compared with ViT-small and Swin-Transformer on the same metrics. The best performance in each group is highlighted in bold. Pathology-NAS achieves the highest performance with the lowest FLOPs and parameter counts.
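
For readers who want to reproduce this kind of comparison, the sketch below shows one common way to measure the three reported metrics (Prec@1, FLOPs, and parameter count) for an image classifier in PyTorch. It is not the paper's code: the ResNet-50 stand-in model, the 224×224 dummy input, and the use of fvcore for FLOP counting are illustrative assumptions.

```python
# Minimal sketch (assumptions noted above) for measuring the metrics in Table 1:
# Top-1 accuracy (Prec@1), FLOPs, and parameter count of an image classifier.
import torch
import torchvision


def count_parameters_m(model: torch.nn.Module) -> float:
    """Total number of trainable parameters, in millions."""
    return sum(p.numel() for p in model.parameters() if p.requires_grad) / 1e6


@torch.no_grad()
def top1_accuracy(model: torch.nn.Module, loader) -> float:
    """Top-1 (Prec@1) accuracy over a DataLoader yielding (image, label) batches."""
    model.eval()
    correct, total = 0, 0
    for images, labels in loader:
        preds = model(images).argmax(dim=1)
        correct += (preds == labels).sum().item()
        total += labels.numel()
    return 100.0 * correct / total


if __name__ == "__main__":
    # Stand-in backbone; the paper's searched architectures would be used here instead.
    model = torchvision.models.resnet50(weights=None)
    print(f"Params: {count_parameters_m(model):.2f} M")

    # FLOPs are typically estimated with a profiler; fvcore is one common choice
    # (an assumption here -- the paper does not state which tool was used).
    try:
        from fvcore.nn import FlopCountAnalysis
        dummy = torch.randn(1, 3, 224, 224)  # assumed input resolution
        flops = FlopCountAnalysis(model, dummy).total()
        print(f"FLOPs: {flops / 1e9:.2f} G")
    except ImportError:
        print("Install fvcore to estimate FLOPs.")
```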