Table 7 Image segmentation performance of re-trained U-Net with varying training epochs

From: Large language models driven neural architecture search for universal and lightweight disease diagnosis on histopathology slide images

| Training epochs | BCSS Dice (%)↑ | BCSS IoU (%)↑ | BCSS FLOPs (G)↓ | BCSS Params (M)↓ | PanNuke Dice (%)↑ | PanNuke IoU (%)↑ | PanNuke FLOPs (G)↓ | PanNuke Params (M)↓ |
|---|---|---|---|---|---|---|---|---|
| 10 | 69.58 | 53.95 | 14.53 | 6.64 | 89.28 | 81.31 | 16.23 | 11.39 |
| 20 | **74.33** | **59.68** | 10.58 | 11.37 | 89.24 | 81.25 | 14.33 | 8.34 |
| 30 | 70.14 | 54.65 | 12.63 | 12.76 | **89.31** | **81.35** | 12.63 | 12.76 |
| 40 | 71.93 | 56.76 | 8.67 | 10.43 | 89.04 | 80.93 | 12.63 | 12.76 |

  1. We vary the number of fine-tuning epochs from 10 to 40. The best metrics are highlighted in bold. Pathology-NAS achieves the optimal Dice and IoU scores when fine-tuned for 20 epochs on BCSS and 30 epochs on PanNuke. The FLOPs and parameter counts across the different fine-tuning settings remain on the same order of magnitude.
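
The Dice and IoU scores reported above follow their standard definitions for segmentation masks. Below is a minimal, illustrative sketch of how these two metrics can be computed for a binary mask; the function name, NumPy usage, and example masks are assumptions for illustration, not code from the paper:

```python
import numpy as np

def dice_and_iou(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7):
    """Return (Dice %, IoU %) for two binary segmentation masks."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    dice = 2.0 * intersection / (pred.sum() + target.sum() + eps)  # 2|A∩B| / (|A|+|B|)
    iou = intersection / (union + eps)                             # |A∩B| / |A∪B|
    return 100.0 * dice, 100.0 * iou

# Toy example: two 4x4 masks of 12 pixels each, overlapping in 8 pixels.
pred = np.zeros((4, 4), dtype=bool);   pred[:, :3] = True
target = np.zeros((4, 4), dtype=bool); target[:, 1:] = True
print(dice_and_iou(pred, target))  # ~ (66.7, 50.0)
```

For multi-class segmentation (as on BCSS and PanNuke), such per-class scores are typically averaged across classes before being reported as a single percentage.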