Table 2 Detailed parameter configurations of the models used for SHL

From: Mitigating data bias and ensuring reliable evaluation of AI models with shortcut hull learning

| Model | Layer | Size | Params (M) | FLOPs (G) |
| --- | --- | --- | --- | --- |
| ResNet-50 [26] | layer3.0 | 14 × 14 | 23.51 | 4.12 |
| ViT-B/16 [41] | layers.8 | 14 × 14 | 88.17 | 16.86 |
| RepVGG-A2 [57] | stage_3.0 | 14 × 14 | 25.50 | 5.12 |
| Swin-T [58] | stages.2.blocks.0 | 14 × 14 | 27.52 | 4.36 |
| PViG-S [59] | stages.2.0 | 14 × 14 | 29.02 | 4.57 |
| ResNeXt-50 [68] | layer3.0 | 14 × 14 | 25.03 | 4.27 |
| Inception-V3 [63] | Mixed_6a | 12 × 12 | 23.83 | 5.75 |
| ConvMixer-1024/10 [69] | stages.8 | 16 × 16 | 24.38 | 5.55 |
| EfficientNet-B4 [70] | layers.4.0 | 14 × 14 | 19.34 | 4.66 |
| RegNetX-4.0GF [71] | layer3.0 | 14 × 14 | 22.12 | 4.00 |
| SE-ResNet-50 [72] | layer3.0 | 14 × 14 | 28.09 | 4.13 |

1. The naming convention for the layers follows that of MMPreTrain [67].
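
The layer names above identify the intermediate feature maps tapped for SHL, using dotted module paths. As a minimal sketch of how such a named submodule can be located and its feature map captured, the example below uses plain PyTorch with the torchvision ResNet-50, whose `layer3.0` path matches the convention shown in the table; it registers a forward hook on `layer3.0` and checks the 14 × 14 spatial size listed above. This extraction code is an illustrative assumption, not the authors' pipeline, and the printed parameter count may differ slightly from the table depending on which modules are included.

```python
# Illustrative sketch (assumption, not the authors' pipeline): locate the named
# submodule from the table (e.g. "layer3.0" in ResNet-50) and capture its
# feature map with a forward hook, checking the 14 x 14 size listed above.
import torch
from torchvision.models import resnet50

model = resnet50(weights=None).eval()  # untrained weights suffice for a shape check

# Total parameter count in millions (the table may count a different subset,
# e.g. excluding the classification head).
params_m = sum(p.numel() for p in model.parameters()) / 1e6
print(f"ResNet-50 params: {params_m:.2f} M")

features = {}

def hook(_module, _inputs, output):
    # Store the intermediate feature map produced by the hooked layer.
    features["layer3.0"] = output.detach()

# "layer3.0" follows the dotted module-path convention used in the table.
model.get_submodule("layer3.0").register_forward_hook(hook)

with torch.no_grad():
    model(torch.randn(1, 3, 224, 224))  # standard 224 x 224 input

print(features["layer3.0"].shape)  # torch.Size([1, 1024, 14, 14]) -> 14 x 14, as in the table
```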