Table 1 A comparison between ours and the baselines in model ranking

From: Network properties determine neural network performance

| Method (LLC) | CIFAR10 (5) | CIFAR10 (10) | CIFAR100 (5) | CIFAR100 (10) | SVHN (5) | SVHN (10) | Fashion MNIST (5) | Fashion MNIST (10) | Birds (5) | Birds (10) |
|---|---|---|---|---|---|---|---|---|---|---|
| Ours | 0.93 | 0.98 | 0.77 | 0.80 | 0.84 | 0.88 | 0.95 | 0.89 | 0.74 | 0.79 |
| BSV | 0.86 | 0.89 | 0.55 | 0.80 | 0.74 | 0.78 | 0.53 | 0.60 | 0.52 | 0.61 |
| LSV | 0.85 | 0.87 | 0.55 | 0.80 | 0.73 | 0.70 | 0.49 | 0.45 | 0.48 | 0.45 |
| BGRN | 0.74 | 0.78 | 0.45 | 0.60 | 0.63 | 0.65 | 0.57 | 0.59 | 0.53 | 0.52 |
| LC | 0.85 | 0.85 | 0.50 | 0.58 | 0.44 | 0.10 | 0.55 | 0.61 | 0.50 | — |
| LogME | 0.593 | 0.593 | 0.716 | 0.716 | −0.400 | −0.400 | 0.010 | 0.010 | 0.132 | 0.132 |
| LEEP | 0.635 | 0.635 | 0.593 | 0.593 | 0.338 | 0.338 | 0.159 | 0.159 | −0.243 | −0.243 |
| NCE | 0.743 | 0.743 | 0.816 | 0.816 | 0.152 | 0.152 | −0.029 | −0.029 | 0.049 | 0.049 |
| Imprv (%) | 9.1 | 10.2 | −5.7 | −2.0 | 12.4 | 13.3 | 65.3 | 49.2 | 40.1 | 30.6 |

LogME, LEEP, and NCE produce a single score per dataset (they do not use learning curves), so each per-dataset value is shown in both LLC columns. The LC entry missing due to the package failure described in note 1 is marked "—".

  1. LLC denotes the length of the learning curve, and Imprv denotes the relative improvement of our approach over the best baseline. The TMs (transferability measures) are evaluated using the https://github.com/thuml/LogME repository. Owing to a failure of the https://github.com/tdomhan/pylearningcurvepredictor package that LC depends on, one ρ value is missing at an LLC of 10; this does not affect our conclusions.
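The scores in the table are rank-correlation coefficients ρ between each method's predicted model ranking and the models' true final performance. As an illustrative sketch (assuming ρ is Spearman's rank correlation; the helper `spearman_rho` and the accuracy values below are hypothetical, not taken from the paper):

```python
def spearman_rho(xs, ys):
    """Spearman's rank correlation for tie-free data: rho = 1 - 6*sum(d^2)/(n(n^2-1))."""
    def ranks(v):
        # Rank 1 = smallest value; order[i] is the index of the i-th smallest element.
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0] * len(v)
        for rank, i in enumerate(order, start=1):
            r[i] = rank
        return r

    rx, ry = ranks(xs), ranks(ys)
    n = len(xs)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))


# Hypothetical example: true final accuracies of five models vs. a
# ranking method's predicted scores for the same five models.
true_acc = [0.91, 0.85, 0.78, 0.88, 0.80]
predicted = [0.60, 0.40, 0.20, 0.30, 0.55]

print(spearman_rho(predicted, true_acc))  # one pair of models is ranked out of order -> 0.6
```

A ρ of 1.0 means the method orders the candidate models exactly as their final accuracies would; values near 0 (or negative, as for some TM baselines in the table) indicate little or inverted agreement.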