Table 6 Dataset-level agreement of the crowd-annotated LC reference dataset. Images annotated by only one unique annotator are excluded.

From: A national-scale land cover reference dataset from local crowdsourcing initiatives in Indonesia

| Cases | Number of samples | Observed agreement | Expected agreement | Krippendorff's Alpha |
|---|---|---|---|---|
| Samples of all prior-label classes | 83769 | 0.78 | 0.67 | 0.34 |
| Samples with "Undisturbed Forest" prior label | 24360 | 0.93 | 0.92 | 0.12 |
| Samples with "Logged Over Forest" prior label | 18002 | 0.65 | 0.63 | 0.05 |
| Samples with "Oil Palm Monoculture" prior label | 5495 | 0.81 | 0.48 | 0.63 |
| Samples with "Tree Based Not Oil Palm" prior label | 17236 | 0.74 | 0.73 | 0.03 |
| Samples with "Shrub" prior label | 4730 | 0.79 | 0.79 | 0.02 |
| Samples with "Grass or Savanna" prior label | 1679 | 0.65 | 0.54 | 0.23 |
| Samples with "Cropland" prior label | 12267 | 0.72 | 0.51 | 0.42 |
| Samples with VHR image chips in RGB | 47141 | 0.79 | 0.68 | 0.34 |
| Samples with VHR image chips in grayscale | 36628 | 0.78 | 0.66 | 0.34 |
| Samples with VHR image chips acquired in 2010 (larger background image not displayed) | 21234 | 0.77 | 0.65 | 0.34 |
| Samples with VHR image chips acquired in 2015 or 2018 | 62535 | 0.79 | 0.68 | 0.34 |
| Excluding annotations by annotators whose expert agreement is worse than expected chance agreement | 76425 | 0.80 | 0.63 | 0.45 |
| Excluding annotations by annotators whose inter-annotator (majority) agreement is worse than expected chance agreement | 58916 | 0.73 | 0.60 | 0.33 |
| Excluding annotations by annotators whose expert agreement or inter-annotator (majority) agreement is worse than expected chance agreement | 51345 | 0.71 | 0.51 | 0.41 |

  1. See Table 3 for the number of samples with consensus response "Yes", i.e., samples for which the prior LC label is accepted/verified and the LC label is therefore known.
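As a reading aid, the Krippendorff's Alpha column appears consistent with the standard chance-corrected agreement form α = (A_o − A_e)/(1 − A_e), which for nominal data is equivalent to 1 − D_o/D_e, where A_o is observed agreement and A_e is expected (chance) agreement. The sketch below only illustrates this relationship between the three reported columns; it is not the authors' per-annotation computation of Krippendorff's alpha (normally derived from a coincidence matrix), and small discrepancies in the last digit reflect the two-decimal rounding of the table values.

```python
# Minimal sketch: relate the three reported columns of Table 6 via the
# chance-corrected agreement form alpha = (A_o - A_e) / (1 - A_e).
# Assumption: this reproduces the reported alpha only approximately, because
# the table values are rounded to two decimals; it is not the authors' code.

def chance_corrected_agreement(observed: float, expected: float) -> float:
    """Return (A_o - A_e) / (1 - A_e), i.e., agreement beyond chance."""
    return (observed - expected) / (1.0 - expected)

# Spot-check a few rows of Table 6 (observed, expected, reported alpha).
rows = {
    "All prior-label classes": (0.78, 0.67, 0.34),
    "Oil Palm Monoculture":    (0.81, 0.48, 0.63),
    "Cropland":                (0.72, 0.51, 0.42),
}

for case, (a_o, a_e, alpha_reported) in rows.items():
    alpha = chance_corrected_agreement(a_o, a_e)
    # Computed and reported values may differ in the last digit due to rounding.
    print(f"{case}: computed {alpha:.2f} vs reported {alpha_reported:.2f}")
```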