Table 3 Class-specific mAP and IoU scores for artefact detection for the top 30% of participants.

From: An objective comparison of detection and segmentation algorithms for artefacts in clinical endoscopy

Class-specific detection (mAP and IoU per artefact class):

| Team name | Blur mAP | Blur IoU | Contrast mAP | Contrast IoU | Specularity mAP | Specularity IoU | Saturation mAP | Saturation IoU | IA mAP | IA IoU | Bubbles mAP | Bubbles IoU | Instrument mAP | Instrument IoU |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| yangsuhui | 0.28 | 0.45 | 0.44 | 0.29 | 0.48 | 0.30 | 0.48 | 0.33 | 0.32 | 0.32 | 0.06 | 0.77* | 0.26 | 0.46 |
| ZhangPY | 0.33 | 0.41 | 0.41 | 0.41 | 0.35 | 0.34 | 0.45 | 0.38 | 0.20 | 0.40 | 0.20 | 0.27 | 0.24 | 0.62 |
| Keisecker | 0.31 | 0.50 | 0.40 | 0.38 | 0.36 | 0.29 | 0.38 | 0.43 | 0.23 | 0.37 | 0.18 | 0.26 | 0.30 | 0.56 |
| michaelqiyao | 0.37 | 0.22 | 0.47 | 0.25 | 0.48 | 0.22 | 0.52 | 0.29 | 0.31 | 0.26 | 0.24 | 0.08 | 0.30 | 0.33 |
| ilkayoksuz | 0.25 | 0.33 | 0.32 | 0.34 | 0.27 | 0.30 | 0.35 | 0.36 | 0.24 | 0.38 | 0.19 | 0.25 | 0.29 | 0.45 |
| swtnb | 0.34 | 0.23 | 0.44 | 0.21 | 0.28 | 0.27 | 0.32 | 0.36 | 0.23 | 0.33 | 0.17 | 0.30 | 0.25 | 0.52 |
| Faster R-CNN | 0.17 | 0.35 | 0.33 | 0.21 | 0.21 | 0.37 | 0.33 | 0.15 | 0.15 | 0.19 | 0.11 | 0.10 | 0.21 | 0.45 |
| RetinaNet | 0.21 | 0.20 | 0.32 | 0.25 | 0.12 | 0.17 | 0.39 | 0.32 | 0.12 | 0.24 | 0.18 | 0.15 | 0.16 | 0.27 |
| Merged | 0.32 | 0.37 | 0.45 | 0.37 | 0.37 | 0.31 | 0.43 | 0.41 | 0.26 | 0.39 | 0.23 | 0.30 | 0.27 | 0.51 |

  1. Off-the-shelf Faster R-CNN [20], RetinaNet [16], and a super detector, 'Merged', constructed by merging all consensus detections among participants, are reported as baselines for comparison. Teams are presented in decreasing order of detection score (score_d). The better the method, the higher the mAP and IoU.
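
As a rough illustration of the overlap measure behind the IoU columns, the Python sketch below computes intersection-over-union for axis-aligned boxes and shows one simple way a consensus merge across participants could be formed. The function names (`iou`, `consensus_merge`), the (x1, y1, x2, y2) box format, and the voting rule are assumptions for illustration only; the paper's actual procedure for building the 'Merged' baseline is not reproduced here.

```python
# Minimal sketch (not the authors' code): IoU for axis-aligned boxes and a
# hypothetical consensus merge of detections from several participating teams.

def iou(box_a, box_b):
    """Intersection-over-Union of two boxes given as (x1, y1, x2, y2)."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (ax2 - ax1) * (ay2 - ay1)
    area_b = (bx2 - bx1) * (by2 - by1)
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0


def consensus_merge(detections_per_team, iou_thr=0.5, min_votes=2):
    """Hypothetical consensus rule: keep a detection if at least `min_votes`
    teams (including the proposing one) predicted an overlapping box of the
    same class. `detections_per_team` is a list, one entry per team, each a
    list of (class_id, box) tuples. Duplicate agreeing boxes would still need
    non-maximum suppression in practice.
    """
    merged = []
    for t, dets in enumerate(detections_per_team):
        for cls, box in dets:
            votes = sum(
                any(c == cls and iou(box, b) >= iou_thr for c, b in other)
                for o, other in enumerate(detections_per_team) if o != t
            )
            if votes + 1 >= min_votes:  # +1 counts the proposing team itself
                merged.append((cls, box))
    return merged


if __name__ == "__main__":
    team_a = [(0, (10, 10, 50, 50))]
    team_b = [(0, (12, 12, 52, 48))]
    print(iou(team_a[0][1], team_b[0][1]))   # high overlap, roughly 0.82
    print(consensus_merge([team_a, team_b])) # both agreeing boxes are kept
```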