Table 1 Boundary position errors (in pixels) for each of the patch-based methods with comparison to the baseline.

From: Automatic choroidal segmentation in OCT images using supervised deep learning methods

| Method | ILM ME | ILM MAE | RPE ME | RPE MAE | CSI ME | CSI MAE |
| --- | --- | --- | --- | --- | --- | --- |
| RNN (32 × 32) | −0.08 (0.25) | 0.48 (0.10) | −0.19 (0.18) | **0.46 (0.12)** | 1.79 (4.84) | 4.44 (4.23) |
| RNN (32 × 32) [CE] | 0.01 (0.24) | **0.45 (0.12)** | −0.41 (0.18) | 0.57 (0.13) | 0.69 (3.06) | 3.69 (2.27) |
| RNN (64 × 32) | −0.08 (0.24) | 0.47 (0.12) | −0.35 (0.22) | 0.54 (0.14) | 0.35 (3.54) | 3.81 (2.93) |
| RNN (64 × 32) [CE] | −0.22 (0.23) | 0.52 (0.09) | −0.28 (0.22) | 0.52 (0.13) | 0.13 (2.57) | 3.43 (1.97) |
| RNN (64 × 64) | −0.19 (0.24) | 0.51 (0.09) | −0.43 (0.20) | 0.59 (0.14) | 0.42 (3.03) | 3.36 (2.52) |
| RNN (64 × 64) [CE] | −0.16 (0.25) | 0.50 (0.09) | −0.28 (0.22) | 0.51 (0.15) | −0.02 (2.71) | **3.23 (2.08)** |
| RNN (128 × 32) | −0.08 (0.35) | 0.50 (0.28) | −0.25 (0.22) | 0.50 (0.13) | 0.78 (3.31) | 3.66 (2.85) |
| RNN (128 × 32) [CE] | −0.14 (0.30) | 0.54 (0.19) | −0.19 (0.22) | 0.49 (0.12) | 0.07 (2.89) | 3.48 (2.13) |
| CNN 1 (32 × 32) | −0.04 (0.31) | 0.50 (0.25) | 0.03 (0.25) | 0.50 (0.14) | 2.17 (5.42) | 5.05 (4.75) |
| CNN 1 (32 × 32) [CE] | −0.25 (0.25) | 0.54 (0.08) | −0.31 (0.22) | 0.53 (0.13) | 1.10 (3.11) | 3.73 (2.35) |
| CNN 1 (64 × 32) | −0.20 (0.25) | 0.52 (0.09) | −0.30 (0.25) | 0.54 (0.14) | 1.18 (5.43) | 4.56 (4.75) |
| CNN 1 (64 × 32) [CE] | −0.09 (0.25) | 0.48 (0.10) | −0.40 (0.28) | 0.61 (0.16) | −0.35 (2.68) | 3.55 (1.95) |
| CNN 1 (64 × 64) | −0.09 (0.24) | 0.49 (0.11) | 0.14 (0.23) | 0.48 (0.13) | 1.17 (4.01) | 3.67 (3.51) |
| CNN 1 (64 × 64) [CE] | −0.14 (0.24) | 0.50 (0.10) | −0.35 (0.30) | 0.58 (0.17) | 0.01 (3.29) | 3.35 (2.79) |
| CNN 1 (128 × 32) | −0.31 (0.25) | 0.57 (0.09) | 0.08 (0.30) | 0.51 (0.13) | 0.43 (3.07) | 3.61 (2.61) |
| CNN 1 (128 × 32) [CE] | −0.04 (0.24) | 0.47 (0.10) | −0.22 (0.33) | 0.57 (0.16) | 1.04 (2.45) | 3.43 (1.89) |
| CNN 2 (32 × 32) | 0.01 (0.55) | 0.51 (0.51) | −0.44 (0.18) | 0.59 (0.13) | 1.26 (4.69) | 4.72 (3.95) |
| CNN 2 (32 × 32) [CE] | −0.14 (0.23) | 0.49 (0.09) | −0.23 (0.22) | 0.51 (0.13) | 0.20 (3.35) | 3.78 (2.39) |
| CNN 2 (64 × 32) | −0.13 (0.24) | 0.50 (0.09) | −0.40 (0.20) | 0.57 (0.14) | 1.32 (4.24) | 4.25 (3.58) |
| CNN 2 (64 × 32) [CE] | −0.26 (0.23) | 0.53 (0.09) | −0.65 (0.25) | 0.77 (0.18) | 0.25 (2.86) | 3.57 (2.14) |
| CNN 2 (64 × 64) | −0.30 (0.23) | 0.55 (0.10) | −0.39 (0.20) | 0.57 (0.14) | 0.77 (4.60) | 4.12 (3.97) |
| CNN 2 (64 × 64) [CE] | −0.19 (0.25) | 0.54 (0.09) | −0.53 (0.27) | 0.68 (0.18) | 0.54 (3.36) | 3.54 (2.69) |
| CNN 2 (128 × 32) | −0.15 (0.23) | 0.49 (0.10) | −0.40 (0.19) | 0.57 (0.13) | 1.78 (4.10) | 4.18 (3.38) |
| CNN 2 (128 × 32) [CE] | −0.19 (0.29) | 0.56 (0.18) | −0.31 (0.24) | 0.58 (0.13) | 0.86 (2.98) | 3.61 (2.34) |
| Baseline (ref. 37) | −0.27 (0.41) | 0.58 (0.36) | −1.14 (0.65) | 1.23 (0.60) | −3.64 (8.62) | 5.82 (7.77) |

Mean error (ME) and mean absolute error (MAE) are reported as the mean value with the per-B-scan standard deviation in parentheses, for each of the three boundaries (ILM, RPE and CSI). [CE] indicates that the network was trained and tested on images pre-processed using contrast enhancement. CNN 1: Cifar CNN; CNN 2: Complex CNN. The best result for each boundary is highlighted in bold text.
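
To make the two metrics concrete, the sketch below shows one way to compute the per-B-scan ME and MAE and their across-B-scan mean and standard deviation. This is not the authors' code; the array shapes, the column-wise boundary representation, the choice of population standard deviation (ddof = 0), and the random stand-in data are all assumptions made for illustration.

```python
import numpy as np

def boundary_errors(pred, truth):
    """Per-B-scan mean error (ME) and mean absolute error (MAE), in pixels.

    pred, truth: arrays of shape (n_bscans, n_columns) giving the boundary's
    row position at each A-scan column of each B-scan.
    Returns ((ME mean, ME std), (MAE mean, MAE std)) across B-scans.
    """
    diff = pred - truth                        # signed error per A-scan column
    me_per_bscan = diff.mean(axis=1)           # ME for each B-scan
    mae_per_bscan = np.abs(diff).mean(axis=1)  # MAE for each B-scan

    # Reported as "mean (per-B-scan standard deviation)"; ddof=0 is an assumption.
    return ((me_per_bscan.mean(), me_per_bscan.std()),
            (mae_per_bscan.mean(), mae_per_bscan.std()))


# Example with random stand-in data (not real segmentation output)
rng = np.random.default_rng(0)
truth = rng.uniform(300, 400, size=(60, 1536))          # 60 B-scans, 1536 columns
pred = truth + rng.normal(0.5, 3.0, size=truth.shape)   # simulated prediction error
(me, me_sd), (mae, mae_sd) = boundary_errors(pred, truth)
print(f"ME  = {me:.2f} ({me_sd:.2f}) px")
print(f"MAE = {mae:.2f} ({mae_sd:.2f}) px")
```

Note that ME preserves the sign of the error, so over- and under-segmentation can cancel (an ME near zero with a large MAE indicates unbiased but noisy boundary placement), whereas MAE reflects the magnitude of the error regardless of direction.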