Table 1 Architectures and hyperparameters of 2D U-Net, 2.5Da U-Net, and 3D U-Net structures.

From: Improving performance of deep learning models using 3.5D U-Net via majority voting for tooth segmentation on cone beam computed tomography

  

|  | 2D U-Net | 2.5Da U-Net | 3D U-Net |
| --- | --- | --- | --- |
| Architecture |  |  |  |
| Convolution | Size = 3 × 3, stride = 1, zero-padding | Size = 3 × 3, stride = 1, zero-padding | Size = 3 × 3 × 3, stride = 1, zero-padding |
| Down-sampling (max pooling) | Size = 2 × 2, stride = 1 | Size = 2 × 2, stride = 1 | Size = 2 × 2 × 2, stride = 1 |
| Up-sampling | Size = 2 × 2, stride = 1 | Size = 2 × 2, stride = 1 | Size = 2 × 2 × 2, stride = 1 |
| Activation function | ReLU | ReLU | ReLU |
| U-Net layers | 4 | 4 | 4 |
| First-layer features | 32 | 32 | 32 |
| Hyperparameters |  |  |  |
| Input data size | 512 × 512 × 1 | 512 × 512 × 3 | 64 × 64 × 128 |
| Optimizer | Adam | Adam | Adam |
| Loss function | BCE | BCE | BCE |
| Initial learning rate | 0.0001 | 0.0001 | 0.0001 |
| Batch size | 12 | 12 | 6 |
| Epochs | 150 | 150 | 200 |
| Callback functions | Reduce learning rate (new LR = LR × 0.95 when val_loss has not improved for 10 epochs) and early stopping (training stops when val_loss has not improved for 50 epochs); applied to all three models |  |  |

Abbreviations: Adam, adaptive moment estimation; BCE, binary cross entropy; ReLU, rectified linear unit.
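
The training configuration in Table 1 maps directly onto standard deep-learning framework options. The sketch below illustrates the 2D U-Net column, assuming a Keras/TensorFlow implementation (the table does not name the framework used); the `conv_block` helper, the layer arrangement, and the placeholder datasets `train_ds`/`val_ds` are illustrative assumptions, not the authors' code.

```python
import tensorflow as tf
from tensorflow.keras import layers, callbacks

def conv_block(x, filters):
    # 3 x 3 convolutions, stride 1, zero-padding ("same"), ReLU activation (Table 1, Architecture)
    x = layers.Conv2D(filters, kernel_size=3, strides=1, padding="same", activation="relu")(x)
    x = layers.Conv2D(filters, kernel_size=3, strides=1, padding="same", activation="relu")(x)
    return x

inputs = layers.Input(shape=(512, 512, 1))        # 2D U-Net input size: 512 x 512 x 1
x = conv_block(inputs, 32)                        # 32 features in the first layer
x = layers.MaxPooling2D(pool_size=(2, 2))(x)      # 2 x 2 max-pooling down-sampling
# ... remaining encoder/decoder levels (4 U-Net layers in total) omitted for brevity ...
x = layers.UpSampling2D(size=(2, 2))(x)           # 2 x 2 up-sampling
outputs = layers.Conv2D(1, kernel_size=1, activation="sigmoid")(x)
model = tf.keras.Model(inputs, outputs)

model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=1e-4),  # Adam, initial learning rate 0.0001
    loss="binary_crossentropy",                               # BCE loss
)

# Callback functions from Table 1: multiply the learning rate by 0.95 when val_loss
# has not improved for 10 epochs, and stop training when val_loss has not improved
# for 50 epochs.
cbs = [
    callbacks.ReduceLROnPlateau(monitor="val_loss", factor=0.95, patience=10),
    callbacks.EarlyStopping(monitor="val_loss", patience=50),
]

# Hypothetical usage; train_ds and val_ds are placeholder datasets.
# model.fit(train_ds, validation_data=val_ds, epochs=150, batch_size=12, callbacks=cbs)
```

For the 2.5Da and 3D U-Net columns, the input shape would change to 512 × 512 × 3 and 64 × 64 × 128, respectively (the latter using 3D layers such as `Conv3D`, `MaxPooling3D`, and `UpSampling3D`), with a batch size of 6 and 200 epochs for the 3D model, as listed in the table.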