Table 2 Training protocol for the Spinal Cord Gray Matter Challenge dataset.

From: Spinal cord gray matter segmentation using deep dilated convolutions

Resampling and Cropping

All volumes were resampled to a voxel size of 0.25 × 0.25 mm, the highest resolution found among the acquisitions. All axial slices were then center-cropped to 200 × 200 pixels.
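A minimal sketch of this preprocessing step, assuming the volumes are NumPy arrays with known in-plane spacing (the authors' actual implementation may differ):

```python
# Illustrative sketch (not the authors' code): resample a volume to
# 0.25 x 0.25 mm in-plane resolution and center-crop each axial slice
# to 200 x 200 pixels. `volume` is assumed to be a NumPy array of shape
# (H, W, n_slices) with in-plane spacing `spacing` (in mm).
import numpy as np
from scipy.ndimage import zoom

def resample_and_crop(volume, spacing, target=0.25, size=200):
    # Zoom factors along the two in-plane axes; slices are left untouched.
    factors = (spacing[0] / target, spacing[1] / target, 1.0)
    resampled = zoom(volume, factors, order=1)  # linear interpolation

    # Center-crop each axial slice (assumes slices are at least
    # size x size after resampling; smaller slices would need padding).
    h, w = resampled.shape[:2]
    top = (h - size) // 2
    left = (w - size) // 2
    return resampled[top:top + size, left:left + size, :]
```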

Normalization

We performed only mean centering and standard deviation normalization of the volume intensities.
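The stated normalization amounts to a per-volume z-score; a minimal sketch:

```python
# Mean centering and standard deviation normalization of a volume,
# as described above; `eps` guards against division by zero.
import numpy as np

def normalize(volume, eps=1e-8):
    return (volume - volume.mean()) / (volume.std() + eps)
```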

Train/validation split

For the train/validation split, we used 8 subjects (2 from each site) for validation and the rest for training. The test set was defined by the challenge. We did not employ any external data or use the vertebral information from the provided dataset; only the provided GM masks were used for training and validation.
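A sketch of such a subject-level split; the site/subject mapping passed in is purely illustrative, not the challenge's actual IDs:

```python
# Hypothetical subject-level split: 2 validation subjects per site
# (8 total across 4 sites), the rest used for training.
import random

def split_by_site(subjects_by_site, n_val_per_site=2, seed=0):
    rng = random.Random(seed)
    train, val = [], []
    for site, subjects in subjects_by_site.items():
        shuffled = subjects[:]
        rng.shuffle(shuffled)
        val.extend(shuffled[:n_val_per_site])
        train.extend(shuffled[n_val_per_site:])
    return train, val
```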

Batch size

We used a small batch size of 11 samples.
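In a PyTorch-style pipeline (the framework here is an assumption), the batch size would be set in the data loader; the dataset below is a placeholder sized to yield 32 batches per epoch, matching the iteration count stated further down:

```python
# Placeholder dataset of 2D slices (real data would come from the
# preprocessed volumes); 352 samples = 32 batches of 11.
import torch
from torch.utils.data import DataLoader, TensorDataset

train_dataset = TensorDataset(torch.randn(352, 1, 200, 200))
train_loader = DataLoader(train_dataset, batch_size=11, shuffle=True)
```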

Optimization

We used the Adam48 optimizer with a small learning rate \(\eta =0.001\).
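A corresponding optimizer setup, again assuming PyTorch; the model is a placeholder standing in for the dilated convolution network described in the paper:

```python
# Adam with the stated learning rate of 0.001.
import torch
import torch.nn as nn

model = nn.Conv2d(1, 1, kernel_size=3, padding=1)  # placeholder model
optimizer = torch.optim.Adam(model.parameters(), lr=0.001)
```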

Batch Normalization

We used a momentum of \(\varphi =0.1\) for BatchNorm due to the small batch size.
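In PyTorch conventions (where `momentum` is the weight given to the current batch statistics when updating the running mean and variance), this corresponds to the setting below; the feature count is arbitrary:

```python
# BatchNorm with momentum 0.1, assuming PyTorch's definition of momentum.
import torch.nn as nn

bn = nn.BatchNorm2d(num_features=64, momentum=0.1)
```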

Dropout

We used a dropout rate of 0.4.
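A sketch assuming spatial dropout on feature maps (whether standard or 2D dropout was used is not stated here):

```python
# Dropout zeroing 40% of feature maps during training.
import torch.nn as nn

drop = nn.Dropout2d(p=0.4)
```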

Learning Rate Scheduling

Similar to previous work21, we used the “poly” learning rate policy, where the learning rate is defined by \(\eta = \eta_{0}{(1-\frac{n}{N})}^{p}\), where \(\eta_{0}\) is the initial learning rate, N the total number of epochs, n the current epoch, and p the power, with p = 0.9.
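This policy maps directly onto a multiplicative schedule; a sketch using PyTorch's LambdaLR, which scales the initial learning rate by the returned factor (model and optimizer are placeholders, as in the sketches above):

```python
# Poly learning rate policy: eta = eta_0 * (1 - n/N)^p.
import torch
import torch.nn as nn
from torch.optim.lr_scheduler import LambdaLR

model = nn.Conv2d(1, 1, kernel_size=3, padding=1)  # placeholder model
optimizer = torch.optim.Adam(model.parameters(), lr=0.001)

N, p = 1000, 0.9  # total epochs and power, per this protocol
scheduler = LambdaLR(optimizer, lr_lambda=lambda n: (1 - n / N) ** p)
```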

Iterations

We trained the model for 1,000 epochs, with 32 batches per epoch.
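Pulling the pieces above together, the training schedule would look roughly like the skeleton below; `model`, `optimizer`, `scheduler`, and `train_loader` are assumed from the earlier sketches, and the loss is a placeholder rather than the paper's actual objective:

```python
for epoch in range(1000):
    for (x,) in train_loader:     # 32 batches of 11 samples per epoch
        optimizer.zero_grad()
        out = model(x)
        loss = out.mean()         # placeholder loss, not the paper's
        loss.backward()
        optimizer.step()
    scheduler.step()              # apply the poly policy once per epoch
```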

Data augmentation

We applied the following data augmentations: rotation, shift, scaling, channel shift, flipping and elastic deformation40. The data augmentation parameters were chosen using random search; more details about these parameters are presented in Table 3.
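Most of these transforms are standard; elastic deformation is the least common, so a sketch of the classic smoothed random displacement-field approach follows. The `alpha` and `sigma` values are placeholders, since the actual parameters (tuned by random search) are given in Table 3:

```python
# Illustrative elastic deformation of a 2D slice: smooth a random
# displacement field with a Gaussian (scale `sigma`), scale it by
# `alpha`, and resample the image along the displaced coordinates.
import numpy as np
from scipy.ndimage import gaussian_filter, map_coordinates

def elastic_deform(image, alpha=100.0, sigma=10.0, seed=None):
    rng = np.random.default_rng(seed)
    dx = gaussian_filter(rng.uniform(-1, 1, image.shape), sigma) * alpha
    dy = gaussian_filter(rng.uniform(-1, 1, image.shape), sigma) * alpha
    y, x = np.meshgrid(np.arange(image.shape[0]),
                       np.arange(image.shape[1]), indexing="ij")
    coords = np.vstack([(y + dy).ravel(), (x + dx).ravel()])
    return map_coordinates(image, coords, order=1).reshape(image.shape)
```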