Fig. 5: DL-OCT inference time as a function of the B-Scan batch size for blind testing. | Light: Science & Applications

From: Neural network-based image reconstruction in swept-source optical coherence tomography using undersampled spectral data

a With increasing batch size, the average inference time per B-Scan (512 A-lines) decreases rapidly owing to the parallelizable nature of the neural network computation. The average inference time converged to ~6.73 ms per B-Scan for a batch size of 128. When the number of channels in the neural network’s first layer was reduced from 48 to 16, the average inference time improved further to ~1.69 ms per B-Scan. GPU memory size limited further reduction of the average inference time of DL-OCT. By using 8 NVIDIA Tesla A100 GPUs in parallel, the inference time was further reduced to ~1.42 ms and ~0.59 ms per B-Scan for the 48-channel and 16-channel networks, respectively. All inference times were obtained by averaging 1000 independent runs on a desktop computer (see “Materials and methods” section). b Sample fields-of-view are shown for the network input, the network output (48 channels vs. 16 channels in the first layer), and the ground truth images. PSNR and SSIM values are displayed for each field-of-view.
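The timing protocol described in panel a (warm-started inference repeated over many independent runs, then normalized by the number of B-Scans in the batch) can be sketched as follows. This is an illustrative harness, not the authors' code: the function name, array shapes, and the toy stand-in for the DL-OCT network are assumptions for demonstration.

```python
import time
import numpy as np

def avg_time_per_bscan(infer_fn, batch_size, n_runs=1000,
                       n_alines=512, n_samples=1024):
    """Average per-B-Scan inference time over n_runs.

    infer_fn   : callable taking one batch of B-Scans (hypothetical signature)
    batch_size : number of B-Scans processed per forward pass
    Shapes (n_alines, n_samples) are illustrative placeholders.
    """
    batch = np.random.rand(batch_size, n_alines, n_samples).astype(np.float32)
    infer_fn(batch)  # warm-up run, excluded from timing
    t0 = time.perf_counter()
    for _ in range(n_runs):
        infer_fn(batch)
    elapsed = time.perf_counter() - t0
    # larger batch_size amortizes per-call overhead, as in panel a
    return elapsed / (n_runs * batch_size)

# toy stand-in for the network's forward pass
per_bscan_s = avg_time_per_bscan(lambda x: x * 2.0, batch_size=128, n_runs=10)
```

Sweeping `batch_size` over powers of two with a harness like this reproduces the qualitative trend of panel a: per-B-Scan time falls as fixed per-call overhead is amortized across the batch, until memory or compute saturates.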