Table 5 Execution times and resource requirements of training and inference on GPU and CPU.
| Approach | Inference GPU | Inference CPU | Training (one fold) | GFLOPs | RAM | VRAM |
|---|---|---|---|---|---|---|
| 2.5D Tiramisu [17] | 7 s | 5.5 min | 12.5 h | 23.9 | 6 GB | 19.3 GB |
| nnU-Net [19] | 15 s (5 s) | 207 min (5.2 min) | 34.7 h (10.4 h) | 2.74e+03 | 22 GB | 9.2 GB |
| nnU-Net small [20] | 12 s (4 s) | 164 min (4.2 min) | 29.4 h (9.2 h) | 2.21e+03 | 22 GB | 8.9 GB |
| Multi-branch U-Net (proposed) | 4 s | 67 s | 4.2 h | 53.2 | 4 GB | 9.4 GB |
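The GPU and CPU inference columns correspond to wall-clock forward-pass times. The sketch below is a minimal illustration of how such timings can be measured, assuming a PyTorch implementation; the model, input shape, and warm-up/run counts are placeholders rather than the paper's actual benchmarking code.

```python
# Minimal timing sketch (assumption: PyTorch model; `model` and the input
# shape are hypothetical placeholders, not the paper's actual code).
import time
import torch

def time_inference(model, x, device, warmup=3, runs=10):
    """Average forward-pass wall-clock time of `model` on `x` for `device`."""
    model = model.to(device).eval()
    x = x.to(device)
    with torch.no_grad():
        for _ in range(warmup):           # warm-up excludes one-off setup costs
            model(x)
        if device.type == "cuda":
            torch.cuda.synchronize()      # ensure warm-up kernels have finished
        start = time.perf_counter()
        for _ in range(runs):
            model(x)
        if device.type == "cuda":
            torch.cuda.synchronize()      # wait for all queued GPU kernels
    return (time.perf_counter() - start) / runs

# Example usage with a hypothetical single-channel 3D input volume:
# gpu_s = time_inference(net, torch.randn(1, 1, 128, 128, 128), torch.device("cuda"))
# cpu_s = time_inference(net, torch.randn(1, 1, 128, 128, 128), torch.device("cpu"))
```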