Table 1 Summary of brain tumor segmentation and classification methods in the literature.
From: A novel hybrid vision UNet architecture for brain tumor segmentation and classification
Authors | Dataset | Approach | Results / Metrics | Limitations / Future Work
---|---|---|---|---
Mehta & Arbel17 | BraTS2018 | 3D UNet | Dice scores: ET 0.706, WT 0.871, TC 0.771 | Testing accuracy needs improvement; limited generalizability
Cicek et al.18 | Xenopus kidney | 3D UNet from sparse annotation | IoU: 0.863 | Performance may vary with dataset characteristics; sensitive to annotation quality
Gitonga et al.19 | BraTS2021 | 3D Attention-based UNet | Dice Coefficient: 0.9864 | Computationally intensive |
Asiri et al.20 | TCGA-LGG, TCIA MRI | ResNet50 + UNet | IoU: 0.91, DSC: 0.95, SI: 0.95 | Limited to LGG class
Shedbalkar & Prabhushetty21 | Figshare MRI | UNet + chopped VGGNet | Accuracy: 98.93%, Sensitivity: 0.98, Precision: 0.9833, F1-score: 0.9833 | Limited validation and generalization
Pravitasari et al.22 | Custom | UNet-VGG16 | Accuracy: 96.1% | Need to explore different architectures
Kolarik et al.24 | Custom + MICCAI 2016 MRI | 3D Dense-U-Net | SSIM: 0.78547, PSNR: 24.09 dB | Need to explore different datasets |
Chen et al.27 | Synapse multi-organ segmentation dataset | TransUNet | DSC: 77.48, HD: 31.69 | Need to evaluate on different datasets
Wang et al.28 | BraTS 2019 | TransBTS | Dice scores: ET 78.92, WT 90.23, TC 81.19 | Computationally intensive
Hatamizadeh et al.29 | BraTS 2021 | Swin UNETR | ET: DSC 0.858, HD 6.016; WT: DSC 0.926, HD 5.831; TC: DSC 0.885, HD 3.770 | High memory usage
Cao et al.30 | Synapse multi-organ segmentation dataset | Swin-Unet | DSC: 79.13, HD: 21.55 | Pure transformer architecture; still maturing for medical image segmentation
Aloraini et al.31 | BraTS 2018 and Figshare | ViT-CNN | Accuracy: 96.75% (BraTS), 99.10% (Figshare) | Need to explore lightweight CNN models
Khushi et al.32 | Multiclass brain tumor Kaggle dataset | EfficientNetB7 | Accuracy: 98.97% | Need to evaluate on real medical image datasets
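Most of the segmentation results summarized above are reported as Dice similarity coefficient (DSC) and intersection over union (IoU) values. For reference, the sketch below gives a minimal NumPy implementation of both overlap metrics for binary masks; the function names and the smoothing term `eps` are illustrative assumptions and are not taken from any of the cited works, which use their own evaluation pipelines.

```python
import numpy as np


def dice_coefficient(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7) -> float:
    """Dice similarity coefficient (DSC) for binary masks: 2|A∩B| / (|A| + |B|)."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)


def iou_score(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7) -> float:
    """Intersection over union (IoU, Jaccard index) for binary masks: |A∩B| / |A∪B|."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    return (intersection + eps) / (union + eps)


if __name__ == "__main__":
    # Toy 2D example; BraTS-style evaluation applies the same formulas
    # per tumor sub-region (ET, WT, TC) on 3D volumes.
    pred = np.array([[1, 1, 0], [0, 1, 0], [0, 0, 0]])
    target = np.array([[1, 1, 0], [0, 0, 0], [0, 0, 0]])
    print(f"DSC: {dice_coefficient(pred, target):.4f}")  # 0.8000
    print(f"IoU: {iou_score(pred, target):.4f}")         # 0.6667
```

The two metrics are monotonically related for a single prediction (DSC = 2·IoU / (1 + IoU)), so a method that ranks higher on one will rank higher on the other; the Hausdorff distances (HD) quoted for TransUNet, Swin UNETR, and Swin-Unet instead measure boundary error and are not derivable from the overlap scores.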