Table 5 Comparison of evaluation performance between existing studies and our work. The upper portion presents the performance of individual models, while the lower portion presents results for ensemble methods. ‘–’ denotes results that are not available.
Method | Accuracy | Precision | Recall | F1 | MCC |
---|---|---|---|---|---|
ResNet50 [9] | 0.72 | – | – | 0.84 | – |
VGG19 [10] | 0.74 | – | – | 0.84 | – |
InceptionV3 [8] | 0.84 | – | – | 0.89 | – |
DenseNet121 [11] | 0.87 | – | – | 0.91 | – |
Majority class [7] | 0.72 | – | – | 0.84 | – |
Swin-base (ours) | 0.90 | 0.90 | 0.90 | 0.89 | 0.77 |
DL-ensemble [4] | 0.80 | 0.83 | 0.90 | 0.86 | 0.47 |
TL-ensemble [4] | 0.90 | 0.98 | 0.89 | 0.93 | 0.76 |
ViT weighted voting ensemble (ours) | 0.91 | 0.91 | 0.91 | 0.90 | 0.76 |
ViT soft voting ensemble (ours) | 0.93 | 0.94 | 0.93 | 0.93 | 0.77 |
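For readers reproducing the comparison, the following is a minimal sketch of how the soft- and weighted-voting aggregation and the reported metrics (accuracy, precision, recall, F1, MCC) can be computed. The per-model probability arrays, the example weights, and the use of weighted averaging for the multi-class metrics are illustrative assumptions, not the exact configuration used in our experiments.

```python
# Sketch only: dummy data stands in for real model outputs.
import numpy as np
from sklearn.metrics import (accuracy_score, precision_score, recall_score,
                             f1_score, matthews_corrcoef)

def vote(prob_list, weights=None):
    """Average (optionally weighted) class probabilities, then take argmax.

    prob_list: list of (n_samples, n_classes) probability arrays, one per model.
    weights:   optional per-model weights; None gives plain soft voting.
    """
    probs = np.average(np.stack(prob_list), axis=0, weights=weights)
    return probs.argmax(axis=1)

def report(y_true, y_pred):
    """Compute the metrics reported in Table 5 for a set of predictions."""
    return {
        "Accuracy": accuracy_score(y_true, y_pred),
        "Precision": precision_score(y_true, y_pred, average="weighted", zero_division=0),
        "Recall": recall_score(y_true, y_pred, average="weighted", zero_division=0),
        "F1": f1_score(y_true, y_pred, average="weighted", zero_division=0),
        "MCC": matthews_corrcoef(y_true, y_pred),
    }

# Example with synthetic probabilities from three hypothetical ViT backbones.
rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=200)
prob_list = [rng.dirichlet(np.ones(2), size=200) for _ in range(3)]

y_soft = vote(prob_list)                                   # soft voting
y_weighted = vote(prob_list, weights=[0.5, 0.3, 0.2])      # weighted voting (example weights)
print(report(y_true, y_soft))
```

Soft voting averages the models' predicted probabilities before taking the argmax, whereas weighted voting scales each model's contribution by a per-model weight; the weights shown above are placeholders.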