Table 6 Comparison of the proposed CSHG-CervixNet with other state-of-the-art techniques.
Method/model | Accuracy (%) | Precision (%) | Recall (%) | F1-score (%) |
---|---|---|---|---|
BiNext-Cervix (CNN- and Transformer-based modules)5 | 91.82 | 91.40 | 91.59 | 91.50 |
Progressive resizing + PCA57 | 98.47 | 98.72 | 98.97 | 99 |
UNet + GCN62 | 98.61 | 97.33 | 97.11 | 97.56 |
CCanNet (mobile transformer-based model)64 | 98.58 | 98 | 100 | 99 |
Swin Transformer65 | 95.50 | - | - | - |
Vision Transformer66 | 97.247 | 97.253 | 97.247 | 97.239 |
Vision Transformer63 | 99.02 | 99.03 | 99.04 | 99.02 |
13 pre-trained deep CNN models (DenseNet201)67 | 87.02 | - | - | - |
DenseNet-12168 | 86.14 | 86.90 | 85.58 | 86.24 |
HDFCN (Fine-tuned pre-trained models + Fully connected network)60 | 97.45 | 97.94 | 98.08 | 98.01 |
CervixFormer (Swin Transformer)36 | 91.56 | 91.12 | 91.32 | 91.22 |
VisionCervix (ViT + CNN)45 | 91.66 | 91.23 | 91.42 | 91.33 |
BiFormer (Bi-level Routing Attention-based CNN)69 | 91.62 | 91.28 | 91.52 | 91.40 |
CNN-based feature extraction + Cubic SVM classifier70 | 98.26 | 98.27 | 98.28 | 98.28 |
MLP49 | 96.54 | 96.87 | 96.15 | 96.93 |
CVM-Cervix (CNN, Visual Transformer + MLP)20 | 91.70 | 91.27 | 91.45 | 91.36 |
CytoBrain (Compact Visual Geometry Group (VGG))55 | 88.30 | - | 92.83 | 87.04 |
Graph convolutional network (GCN)54 | 98.37 | 99.80 | 99.60 | 99.80 |
ViT71 | 88.95 | 88.53 | 88.82 | 88.68 |
CSHG-CervixNet (compound scaling convolutional neural network + k-dimensional-based hypergraph convolutional neural network, ours) | 99.31 | 98.97 | 99.38 | 99.34 |
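
The percentages in Table 6 are the standard multi-class classification metrics (accuracy, precision, recall, F1-score). As a point of reference, the sketch below shows one common way such figures can be computed from model predictions with scikit-learn; the macro averaging, the example labels, and the variable names are illustrative assumptions and do not reproduce the evaluation code of any cited work.

```python
# Minimal sketch: computing the four reported metrics for a multi-class
# cervical-cell classifier. Macro averaging is an assumption here; the cited
# works may instead report weighted or per-class averages.
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score

# Hypothetical ground-truth and predicted class labels (e.g., Pap-smear cell classes).
y_true = [0, 1, 2, 2, 1, 0, 3, 3, 2, 1]
y_pred = [0, 1, 2, 1, 1, 0, 3, 2, 2, 1]

accuracy  = accuracy_score(y_true, y_pred)
precision = precision_score(y_true, y_pred, average="macro", zero_division=0)
recall    = recall_score(y_true, y_pred, average="macro", zero_division=0)
f1        = f1_score(y_true, y_pred, average="macro", zero_division=0)

# Report as percentages, matching the format used in Table 6.
print(f"Accuracy:  {accuracy * 100:.2f}")
print(f"Precision: {precision * 100:.2f}")
print(f"Recall:    {recall * 100:.2f}")
print(f"F1-score:  {f1 * 100:.2f}")
```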