Table 15. Performance comparison of existing works with the proposed work.

| Author(s) | Proposed model | Accuracy |
|---|---|---|
| Islam et al. (2024)16 | Generative adversarial networks (GANs) and variational autoencoders (VAEs) | 94% |
| Saha et al. (2024)17 | VER-Net | 91% |
| Rainio and Klén (2024)18 | Convolutional neural network (CNN) | 92.6% |
| Kukreja and Sabharwal (2024)19 | Convolutional neural network (CNN) | 96.11% |
| Zhang et al. (2024)20 | DenseNet–CNN integration | 96% |
| Gai et al. (2023)21 | Convolutional neural networks (CNNs) and vision transformers (ViTs) | 93.4% |
| Quasar et al. (2023)22 | Ensemble model (BEiT, DenseNet, and sequential CNN with ensemble methods) | 96.34% |
| Raza et al. (2023)23 | Lung-EffNet (EfficientNet with modified top layers) | 96.10% |
| Gautam et al. (2023)24 | Ensemble (ResNet-152, DenseNet-169, and EfficientNet-B7 with weight optimization) | 97.23% |
| Dritsas and Trigka (2022)25 | Rotation Forest | 97.1% |
| Tsou et al. (2021)26 | eXtreme Gradient Boosting (XGBoost) | 92% |
| Our work | CNN with data augmentation (DA) | 98.78% |
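To make the "CNN with data augmentation (DA)" entry concrete, the sketch below shows a minimal Keras pipeline of this kind. The input size, layer widths, augmentation operations, and three-class output are illustrative assumptions, not the exact architecture or hyperparameters reported in this work.

```python
# Minimal sketch of a CNN trained with on-the-fly data augmentation (Keras).
# All sizes and layer choices here are assumptions for illustration only.
import tensorflow as tf
from tensorflow.keras import layers, models


def build_cnn_with_da(input_shape=(224, 224, 3), num_classes=3):
    # Augmentation layers are active only during training.
    augmentation = tf.keras.Sequential([
        layers.RandomFlip("horizontal"),
        layers.RandomRotation(0.1),
        layers.RandomZoom(0.1),
    ])

    inputs = layers.Input(shape=input_shape)
    x = augmentation(inputs)
    x = layers.Rescaling(1.0 / 255)(x)

    # Stacked convolution + max-pooling blocks.
    for filters in (32, 64, 128):
        x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
        x = layers.MaxPooling2D()(x)

    x = layers.Flatten()(x)
    x = layers.Dense(128, activation="relu")(x)
    x = layers.Dropout(0.5)(x)
    outputs = layers.Dense(num_classes, activation="softmax")(x)

    model = models.Model(inputs, outputs)
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model
```

Applying augmentation inside the model, rather than by enlarging the dataset on disk, generates new perturbed views of each image at every epoch, which is a common way such a CNN-with-DA setup improves generalization.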