Table 1 A summary of the related works with advantages and disadvantages.
Reference | Year | Methods | Datasets | Performance results | Advantages | Disadvantages |
|---|---|---|---|---|---|---|
Altantawy et al.24 | 2024 | Deep attentive model | Faisalabad, CVD, Heart failure | Accuracy = 99.2% (Faisalabad), 92.7% (CVD), 97% (Heart failure) | (i) Effectively handles high-dimensional data (ii) Better handling of imbalanced data | (i) High computational complexity (ii) Susceptible to the over-fitting problem |
Jafar et al.25 | 2023 | HyperOpt optimizer-LASSO optimizer | Cleveland, CVD | Accuracy = 97.32% (Cleveland), 97.72% (CVD) | (i) Effectively handles both linear and non-linear data (ii) Increases prediction performance | (i) Poor performance on noisy data (ii) Inefficient algorithm with longer processing times |
Omkari et al.26 | 2024 | Integrated TLV framework | UCI, CVD | Accuracy = 99.03% (UCI), 88.09% (CVD) | (i) Achieves maximum accuracy (ii) Improved handling of imbalanced data | (i) Increased computational overhead (ii) Needs a proper feature selection algorithm (iii) High model complexity |
Mandava27 | 2024 | IDRSNet | UCI | Specificity = 98.95%, Sensitivity = 98.90%, Accuracy = 99.12% | (i) Predicts heart disease with higher accuracy (ii) Decreased probability of over-fitting | (i) Higher amount of missing values (ii) Requires an efficient feature selection technique (iii) Poor data generalization |
Tata et al.28 | 2024 | Deep VAE AEO | Framingham | Accuracy = 97%, Precision = 98%, Recall = 87%, F1-score = 82% | (i) Efficient handling of imbalanced data (ii) Enhanced feature extraction with VAE (iii) Robustness to noisy and incomplete data | (i) Parameter tuning can be complex and time-consuming (ii) High model complexity |
Nandakumar et al.29 | 2024 | Inception-ResNet-V2 | UCI Cleveland | Accuracy = 98.77%, Precision = 87%, F1-score = 90%, Specificity = 85%, Sensitivity = 93% | (i) Fast convergence and avoidance of local optima (ii) Improved accuracy (iii) Highly efficient for handling noisy data | (i) Training and testing run times are longer than for other models (ii) Needs regularization techniques (iii) Sensitive to class imbalance |
Revathi et al.30 | 2024 | OCI-LSTM | UCI | Accuracy = 97.11%, Precision = 98%, Recall = 87%, F1-score = 82% | (i) Insensitive to irrelevant features (ii) Manages both continuous and discontinuous data | (i) Requires significant resources for training and tuning (ii) High-dimensional feature space and uneven sample sizes for the target classes |
Elsedimy et al.31 | 2024 | QPSO-SVM | Cleveland | Accuracy = 96.31%, Precision = 94.23%, Recall = 96.13%, F1-score = 95% | (i) Improved accuracy (ii) Highly efficient for handling noisy data | (i) Training and testing run times are longer than for other models (ii) Requires significant resources for training and tuning |
Torthi et al.32 | 2024 | BAPSO-RF | UCI | Accuracy = 98.71%, Precision = 98.67%, Recall = 98.23%, F1-score = 98.45% | (i) Ensures high classification stability and robustness (ii) Suitable for large-scale biomedical datasets | (i) Particle swarm variants may get trapped in local optima (ii) Computationally expensive when tuning both BAPSO and RF parameters |
Kumar et al.33 | 2023 | CapsNet-B-KHA | Cleveland | Accuracy = 95%, Precision = 94%, Recall = 97%, F1-score = 95% | (i) Captures spatial hierarchies in feature representation (ii) Better generalization and interpretability via capsule structures | (i) High training time and memory usage compared to CNNs (ii) Capsule networks are still relatively new, so frameworks lack maturity |
Kumar et al.34 | 2023 | Sample-based neural network | CVD | Accuracy = 96%, Precision = 97%, Recall = 95%, F1-score = 95% | (i) Handles imbalanced datasets via sample reweighting (ii) Simplifies network complexity for small datasets (iii) Reduces over-fitting with fewer trainable parameters | (i) May suffer from lower performance on large datasets (ii) Sensitive to the sample selection strategy |
Arunachalam et al.35 | 2022 | Ensemble model | UCI | Accuracy = 96%, Precision = 97%, Recall = 95%, F1-score = 95% | (i) Aggregates multiple weak learners to improve prediction accuracy (ii) High robustness to noisy data (iii) Effective in handling high-dimensional feature spaces | (i) Increased model complexity and interpretability issues (ii) Requires more training time and computational resources |
Saranya et al.36 | 2025 | DenseNet-ABiLSTM | ECG signal data | Accuracy = 89.14%, F1-score = 87.74% | (i) DenseNet provides effective feature reuse and gradient flow (ii) ABiLSTM enhances temporal sequence understanding in ECG signals (iii) Strong performance in real-time sequential prediction tasks | (i) High memory requirements for DenseNet layers (ii) Difficult to optimize due to multiple deep components |
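Several rows above report precision, recall, and F1-score as separate figures; the F1-score is, by its standard definition, the harmonic mean of precision and recall, so the three values can be cross-checked against one another. A minimal sketch (the function name is illustrative; the example values are taken from the Elsedimy et al.31 row):

```python
def f1_score(precision: float, recall: float) -> float:
    """Harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

# QPSO-SVM row: Precision = 94.23%, Recall = 96.13%
f1 = f1_score(0.9423, 0.9613)
print(round(f1 * 100, 2))  # ~95.17, consistent with the reported F1-score of 95%
```

This kind of check only verifies internal consistency of a row's metrics, not the underlying experiment.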