Table 1 Comparison of existing models in the literature.

From: Optimizing brain stroke detection with a weighted voting ensemble machine learning model

| Key technique | Model | Research performance | Limitation | References |
| --- | --- | --- | --- | --- |
| Machine learning and deep learning | CNN, LSTM, KNN, XGB, and majority voting ensemble | Proposed model obtained the highest classification performance on all evaluation metrics across all datasets | Potential limitations in generalizability to other populations or datasets; needs further validation; may require significant computational resources | 30 |
| Deep learning | CNN-GRU, SMOTE method | Higher classification accuracy compared to other existing models | Potential limitations in generalizability to other datasets or environments | 31 |
| Ensemble learning, data mining techniques | Weighted ensemble model using a genetic algorithm | Improved performance compared to individual classifiers | May require significant computational resources | 32 |
| Remote monitoring | Web application for remote monitoring and management, real-time monitoring and alerts | Effective in monitoring and managing high-risk pregnancies | Limited to healthcare professionals; not designed for patient use | 32 |
| Ensemble-based deep learning model | CNN, LSTM, XGBoost, KNN | Outperformed existing models, demonstrating superiority in cardiovascular disease prediction | Lack of interpretability of the model's predictions due to the complexity of the ensemble architecture | 33 |
| Semantic relatedness and similarity measures | Natural language and machine learning algorithms | Using students' answers as feedback considerably improved the accuracy and performance of these measures | The dataset used is relatively small | 34 |
| Machine learning | Neural networks, SVM, KNN | Remarkable accuracy and minimal loss | Limited to a single dataset; potential variation with other datasets | 35 |
| Machine learning | Nomogram prediction model | Successfully identified several parameters associated with stroke risk and demonstrated superior predictive accuracy | Potential limitations in generalizability to other populations; needs further validation | 36 |
| Machine learning (ML) | Random forest (RF), KNN, DT, AdaBoost, XGBoost, SVM, ANN | RF achieved the highest performance | Potential limitations in generalizability to other populations or datasets; needs further validation; may require significant computational resources | 37 |
| Ensemble machine learning | Soft voting classifier (random forest, extremely randomized trees, histogram-based gradient boosting) | Achieved an accuracy of 96.88%; improved accuracy and robustness compared to single classifiers | Potential limitations in handling complex interactions between features; needs further optimization | 18 |
| Face detection using YOLOv8 | Stroke monitoring strategy | Achieved a high accuracy of 98.43% | Limited availability of stroke patient data | 38 |
| Modified Vision Transformer (ViT) integrated approach | End-to-end ViT architecture, CNN | 87.51% classification accuracy for brain CT scan slices | Improvement needed for stroke diagnosis | 43 |
| Deep-learning-based microwave-induced thermoacoustic tomography (DL-MITAT) technique | Residual attention U-Net (ResAttU-Net) | Effectively eliminated image artifacts and accurately restored hemorrhage spots as small as 3 mm | No performance metrics for increased accuracy; training sets constructed only with a simulation approach | 44 |
| AutoML | Combination of AutoML, Vision Transformers (ViT), and CNN | Achieved 87% accuracy for single-slice-level predictions and 92% accuracy for patient-wise predictions | Small sample size; complexity of the integrated architecture | 45 |
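Several of the compared works rely on voting ensembles over tree-based learners, for example the soft voting classifier of ref. 18. The snippet below is a minimal sketch of such a weighted soft-voting ensemble using scikit-learn; the synthetic data, hyperparameters, and per-model weights are illustrative assumptions, not the exact configuration reported in any of the cited studies.

```python
# Minimal sketch of a weighted soft-voting ensemble (cf. ref. 18).
# The base learners mirror those named in the table; the synthetic
# data and the weights below are illustrative assumptions only.
from sklearn.datasets import make_classification
from sklearn.ensemble import (
    RandomForestClassifier,
    ExtraTreesClassifier,
    HistGradientBoostingClassifier,
    VotingClassifier,
)
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Synthetic, imbalanced stand-in for a tabular stroke-risk dataset.
X, y = make_classification(n_samples=2000, n_features=20,
                           weights=[0.9, 0.1], random_state=42)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42)

ensemble = VotingClassifier(
    estimators=[
        ("rf", RandomForestClassifier(n_estimators=200, random_state=42)),
        ("et", ExtraTreesClassifier(n_estimators=200, random_state=42)),
        ("hgb", HistGradientBoostingClassifier(random_state=42)),
    ],
    voting="soft",      # average predicted class probabilities
    weights=[2, 1, 1],  # example weights; in practice tuned on validation data
)
ensemble.fit(X_train, y_train)
print("Accuracy:", accuracy_score(y_test, ensemble.predict(X_test)))
```

In a weighted voting scheme of this kind, the weights control how much each base model's probability estimate contributes to the final prediction, which is the lever the weighted voting ensemble approaches in the table exploit.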