Table 1 Summary of common methods for DR detection.

From: An enhanced diabetic retinopathy detection approach using optimized deep learning technique

| Category | Methodology | Pros | Cons | Representative examples |
| --- | --- | --- | --- | --- |
| Traditional Machine Learning (Hand-crafted Features) | Manual extraction of texture, color, morphological, or vessel-based features; classification using SVM, k-NN, or Random Forest. | Simple and interpretable; requires smaller datasets; low computational cost. | Cannot capture complex patterns; depends heavily on feature engineering; limited generalization on diverse datasets. | SVM + morphological features; Random Forest + texture features; k-NN + histogram descriptors |
| Convolutional Neural Networks (CNNs) | End-to-end deep learning models that automatically learn hierarchical features; includes transfer learning and attention models. | High accuracy; no manual feature extraction; strong representational power. | Requires large labeled datasets; high computational demands; less interpretable; sensitive to image quality. | InceptionV3 for DR grading; ResNet-based lesion detection; EfficientNet DR classifiers |
| Hybrid (Feature Selection + Classifier) | Hand-crafted or deep features reduced using GA, PSO, ACO, GWO, or WOA before classification. | Removes noisy/irrelevant features; improves classifier performance; works with smaller datasets. | Risk of local optima; requires tuning of optimizer parameters; moderate computational cost. | GA + SVM; PSO + Random Forest; ACO + k-NN; GWO + SVM |
| Advanced Meta-heuristic Optimization | Uses advanced/chaotic optimizers such as WOA, MVO, SCA, and DGOA for high-dimensional feature selection. | Strong exploration/exploitation balance; more robust in high-dimensional search spaces; less prone to premature convergence. | Higher computational complexity; sensitive to parameter settings; variable performance across datasets. | WOA + deep features; MVO for DR staging; SCA + RF; DGOA + Ensemble |
| Ensemble Learning | Combines multiple classifiers using bagging, boosting, or stacking (RF, AdaBoost, XGBoost, meta-learners). | High robustness and accuracy; reduces variance and bias; better generalization. | Less interpretable; more computationally intensive; requires careful design of base learners. | Random Forest; XGBoost + deep features; stacking (SVM + RF + CNN features) |
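To make the "hand-crafted features" row concrete, the following is a minimal sketch, not the paper's pipeline: a per-channel color histogram stands in for the texture/color descriptors, and a nearest-centroid rule stands in for the SVM/k-NN/Random Forest classifiers named in the table. The synthetic "dark vs. bright" patches are invented for illustration only.

```python
import numpy as np

def color_histogram_features(image, bins=8):
    """Hand-crafted feature: a per-channel intensity histogram,
    normalized to sum to 1 and concatenated across channels."""
    feats = []
    for c in range(image.shape[-1]):
        hist, _ = np.histogram(image[..., c], bins=bins, range=(0, 256))
        feats.append(hist / hist.sum())
    return np.concatenate(feats)

def nearest_centroid_predict(X_train, y_train, X_test):
    """Assign each test vector to the class with the nearest centroid."""
    classes = np.unique(y_train)
    centroids = np.stack([X_train[y_train == c].mean(axis=0) for c in classes])
    dists = np.linalg.norm(X_test[:, None, :] - centroids[None, :, :], axis=2)
    return classes[np.argmin(dists, axis=1)]

# Synthetic demo: "dark" vs. "bright" 16x16 RGB patches.
rng = np.random.default_rng(0)
dark = rng.integers(0, 100, size=(20, 16, 16, 3))
bright = rng.integers(156, 256, size=(20, 16, 16, 3))
X = np.stack([color_histogram_features(im) for im in np.concatenate([dark, bright])])
y = np.array([0] * 20 + [1] * 20)
pred = nearest_centroid_predict(X[::2], y[::2], X[1::2])
accuracy = (pred == y[1::2]).mean()
```

The feature vector here is short (8 bins × 3 channels = 24 dimensions), which illustrates why such methods tolerate small datasets but cannot capture lesion-level spatial patterns.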
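The "hybrid" and "meta-heuristic" rows both amount to searching over binary feature masks. A minimal wrapper-style sketch, assuming invented synthetic data: a genetic algorithm (the GA entry in the table) evolves masks whose fitness is nearest-centroid accuracy minus a per-feature penalty; the real methods would use an SVM or Random Forest fitness and cross-validation instead.

```python
import numpy as np

rng = np.random.default_rng(1)

def fitness(mask, X, y):
    """Wrapper fitness: nearest-centroid training accuracy on the
    selected features, minus a small penalty per selected feature."""
    if mask.sum() == 0:
        return 0.0
    Xs = X[:, mask.astype(bool)]
    c0, c1 = Xs[y == 0].mean(axis=0), Xs[y == 1].mean(axis=0)
    pred = (np.linalg.norm(Xs - c1, axis=1) < np.linalg.norm(Xs - c0, axis=1)).astype(int)
    return (pred == y).mean() - 0.01 * mask.sum()

def ga_select(X, y, pop_size=30, gens=40, p_mut=0.05):
    """Minimal generational GA over binary feature masks:
    binary tournament selection, uniform crossover, bit-flip mutation, elitism."""
    d = X.shape[1]
    pop = (rng.random((pop_size, d)) < 0.5).astype(int)
    for _ in range(gens):
        scores = np.array([fitness(m, X, y) for m in pop])
        best = pop[np.argmax(scores)].copy()
        # Binary tournament: each slot keeps the better of two random masks.
        a, b = rng.integers(0, pop_size, (2, pop_size))
        winners = np.where((scores[a] >= scores[b])[:, None], pop[a], pop[b])
        # Uniform crossover with a shuffled partner, then bit-flip mutation.
        partners = winners[rng.permutation(pop_size)]
        cross = rng.random((pop_size, d)) < 0.5
        children = np.where(cross, winners, partners)
        flip = rng.random((pop_size, d)) < p_mut
        pop = np.where(flip, 1 - children, children)
        pop[0] = best  # elitism: the best mask always survives
    scores = np.array([fitness(m, X, y) for m in pop])
    return pop[np.argmax(scores)]

# Synthetic demo: 2 informative features hidden among 18 noisy ones.
n = 200
y = rng.integers(0, 2, n)
X = rng.normal(size=(n, 20))
X[:, 0] += 3.0 * y
X[:, 1] -= 3.0 * y
mask = ga_select(X, y)
```

The per-feature penalty is what drives the "removes noisy/irrelevant features" benefit; the tournament/mutation parameters are exactly the optimizer settings the Cons column warns must be tuned, and a poor choice is one source of the local-optima risk.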
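For the ensemble row, a toy bagging sketch under assumed synthetic data: decision stumps are fit on bootstrap resamples and combined by majority vote, the same variance-reduction mechanism underlying Random Forest (the real base learners and stacked meta-learners in the table are, of course, far stronger).

```python
import numpy as np

rng = np.random.default_rng(2)

def fit_stump(X, y):
    """Exhaustively fit the best single-feature threshold classifier."""
    best = (0.0, 0, 0.0, 1)  # (accuracy, feature, threshold, polarity)
    for j in range(X.shape[1]):
        for t in np.unique(X[:, j]):
            for pol in (1, -1):
                pred = (pol * (X[:, j] - t) > 0).astype(int)
                acc = (pred == y).mean()
                if acc > best[0]:
                    best = (acc, j, t, pol)
    return best[1:]

def stump_predict(stump, X):
    j, t, pol = stump
    return (pol * (X[:, j] - t) > 0).astype(int)

def bagged_predict(stumps, X):
    """Majority vote over the bagged stumps."""
    votes = np.stack([stump_predict(s, X) for s in stumps])
    return (votes.mean(axis=0) > 0.5).astype(int)

# Synthetic demo: class signal carried by feature 0 only.
n = 300
y = rng.integers(0, 2, n)
X = rng.normal(scale=0.5, size=(n, 5))
X[:, 0] += 2.0 * y - 1.0
stumps = []
for _ in range(7):
    idx = rng.integers(0, n, n)  # bootstrap resample with replacement
    stumps.append(fit_stump(X[idx], y[idx]))
acc = (bagged_predict(stumps, y_dummy := X) == y).mean() if False else (bagged_predict(stumps, X) == y).mean()
```

Each stump sees a different resample, so their individual threshold errors partially cancel in the vote; this is the variance reduction the Pros column refers to, bought at the cost of training (here) seven models instead of one.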