Table 12 Comparative computational performance and scalability of ML models.

| Model | Training time (s) | Computational load | Scalability | Remarks |
|---|---|---|---|---|
| AdaBoost | 25–30 | Low | High | Fast convergence with shallow learners; less accurate for nonlinear patterns |
| AVOA | 90–120 | High | Moderate | Metaheuristic optimization increases computational cost but offers good exploratory ability |
| CatBoost | 45–60 | Moderate | High | Balanced accuracy and efficiency; GPU support enhances scalability |
| LGBMR | 30–40 | Low–moderate | Very high | Fastest boosting algorithm; slightly less interpretable than CatBoost |
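
The training times in Table 12 depend on dataset size, hyperparameters, and hardware. The sketch below shows one way such a wall-clock comparison can be reproduced, assuming the scikit-learn, catboost, and lightgbm packages; the synthetic data and model settings are illustrative only, and AVOA is omitted because, as a metaheuristic optimizer, it has no standard off-the-shelf estimator.

```python
# Minimal timing sketch (assumption: scikit-learn, catboost, and lightgbm installed).
# Absolute timings will differ from Table 12 depending on data size and hardware.
import time

from sklearn.datasets import make_regression
from sklearn.ensemble import AdaBoostRegressor
from catboost import CatBoostRegressor
from lightgbm import LGBMRegressor

# Synthetic regression data standing in for the study's dataset (illustrative only).
X, y = make_regression(n_samples=20_000, n_features=30, noise=0.1, random_state=42)

models = {
    "AdaBoost": AdaBoostRegressor(n_estimators=100, random_state=42),
    "CatBoost": CatBoostRegressor(iterations=500, verbose=0, random_state=42),
    "LGBMR": LGBMRegressor(n_estimators=500, random_state=42),
}

# Fit each model once and record wall-clock training time.
for name, model in models.items():
    start = time.perf_counter()
    model.fit(X, y)
    elapsed = time.perf_counter() - start
    print(f"{name}: {elapsed:.1f} s")
```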