Table 4 Performance comparison of ML models.
From: Adversarial susceptibility analysis for water quality prediction models
| Model | Accuracy (Mean ± Std) | F1-score (Mean ± Std) | Interpretation |
|---|---|---|---|
| Random Forest | 0.9857 ± 0.0045 | 0.9857 ± 0.0045 | Highest and most stable performance |
| MLP | 0.9495 ± 0.0063 | 0.9494 ± 0.0063 | Good, slightly less consistent than RF |
| HistGradientBoosting | 0.9802 ± 0.0051 | 0.9798 ± 0.0054 | Very strong and consistent performer |
| AdaBoost Classifier | 0.9600 ± 0.0082 | 0.9580 ± 0.0078 | Moderate performance, slightly variable |
| Bagging Classifier | 0.9832 ± 0.0038 | 0.9829 ± 0.0040 | Very high, almost on par with RF |
| Decision Tree | 0.9560 ± 0.0075 | 0.9542 ± 0.0073 | Decent performance, more variability |
| LSTM | 0.9190 ± 0.0000 | 0.9190 ± 0.0000 | Lowest performance; no run-to-run variation |
| TabNet | 0.5002 ± 0.0882 | 0.4169 ± 0.1466 | Poor performance; highly variable across runs |
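As a minimal sketch of how mean ± std scores like those above can be produced, the following uses repeated stratified cross-validation in scikit-learn. The synthetic dataset, fold counts, and the choice of macro-averaged F1 are assumptions for illustration, not details taken from the paper.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import RepeatedStratifiedKFold, cross_validate

# Synthetic stand-in for a water-quality dataset (assumption, not the paper's data)
X, y = make_classification(n_samples=1000, n_features=10, random_state=0)

# 5-fold CV repeated 3 times -> 15 scores per metric, from which mean/std are taken
cv = RepeatedStratifiedKFold(n_splits=5, n_repeats=3, random_state=0)
scores = cross_validate(
    RandomForestClassifier(random_state=0),
    X, y, cv=cv, scoring=["accuracy", "f1_macro"],
)

for metric in ("accuracy", "f1_macro"):
    vals = scores[f"test_{metric}"]
    print(f"{metric}: {vals.mean():.4f} ± {vals.std():.4f}")
```

Reporting the standard deviation alongside the mean, as in the table, is what distinguishes a merely high score from a high *and stable* one.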