Table 6 Comparative analysis of limitations in prior methods and improvements introduced by the proposed pipeline.
| Aspect | Limitations in previous approaches | Improvements in proposed method |
|---|---|---|
| Noise robustness | Often absent; models fail under noisy or adversarial settings | Gaussian noise augmentation (\(\sigma = 0.05\)) improves robustness and class separability |
| Dimensionality reduction | Rarely applied; high-dimensional features increase overfitting | PCA (90% retained variance) reduces feature noise and boosts stability |
| Ensemble learning | Often applied only partially, without an effective aggregation strategy | Stacked ensemble of SVM, RF, KNN, MLP, and XGBoost base learners |
| Meta-classifier strategy | Typically absent; single-model reliance increases bias | XGBoost as meta-learner ensures a flexible, regularized decision boundary |
| False positive rate (FPR) | Often above 0.04%, unsuitable for critical deployments | Reduced below 0.02% (Table 4) |
| Generalization stability | Higher variance under cross-validation | Low standard deviation across 5-fold CV (Table 3) |
| Multiclass detection capability | Often limited to binary classification | Three-class classification (Normal, Kr00k, Krack) with high fidelity |
| Resource efficiency (deployability) | Rarely evaluated; few works report inference cost for IoT environments | Inference time of \(\sim\)28 ms/sample; suitable for IoT edge deployment |
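
The main ingredients summarized above (Gaussian noise augmentation with \(\sigma = 0.05\), PCA retaining 90% of variance, and a stacked ensemble with a boosted meta-learner) can be sketched with scikit-learn. This is a minimal illustration on synthetic data, not the paper's configuration: the dataset, split sizes, and hyperparameters are placeholder assumptions, and `GradientBoostingClassifier` stands in for the XGBoost meta-learner to keep dependencies minimal.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.ensemble import (GradientBoostingClassifier,
                              RandomForestClassifier, StackingClassifier)
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Synthetic stand-in for the three-class traffic data (Normal, Kr00k, Krack).
X, y = make_classification(n_samples=300, n_features=20, n_informative=10,
                           n_classes=3, random_state=0)

# Gaussian noise augmentation (sigma = 0.05), as listed in the table.
X_aug = np.vstack([X, X + rng.normal(0.0, 0.05, X.shape)])
y_aug = np.concatenate([y, y])

# Base learners from the table (hyperparameters are illustrative).
base_learners = [
    ("svm", make_pipeline(StandardScaler(), SVC(probability=True))),
    ("rf", RandomForestClassifier(n_estimators=100, random_state=0)),
    ("knn", make_pipeline(StandardScaler(), KNeighborsClassifier())),
    ("mlp", make_pipeline(StandardScaler(),
                          MLPClassifier(hidden_layer_sizes=(32,),
                                        max_iter=300, random_state=0))),
]

# Scale -> PCA keeping 90% of variance -> stacked ensemble whose
# meta-learner is a gradient-boosted model (XGBoost stand-in).
stack = make_pipeline(
    StandardScaler(),
    PCA(n_components=0.90),
    StackingClassifier(
        estimators=base_learners,
        final_estimator=GradientBoostingClassifier(random_state=0),
    ),
)

X_tr, X_te, y_tr, y_te = train_test_split(
    X_aug, y_aug, test_size=0.25, random_state=0, stratify=y_aug)
stack.fit(X_tr, y_tr)
print("held-out accuracy:", round(stack.score(X_te, y_te), 3))
```

In this sketch the noise-augmented copies double the training set, PCA is fit only on the pipeline's training folds (avoiding leakage), and the stacking layer trains the meta-learner on cross-validated base-model predictions, mirroring the aggregation strategy the table credits for the reduced false positive rate.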