Table 8 Computational and robustness trade-offs of CSS approaches.


| Method | Complexity (FLOPs) | Latency (ms) | Overfitting Risk | Attack Robustness |
| --- | --- | --- | --- | --- |
| IGS/HGS | \(O(m)\) | 0.5 | None | Low (fails at >20% attacks) |
| CNN\(^{30}\) | \(O(L \cdot f \cdot m)\) | 4.1 | Moderate (no dropout) | Medium (fails at 50% YFS) |
| DRL\(^{26}\) | \(O(\Vert A\Vert \cdot \Vert S\Vert)\) | 5.8 | High (sparse rewards) | Low (needs retraining) |
| DE-ML\(^{35}\) | \(O(m \cdot k + p)\) | 3.9 | Medium | Partial (MUs only) |
| DAEEC | \(\mathbf{O(m)_{\textrm{DAE}} + O(30 \cdot d \cdot m)_{\textrm{EC}}}\) | 3.2 | Low (regularized) | High (100% attacks) |

  1. IGS: Identical Gain Scheme; HGS: Highest Gain Scheme; CNN: Convolutional Neural Network; DRL: Deep Reinforcement Learning; DE-ML: Differential Evolution based Machine Learning; m: number of users; k: samples per measurement; L: number of CNN layers; f: number of CNN filters; p: population size in DE-ML; \(\Vert A\Vert\), \(\Vert S\Vert\): sizes of the action/state spaces in DRL; d: typical decision-tree depth (\(\approx \log(n)\)). Complexities are for inference per sensing report.
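
To make the asymptotic expressions in Table 8 concrete, the minimal Python sketch below plugs illustrative parameter values into each per-report complexity formula. All of the constants (m, k, L, f, p, \(\Vert A\Vert\), \(\Vert S\Vert\), n) are assumptions chosen for illustration, not values reported in the paper; only the relative ordering they produce is meaningful.

```python
# Hypothetical sketch: rough FLOP counts per sensing report, obtained by
# substituting ASSUMED parameter values into the Table 8 complexity formulas.
import math

m = 100           # number of cooperating users (assumed)
k = 64            # samples per measurement (assumed)
L, f = 4, 32      # CNN layers and filters per layer (assumed)
p = 50            # DE-ML population size (assumed)
A, S = 8, 256     # DRL action-space and state-space sizes (assumed)
n = 1024          # dataset size governing tree depth (assumed)
d = math.log2(n)  # typical decision-tree depth, d ~ log(n)

flops = {
    "IGS/HGS": m,               # O(m)
    "CNN":     L * f * m,       # O(L * f * m)
    "DRL":     A * S,           # O(|A| * |S|)
    "DE-ML":   m * k + p,       # O(m * k + p)
    "DAEEC":   m + 30 * d * m,  # O(m)_DAE + O(30 * d * m)_EC
}

for method, cost in sorted(flops.items(), key=lambda x: x[1]):
    print(f"{method:8s} ~{cost:,.0f} FLOPs per sensing report")
```

Under these assumed values the linear IGS/HGS detectors remain the cheapest, while DAEEC's ensemble term (30 trees of depth d per report) keeps it well below the CNN and DRL costs, consistent with the latency ordering in the table.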