Table 17 Standard evaluation type categories in IoT/SIoT research.
| Evaluation type | Description/examples |
|---|---|
| Simulation | Experiments conducted with simulators such as MATLAB, NS-3, OMNeT++, CloudSim, iFogSim, or Google Colab. Typically used for performance studies (e.g., latency, scalability, throughput, energy). |
| Prototype/testbed | Hardware-based implementations (e.g., Raspberry Pi, Arduino, FPGA, edge/fog nodes, or small-scale IoT deployments). Demonstrates feasibility in realistic IoT/SIoT environments. |
| Real-world dataset | Evaluation performed on public datasets (e.g., UNSW-NB15, CICIDS, IoT-23, UCI IoT datasets) or custom sensor/IoT data collected in the field. Used to validate detection accuracy, trust prediction, etc. |
| Emulation | Virtualized or cloud-based test environments (e.g., Mininet, containerized clusters, digital twins). Offers controlled experiments closer to deployment scenarios. |
| Analytical/theoretical | Formal analysis, mathematical modeling, security proofs, or purely theoretical validation without experimental deployment. |
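To make the "real-world dataset" category concrete, the following is a minimal sketch of how detection accuracy is commonly reported in such evaluations. It assumes a labelled IoT intrusion-detection CSV in the style of UNSW-NB15 with a binary `label` column; the file name, column name, and choice of classifier are illustrative assumptions, not a prescribed pipeline.

```python
# Minimal sketch of a dataset-based evaluation (hypothetical file/column names).
# Assumes a labelled IoT intrusion-detection CSV in the style of UNSW-NB15,
# with numeric features and a binary "label" column (0 = benign, 1 = attack).
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, f1_score

df = pd.read_csv("unsw_nb15_sample.csv")  # hypothetical path to the dataset export

# Keep numeric features only and separate them from the class label.
X = df.select_dtypes("number").drop(columns=["label"])
y = df["label"]

# Hold out 30% of records for testing, stratified on the class label.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=42
)

clf = RandomForestClassifier(n_estimators=100, random_state=42)
clf.fit(X_train, y_train)
pred = clf.predict(X_test)

# Report the metrics most often cited in this evaluation category.
print(f"Accuracy: {accuracy_score(y_test, pred):.3f}")
print(f"F1 score: {f1_score(y_test, pred):.3f}")
```

The same skeleton (load data, split, train, report accuracy/F1) underlies most dataset-based evaluations surveyed here; studies differ mainly in the dataset, the model, and the additional metrics (precision, recall, false-positive rate) they report.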