Table 2 BiLSTM model parameters and significance.
From: An IoT-enabled AI framework for sustainable product design optimizing eco-efficiency using BiLSTM
| Parameter | Value / Description | Significance & Justification |
|---|---|---|
| Input Size | Number of input features | Matches the dimensionality of the sensor data at each time step. |
| Hidden Units | 128 | Identified via grid search over 64, 96, 128, and 256 units; 128 offered the best accuracy without overfitting. |
| Number of Layers | 5 | Configurations of 2–6 layers were tested; 5 layers captured deep temporal patterns with stable training. |
| Activation Functions | Tanh/ReLU (hidden), Softmax (output) | Tanh/ReLU improved nonlinear feature extraction; Softmax provided probabilistic classification. |
| Dropout Rate | 0.3 | Tuned over 0.1–0.5; 0.3 minimized overfitting while maintaining learning capacity. |
| Learning Rate | 0.001 | Determined via learning-rate scheduling over 0.0001–0.01; 0.001 offered the most stable convergence. |
| Optimizer | Adam | Selected for its adaptive gradient handling, which suits noisy IoT sensor data. |
| Batch Size | 32 | Batch sizes of 16, 32, and 64 were evaluated; 32 achieved the best balance of speed and stability. |
| Epochs | 10–100 (increments of 10) | The optimal epoch count was selected via early stopping on validation accuracy to prevent overtraining. |
| Weight Initialization | Xavier Initialization | Ensured stable gradient propagation through the deep network during training. |
| Loss Function | Categorical Cross-Entropy | Suitable for multi-class classification with probability-based outputs. |
| Train/Test Split | 80% Training / 20% Testing | Ensures fair generalization evaluation and prevents data leakage. |
| Cross-Validation Strategy | Five-Fold Cross-Validation | Improves the statistical reliability and robustness of performance estimates. |
| Random Seed | 42 | Guarantees reproducibility and consistent results across runs. |
| Framework & Version | TensorFlow 2.x / Python 3.8 | Ensures software reproducibility and compatibility for replication. |
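
The architectural rows of the table could be assembled in TensorFlow 2.x roughly as in the sketch below. This is a minimal illustration, not the authors' released code: `NUM_FEATURES`, `NUM_CLASSES`, and `TIME_STEPS` are placeholders (the paper's table gives the input size only descriptively), and placing a ReLU dense stage after the Tanh recurrent stack is one plausible reading of "Tanh/ReLU (hidden)".

```python
# Minimal sketch of the Table 2 architecture; NUM_FEATURES, NUM_CLASSES,
# and TIME_STEPS are illustrative placeholders, not reported values.
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_FEATURES = 16   # input size: sensor features per time step (placeholder)
NUM_CLASSES = 4     # number of output classes (placeholder)
TIME_STEPS = 50     # sequence length (placeholder)

def build_bilstm():
    """Five stacked BiLSTM layers with 128 hidden units each, per Table 2."""
    model = models.Sequential()
    model.add(layers.Input(shape=(TIME_STEPS, NUM_FEATURES)))
    for i in range(5):
        model.add(layers.Bidirectional(layers.LSTM(
            128,
            activation='tanh',                    # Tanh in the recurrent layers
            kernel_initializer='glorot_uniform',  # Xavier initialization
            return_sequences=(i < 4))))           # keep the time axis between layers
        model.add(layers.Dropout(0.3))            # tuned dropout rate
    # One reading of the "Tanh/ReLU (hidden)" entry: a ReLU dense stage
    # between the recurrent stack and the softmax output.
    model.add(layers.Dense(128, activation='relu'))
    model.add(layers.Dense(NUM_CLASSES, activation='softmax'))
    model.compile(
        optimizer=tf.keras.optimizers.Adam(learning_rate=0.001),
        loss='categorical_crossentropy',
        metrics=['accuracy'])
    return model
```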
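
The evaluation rows (80/20 split, five-fold cross-validation, seed 42, early stopping) could then be wired together as follows, continuing the sketch above and reusing `build_bilstm` and its constants. The synthetic `X`/`y` arrays stand in for the real IoT sensor sequences, nesting the cross-validation inside the 80% training portion is one interpretation of the protocol, and the `patience=10` value is an assumption, since the table specifies only the 10–100 epoch range.

```python
# Sketch of the evaluation protocol; X/y are synthetic stand-ins and
# patience=10 is an assumption not stated in Table 2.
import numpy as np
from sklearn.model_selection import StratifiedKFold, train_test_split

SEED = 42
np.random.seed(SEED)
tf.random.set_seed(SEED)

# Synthetic placeholder data shaped like the sensor sequences.
X = np.random.rand(400, TIME_STEPS, NUM_FEATURES).astype('float32')
labels = np.random.randint(0, NUM_CLASSES, size=400)
y = tf.keras.utils.to_categorical(labels, NUM_CLASSES)

# 80/20 hold-out split with a fixed seed, as in Table 2.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=SEED, stratify=labels)

# Early stopping on validation accuracy caps the 10-100 epoch sweep.
early_stop = tf.keras.callbacks.EarlyStopping(
    monitor='val_accuracy', patience=10, restore_best_weights=True)

# Five-fold cross-validation within the training portion.
kfold = StratifiedKFold(n_splits=5, shuffle=True, random_state=SEED)
fold_scores = []
for train_idx, val_idx in kfold.split(X_train, y_train.argmax(axis=1)):
    model = build_bilstm()
    model.fit(X_train[train_idx], y_train[train_idx],
              validation_data=(X_train[val_idx], y_train[val_idx]),
              epochs=100, batch_size=32,   # upper bound of the epoch range
              callbacks=[early_stop], verbose=0)
    fold_scores.append(
        model.evaluate(X_train[val_idx], y_train[val_idx], verbose=0)[1])
print(f"Mean cross-validation accuracy: {np.mean(fold_scores):.3f}")
```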