Table 2 Optimal hyperparameters of optimized LSTM models.
| Parameter | Range | LSTM-GWO | LSTM-HHA | LSTM-DBO | LSTM-APO |
|---|---|---|---|---|---|
| Hidden units | 20–200 | 120 | 110 | 90 | 70 |
| Number of LSTM layers | 1–3 | 2 | 1 | 2 | 2 |
| Learning rate | 0.0001–0.5 | 0.002 | 0.003 | 0.003 | 0.002 |
| Batch size | 16–256 | 64 | 32 | 32 | 64 |
| Epochs | 50–200 | 100 | 100 | 90 | 80 |
| Dropout rate | 0.05–0.6 | 0.20 | 0.15 | 0.20 | 0.25 |
| Activation function | Tanh / Sigmoid | Tanh | Tanh | Tanh | Tanh |
| Optimizer | Adam / Adadelta | Adam | Adam | Adam | Adam |
| Loss function | RMSE | RMSE | RMSE | RMSE | RMSE |
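The tuned values in Table 2 can be captured as plain configuration dictionaries and checked against the stated search ranges; a minimal sketch follows, in which all key names (`hidden_units`, `num_layers`, etc.) are illustrative rather than taken from the original implementation:

```python
# Search space from the "Range" column of Table 2 (inclusive bounds).
SEARCH_SPACE = {
    "hidden_units": (20, 200),
    "num_layers": (1, 3),
    "learning_rate": (0.0001, 0.5),
    "batch_size": (16, 256),
    "epochs": (50, 200),
    "dropout_rate": (0.05, 0.6),
}

# Optimized values per model, transcribed from Table 2.
OPTIMIZED = {
    "LSTM-GWO": {"hidden_units": 120, "num_layers": 2, "learning_rate": 0.002,
                 "batch_size": 64, "epochs": 100, "dropout_rate": 0.20},
    "LSTM-HHA": {"hidden_units": 110, "num_layers": 1, "learning_rate": 0.003,
                 "batch_size": 32, "epochs": 100, "dropout_rate": 0.15},
    "LSTM-DBO": {"hidden_units": 90, "num_layers": 2, "learning_rate": 0.003,
                 "batch_size": 32, "epochs": 90, "dropout_rate": 0.20},
    "LSTM-APO": {"hidden_units": 70, "num_layers": 2, "learning_rate": 0.002,
                 "batch_size": 64, "epochs": 80, "dropout_rate": 0.25},
}

def in_range(config: dict, space: dict) -> bool:
    """Return True if every numeric value lies within its search interval."""
    return all(space[k][0] <= v <= space[k][1]
               for k, v in config.items() if k in space)

# Sanity check: every optimized configuration falls inside the search space.
for name, cfg in OPTIMIZED.items():
    assert in_range(cfg, SEARCH_SPACE), name
```

All four models share the categorical settings (Tanh activation, Adam optimizer, RMSE loss), so only the six numeric hyperparameters vary between them.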