Table 3 Selected optimal hyperparameters for different ML models.
| Regression model | Hyperparameter | Search space | Optimized value |
|---|---|---|---|
| 1. Bagging | n_estimators | 10–300, step = 10 | 100 |
| | max_samples | 0.1–1.0, step = 0.1 | 0.5 |
| | max_features | 0.1–1.0, step = 0.1 | 1 |
| 2. RF | n_estimators | 10–300, step = 10 | 200 |
| | max_depth | 5–50, step = 5 | 30 |
| | max_features | ‘sqrt’, ‘log2’ | ‘log2’ |
| | min_samples_leaf | 1–10, step = 1 | 1 |
| | min_samples_split | 1–10, step = 1 | 5 |
| 3. XGBoost | n_estimators | 10–300, step = 10 | 100 |
| | max_depth | 1–10, step = 1 | 7 |
| | learning_rate | 0.01–0.3, step = 0.05 | 0.1 |
| | gamma | 0–10, step = 1 | 5 |
| | subsample | 0.1–1.0, step = 0.1 | 0.8 |
| | colsample_bytree | 0.1–1.0, step = 0.1 | 1 |
| 4. ANN (hidden layer) | Hidden layer number | 1–5 | 1 |
| | Neurons per layer | 32–512, step = 32 | 352 |
| | Activation function | ‘ReLU’, ‘tanh’, ‘sigmoid’, ‘linear’ | ‘ReLU’ |
| | Kernel initializer | ‘uniform’, ‘glorot_uniform’, ‘he_normal’ | ‘glorot_uniform’ |
| 4. ANN (output layer) | Kernel initializer | ‘uniform’, ‘glorot_uniform’, ‘he_normal’ | ‘he_normal’ |
| | Activation function | ‘linear’ | ‘linear’ |
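
For concreteness, the search spaces in Table 3 map directly onto a grid-search dictionary. The sketch below shows this for the RF model, assuming scikit-learn's `GridSearchCV` with 5-fold cross-validation; the estimator settings, scoring metric, and the placeholder data `X_train`/`y_train` are illustrative assumptions, not the study's exact tuning setup. Analogous grids can be written for the Bagging and XGBoost rows.

```python
# Minimal sketch: encoding the Table 3 RF search space for GridSearchCV.
# CV folds, scoring, and random_state are assumptions, not from the paper.
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import GridSearchCV

rf_grid = {
    "n_estimators": list(range(10, 310, 10)),    # 10-300, step 10
    "max_depth": list(range(5, 55, 5)),          # 5-50, step 5
    "max_features": ["sqrt", "log2"],
    "min_samples_leaf": list(range(1, 11)),      # 1-10, step 1
    "min_samples_split": list(range(2, 11)),     # Table 3 lists 1-10, but
                                                 # scikit-learn requires >= 2
}

search = GridSearchCV(
    RandomForestRegressor(random_state=42),
    param_grid=rf_grid,
    cv=5,                               # assumed 5-fold cross-validation
    scoring="neg_mean_squared_error",
    n_jobs=-1,
)
# Note: this full grid is large (~54,000 combinations); a randomized
# search over the same spaces is a common cheaper alternative.
# search.fit(X_train, y_train)     # X_train, y_train: the study's dataset
# print(search.best_params_)       # expected: n_estimators=200, max_depth=30, ...
```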
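The optimized ANN row of Table 3 (one hidden layer of 352 ReLU neurons with `glorot_uniform` initialization, and a linear output layer with `he_normal` initialization) likewise translates into a small Keras model. The input dimension, optimizer, loss, and training settings below are assumptions for illustration only.

```python
# Minimal sketch of the ANN architecture implied by Table 3's optimized
# values. Optimizer, loss, and training settings are assumed, not reported.
import tensorflow as tf
from tensorflow.keras import layers, models

def build_ann(input_dim: int) -> tf.keras.Model:
    model = models.Sequential([
        layers.Input(shape=(input_dim,)),
        # Hidden layer: 352 neurons, ReLU, glorot_uniform (Table 3).
        layers.Dense(352, activation="relu",
                     kernel_initializer="glorot_uniform"),
        # Output layer: single regression target, linear, he_normal (Table 3).
        layers.Dense(1, activation="linear",
                     kernel_initializer="he_normal"),
    ])
    model.compile(optimizer="adam", loss="mse")  # assumed training config
    return model

# model = build_ann(input_dim=X_train.shape[1])
# model.fit(X_train, y_train, epochs=100, batch_size=32)  # assumed settings
```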