Table 6 Optimal hyperparameters of the machine learning and deep learning models.
| ML/DL Model | Optimal hyperparameters |
|---|---|
| XGBoost | n_estimators = 1172, learning_rate = 0.19767316614730276 |
| LightGBM | n_estimators = 1482, learning_rate = 0.6992380456312691 |
| GBM | n_estimators = 1087, learning_rate = 0.28980775639493667 |
| RF | n_estimators = 108, max_depth = 15, min_samples_split = 2, min_samples_leaf = 1, bootstrap = False |
| CatBoost | iterations = 1000, learning_rate = 0.1, depth = 6, verbose = 0 |
| AdaBoost | n_estimators = 241, learning_rate = 0.09257405215270152, loss = 'square', max_depth = 10, min_samples_split = 4, min_samples_leaf = 1 |
| KNN | n_neighbors = 5, weights = 'distance', p = 1, algorithm = 'auto' |
| BR | n_estimators = 97, max_samples = 0.9996785133314219, max_features = 0.8255801182352335, bootstrap = False, bootstrap_features = False, max_depth = 12, min_samples_split = 2, min_samples_leaf = 1 |
| DT | max_depth = 8, min_samples_split = 2, min_samples_leaf = 1, max_features = None, splitter = 'best', criterion = 'absolute_error' |
| SVM | kernel = 'rbf', C = 75.62164653446044, epsilon = 0.025523589047330056, degree = 3 |
| ANN | hidden_layer_sizes = 10, activation = 'logistic', max_iter = 100000, solver = 'lbfgs', random_state = 42 |
| DNN | n_layers = 3, units_0 = 113, units_1 = 169, units_2 = 254, learning_rate = 0.002089738484827969, dropout_rate = 0.003010226821045029 |
| CNN | n_conv_layers = 2, filters_0 = 106, filters_1 = 121, kernel_size = 4, learning_rate = 0.003535056792769612, dropout_rate = 0.041472436005474664 |
| RNN | k-fold MSE scores = [2.265767, 5.107743, 4.706106, 2.874644, 1.123151], mean k-fold MSE = 3.215483, best validation MSE = 1.123151 |
| FFNN | n_layers = 2, units_0 = 43, units_1 = 57, learning_rate = 0.008968117531970703, dropout_rate = 4.784803011470551e-05 |
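For reproducibility, the tuned settings above can be kept as keyword-argument dictionaries and unpacked directly into the estimator constructors. A minimal Python sketch covering a representative subset of the table (assuming scikit-learn-style constructors; the dictionary names are illustrative, not from the original study):

```python
# Tuned hyperparameters from Table 6, stored as constructor keyword
# arguments so each model can be rebuilt with a single unpacking call.
# Only a representative subset of the table is included here.
OPTIMAL_PARAMS = {
    "XGBoost": {"n_estimators": 1172, "learning_rate": 0.19767316614730276},
    "LightGBM": {"n_estimators": 1482, "learning_rate": 0.6992380456312691},
    "RF": {
        "n_estimators": 108,
        "max_depth": 15,
        "min_samples_split": 2,
        "min_samples_leaf": 1,
        "bootstrap": False,
    },
    "KNN": {"n_neighbors": 5, "weights": "distance", "p": 1, "algorithm": "auto"},
    "SVM": {
        "kernel": "rbf",
        "C": 75.62164653446044,
        "epsilon": 0.025523589047330056,
        "degree": 3,
    },
}

# With the corresponding libraries installed, each entry unpacks directly
# into the matching constructor, e.g.:
#   from sklearn.ensemble import RandomForestRegressor
#   rf = RandomForestRegressor(**OPTIMAL_PARAMS["RF"])

for name, params in OPTIMAL_PARAMS.items():
    print(f"{name}: {len(params)} tuned hyperparameters")
```

Keeping the settings in one mapping makes it straightforward to loop over all models during retraining or cross-validation without hard-coding each constructor call.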