Table 2 Optimized hyperparameters for machine learning and deep learning models.
Model | Hyperparameter search space | Optimized hyperparameters |
---|---|---|
Random forest | n_estimators: [100, 200, 300]; max_depth: [10, 20, 30]; min_samples_split: [2, 5] | n_estimators = 200; max_depth = 30; min_samples_split = 2 |
Gradient boosting | n_estimators: [100, 200]; learning_rate: [0.01, 0.1]; max_depth: [3, 5] | n_estimators = 200; learning_rate = 0.1; max_depth = 5 |
Decision tree | max_depth: [5, 10, 20]; criterion: ['squared_error', 'friedman_mse'] | max_depth = 20; criterion = 'squared_error' |
K-nearest neighbors | n_neighbors: [3, 5, 7]; weights: ['uniform', 'distance'] | n_neighbors = 7; weights = 'distance' |
AdaBoost | n_estimators: [50, 100, 200]; learning_rate: [0.01, 0.1, 1] | n_estimators = 200; learning_rate = 0.1; loss = 'exponential' |
XGBoost | n_estimators: [100, 200, 300]; learning_rate: [0.01, 0.05, 0.1]; max_depth: [3, 6, 9] | n_estimators = 300; learning_rate = 0.1; max_depth = 6 |
Extra trees classifier | n_estimators: [100, 200, 300, 500]; max_depth: [10, 20, 30, None]; min_samples_split: [2, 5, 10] | n_estimators = 300; max_depth = 20; min_samples_split = 2 |
MLP | units: [64, 128]; dropout: [0.2, 0.3]; learning_rate: [0.001, 0.01]; batch_size: [32, 64]; epochs: [50, 100] | units = 128; num_layers = 2; dropout = 0.3; optimizer = 'adam'; learning_rate = 0.01; batch_size = 64; epochs = 100 |
LSTM | units: [64, 128]; dropout: [0.2, 0.3]; learning_rate: [0.001, 0.01]; sequence_length: [30, 50]; batch_size: [32, 64]; epochs: [50, 100] | units = 128; num_layers = 2; dropout = 0.3; optimizer = 'adam'; learning_rate = 0.001; batch_size = 64; epochs = 100; sequence_length = 50 |
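As a minimal sketch of how a tabulated search space maps to a tuning run, the snippet below reproduces the random-forest row with scikit-learn's `GridSearchCV`. The search method (exhaustive grid search), fold count, scoring metric, regression task type (suggested by the `squared_error`/`friedman_mse` criteria in the decision-tree row), and the synthetic training data are all assumptions, not stated in the table.

```python
# Hypothetical sketch: tuning the random-forest row of Table 2.
# Only the param_grid values come from the table; everything else
# (grid search, cv=5, scoring, synthetic data) is an assumption.
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import GridSearchCV

X_train, y_train = make_regression(n_samples=500, n_features=10, random_state=0)

param_grid = {
    "n_estimators": [100, 200, 300],
    "max_depth": [10, 20, 30],
    "min_samples_split": [2, 5],
}

search = GridSearchCV(
    estimator=RandomForestRegressor(random_state=42),
    param_grid=param_grid,
    cv=5,                              # assumed fold count
    scoring="neg_mean_squared_error",  # assumed metric
    n_jobs=-1,
)
search.fit(X_train, y_train)
print(search.best_params_)  # table reports n_estimators=200, max_depth=30, min_samples_split=2
```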
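The deep-learning rows translate similarly. Below is a Keras sketch of the LSTM configuration; the layer arrangement, loss function, output size, and input feature count are assumptions, while `units = 128`, `num_layers = 2`, `dropout = 0.3`, Adam with `learning_rate = 0.001`, `sequence_length = 50`, `batch_size = 64`, and `epochs = 100` are taken from the table.

```python
# Hypothetical sketch of the LSTM row of Table 2 in Keras.
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

sequence_length, n_features = 50, 8  # n_features is an assumption

model = keras.Sequential([
    layers.Input(shape=(sequence_length, n_features)),
    layers.LSTM(128, return_sequences=True),  # layer 1 of num_layers = 2
    layers.Dropout(0.3),
    layers.LSTM(128),                         # layer 2
    layers.Dropout(0.3),
    layers.Dense(1),                          # single regression output (assumed)
])
model.compile(optimizer=keras.optimizers.Adam(learning_rate=0.001), loss="mse")

# Synthetic data just so the sketch runs end to end.
X = np.random.rand(256, sequence_length, n_features).astype("float32")
y = np.random.rand(256, 1).astype("float32")
model.fit(X, y, batch_size=64, epochs=2)  # table value: epochs = 100
```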