Table 4 Optimal parameters obtained through hyperparameter tuning.
Num | ML & DL models | Hyperparameters |
---|---|---|
1 | CNN | Conv1D (filters = 64, kernel_size = 5), MaxPooling1D (pool_size = 3), Dense (2) |
2 | LSTM | LSTM Units = 64, Activation Function = 'sigmoid', Optimizer = 'adam', Loss Function = 'binary_crossentropy', Epochs = 20, Batch Size = 128 |
3 | DNN | Layers and Neurons = 84, 42, 1; Activation Functions = 'relu', 'relu', 'sigmoid'; Optimizer = 'adam', Loss Function = 'binary_crossentropy', Epochs = 20, Batch Size = 128 |
4 | FNN | Hidden Layer Size = (128,), Max Iterations = 2000, Random State = 42, Activation Function = 'relu', Solver = 'adam', Learning Rate = 'constant', Batch Size = 128 |
5 | RF | n_estimators = 400, criterion = 'entropy', max_depth = None, min_samples_split = 2, min_samples_leaf = 1, max_features = 'auto', bootstrap = True, oob_score = False, random_state = 42 |
6 | KNN | test_size = 0.2, random_state = 42, CV = 10 |
7 | XGB | learning_rate = 0.05, n_estimators = 500, max_depth = 10, min_child_weight = 5, subsample = 0.9, colsample_bytree = 0.8, gamma = 0, reg_lambda = 0 |
8 | SVM | probability = True, C = 5.0, kernel = 'rbf', degree = 0, gamma = 0.1 |
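
For concreteness, the deep models in rows 1–3 of Table 4 can be assembled as below. This is a minimal sketch rather than the authors' implementation: the framework (TensorFlow/Keras), the input shapes, and the CNN's activation, optimizer, and loss are assumptions, since Table 4 does not list them, and 'sigmoid' in the LSTM row is read as the output-layer activation.

```python
# Sketch of the deep models in Table 4, rows 1-3 (TensorFlow/Keras assumed).
from tensorflow.keras import layers, models

INPUT_SHAPE = (100, 1)  # placeholder; Table 4 does not give the input shape

# Row 1 - CNN: Conv1D(filters=64, kernel_size=5) -> MaxPooling1D(pool_size=3) -> Dense(2).
# The table omits the CNN's activation, optimizer, and loss, so these are assumed.
cnn = models.Sequential([
    layers.Input(shape=INPUT_SHAPE),
    layers.Conv1D(filters=64, kernel_size=5, activation='relu'),
    layers.MaxPooling1D(pool_size=3),
    layers.Flatten(),
    layers.Dense(2, activation='softmax'),
])
cnn.compile(optimizer='adam', loss='sparse_categorical_crossentropy')

# Row 2 - LSTM: 64 units; 'sigmoid' interpreted as the output activation,
# consistent with the binary cross-entropy loss.
lstm = models.Sequential([
    layers.Input(shape=INPUT_SHAPE),
    layers.LSTM(64),
    layers.Dense(1, activation='sigmoid'),
])
lstm.compile(optimizer='adam', loss='binary_crossentropy')
# lstm.fit(X_train, y_train, epochs=20, batch_size=128)

# Row 3 - DNN: Dense layers of 84, 42, and 1 neurons with relu/relu/sigmoid.
dnn = models.Sequential([
    layers.Input(shape=(84,)),  # placeholder feature count; not given in the table
    layers.Dense(84, activation='relu'),
    layers.Dense(42, activation='relu'),
    layers.Dense(1, activation='sigmoid'),
])
dnn.compile(optimizer='adam', loss='binary_crossentropy')
# dnn.fit(X_train, y_train, epochs=20, batch_size=128)
```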
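
The remaining models (rows 4–8) map directly onto scikit-learn and XGBoost estimators. Again a sketch under stated assumptions: the dataset here is a synthetic placeholder, and for KNN the table specifies only the evaluation protocol, so default neighbor settings are assumed.

```python
# Sketch of the scikit-learn / XGBoost models in Table 4, rows 4-8.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score, train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC
from xgboost import XGBClassifier

# Placeholder data; the paper's dataset is not reproduced here.
X, y = make_classification(n_samples=1000, n_features=20, random_state=42)

# Row 4 - FNN, realized as an MLPClassifier.
fnn = MLPClassifier(hidden_layer_sizes=(128,), max_iter=2000, random_state=42,
                    activation='relu', solver='adam', learning_rate='constant',
                    batch_size=128)

# Row 5 - Random Forest. The table's max_features='auto' equals 'sqrt' for
# classifiers and was removed in scikit-learn 1.3, so 'sqrt' is used here.
rf = RandomForestClassifier(n_estimators=400, criterion='entropy', max_depth=None,
                            min_samples_split=2, min_samples_leaf=1,
                            max_features='sqrt', bootstrap=True,
                            oob_score=False, random_state=42)

# Row 6 - KNN. The table lists only the evaluation protocol (80/20 split,
# 10-fold CV), so default neighbor settings (n_neighbors=5) are assumed.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2,
                                                    random_state=42)
knn_scores = cross_val_score(KNeighborsClassifier(), X_train, y_train, cv=10)

# Row 7 - XGBoost.
xgb = XGBClassifier(learning_rate=0.05, n_estimators=500, max_depth=10,
                    min_child_weight=5, subsample=0.9, colsample_bytree=0.8,
                    gamma=0, reg_lambda=0)

# Row 8 - SVM; degree is ignored by the 'rbf' kernel.
svm = SVC(probability=True, C=5.0, kernel='rbf', degree=0, gamma=0.1)
```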