Table 6 Results of hyper-parameter optimization for machine learning models.
Model | Best Parameters | Accuracy (%) | AUC (%) | Precision (%) | Recall (%) | F1 (%) |
---|---|---|---|---|---|---|
Extra Trees | {'et__bootstrap': False, 'et__max_depth': 15, 'et__min_samples_leaf': 4, 'et__min_samples_split': 2, 'et__n_estimators': 300} | 88.04 | 92.30 | 88 | 88 | 88 |
Random Forest | {'rf__max_depth': 10, 'rf__min_samples_split': 10, 'rf__n_estimators': 200} | 87.50 | 92.57 | 88 | 88 | 88 |
AdaBoost | {'ada__algorithm': 'SAMME', 'ada__learning_rate': 1, 'ada__n_estimators': 100} | 85.87 | 92.40 | 87 | 86 | 86 |
Gradient Boosting | {'gb__learning_rate': 0.1, 'gb__max_depth': 3, 'gb__n_estimators': 100} | 89.22 | 94.31 | 89 | 89 | 89 |
CatBoost before fine-tuning | (iterations=500, learning_rate=0.03, depth=8, l2_leaf_reg=5, bagging_temperature=1, border_count=128, random_strength=1, od_type='Iter', od_wait=50, verbose=0, random_state=42) | 88 | 88 | 88 | 88 | 88 |
CatBoost after fine-tuning | (iterations=200, learning_rate=0.1, depth=6, verbose=0) | 99.02 | 95 | 99.04 | 99.02 | 99.02 |
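
The prefixed keys in Table 6 (e.g. 'et__max_depth') follow scikit-learn's pipeline convention, where the prefix routes each parameter to the named pipeline step during grid search. The snippet below is a minimal sketch under assumptions, not the authors' original code: the candidate values other than the reported best ones, the `cv=5` and `scoring="accuracy"` settings, the `random_state`, and the training data `X_train`/`y_train` are illustrative placeholders.

```python
# Minimal sketch: how the pipeline-prefixed grids in Table 6 map to a
# GridSearchCV run, and how the fine-tuned CatBoost model is built.
# X_train / y_train are assumed to be the preprocessed training split.
from sklearn.pipeline import Pipeline
from sklearn.ensemble import ExtraTreesClassifier
from sklearn.model_selection import GridSearchCV
from catboost import CatBoostClassifier

# The "et__" prefix routes each grid entry to the "et" pipeline step,
# which is why the parameter keys in Table 6 carry model-name prefixes.
et_pipeline = Pipeline([("et", ExtraTreesClassifier(random_state=42))])
et_grid = {
    "et__bootstrap": [False, True],          # values other than the best
    "et__max_depth": [10, 15, None],         # ones are assumed candidates
    "et__min_samples_leaf": [2, 4],
    "et__min_samples_split": [2, 5],
    "et__n_estimators": [100, 200, 300],
}
et_search = GridSearchCV(et_pipeline, et_grid, scoring="accuracy",
                         cv=5, n_jobs=-1)
# et_search.fit(X_train, y_train)
# et_search.best_params_  -> should correspond to the Extra Trees row

# Fine-tuned CatBoost configuration reported in the last row of Table 6.
catboost_tuned = CatBoostClassifier(iterations=200, learning_rate=0.1,
                                    depth=6, verbose=0)
# catboost_tuned.fit(X_train, y_train)
```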