Table 8 Hyperparameters tuned for selected classifiers.

From: Mitigating class imbalance in churn prediction with ensemble methods and SMOTE

| Classifier | Parameters tuned |
| --- | --- |
| MLP | {'hidden_layer_sizes': [(s,), (s,)*2, (s,)*4, (s,)*6], 'solver': ['lbfgs', 'adam'], 'alpha': [0, 0.01, 0.1, 1, 10]} |
| GB | {'max_depth': [2, 3, 4, 6, 10, 15], 'n_estimators': [50, 100, 300, 500]} |
| XGB | (max_depth=6, learning_rate=0.1, n_estimators=100, reg_lambda=0.5, reg_alpha=0, verbosity=1, n_jobs=-1).fit(train[features], train[target]) |
| LGBM | {'num_leaves': 21, 'num_trees': 100, 'objective': 'binary', 'lambda_l1': 1, 'lambda_l2': 1, 'learning_rate': 0.1, 'seed': 1} |
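The MLP and GB rows are scikit-learn-style parameter grids, which are typically passed to a grid search with cross-validation. A minimal sketch of that workflow is below, using the GB grid from the table; the synthetic imbalanced dataset and the trimmed grid (kept small so the sketch runs quickly) are illustrative assumptions, not the paper's data or full search space.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import GridSearchCV

# Synthetic, imbalanced two-class data standing in for the churn set
# (illustrative only; the paper's dataset is not reproduced here).
X, y = make_classification(n_samples=400, n_features=10,
                           weights=[0.8, 0.2], random_state=1)

# Grid for GB in the style of Table 8, trimmed from
# {'max_depth': [2, 3, 4, 6, 10, 15], 'n_estimators': [50, 100, 300, 500]}
# so this sketch finishes quickly.
param_grid = {"max_depth": [2, 3], "n_estimators": [50, 100]}

search = GridSearchCV(GradientBoostingClassifier(random_state=1),
                      param_grid, cv=3, scoring="f1")
search.fit(X, y)
print(search.best_params_)
```

The same pattern applies to the MLP row by swapping in `MLPClassifier` and its grid; the XGB and LGBM rows instead list fixed settings passed directly to the estimator's constructor.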