Table 10 Hyperparameters tuned for selected classifiers.
From: Mitigating class imbalance in churn prediction with ensemble methods and SMOTE
| Classifier | Parameters tuned |
|---|---|
| MLP | `{'hidden_layer_sizes': [(s,), (s,)*2, (s,)*4, (s,)*6], 'solver': ['lbfgs', 'adam'], 'alpha': [0, 0.01, 0.1, 1, 10]}` |
| GB | `{'max_depth': [2, 3, 4, 6, 10, 15], 'n_estimators': [50, 100, 300, 500]}` |
| Ada | `{'base_estimator__max_depth': [i for i in range(2, 11, 2)], 'base_estimator__min_samples_leaf': [5, 10], 'n_estimators': [10, 50, 250, 1000], 'learning_rate': [0.01, 0.1]}` |
| XGB | `max_depth=6, learning_rate=0.1, n_estimators=100, reg_lambda=0.5, reg_alpha=0, verbosity=1, n_jobs=-1, tree_method='gpu_exact'` |
| CAT | `{'depth': [4, 5, 6, 7, 8, 9, 10], 'learning_rate': [0.01, 0.02, 0.03, 0.04], 'iterations': [10, 20, 30, 40, 50, 60, 70, 80, 90, 100]}` |
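The MLP, GB, Ada, and CAT rows are written as parameter dictionaries, so they can be searched exhaustively. As a minimal sketch, the MLP grid could be passed to scikit-learn's GridSearchCV as below; the hidden-layer width `s`, the scoring metric, the fold count, and `max_iter` are assumptions not given in the table, and the paper's actual tuning pipeline may differ.

```python
# Minimal sketch: grid-searching the MLP row with scikit-learn's GridSearchCV.
# `s` (layer width), scoring='roc_auc', cv=5 and max_iter=500 are assumptions,
# not values taken from the table.
from sklearn.model_selection import GridSearchCV
from sklearn.neural_network import MLPClassifier

s = 100  # assumed hidden-layer width; the table leaves `s` unspecified

parameters = {
    'hidden_layer_sizes': [(s,), (s,) * 2, (s,) * 4, (s,) * 6],
    'solver': ['lbfgs', 'adam'],
    'alpha': [0, 0.01, 0.1, 1, 10],
}

search = GridSearchCV(
    MLPClassifier(max_iter=500),  # higher max_iter to help convergence (assumption)
    param_grid=parameters,
    scoring='roc_auc',            # assumed metric
    cv=5,                         # assumed 5-fold cross-validation
    n_jobs=-1,
)
# search.fit(X_train, y_train)   # fit on the (resampled) training split
# print(search.best_params_)
```

The `base_estimator__` prefix in the Ada row follows scikit-learn's nested-parameter convention, routing those values to the decision-tree base learner inside AdaBoostClassifier (the argument is renamed `estimator` in recent scikit-learn releases).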