Table 7 Hyperparameters for the hybrid stacking and voting models.
| Constituent model | Hyperparameters |
|---|---|
| LR | C = 1.0, dual = False, fit_intercept = True, intercept_scaling = 1, l1_ratio = None, max_iter = 1000, multi_class = auto, n_jobs = None, penalty = l2, random_state = 42, solver = lbfgs, tol = 0.0001, verbose = 0, warm_start = False |
| RF | bootstrap = True, ccp_alpha = 0.0, class_weight = None, criterion = gini, max_features = sqrt, min_impurity_decrease = 0.0, min_samples_leaf = 1, min_samples_split = 2, min_weight_fraction_leaf = 0.0, n_estimators = 100, n_jobs = −1, oob_score = False, random_state = 42, verbose = 0, warm_start = False |
| ET | bootstrap = False, ccp_alpha = 0.0, criterion = gini, max_features = sqrt, min_impurity_decrease = 0.0, min_samples_leaf = 1, min_samples_split = 2, min_weight_fraction_leaf = 0.0, n_estimators = 100, n_jobs = −1, oob_score = False, random_state = 42, verbose = 0, warm_start = False |
| GB | ccp_alpha = 0.0, criterion = friedman_mse, learning_rate = 0.1, loss = log_loss, max_depth = 3, min_impurity_decrease = 0.0, min_samples_leaf = 1, min_samples_split = 2, min_weight_fraction_leaf = 0.0, n_estimators = 100, random_state = 42, subsample = 1.0, tol = 0.0001, validation_fraction = 0.1, verbose = 0, warm_start = False |
| XGB | objective = multi:softprob, booster = gbtree, device = cpu, enable_categorical = False, n_jobs = −1, random_state = 42, tree_method = auto, verbosity = 0 |
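The table's constituent models can be assembled into stacking and voting ensembles as in the following sketch. It uses scikit-learn's `StackingClassifier` and `VotingClassifier` with the hyperparameters listed above (many of which are library defaults, passed explicitly for clarity); the dataset, meta-learner choice, and soft-voting setting are illustrative assumptions, not details taken from the paper, and the XGB row is shown only as a comment so the sketch depends on scikit-learn alone.

```python
# Sketch: building stacking and voting ensembles from the table's
# constituent models (LR, RF, ET, GB), using the listed hyperparameters.
from sklearn.datasets import make_classification
from sklearn.ensemble import (
    RandomForestClassifier, ExtraTreesClassifier,
    GradientBoostingClassifier, StackingClassifier, VotingClassifier,
)
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

estimators = [
    ("lr", LogisticRegression(C=1.0, penalty="l2", solver="lbfgs",
                              max_iter=1000, tol=0.0001, random_state=42)),
    ("rf", RandomForestClassifier(n_estimators=100, criterion="gini",
                                  max_features="sqrt", bootstrap=True,
                                  n_jobs=-1, random_state=42)),
    ("et", ExtraTreesClassifier(n_estimators=100, criterion="gini",
                                max_features="sqrt", bootstrap=False,
                                n_jobs=-1, random_state=42)),
    ("gb", GradientBoostingClassifier(n_estimators=100, learning_rate=0.1,
                                      max_depth=3, subsample=1.0,
                                      validation_fraction=0.1,
                                      random_state=42)),
    # The XGB row would be added analogously via xgboost.XGBClassifier(
    #     objective="multi:softprob", booster="gbtree", tree_method="auto",
    #     n_jobs=-1, random_state=42);
    # omitted here so the sketch needs only scikit-learn.
]

# Illustrative multi-class data (not from the paper).
X, y = make_classification(n_samples=300, n_classes=3, n_informative=6,
                           random_state=42)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=42)

# Stacking: base predictions feed a logistic-regression meta-learner.
stack = StackingClassifier(estimators=estimators,
                           final_estimator=LogisticRegression(max_iter=1000))
# Voting: soft voting averages predicted class probabilities.
vote = VotingClassifier(estimators=estimators, voting="soft")

stack.fit(X_tr, y_tr)
vote.fit(X_tr, y_tr)
print(f"stacking accuracy: {stack.score(X_te, y_te):.3f}")
print(f"voting accuracy:   {vote.score(X_te, y_te):.3f}")
```

Fixing `random_state = 42` throughout, as in the table, makes both ensembles reproducible across runs.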