Table 3 Summary of the best hyperparameter configurations for each machine learning algorithm evaluated in the study: logistic regression, support vector machine, gradient boosting, decision tree, XGBoost, and random forest. The hyperparameters were tuned via grid search to enhance classification accuracy in predicting student stress levels.
From: Explainable artificial intelligence for predictive modeling of student stress in higher education
| Algorithm | Grid search space | Best hyperparameters |
|---|---|---|
| Logistic regression | penalty = ['l1', 'l2', 'elasticnet', 'none']; C = [0.001, 0.01, 0.1, 1, 10, 100]; l1_ratio = [0, 0.25, 0.5, 0.75, 1] | penalty = 'l2'; C = 10 |
| Support vector machine | C = [0.01, 0.1, 1, 10, 100]; kernel = ['linear', 'rbf', 'poly', 'sigmoid']; gamma = ['scale', 'auto']; degree = [2, 3, 4] | C = 10; kernel = 'linear'; gamma = 'scale' |
| Gradient boosting | n_estimators = [50, 100, 150]; learning_rate = [0.01, 0.05, 0.1, 0.2]; max_depth = [3, 5, 7]; min_samples_split = [2, 5, 10]; min_samples_leaf = [1, 2, 4]; subsample = [0.6, 0.8, 1.0] | n_estimators = 50; learning_rate = 0.01; max_depth = 7; min_samples_split = 2; min_samples_leaf = 1; subsample = 1.0 |
| Decision tree | max_depth = [3, 5, 10, 15, 20, 25, None]; min_samples_split = [2, 5, 10, 15, 20]; min_samples_leaf = [1, 2, 4, 6, 8]; max_features = [None, 'sqrt', 'log2']; max_leaf_nodes = [None, 10, 20, 30, 40]; criterion = ['gini', 'entropy'] | max_depth = 10; min_samples_split = 20; min_samples_leaf = 6; max_features = 'sqrt'; max_leaf_nodes = None; criterion = 'gini' |
| XGBoost | n_estimators = [150, 200, 250]; max_depth = [6, 8, 10]; learning_rate = [0.01, 0.05, 0.1]; subsample = [0.8, 1.0]; colsample_bytree = [0.8, 1.0] | n_estimators = 150; max_depth = 6; learning_rate = 0.05; subsample = 0.8; colsample_bytree = 1.0 |
| Random forest | n_estimators = [150, 200, 250]; max_depth = [10, 12, 15, 18]; min_samples_split = [2, 5, 10]; min_samples_leaf = [1, 2, 3]; max_features = ['sqrt', 'log2'] | n_estimators = 150; max_depth = 10; min_samples_split = 10; min_samples_leaf = 1; max_features = None |
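The grids in Table 3 can be searched exhaustively with cross-validation. The sketch below illustrates this for the logistic regression row only, using scikit-learn's `GridSearchCV`; the synthetic dataset, the reduced grid (the 'elasticnet'/'none' penalties and `l1_ratio` values are omitted to keep every combination valid for a single solver), and the choice of the 'saga' solver are assumptions for illustration, not details taken from the study.

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV

# Synthetic stand-in for the student-stress features; the study's
# actual dataset is not reproduced here.
X, y = make_classification(
    n_samples=300, n_features=10, n_informative=5, random_state=0
)

# Reduced subset of the logistic-regression grid from Table 3.
# 'saga' supports both l1 and l2 penalties, so all combinations are valid.
param_grid = {
    "penalty": ["l1", "l2"],
    "C": [0.001, 0.01, 0.1, 1, 10, 100],
}

# 5-fold cross-validated exhaustive search, scored on accuracy
# (the metric the table's configurations were tuned for).
search = GridSearchCV(
    LogisticRegression(solver="saga", max_iter=5000),
    param_grid,
    cv=5,
    scoring="accuracy",
)
search.fit(X, y)
print(search.best_params_)   # best penalty/C combination on this data
print(search.best_score_)    # mean cross-validated accuracy of that combination
```

The same pattern extends to the other rows by swapping in the corresponding estimator and grid; for the full grids, a larger `max_iter` or per-penalty solver handling may be needed for the penalty options omitted here.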