Table 8 Hyperparameters of the XGBoost regression model.

From: Spatio-temporal patterns and driving mechanisms of ecosystem services in mountainous regions: A multi-scale analysis of the Yanshan-Taihang mountain area

| Hyperparameter | Description | Value |
| --- | --- | --- |
| n_estimators | Number of trees to be trained in the model, i.e., the number of boosting iterations. | 250 |
| max_depth | Maximum depth of each tree. | 5 |
| learning_rate | Learning rate controlling the contribution of each tree to the final result. | 0.03 |
| random_state | Random seed to ensure reproducibility of results. | 29 |
| reg_lambda | L2 regularization coefficient. | 6 |
| gamma | Minimum loss reduction required to make a further partition on a leaf node. | 0.5 |
| colsample_bytree | Fraction of features to be randomly sampled for each tree. | 0.7 |
| subsample | Fraction of training samples to be randomly drawn (without replacement) for each tree. | 0.79 |
| min_child_weight | Minimum sum of instance weights (Hessian) required in a child node. | 20 |
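As a minimal sketch, the settings above can be collected into a single parameter dictionary and unpacked into XGBoost's scikit-learn estimator, e.g. `xgboost.XGBRegressor(**XGB_PARAMS)`. The variable name `XGB_PARAMS` is illustrative; the table specifies only these nine hyperparameters, so any others fall back to library defaults.

```python
# Hyperparameters from Table 8, gathered as keyword arguments for
# xgboost.XGBRegressor. Only the values listed in the table are set;
# everything else is left at the library's defaults.
XGB_PARAMS = {
    "n_estimators": 250,      # number of boosting iterations (trees)
    "max_depth": 5,           # maximum depth of each tree
    "learning_rate": 0.03,    # shrinkage on each tree's contribution
    "random_state": 29,       # seed for reproducibility
    "reg_lambda": 6,          # L2 regularization coefficient
    "gamma": 0.5,             # minimum loss reduction to split a leaf
    "colsample_bytree": 0.7,  # fraction of features sampled per tree
    "subsample": 0.79,        # fraction of rows sampled per tree
    "min_child_weight": 20,   # minimum sum of instance Hessians in a child
}

# Usage sketch (requires the xgboost package):
# from xgboost import XGBRegressor
# model = XGBRegressor(**XGB_PARAMS)
# model.fit(X_train, y_train)
```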