Table 3 Bayesian optimization search space and optimal values for hyperparameters in extreme gradient boosting (XGBoost) and random forest (RF) efficacy prediction models.

From: The construction of HMME-PDT efficacy prediction model for port-wine stain based on machine learning algorithms

| Classifier model | Hyperparameter | Optimal value | Search space |
|---|---|---|---|
| Extreme gradient boosting (XGBoost) | learning_rate | 0.351 | [0.01, 0.5] |
| | n_estimators | 401 | [10, 500] |
| | gamma | 0.315 | [0.01, 1] |
| | reg_alpha | 0.18 | [0.01, 1] |
| | reg_lambda | 0.184 | [0.01, 1] |
| Random forest (RF) | n_estimators | 677 | [1, 1000] |
| | min_samples_split | 2 | [2, 10] |
| | min_samples_leaf | 5 | [1, 5] |
| | min_weight_fraction_leaf | 0.0511 | [0, 0.5] |
| | min_impurity_decrease | 0.1428 | [0, 1] |
| | max_samples | 0.5451 | [0.1, 1] |
| | max_depth | 97 | [1, 100] |
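The bounds and optima above can be sketched in Python. This is a minimal illustration assuming the standard scikit-learn parameter names; the table does not show the study's actual training code or which Bayesian optimization package was used, so only the search-space bounds and the tuned random forest are reproduced here.

```python
# Sketch of the table's search spaces and the tuned random forest.
# Assumes scikit-learn parameter spellings; the study's optimization
# code is not given in the table.
from sklearn.ensemble import RandomForestClassifier

# (low, high) bounds per hyperparameter, as listed under "Search space".
xgb_space = {
    "learning_rate": (0.01, 0.5),
    "n_estimators": (10, 500),
    "gamma": (0.01, 1),
    "reg_alpha": (0.01, 1),
    "reg_lambda": (0.01, 1),
}
rf_space = {
    "n_estimators": (1, 1000),
    "min_samples_split": (2, 10),
    "min_samples_leaf": (1, 5),
    "min_weight_fraction_leaf": (0, 0.5),
    "min_impurity_decrease": (0, 1),
    "max_samples": (0.1, 1),
    "max_depth": (1, 100),
}

# Random forest instantiated at the table's optimal values.
rf_model = RandomForestClassifier(
    n_estimators=677,
    min_samples_split=2,
    min_samples_leaf=5,
    min_weight_fraction_leaf=0.0511,
    min_impurity_decrease=0.1428,
    max_samples=0.5451,  # fraction of rows per tree; requires bootstrap=True (the default)
    max_depth=97,
)
```

A Bayesian optimizer such as scikit-optimize's `BayesSearchCV` could consume bound dictionaries of this shape directly, but that choice of library is an assumption, not something stated in the table.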