Table 2 Key parameters used in model training and their set values.

From: The impact of CNN MHAM-enhanced WRF and BPNN models for user behavior prediction

| Parameter name | Parameter description | Set value |
| --- | --- | --- |
| n_estimators | Number of decision trees in the WRF | 100 |
| max_depth | Maximum depth of each decision tree | 10 |
| min_samples_split | Minimum number of samples required to split an internal node | 2 |
| min_samples_leaf | Minimum number of samples required at a leaf node | 1 |
| max_features | Maximum number of features considered when searching for the best split | "auto" |
| criterion | Split-quality criterion used when building a tree, "gini" or "entropy" | "gini" |
| learning_rate | Learning rate during BPNN training | 0.001 |
| hidden_layers | Number of hidden layers in the BPNN, in the format "number of layers_number of nodes" | "2_50" |
| epochs | Number of iterations over the training set | 200 |
| batch_size | Number of samples used in each iteration | 32 |
| activation_function | Activation function for the hidden and output layers | "relu" |
| optimizer | Optimizer used to update the model weights | "adam" |
| dropout_rate | Dropout ratio for regularization to prevent overfitting | 0.2 |
| weight_decay | L2 regularization coefficient used to control model complexity | 0.001 |
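To make the BPNN column of the table concrete, the following is a minimal NumPy sketch of a feed-forward network wired with these settings: two hidden layers of 50 nodes ("2_50"), ReLU activations, dropout of 0.2, L2 weight decay of 0.001, batch size 32, and learning rate 0.001. It is an illustration only, not the paper's implementation: the CNN-MHAM and WRF components are omitted, plain mini-batch SGD stands in for the "adam" optimizer, and the function and variable names (`train_bpnn`, `RF_PARAMS`) are invented for this sketch.

```python
import numpy as np

# WRF-side hyperparameters from the table, collected for reference only
# (the forest itself is not implemented in this sketch).
RF_PARAMS = {
    "n_estimators": 100,
    "max_depth": 10,
    "min_samples_split": 2,
    "min_samples_leaf": 1,
    "max_features": "auto",
    "criterion": "gini",
}

def relu(x):
    return np.maximum(0.0, x)

def train_bpnn(X, y, hidden=(50, 50), lr=0.001, epochs=200, batch_size=32,
               dropout_rate=0.2, weight_decay=0.001, seed=0):
    """Train a small regression BPNN with the table's hyperparameters.

    Assumption: plain SGD replaces Adam for brevity; the loss is MSE.
    Returns the learned weight matrices and bias vectors.
    """
    rng = np.random.default_rng(seed)
    sizes = [X.shape[1], *hidden, 1]
    W = [rng.normal(0.0, 0.1, (a, b)) for a, b in zip(sizes, sizes[1:])]
    b = [np.zeros(s) for s in sizes[1:]]

    for _ in range(epochs):
        order = rng.permutation(len(X))
        for start in range(0, len(X), batch_size):
            batch = order[start:start + batch_size]
            xb, yb = X[batch], y[batch]

            # Forward pass with inverted dropout on the hidden activations.
            acts, h, masks = [xb], xb, []
            for i, (Wi, bi) in enumerate(zip(W, b)):
                z = h @ Wi + bi
                if i < len(W) - 1:
                    h = relu(z)
                    m = (rng.random(h.shape) > dropout_rate) / (1 - dropout_rate)
                    h = h * m
                    masks.append(m)
                else:
                    h = z  # linear output unit for MSE regression
                acts.append(h)

            # Backward pass: MSE gradient plus L2 weight decay on each W.
            grad = 2.0 * (acts[-1] - yb[:, None]) / len(xb)
            for i in range(len(W) - 1, -1, -1):
                gW = acts[i].T @ grad + weight_decay * W[i]
                gb = grad.sum(axis=0)
                if i > 0:
                    # Route gradient through dropout mask and ReLU derivative.
                    grad = (grad @ W[i].T) * masks[i - 1] * (acts[i] > 0)
                W[i] -= lr * gW
                b[i] -= lr * gb
    return W, b
```

The "2_50" encoding from the table is expanded here as `hidden=(50, 50)`; changing that tuple changes both the number of hidden layers and their widths.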