Table 6 Parameter estimates of artificial neural networks (ANN), decision trees (DT), random forest (RF), support-vector machine (SVM), logistic regression (LGR), and linear regression (LNR) machine-learning models used in classification and regression analyses.
| Study/analysis | ANN | DT | RF | SVM | LGR/LNR |
|---|---|---|---|---|---|
| Canola/classification | Hidden layers = 1; neurons = 10; activation function = tanh; α (learning rate) = 0.5; max iterations = 100 | Pruning = none; node splitting = 95%; tree depth = unlimited | Number of trees = 5; replicable training = yes; tree depth = unlimited; max considered features = unlimited | Loss function = 110.0, ε = 1.0; kernel = RBF, exp(−g·\|x − y\|²), g = auto; numerical tolerance = 0.0001; iteration limit = unlimited | Regularization = ridge (L2); cost strength = 5 |
| Dry bean/classification | Hidden layers = 1; neurons = 5; activation function = tanh; α (learning rate) = 0.7; max iterations = 100 | Pruning = none; node splitting = 95%; tree depth = unlimited | Number of trees = 16; replicable training = yes; tree depth = unlimited; max considered features = unlimited | Loss function = 0.8, ε = 0.9; kernel = RBF, exp(−g·\|x − y\|²), g = auto; numerical tolerance = 0.0001; iteration limit = unlimited | Regularization = ridge (L2); cost strength = 50 |
| Canola/regression | Hidden layers = 1; neurons = 200; activation function = tanh; α (learning rate) = 0.7; max iterations = 2000 | Pruning = none; node splitting = 95%; tree depth = unlimited | Number of trees = 10; replicable training = yes; tree depth = unlimited; max considered features = unlimited | Loss function = 1.0, ε = 0.8; kernel = linear; numerical tolerance = 0.0001; iteration limit = unlimited | α (regularization parameter) = 1 |
| Dry bean/regression | Hidden layers = 2; neurons = 20; activation function = logistic; α (learning rate) = 1; max iterations = 2000 | Pruning = none; node splitting = 95%; tree depth = unlimited | Number of trees = 10; replicable training = yes; tree depth = unlimited; max considered features = unlimited | Loss function = 1.0, ε = 0.8; kernel = linear; numerical tolerance = 0.0001; iteration limit = unlimited | α (regularization parameter) = 1 |
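As an illustration only, the canola/classification settings could be approximated in scikit-learn. The table does not name the software used, so the mapping below is an assumption: the SVM "loss function" value is treated as the cost parameter `C`, the SVM ε is omitted (in scikit-learn it applies only to regression SVMs), and the 95% node-splitting criterion has no direct scikit-learn equivalent.

```python
# Hypothetical scikit-learn sketch of the canola/classification row of Table 6.
# Parameter mappings are our interpretation, not the authors' configuration.
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.linear_model import LogisticRegression

# ANN: 1 hidden layer of 10 neurons, tanh activation, learning rate 0.5, 100 iterations
ann = MLPClassifier(hidden_layer_sizes=(10,), activation="tanh",
                    learning_rate_init=0.5, max_iter=100)

# DT: no pruning, unlimited depth (the 95% node-splitting rule is not mapped)
dt = DecisionTreeClassifier(max_depth=None)

# RF: 5 trees, unlimited depth, all features considered;
# a fixed random_state stands in for "replicable training = yes"
rf = RandomForestClassifier(n_estimators=5, max_depth=None,
                            max_features=None, random_state=0)

# SVM: RBF kernel exp(-g|x-y|^2) with automatic g; C = 110.0 is assumed
# to correspond to the table's "loss function" value
svm = SVC(kernel="rbf", gamma="auto", C=110.0, tol=1e-4, max_iter=-1)

# LGR: ridge (L2) regularization; note scikit-learn's C is the *inverse*
# regularization strength, so C = 5 here mirrors the table only nominally
lgr = LogisticRegression(penalty="l2", C=5.0)
```

Each estimator would then be trained with the usual `fit(X, y)` call on the corresponding canola dataset.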