Table 4 Main parameter settings for comparative models.

From: Example dependent cost sensitive learning based selective deep ensemble model for customer credit scoring

Model

Parameter settings

ECS-AdaBoost

The base classifier is a decision tree, the number of base classifiers is set to 20, and the boosting algorithm is SAMME.R (a Python sketch of this configuration follows the table).

ECSDNN

ECSDNN parameter settings follow the study by Mehta et al. [32].

ECS-Stacking

The base classifiers include several cost-insensitive models: KNN, XGBoost, RF, LR, ANN, and AdaBoost. The meta-model is a bagging classifier based on ECS decision trees. Specific parameter settings follow Bhargava et al. [25].

CSNNE

The base classifier is an ANN with two hidden layers, using ReLU activation in the hidden layers and Softmax at the output. The Adam optimizer is applied with a batch size of 64 and 300 epochs. The ensemble comprises 9 base classifiers combined by majority voting. Parameter settings are based on Yotsawat et al. [10].

CSCNN

The ensemble includes 4 base CNN classifiers, each with three hidden layers (32, 32, and 64 neurons), ReLU activation in the hidden layers, and a Sigmoid output. The Adam optimizer is used with a batch size of 512, 100 epochs, and a dropout rate of 0.5. Bagging is employed as the ensemble strategy, with parameters based on the study by Geng and Luo [37] (see the sketch after this table).

CCS-CNN

The CNN has three hidden layers (32, 64, and 64 neurons) with ReLU activation and a Sigmoid output layer. The Adam optimizer is used with a batch size of 128, 100 epochs, and a dropout rate of 0.5. The decision threshold is optimized by grid search and set to 0.35. Parameters are based on Vimala et al. [38].

LSTM-GRU-ANN

The base classifiers are LSTM and GRU models with Tanh activation in the hidden layers and Sigmoid at the output. The ensemble strategy uses an ANN with ReLU activation in the hidden layers and a Sigmoid output layer. Parameters are based on Forough and Momtazi [49].

LSTM-GRU-MLP

The base classifiers are LSTM and GRU models with Tanh activation in the hidden layers and Sigmoid at the output. The ensemble strategy uses a multi-layer perceptron. Parameters are based on Mienye and Sun [50].

CNN-BLSTM

The ensemble comprises 10 CNN base classifiers, using ReLU in the hidden layers and Sigmoid at the output. The Adam optimizer is employed. The ensemble strategy uses a BiLSTM, with Tanh in the hidden layers and Sigmoid at the output. Parameters are based on Haghighi and Omranpour [51].

BiLSTM-CNN

The base classifiers are 5 CNNs, each with three convolutional layers, two pooling layers, a flatten layer, and a fully connected layer, using ReLU in the hidden layers and Sigmoid at the output. A BiLSTM is employed as the ensemble strategy. Parameters are based on Wang et al. [52].

BiLSTM-Trans-CNN

The ensemble includes 5 CNN base classifiers, each with three convolutional layers, two pooling layers, a flatten layer, and a fully connected layer, using ReLU in the hidden layers and Sigmoid at the output. The ensemble strategy combines BiLSTM and Transformer architectures. Parameters follow Wang et al. [52].
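To make the tabulated settings concrete, the sketches below show how two of these configurations might be instantiated in Python. They are minimal illustrations under stated assumptions, not the authors' implementations: the example-dependent cost components of the cited methods are not reproduced, and any layer type, kernel size, or loss function not given in the table is an assumption.

The first sketch covers the cost-insensitive backbone of ECS-AdaBoost, per the table: a decision tree base classifier, 20 boosting rounds, and the SAMME.R algorithm (available in scikit-learn versions prior to 1.6; it was deprecated in 1.4).

```python
from sklearn.ensemble import AdaBoostClassifier
from sklearn.tree import DecisionTreeClassifier

# Cost-insensitive backbone only; the example-dependent cost weighting
# that makes the cited model "ECS" is not reproduced here.
ecs_adaboost_backbone = AdaBoostClassifier(
    estimator=DecisionTreeClassifier(),  # base_estimator= in scikit-learn < 1.2
    n_estimators=20,
    algorithm="SAMME.R",  # deprecated in scikit-learn 1.4, removed in 1.6
)
```

The second sketch bags four base CNNs for the CSCNN entry with the tabulated optimizer, batch size, epoch count, and dropout rate. Treating the three hidden layers as Conv1D blocks over the feature axis, the kernel size of 3, and the binary cross-entropy loss are assumptions; the cost-sensitive loss of Geng and Luo [37] is likewise omitted.

```python
import numpy as np
import tensorflow as tf

def build_cscnn_base(input_dim: int) -> tf.keras.Model:
    # One base CNN per the table: three hidden layers (32, 32, 64 units)
    # with ReLU, a 0.5 dropout rate, and a Sigmoid output.
    model = tf.keras.Sequential([
        tf.keras.Input(shape=(input_dim, 1)),
        tf.keras.layers.Conv1D(32, 3, padding="same", activation="relu"),
        tf.keras.layers.Conv1D(32, 3, padding="same", activation="relu"),
        tf.keras.layers.Conv1D(64, 3, padding="same", activation="relu"),
        tf.keras.layers.Dropout(0.5),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(1, activation="sigmoid"),
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy")
    return model

def train_bagged_cscnn(X, y, n_models=4, epochs=100, batch_size=512, seed=0):
    # Bagging: each base CNN is fit on a bootstrap resample of the
    # training set, matching the table's 4 base classifiers.
    rng = np.random.default_rng(seed)
    models = []
    for _ in range(n_models):
        idx = rng.integers(0, len(X), size=len(X))
        m = build_cscnn_base(X.shape[1])
        m.fit(X[idx][..., np.newaxis], y[idx],
              epochs=epochs, batch_size=batch_size, verbose=0)
        models.append(m)
    return models

def predict_bagged(models, X):
    # Average the Sigmoid outputs of the bagged base classifiers.
    return np.mean([m.predict(X[..., np.newaxis], verbose=0) for m in models],
                   axis=0)
```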