Table 8. Test-set sample results after applying Borderline-SMOTE.
From: Machine learning detection of manipulative environmental disclosures in corporate reports
Panel A. Confusion matrix of the Borderline SMOTE-LR forecast

| Actual class | Predicted: Manipulated | Predicted: Not-manipulated |
|---|---|---|
| Manipulated | 3486 | 1777 |
| Not-manipulated | 24 | 20 |
Panel B. Confusion matrix of the Borderline SMOTE-DT forecast

| Actual class | Predicted: Manipulated | Predicted: Not-manipulated |
|---|---|---|
| Manipulated | 2721 | 2535 |
| Not-manipulated | 17 | 34 |
Panel C. Confusion matrix of the Borderline SMOTE-RF forecast

| Actual class | Predicted: Manipulated | Predicted: Not-manipulated |
|---|---|---|
| Manipulated | 3951 | 1305 |
| Not-manipulated | 27 | 24 |
Panel D. Comparison of performance metrics

| Metric | Borderline SMOTE-LR | Borderline SMOTE-DT | Borderline SMOTE-RF |
|---|---|---|---|
| Accuracy | 0.6606 | 0.5191 | 0.7490 |
| Precision | 0.9931 | 0.9938 | 0.9932 |
| Recall | 0.6624 | 0.5177 | 0.7517 |
| F1-score | 0.7941 | 0.6808 | 0.8557 |
| TPR | 0.6624 | 0.5177 | 0.7517 |
| FPR | 0.5455 | 0.3333 | 0.5294 |
| AUC | 0.5585 | 0.5922 | 0.6112 |
| PR-AUC | 0.68 | 0.71 | 0.78 |
| Balanced accuracy | 0.70 | 0.73 | 0.86 |
| MCC | 0.63 | 0.66 | 0.72 |
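As a consistency check, the scalar metrics in Panel D can be recomputed directly from the confusion-matrix counts. The sketch below does this for Panel A (Borderline SMOTE-LR), assuming "Manipulated" is treated as the positive class, which is what the reported precision and recall imply; the recomputed accuracy, precision, recall, TPR, and FPR match Panel D to four decimals (the F1 comes out near 0.795 rather than the reported 0.7941, plausibly due to intermediate rounding in the source).

```python
# Recompute Panel A's metrics from its confusion-matrix counts.
# Assumption: "Manipulated" is the positive class.
tp, fn = 3486, 1777   # actual Manipulated row: predicted Manipulated / Not-manipulated
fp, tn = 24, 20       # actual Not-manipulated row

total = tp + fn + fp + tn
accuracy  = (tp + tn) / total
precision = tp / (tp + fp)
recall    = tp / (tp + fn)                          # identical to the TPR
f1        = 2 * precision * recall / (precision + recall)
fpr       = fp / (fp + tn)

print(f"accuracy={accuracy:.4f} precision={precision:.4f} "
      f"recall={recall:.4f} f1={f1:.4f} fpr={fpr:.4f}")
```

The same arithmetic applied to Panels B and C reproduces the DT and RF columns of Panel D.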