Table 3. Results of model evaluation.

| Model | Label | Precision (%) | Recall (%) | F1-score (%) | Accuracy (%) |
|---|---|---|---|---|---|
| XGB | Conservative | 95.98 | 89.88 | 92.83 | 93.06 |
| XGB | Progressive | 90.49 | 96.24 | 93.27 | 93.06 |
| BERT | Conservative | 93.06 | 97.03 | 95.00 | 94.88 |
| BERT | Progressive | 96.87 | 92.71 | 94.74 | 94.88 |
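The F1-scores in Table 3 are the harmonic mean of the precision and recall columns, F1 = 2PR/(P + R). A minimal sketch checking that relation against the reported values (all numbers taken directly from the table; agreement is only up to rounding of the published figures):

```python
def f1(precision: float, recall: float) -> float:
    """Harmonic mean of precision and recall, in percent."""
    return 2 * precision * recall / (precision + recall)

# (model/label, precision, recall, reported F1) rows from Table 3
rows = [
    ("XGB / Conservative",  95.98, 89.88, 92.83),
    ("XGB / Progressive",   90.49, 96.24, 93.27),
    ("BERT / Conservative", 93.06, 97.03, 95.00),
    ("BERT / Progressive",  96.87, 92.71, 94.74),
]

for name, p, r, reported in rows:
    computed = f1(p, r)
    # reported values are rounded to two decimals, so allow a small tolerance
    assert abs(computed - reported) < 0.01, name
    print(f"{name}: F1 = {computed:.2f} (reported {reported:.2f})")
```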