Table 6 Accuracies obtained from five machine learning classifiers in 10-fold cross-validation and on the test set, using only normalized measures as predictors. In addition to accuracy, the table reports the weighted averages of True Positive Rate (TP Rate), False Positive Rate (FP Rate), Precision, Recall, F-Measure, Receiver Operating Characteristic (ROC) Area, and Precision-Recall Curve (PRC) Area.
| Classifier | Accuracy | TP Rate | FP Rate | Precision | Recall | F-Measure | ROC Area | PRC Area |
|---|---|---|---|---|---|---|---|---|
| 10-fold cross-validation | | | | | | | | |
| Logistic | 90% | 0.900 | 0.100 | 0.900 | 0.900 | 0.900 | 0.946 | 0.912 |
| SVM (SMO) | 92.5% | 0.925 | 0.075 | 0.935 | 0.925 | 0.925 | 0.925 | 0.897 |
| LMT | 90% | 0.900 | 0.100 | 0.917 | 0.900 | 0.899 | 0.985 | 0.986 |
| Random Forest | 95% | 0.950 | 0.050 | 0.950 | 0.950 | 0.950 | 0.966 | 0.961 |
| Test set | | | | | | | | |
| Logistic | 100% | 1.000 | 0.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 |
| SVM (SMO) | 90% | 0.900 | 0.100 | 0.917 | 0.900 | 0.899 | 0.900 | 0.867 |
| LMT | 90% | 0.900 | 0.100 | 0.917 | 0.900 | 0.899 | 1.000 | 1.000 |
| Random Forest | 100% | 1.000 | 0.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 |
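As an illustration of the evaluation protocol summarized in the table, the sketch below runs 10-fold cross-validation with weighted-average metrics (precision, recall, F-measure, ROC area) over several classifiers. It is a minimal Python/scikit-learn approximation, not the original setup: the data, feature set, and classifier hyperparameters are placeholders, Weka's SMO is stood in for by a linear SVC, LMT has no direct scikit-learn equivalent and is omitted, and PRC Area is approximated by average precision.

```python
# Hedged sketch of a 10-fold cross-validation protocol like the one in Table 6.
# All data and settings below are illustrative assumptions, not the paper's.
import numpy as np
from sklearn.model_selection import cross_validate, StratifiedKFold
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier

# Placeholder data: X stands for the normalized measures, y for binary labels.
rng = np.random.default_rng(0)
X = rng.normal(size=(40, 5))
y = np.repeat([0, 1], 20)

classifiers = {
    "Logistic": LogisticRegression(max_iter=1000),
    "SVM (SMO-like)": SVC(kernel="linear", probability=True),  # stand-in for Weka's SMO
    "Random Forest": RandomForestClassifier(n_estimators=100, random_state=0),
}

# Weighted averages, matching the "weighted average" reporting in the table.
scoring = {
    "accuracy": "accuracy",
    "precision": "precision_weighted",
    "recall": "recall_weighted",
    "f_measure": "f1_weighted",
    "roc_area": "roc_auc",
    "prc_area": "average_precision",  # assumption: PRC Area ~ average precision
}

cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
for name, clf in classifiers.items():
    pipe = make_pipeline(StandardScaler(), clf)          # normalize predictors per fold
    scores = cross_validate(pipe, X, y, cv=cv, scoring=scoring)
    summary = {m: scores[f"test_{m}"].mean() for m in scoring}
    print(name, {m: round(v, 3) for m, v in summary.items()})
```

A held-out test set, as in the lower half of the table, would be evaluated separately by fitting each pipeline on the training portion and scoring once on the reserved samples.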