Table 3. Evaluation metrics.
Metric | Description | Formula |
---|---|---|
Accuracy | Measures the overall correctness of the model | \(A = \frac{tp + tn}{tp + tn + fp + fn}\) |
Precision | Indicates how many predicted positives are actually correct | \(P = \frac{tp}{tp + fp}\) |
Sensitivity (recall) | Measures the ability to correctly identify actual positives | \(R = \frac{tp}{tp + fn}\) |
Specificity | Measures the ability to correctly identify actual negatives | \(\text{Specificity} = \frac{tn}{tn + fp}\) |
F-measure (F1-score) | The harmonic mean of precision and sensitivity | \(F_1 = 2 \times \frac{P \times R}{P + R}\) |
MCC (Matthews correlation coefficient) | Evaluates overall prediction quality, even for imbalanced data | \(MCC = \frac{tp \times tn - fp \times fn}{\sqrt{(tp + fp)(tp + fn)(tn + fp)(tn + fn)}}\) |
NPV (negative predictive value) | The probability that a predicted negative is actually negative | \(NPV = \frac{tn}{tn + fn}\) |
FPR (false positive rate) | Proportion of actual negatives incorrectly classified as positive | \(FPR = \frac{fp}{fp + tn}\) |
FNR (false negative rate) | Proportion of actual positives incorrectly classified as negative | \(FNR = \frac{fn}{tp + fn}\) |
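All of the metrics in Table 3 follow directly from the four confusion-matrix counts. A minimal Python sketch (the function name `confusion_metrics` and the example counts are illustrative, not from the source):

```python
import math

def confusion_metrics(tp, tn, fp, fn):
    """Compute the Table 3 metrics from raw confusion-matrix counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)  # sensitivity
    return {
        "accuracy": (tp + tn) / (tp + tn + fp + fn),
        "precision": precision,
        "recall": recall,
        "specificity": tn / (tn + fp),
        "f1": 2 * precision * recall / (precision + recall),
        "mcc": (tp * tn - fp * fn)
        / math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn)),
        "npv": tn / (tn + fn),
        "fpr": fp / (fp + tn),
        "fnr": fn / (tp + fn),
    }

# Hypothetical counts: 40 true positives, 50 true negatives,
# 10 false positives, 0 false negatives.
m = confusion_metrics(40, 50, 10, 0)
print(m["accuracy"])  # 0.9
print(m["recall"])    # 1.0
```

Note that several denominators (e.g. `tp + fp` for precision) can be zero on degenerate inputs, so production code should guard against division by zero; the MCC is conventionally defined as 0 when its denominator vanishes.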