Table 2 Evaluation metrics.
Evaluation metrics | Calculation formula | Evaluation meaning |
---|---|---|
Precision | \({\text{Precision}} = \frac{TP}{TP + FP}\) | The percentage of predicted positive samples that are actually positive |
Recall/sensitivity | \({\text{Recall}} = \frac{TP}{TP + FN}\) | The percentage of all actual positive samples that are correctly recognized as positive |
Specificity | \({\text{Specificity}} = \frac{TN}{TN + FP}\) | The percentage of all actual negative samples that are correctly recognized as negative |
Accuracy | \({\text{Accuracy}} = \frac{TP + TN}{TP + FP + TN + FN}\) | The percentage of correctly recognized samples among all samples |
F1 score | \({F1\ \text{score}} = \frac{2 \times TP}{2 \times TP + FP + FN}\) | The harmonic mean of precision and recall, summarizing both in a single measure |
Intersection over union (IoU) | \({\text{IoU}} = \frac{|A \cap B|}{|A \cup B|}\) | The ratio of the overlap area between the predicted bounding box \(A\) and the ground-truth bounding box \(B\) to the area of their union |
Average precision (AP) | None | The area under the precision-recall curve for a single class; the mean of AP over all classes gives the mean average precision (mAP) |
Precision-recall (PR) curve | None | The curve of precision against recall as the decision threshold varies |
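The closed-form rows of Table 2 map directly onto confusion-matrix counts and bounding-box coordinates. The following Python sketch is illustrative only; the function names, example counts, and box coordinates are our own and do not come from the surveyed works.

```python
def classification_metrics(tp, fp, tn, fn):
    """Compute the closed-form Table 2 metrics from confusion-matrix counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)            # also called sensitivity
    specificity = tn / (tn + fp)
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    f1 = 2 * tp / (2 * tp + fp + fn)   # harmonic mean of precision and recall
    return precision, recall, specificity, accuracy, f1

def iou(box_a, box_b):
    """Intersection over union of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

# Arbitrary example values, chosen only to exercise the formulas.
print(classification_metrics(tp=80, fp=20, tn=90, fn=10))
print(iou((0, 0, 2, 2), (1, 1, 3, 3)))  # intersection 1, union 7, IoU ~= 0.143
```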
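Table 2 gives no formula for AP or the PR curve because both depend on sweeping a decision threshold over the predicted confidence scores. The sketch below uses the common all-point computation, assuming binary labels and per-sample scores; this is one standard convention, not necessarily the exact evaluation protocol of every surveyed paper.

```python
import numpy as np

def average_precision(scores, labels):
    """AP as the area under the precision-recall curve for one class.

    scores: per-sample confidence scores; labels: 1 for positive, 0 for negative.
    """
    order = np.argsort(-np.asarray(scores))        # sort by descending confidence
    labels = np.asarray(labels)[order]
    tp = np.cumsum(labels == 1)                    # true positives at each threshold
    fp = np.cumsum(labels == 0)                    # false positives at each threshold
    precision = tp / (tp + fp)
    recall = tp / max(1, (labels == 1).sum())
    # The (recall, precision) pairs trace the PR curve; AP sums precision
    # weighted by the recall increment at each new threshold.
    return float(np.sum(precision * np.diff(np.concatenate(([0.0], recall)))))

# Four detections with arbitrary example scores and labels.
print(average_precision([0.9, 0.8, 0.7, 0.6], [1, 0, 1, 1]))  # ~= 0.806
```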