Table 2 List of metrics used for the evaluation of ANNs used for pattern classification
From: Hardware implementation of memristor-based artificial neural networks
Metric | Expression | Meaning | Applicability | Examples |
---|---|---|---|---|
Accuracy | \(\frac{{TP}}{{Total}}\) | The ratio of correctly classified patterns to the total number of patterns | To quantify the overall performance of the ANN | N/A |
Sensitivity (also called recall) | \(\frac{{TP}}{\left({FN}+{TP}\right)}\) | The ratio of patterns correctly identified as positive to those that are actually positive | Cases where correctly identifying positives is a high priority | Security checks at airports |
Specificity | \(\frac{{TN}}{\left({FP}+{TN}\right)}\) | The ratio of patterns correctly classified as negative to those that are actually negative | Cases where correctly identifying negatives is a high priority | Diagnosing a health condition before treatment |
Precision | \(\frac{{TP}}{\left({TP}+{FP}\right)}\) | The fraction of patterns classified as positive that are actually positive | N/A | How many of those we labeled as diabetic are actually diabetic? |
F1-score | \(2\frac{{precision}\cdot {recall}}{{precision}+{recall}}\) | The harmonic mean of precision and recall; a summary measure of the model's classification ability | N/A | The F1 score is considered a better indicator of a classifier's performance than the plain accuracy measure |
Κ-coefficient | \(\frac{{Acc}.-{random\; Acc}.}{100-{random\; Acc}.}\) | The network's accuracy normalized against the random (chance) accuracy (e.g., with 10 output classes, the random accuracy would be 10%) | N/A | N/A |
Cross-Entropy | \(-\mathop{\sum }\limits_{i=1}^{n}\mathop{\sum }\limits_{j=1}^{m}{y}_{i,j}\log ({p}_{i,j})\), where yi,j is 1 if sample i belongs to class j and 0 otherwise, and pi,j is the probability predicted by the ANN of sample i belonging to class j | The difference between the class probabilities predicted by the ANN and the true class labels | N/A | N/A |
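As an illustrative sketch (not part of the article), the metrics in this table can be computed for a binary classifier from the four confusion-matrix counts TP, TN, FP, and FN. The function names and the example counts below are hypothetical; note that for the binary case the accuracy numerator is TP + TN (the table's "TP/Total" treats TP as all correctly classified patterns), and that the Κ-coefficient is written here with fractions rather than the table's percentages.

```python
import math

def classification_metrics(TP, TN, FP, FN, n_classes=2):
    """Compute the table's metrics from binary confusion-matrix counts."""
    total = TP + TN + FP + FN
    accuracy = (TP + TN) / total          # all correctly classified / total
    sensitivity = TP / (FN + TP)          # recall: true-positive rate
    specificity = TN / (FP + TN)          # true-negative rate
    precision = TP / (TP + FP)            # fraction of predicted positives that are correct
    f1 = 2 * precision * sensitivity / (precision + sensitivity)  # harmonic mean
    random_acc = 1.0 / n_classes          # chance accuracy, e.g. 10% for 10 classes
    kappa = (accuracy - random_acc) / (1.0 - random_acc)
    return accuracy, sensitivity, specificity, precision, f1, kappa

def cross_entropy(y_true, y_pred):
    """Cross-entropy loss: -sum_i sum_j y_ij * log(p_ij).

    y_true[i][j] is 1 if sample i belongs to class j and 0 otherwise;
    y_pred[i][j] is the probability predicted for that class.
    Terms with y_ij = 0 contribute nothing, so they are skipped.
    """
    return -sum(math.log(p)
                for yi, pi in zip(y_true, y_pred)
                for y, p in zip(yi, pi) if y == 1)
```

For example, `classification_metrics(40, 45, 5, 10)` gives an accuracy of 0.85 and a sensitivity (recall) of 0.80 on 100 patterns.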