Table 6 Average classification accuracy, precision, recall and F1-score for each SVM model used to perform the symbolic anchoring functionalities in our approach, evaluated on the nuScenes, MOTFront and Mix datasets and in the Leon@Home scenario.
| | Accuracy | Precision | Recall | F1-score |
|---|---|---|---|---|
| **nuScenes dataset** | | | | |
| nuScenes | 0.9999 | 0.9994 | 0.9966 | 0.9980 |
| MOTFront | 0.8157 | 0.9034 | 0.4612 | 0.6107 |
| Mix | 0.8731 | 0.9118 | 0.4862 | 0.6342 |
| Leon@Home | 0.9280 | 0.9984 | 0.2962 | 0.4569 |
| **MOTFront dataset** | | | | |
| nuScenes | 0.9788 | 0.9985 | 0.3740 | 0.5441 |
| MOTFront | 0.9981 | 0.9980 | 0.9961 | 0.9970 |
| Mix | 0.9921 | 0.9980 | 0.9670 | 0.9823 |
| Leon@Home | 0.9893 | 0.9978 | 0.8970 | 0.9447 |
| **Mix dataset** | | | | |
| nuScenes | 0.9862 | 0.8551 | 0.7127 | 0.7774 |
| MOTFront | 0.9978 | 0.9964 | 0.9966 | 0.9965 |
| Mix | 0.9942 | 0.9909 | 0.9834 | 0.9871 |
| Leon@Home | 0.9792 | 0.8621 | 0.9482 | 0.9032 |
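The table does not show how these four metrics were produced; as a minimal sketch (the `SVC` model, toy data, and train/test split below are assumptions, not the paper's actual setup or datasets), the snippet illustrates how accuracy, precision, recall and F1-score of the kind reported in Table 6 can be computed with scikit-learn, and checks that F1 is the harmonic mean of precision and recall.

```python
# Sketch only: toy data stands in for the SAILOR matching task.
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split

# Hypothetical binary classification problem (not the paper's data).
X, y = make_classification(n_samples=1000, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Train an SVM and evaluate it on the held-out split.
model = SVC().fit(X_train, y_train)
y_pred = model.predict(X_test)

print("Accuracy :", accuracy_score(y_test, y_pred))
print("Precision:", precision_score(y_test, y_pred))
print("Recall   :", recall_score(y_test, y_pred))
print("F1-score :", f1_score(y_test, y_pred))

# Consistency check against Table 6: F1 = 2PR / (P + R).
# For the nuScenes-on-nuScenes row: 2 * 0.9994 * 0.9966 / (0.9994 + 0.9966) ≈ 0.9980.
```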