Table 4 Average classification accuracy, precision, recall and F1-score for each KNN model used in our approach to perform the symbolic anchoring functionalities, evaluated on the nuScenes, MOTFront and Mix datasets and in the Leon@Home scenario.
From: SAILOR: perceptual anchoring for robotic cognitive architectures
| Test data | Accuracy | Precision | Recall | F1-score |
|---|---|---|---|---|
| **KNN model trained on nuScenes** | | | | |
| nuScenes | 0.9934 | 0.9118 | 0.8899 | 0.9007 |
| MOTFront | 0.9887 | 0.9689 | 0.9959 | 0.9822 |
| Mix | 0.9901 | 0.9663 | 0.9909 | 0.9785 |
| Leon@Home | 0.9935 | 0.9763 | 0.9599 | 0.9681 |
| **KNN model trained on MOTFront** | | | | |
| nuScenes | 0.9886 | 1.000 | 0.66406 | 0.7981 |
| MOTFront | 0.9983 | 0.9978 | 0.9967 | 0.9973 |
| Mix | 0.9953 | 0.9979 | 0.9812 | 0.9895 |
| Leon@Home | 0.9918 | 0.9893 | 0.9302 | 0.9588 |
| **KNN model trained on Mix** | | | | |
| nuScenes | 0.9934 | 0.9196 | 0.8820 | 0.9004 |
| MOTFront | 0.9982 | 0.9977 | 0.9965 | 0.9971 |
| Mix | 0.9967 | 0.9942 | 0.9912 | 0.9926 |
| Leon@Home | 0.9909 | 0.9764 | 0.9342 | 0.9548 |
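As a rough illustration of how the four metrics reported above can be obtained for a KNN matching model, the sketch below uses scikit-learn. The library choice, the `n_neighbors` value, the binary match/no-match labels and the toy data are assumptions made for illustration only; they are not the exact configuration used in SAILOR.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score


def evaluate_knn(X_train, y_train, X_test, y_test, n_neighbors=5):
    """Fit a KNN classifier and return the four metrics reported in Table 4."""
    knn = KNeighborsClassifier(n_neighbors=n_neighbors)  # n_neighbors is an assumed value
    knn.fit(X_train, y_train)
    y_pred = knn.predict(X_test)
    return {
        "accuracy": accuracy_score(y_test, y_pred),
        "precision": precision_score(y_test, y_pred),  # binary match/no-match labels assumed
        "recall": recall_score(y_test, y_pred),
        "f1": f1_score(y_test, y_pred),
    }


if __name__ == "__main__":
    # Toy features standing in for the anchoring feature vectors; the real
    # experiments would use the nuScenes/MOTFront/Mix data instead.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 4))
    y = (X[:, 0] + X[:, 1] > 0).astype(int)
    print(evaluate_knn(X[:150], y[:150], X[150:], y[150:]))
```

A model trained on one dataset (e.g. MOTFront) can then be evaluated against each of the other test sets in turn to fill one block of the table.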