Table 2 Average classification accuracy, together with precision, recall, and F1-score, for each SAILOR model in our approach when performing the symbolic anchoring functionalities in the nuScenes, MOTFront, and Mix datasets and in the Leon@Home scenario.
From: SAILOR: perceptual anchoring for robotic cognitive architectures
| | Accuracy | Precision | Recall | F1-score |
|---|---|---|---|---|
| **nuScenes dataset** | | | | |
| nuScenes | 0.9869 | 0.8358 | 0.7624 | 0.7974 |
| MOTFront | 0.9667 | 0.9064 | 0.9967 | 0.9494 |
| Mix | 0.9730 | 0.9037 | 0.9858 | 0.9430 |
| Leon@Home | 0.9946 | 0.9775 | 0.9695 | 0.9735 |
| **MOTFront dataset** | | | | |
| nuScenes | 0.9655 | 0.4933 | 0.7211 | 0.5858 |
| MOTFront | 0.9754 | 0.9562 | 0.9656 | 0.9609 |
| Mix | 0.9723 | 0.9256 | 0.9542 | 0.9397 |
| Leon@Home | 0.9958 | 0.9930 | 0.9658 | 0.9792 |
| **Mix dataset** | | | | |
| nuScenes | 0.9864 | 0.8854 | 0.6870 | 0.7737 |
| MOTFront | 0.9857 | 0.9688 | 0.9862 | 0.9774 |
| Mix | 0.9859 | 0.9658 | 0.9722 | 0.9690 |
| Leon@Home | 0.9949 | 0.9905 | 0.9590 | 0.9745 |
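For readers unfamiliar with the reported metrics, the sketch below shows one common way to compute accuracy, precision, recall, and F1-score from ground-truth and predicted classification labels. It is illustrative only: the scikit-learn calls, the macro averaging, and the example labels are assumptions for this sketch, not details taken from the SAILOR evaluation code.

```python
# Minimal sketch of how the four reported metrics can be computed.
# Assumptions: scikit-learn is available, labels are discrete classes,
# and macro averaging is used for the per-class metrics.
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score

# Hypothetical ground-truth and predicted labels for a small test set.
y_true = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred = [1, 0, 1, 0, 0, 1, 1, 0]

accuracy = accuracy_score(y_true, y_pred)
precision = precision_score(y_true, y_pred, average="macro")
recall = recall_score(y_true, y_pred, average="macro")
f1 = f1_score(y_true, y_pred, average="macro")

print(f"Accuracy:  {accuracy:.4f}")
print(f"Precision: {precision:.4f}")
print(f"Recall:    {recall:.4f}")
print(f"F1-score:  {f1:.4f}")
```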