Table 8 Recall, precision, and F1-score of the multi-modal DenseNet201 + End-to-End CNN.
From: RGB-D based multi-modal deep learning for spacecraft and debris recognition
| Category | Precision | Recall | F1-score |
|---|---|---|---|
| AcrimSat | 0.83 | 0.91 | 0.87 |
| Aquarius | 0.79 | 0.86 | 0.82 |
| Aura | 0.91 | 0.88 | 0.89 |
| Calipso | 0.78 | 0.77 | 0.78 |
| Cloudsat | 0.78 | 0.35 | 0.48 |
| CubeSat | 0.86 | 0.96 | 0.91 |
| Debris | 0.75 | 0.92 | 0.83 |
| Jason | 0.82 | 0.70 | 0.76 |
| Sentinel-6 | 0.81 | 0.92 | 0.86 |
| Terra | 0.78 | 0.68 | 0.72 |
| Average | 0.82 | 0.80 | 0.80 |
| TRMM | 0.86 | 0.82 | 0.84 |
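The relationship between the three columns can be checked directly: F1 is the harmonic mean of precision and recall, and the final row is the unweighted (macro) average over the 11 classes. The sketch below recomputes both from the tabulated precision/recall values; because the paper's F1 scores were presumably computed from raw counts before rounding, a recomputation from the rounded values matches the table only to within about ±0.01 per class (e.g. Calipso and Terra differ in the last digit), while the macro averages agree exactly at two decimals.

```python
# Per-class (precision, recall) pairs copied from Table 8.
metrics = {
    "AcrimSat":   (0.83, 0.91),
    "Aquarius":   (0.79, 0.86),
    "Aura":       (0.91, 0.88),
    "Calipso":    (0.78, 0.77),
    "Cloudsat":   (0.78, 0.35),
    "CubeSat":    (0.86, 0.96),
    "Debris":     (0.75, 0.92),
    "Jason":      (0.82, 0.70),
    "Sentinel-6": (0.81, 0.92),
    "Terra":      (0.78, 0.68),
    "TRMM":       (0.86, 0.82),
}

def f1(p, r):
    """Harmonic mean of precision and recall (0 when both are 0)."""
    return 2 * p * r / (p + r) if (p + r) else 0.0

# Per-class F1 recomputed from the rounded precision/recall values.
f1_scores = {name: f1(p, r) for name, (p, r) in metrics.items()}

# Macro averages: unweighted mean over the 11 classes.
n = len(metrics)
avg_precision = sum(p for p, _ in metrics.values()) / n
avg_recall    = sum(r for _, r in metrics.values()) / n
avg_f1        = sum(f1_scores.values()) / n

print(round(avg_precision, 2), round(avg_recall, 2), round(avg_f1, 2))
```

Note how Cloudsat illustrates why F1 is reported alongside the raw metrics: its precision (0.78) is in line with the other classes, but the very low recall (0.35) drags the harmonic mean down to 0.48, which a simple arithmetic mean would mask.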