Table 5 Precision, recall, and F1-score of the End-to-End CNN with depth images.
From: RGB-D based multi-modal deep learning for spacecraft and debris recognition
| Category | Precision | Recall | F1-score |
|---|---|---|---|
| AcrimSat | 0.69 | 0.82 | 0.75 |
| Aquarius | 0.66 | 0.74 | 0.70 |
| Aura | 0.87 | 0.77 | 0.81 |
| Calipso | 0.59 | 0.51 | 0.55 |
| Cloudsat | 0.57 | 0.47 | 0.52 |
| CubeSat | 0.87 | 0.92 | 0.90 |
| Debris | 0.68 | 0.69 | 0.69 |
| Jason | 0.57 | 0.51 | 0.53 |
| Sentinel-6 | 0.72 | 0.81 | 0.76 |
| Terra | 0.57 | 0.56 | 0.56 |
| TRMM | 0.84 | 0.87 | 0.86 |
| Average | 0.69 | 0.70 | 0.69 |
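The F1-score column is the harmonic mean of the per-class precision and recall. A minimal sketch of that computation, which reproduces e.g. the AcrimSat row (other rows may differ in the last digit, since the table presumably rounds from unrounded precision/recall values):

```python
def f1_score(precision: float, recall: float) -> float:
    """Harmonic mean of precision and recall."""
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# AcrimSat row from Table 5: P = 0.69, R = 0.82
print(round(f1_score(0.69, 0.82), 2))  # 0.75
```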