Table 2. Performance comparison (accuracy and \(\text {F}_1 \text { score}\)) of ResNet50 using the proposed transfer learning pipelines across the three benchmark target datasets.
| \(\downarrow \) Source dataset(s) / Target dataset \(\rightarrow \) | WHOI22 Accuracy | WHOI22 \(\text {F}_1 \text { score}\) | Kaggle38 Accuracy | Kaggle38 \(\text {F}_1 \text { score}\) | ZooScan20 Accuracy | ZooScan20 \(\text {F}_1 \text { score}\) |
|---|---|---|---|---|---|---|
| WHOI80 | 0.878 | 0.878 | 0.876 | 0.831 | 0.826 | 0.837 |
| Kaggle83 | 0.862 | 0.862 | 0.878 | 0.834 | 0.847 | 0.863 |
| ZooScan98 | 0.912 | 0.912 | 0.914 | 0.884 | – | – |
| ImageNet22K | 0.946 | 0.946 | 0.930 | 0.909 | 0.887 | 0.899 |
| ImageNet1K | 0.939 | 0.939 | 0.921 | 0.895 | 0.851 | 0.868 |
| ImageNet22K \(\rightarrow \) WHOI80 | 0.946 | 0.946 | 0.924 | 0.905 | 0.891 | 0.898 |
| ImageNet22K \(\rightarrow \) Kaggle83 | 0.938 | 0.938 | 0.929 | 0.907 | 0.877 | 0.896 |
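To make the two-stage rows (e.g. ImageNet22K \(\rightarrow \) WHOI80) concrete, the sketch below chains sequential fine-tuning steps for a ResNet50 in PyTorch/torchvision. It is a minimal sketch under stated assumptions, not the authors' training code: the dataset directories (`data/WHOI80`, `data/WHOI22`), class counts, and hyperparameters are hypothetical, and torchvision's ImageNet-1K weights are used as a stand-in since ImageNet-22K checkpoints are not bundled with torchvision.

```python
# Minimal sketch of a sequential transfer-learning pipeline (ImageNet -> WHOI80 -> WHOI22).
# Paths, class counts, and hyperparameters are illustrative, not the paper's settings.
import torch
from torch import nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

def finetune(model, data_dir, num_classes, epochs=10, lr=1e-4, device="cuda"):
    """Replace the classification head and fine-tune the whole network on one dataset."""
    model.fc = nn.Linear(model.fc.in_features, num_classes)
    model = model.to(device)
    tfms = transforms.Compose([
        transforms.Resize((224, 224)),
        transforms.ToTensor(),
        transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
    ])
    loader = DataLoader(datasets.ImageFolder(data_dir, tfms), batch_size=64, shuffle=True)
    optimizer = torch.optim.AdamW(model.parameters(), lr=lr)
    criterion = nn.CrossEntropyLoss()
    model.train()
    for _ in range(epochs):
        for images, labels in loader:
            images, labels = images.to(device), labels.to(device)
            optimizer.zero_grad()
            loss = criterion(model(images), labels)
            loss.backward()
            optimizer.step()
    return model

# Stage 0: ImageNet-pretrained backbone (torchvision ships ImageNet-1K weights;
# the ImageNet-22K checkpoints used in the table would come from another source).
model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
# Stage 1: fine-tune on the large in-domain source dataset (hypothetical path).
model = finetune(model, "data/WHOI80", num_classes=80)
# Stage 2: fine-tune on the smaller target benchmark (hypothetical path).
model = finetune(model, "data/WHOI22", num_classes=22)
```

The single-source rows in the table correspond to running only one fine-tuning stage on the target after the given pretraining; the metrics would then be computed on the held-out target test split.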