Table 1 Performance comparison of the proposed method with competing image captioning models on the test set, in terms of BLEU, ROUGE, and CIDEr
| Models | BLEU-1 | BLEU-2 | BLEU-3 | BLEU-4 | ROUGE | CIDEr |
|---|---|---|---|---|---|---|
| NIC | 0.3449 | 0.2766 | 0.2286 | 0.1906 | 0.4673 | 1.9603 |
| Progressive Model | 0.4255 | 0.3258 | 0.2427 | 0.1913 | 0.4305 | 0.7910 |
| SCA-CNN | 0.5548 | 0.4868 | 0.4345 | 0.3902 | 0.6033 | 3.1521 |
| Bottom-up-top-down | 0.6033 | 0.5298 | 0.4738 | 0.4258 | 0.6110 | 3.1844 |
| MORG (proposed) | 0.6099 | 0.5409 | 0.4871 | 0.4406 | 0.6310 | 3.4109 |
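
The BLEU-n, ROUGE, and CIDEr values above are standard corpus-level caption metrics. As a minimal sketch of how such scores are typically computed, the snippet below uses the widely used `pycocoevalcap` toolkit; the image ids and report texts are illustrative placeholders, not data from the paper.

```python
# Minimal scoring sketch with pycocoevalcap (pip install pycocoevalcap).
# The ids and captions below are hypothetical examples.
from pycocoevalcap.bleu.bleu import Bleu
from pycocoevalcap.rouge.rouge import Rouge
from pycocoevalcap.cider.cider import Cider

# Both dicts map an image id to a list of tokenized, lower-cased captions:
# gts holds the reference reports, res the model-generated reports.
gts = {
    "oct_001": ["macular edema with subretinal fluid is observed"],
    "oct_002": ["the retinal layers appear normal"],
}
res = {
    "oct_001": ["macular edema is observed"],
    "oct_002": ["the retinal layers appear normal"],
}

bleu, _ = Bleu(4).compute_score(gts, res)    # list: [BLEU-1, ..., BLEU-4]
rouge, _ = Rouge().compute_score(gts, res)   # ROUGE-L F-measure
cider, _ = Cider().compute_score(gts, res)   # corpus-level CIDEr

for n, score in enumerate(bleu, start=1):
    print(f"BLEU-{n}: {score:.4f}")
print(f"ROUGE: {rouge:.4f}")
print(f"CIDEr: {cider:.4f}")
```

Note that CIDEr is computed over the whole corpus (its tf-idf weighting depends on all references), so per-image scores are only meaningful relative to the full test set.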