Table 2. Performance comparison on the test set of the proposed model with different backbones, in terms of BLEU, ROUGE, and CIDEr.
| Models | BLEU-1 | BLEU-2 | BLEU-3 | BLEU-4 | ROUGE | CIDEr |
|---|---|---|---|---|---|---|
| RETFound + LSTM | 0.3597 | 0.2892 | 0.2441 | 0.2104 | 0.4274 | 1.4662 |
| ResNet34 | 0.6073 | 0.5346 | 0.4807 | 0.4352 | 0.6094 | 3.0563 |
| ResNet101 | 0.5858 | 0.5176 | 0.4656 | 0.4216 | 0.6080 | 3.1049 |
| ResNet50 + LSTM | 0.6125 | 0.5412 | 0.4853 | 0.4369 | 0.6220 | 3.2607 |
| VGG19 + LSTM | 0.5300 | 0.4626 | 0.4114 | 0.3677 | 0.5853 | 2.8696 |
| Res2Net + LSTM | 0.6083 | 0.5381 | 0.4835 | 0.4366 | 0.6255 | 3.3585 |
| SeResNet50 + LSTM | 0.6008 | 0.5321 | 0.4781 | 0.4315 | 0.6262 | 3.3524 |
| DenseNet + LSTM | 0.6089 | 0.5378 | 0.4825 | 0.4349 | 0.6229 | 3.2689 |
| MORG (Proposed) | 0.6099 | 0.5409 | 0.4871 | 0.4406 | 0.6310 | 3.4109 |
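For reference, the sketch below shows how BLEU-1 through BLEU-4, ROUGE, and CIDEr scores of this kind are commonly computed for generated reports. It is a minimal illustration, not the paper's evaluation code: it assumes the `nltk`, `rouge-score`, and `pycocoevalcap` packages, computes ROUGE-L (a usual choice when a single ROUGE value is reported), and the example report strings are hypothetical. CIDEr is a corpus-level metric, so scoring a single pair here only demonstrates the API.

```python
# Minimal sketch of how BLEU-1..4, ROUGE-L, and CIDEr can be computed for
# generated reports against ground-truth references. The tokenization and
# smoothing settings used by the paper are not specified, so the values
# produced here are illustrative only.
from nltk.translate.bleu_score import corpus_bleu, SmoothingFunction
from rouge_score import rouge_scorer
from pycocoevalcap.cider.cider import Cider

# Hypothetical reference (ground-truth) and generated reports.
reference = "subretinal fluid is observed in the macular region"
generated = "subretinal fluid is seen in the macular region"

# Cumulative BLEU-n: equal weights over 1..n-grams, corpus-level.
references = [[reference.split()]]   # one list of references per hypothesis
hypotheses = [generated.split()]
smooth = SmoothingFunction().method1
for n in range(1, 5):
    weights = tuple(1.0 / n for _ in range(n))
    score = corpus_bleu(references, hypotheses, weights=weights,
                        smoothing_function=smooth)
    print(f"BLEU-{n}: {score:.4f}")

# ROUGE-L F-score between the reference and the generated report.
scorer = rouge_scorer.RougeScorer(["rougeL"], use_stemmer=True)
rouge = scorer.score(reference, generated)
print(f"ROUGE-L: {rouge['rougeL'].fmeasure:.4f}")

# CIDEr expects dicts mapping example ids to lists of sentences.
gts = {"img0": [reference]}
res = {"img0": [generated]}
cider, _ = Cider().compute_score(gts, res)
print(f"CIDEr: {cider:.4f}")
```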