Fig. 2: Comparison of KAD with SOTA medical image-text pre-training models under zero-shot setting on radiographic findings or diagnoses in the PadChest dataset.
From: Knowledge-enhanced visual-language pre-training on chest radiology images

We evaluate the model on the human-annotated subset of the PadChest dataset (n = 39,053 chest X-rays); the mean AUC and 95% CI of KAD are shown for each radiographic finding or diagnosis with n > 50. a Results on seen classes. Note that CheXNet is a supervised model trained on the PadChest dataset. b Results on unseen classes. KAD achieves an AUC of at least 0.900 on 31 classes and at least 0.700 on 111 of the 177 unseen classes in the PadChest test dataset. The top 50 classes with n > 50 in the test dataset (n = 39,053) are shown in the figure.