Table 6. AUC results for our experiments on the official split. We report results for our best-performing architecture at different depths (i.e. ResNet-38-large-meta, ResNet-50-large-meta, ResNet-101-large-meta) and compare them to other groups.

From: Comparison of Deep Learning Approaches for Multi-Label Chest X-Ray Classification

| Pathology | Wang et al.8 | Yao et al.13 | Guendel et al.14 | ResNet-38-large-meta | ResNet-50-large-meta | ResNet-101-large-meta |
|---|---|---|---|---|---|---|
| Cardiomegaly | 0.810 | 0.856 | **0.883** | 0.875 | 0.877 | 0.865 |
| Emphysema | 0.833 | 0.842 | **0.895** | **0.895** | 0.875 | 0.868 |
| Edema | 0.805 | 0.806 | 0.835 | **0.846** | 0.842 | 0.828 |
| Hernia | 0.872 | 0.775 | 0.896 | **0.937** | 0.916 | 0.855 |
| Pneumothorax | 0.799 | 0.805 | **0.846** | 0.840 | 0.819 | 0.839 |
| Effusion | 0.759 | 0.806 | **0.828** | 0.822 | 0.818 | 0.818 |
| Mass | 0.693 | 0.777 | **0.821** | 0.820 | 0.810 | 0.796 |
| Fibrosis | 0.786 | 0.743 | **0.818** | 0.816 | 0.800 | 0.778 |
| Atelectasis | 0.700 | 0.733 | **0.767** | 0.763 | 0.755 | 0.747 |
| Consolidation | 0.703 | 0.711 | 0.745 | **0.749** | 0.742 | 0.734 |
| Pleural Thickening | 0.684 | 0.724 | 0.761 | **0.763** | 0.742 | 0.739 |
| Nodule | 0.669 | 0.724 | **0.758** | 0.747 | 0.736 | 0.738 |
| Pneumonia | 0.658 | 0.684 | **0.731** | 0.714 | 0.703 | 0.694 |
| Infiltration | 0.661 | 0.673 | **0.709** | 0.694 | 0.694 | 0.686 |
| Average | 0.745 | 0.761 | **0.807** | 0.806 | 0.795 | 0.785 |
| No Findings | — | — | — | **0.727** | 0.725 | 0.720 |

  1. Additionally, we provide an average AUC over all pathologies in the Average row. Bold text emphasizes the highest AUC value in each row.
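As a quick sanity check, the Average row can be recomputed from the 14 per-pathology values in each column (the No Findings row is excluded, as it does not enter the average). A minimal sketch in Python, with the AUC values transcribed from the table above:

```python
# Per-pathology AUCs in row order (Cardiomegaly ... Infiltration),
# transcribed from the table; "No Findings" is intentionally excluded.
aucs = {
    "Wang et al.":    [0.810, 0.833, 0.805, 0.872, 0.799, 0.759, 0.693,
                       0.786, 0.700, 0.703, 0.684, 0.669, 0.658, 0.661],
    "Yao et al.":     [0.856, 0.842, 0.806, 0.775, 0.805, 0.806, 0.777,
                       0.743, 0.733, 0.711, 0.724, 0.724, 0.684, 0.673],
    "Guendel et al.": [0.883, 0.895, 0.835, 0.896, 0.846, 0.828, 0.821,
                       0.818, 0.767, 0.745, 0.761, 0.758, 0.731, 0.709],
    "ResNet-38":      [0.875, 0.895, 0.846, 0.937, 0.840, 0.822, 0.820,
                       0.816, 0.763, 0.749, 0.763, 0.747, 0.714, 0.694],
    "ResNet-50":      [0.877, 0.875, 0.842, 0.916, 0.819, 0.818, 0.810,
                       0.800, 0.755, 0.742, 0.742, 0.736, 0.703, 0.694],
    "ResNet-101":     [0.865, 0.868, 0.828, 0.855, 0.839, 0.818, 0.796,
                       0.778, 0.747, 0.734, 0.739, 0.738, 0.694, 0.686],
}

# Mean AUC per column, rounded to three decimals as reported in the table.
averages = {name: round(sum(v) / len(v), 3) for name, v in aucs.items()}
print(averages)
```

Running this reproduces the Average row exactly (e.g. 0.807 for Guendel et al. and 0.806 for ResNet-38-large-meta), confirming the averages are simple unweighted means over the 14 pathologies.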