Fig. 2: The usage of dual batch norm boosts the classification performance of neural networks.

From: Advancing diagnostic performance and clinical usability of neural networks via adversarial training and dual batch normalization

Three models were compared: a neural network without adversarial training (blue), a neural network with adversarial training (green), and a neural network with adversarial training employing dual batch norms (red). The models' performance was tested on three distinct datasets: (a) the Rijeka knee magnetic resonance imaging (MRI) dataset, (b) the LUNA16 dataset containing computed tomography (CT) slices of malignant tumors, and (c) the CheXpert thoracic X-ray dataset. We found that robustness and good performance appear to be incompatible when data is limited: in both experiments (a and b), the AUC of naively adversarially trained models (green) dropped significantly compared to models trained in a standard fashion (blue). However, that performance gap narrowed when the models were trained on a large dataset of 191,027 radiographs (c). Adversarially trained models performed best when employing dual batch norm (red); no significant difference in performance from the naively trained models was found. As reflected by the red curves, dual batch norm training boosted the performance of robust models across all three datasets (a–c).
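For readers unfamiliar with the mechanism, the sketch below illustrates the dual batch norm idea in PyTorch: the layer keeps two sets of normalization statistics, one updated by clean mini-batches and one by adversarial mini-batches, so the two input distributions never contaminate each other. This is a minimal illustrative sketch; the class name `DualBatchNorm2d` and the `adversarial` flag are assumptions for exposition, not identifiers from the authors' code.

```python
import torch
import torch.nn as nn

class DualBatchNorm2d(nn.Module):
    """Batch norm with two sets of statistics: one for clean batches,
    one for adversarial batches (hypothetical sketch, not the authors'
    implementation)."""

    def __init__(self, num_features: int):
        super().__init__()
        self.bn_clean = nn.BatchNorm2d(num_features)
        self.bn_adv = nn.BatchNorm2d(num_features)

    def forward(self, x: torch.Tensor, adversarial: bool = False) -> torch.Tensor:
        # Route the batch through the branch matching its distribution,
        # so clean and adversarial running statistics stay separate.
        return self.bn_adv(x) if adversarial else self.bn_clean(x)

# Usage: normalize a clean batch and a perturbed batch with separate statistics.
bn = DualBatchNorm2d(64)
clean = torch.randn(8, 64, 32, 32)
perturbed = clean + 0.03 * torch.sign(torch.randn_like(clean))  # stand-in for an adversarial perturbation
out_clean = bn(clean, adversarial=False)
out_adv = bn(perturbed, adversarial=True)
```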
