Fig. 7: Linear centered kernel alignment (CKA) reveals representations are influenced by batch norms. | Nature Communications

From: Advancing diagnostic performance and clinical usability of neural networks via adversarial training and dual batch normalization

To explore the learned hidden representations, the linear CKA between convolutional layers of the models was computed on the CheXpert test set: a model trained with a single batch norm in a conventional setting with real examples (a), a model trained with a single batch norm with real and adversarial examples (b), and a model trained with dual batch norms with real and adversarial examples, where the CKA was evaluated separately with the batch norm used for real (c) and adversarial (d) examples. Note that the grid pattern in a is due to the residual connections in the ResNet architecture39. Under adversarial training with a single batch norm, the layers of the network become more similar to one another, as visualized by the block-like structure arising from the high degree of similar neural activations in (b). This indicates that the network loses representational complexity due to adversarial training, which might contribute to a loss in performance. With dual batch norms for real and adversarial examples, respectively, the complexity of the network is preserved for real examples processed through the first batch norm (note the similarity between a and c), while robustness to adversarial examples arises through the second batch norm, which undergoes the same changes as the network in (b) (note the similarity between b and d).
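For readers unfamiliar with the metric, linear CKA between two layers' activation matrices can be sketched in a few lines of NumPy. This is a generic illustration of the standard linear CKA formula, not the authors' exact implementation; the matrix shapes and function name are assumptions for the example.

```python
import numpy as np

def linear_cka(X, Y):
    """Linear centered kernel alignment between two activation matrices.

    X: (n_examples, d1) activations of one layer
    Y: (n_examples, d2) activations of another layer
    Returns a similarity score in [0, 1]; 1 means the layers'
    representations are identical up to rotation and isotropic scaling.
    """
    # Center each feature (column) so the comparison ignores mean offsets.
    X = X - X.mean(axis=0, keepdims=True)
    Y = Y - Y.mean(axis=0, keepdims=True)
    # Linear CKA: ||Y^T X||_F^2 / (||X^T X||_F * ||Y^T Y||_F)
    numerator = np.linalg.norm(Y.T @ X, ord="fro") ** 2
    denominator = (np.linalg.norm(X.T @ X, ord="fro")
                   * np.linalg.norm(Y.T @ Y, ord="fro"))
    return numerator / denominator
```

Evaluating this for every pair of convolutional layers and plotting the resulting matrix yields heatmaps like those in panels a-d, where block-like structure signals groups of layers with highly similar representations.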