Fig. 5: Factors that determine the similarity between human and VGG-16 embedding dimensions. | Nature Machine Intelligence


From: Dimensions underlying the representational alignment of deep neural networks with humans


a, RSMs reconstructed from the human and VGG-16 embeddings. Each row represents an object, with rows sorted into 27 superordinate categories (for example, animal, food and furniture) from ref. 40 to better highlight similarities and differences in representation. b, Pairwise correlations between human and VGG-16 embedding dimensions. c, Cumulative RSA analysis showing the amount of variance explained in the human RSM as a function of the number of DNN dimensions. The black line shows the number of dimensions required to explain 95% of the variance. d–f, Intersections (red and blue regions) and differences (orange and green regions) between three highly correlated human and DNN dimensions. The pink circles denote the intersection of the red and blue regions, that is, where the same image scores highly in both dimensions. For this figure, we filtered the embedding by images from the public domain (ref. 76). For three images without a public domain version, visually similar replacements were used. Images in d–f reproduced with permission from ref. 76, Springer Nature Limited.
