Extended Data Fig. 7: RSMs for self-supervised AlexNet.

From: Infants have rich visual categories in ventrotemporal cortex at 2 months of age

(a) The self-supervised model was trained with Instance-Prototype Contrastive Learning (IPCL; https://github.com/harvard-visionlab/open_ipcl). The algorithm learned only from natural image structure, pulling augmented versions of an image toward an average “prototype” of that image in embedding space while keeping the image’s representation distinct from recently encountered images in memory. Activation vectors were computed in response to each of the 36 images with the trained weights frozen. (b) Unsupervised RSMs for each layer, computed from pairwise correlations between each image’s activation vector. The x and y axes are the 36 objects, nested by tripartite class, category, and exemplar. We plot the z-scored correlation to highlight representational content rather than correlation strength, and to aid comparison with brain-derived RSMs. Raw correlations: all diagonal values are 1; off-diagonal ranges: conv1: (0, 0.3); conv2: (0, 0.5); conv3: (0.1, 0.5); conv4: (0.2, 0.6); conv5: (0.8, 0.9); fc6: (−0.1, 0.8); fc7: (0, 0.7). conv, convolutional; fc, fully connected.
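As a concrete illustration of the computation in panel (b), the following is a minimal Python sketch of extracting activations from one frozen layer and building a z-scored RSM. The AlexNet instantiation, the layer choice, and the placeholder stimuli are illustrative assumptions, not the open_ipcl repository’s documented API; in practice the IPCL checkpoint weights would be loaded into the network.

import numpy as np
import torch
import torchvision.models as models

# Stand-in architecture; the trained IPCL weights would be loaded here.
model = models.alexnet(weights=None)
model.eval()
for p in model.parameters():
    p.requires_grad_(False)  # keep the trained weights frozen

# Capture one layer's activations with a forward hook.
feats = {}
def save_to(name):
    def hook(module, inputs, output):
        feats[name] = output.flatten(start_dim=1)  # (n_images, n_units)
    return hook

model.features[0].register_forward_hook(save_to("conv1"))

# Placeholder for the 36 preprocessed stimuli, shape (36, 3, 224, 224).
images = torch.randn(36, 3, 224, 224)
with torch.no_grad():
    model(images)

acts = feats["conv1"].numpy()            # one activation vector per image
rsm = np.corrcoef(acts)                  # (36, 36) pairwise correlations

# z-score the off-diagonal correlations (the diagonal is 1 by construction),
# emphasizing representational structure over overall correlation strength.
off = ~np.eye(rsm.shape[0], dtype=bool)
rsm[off] = (rsm[off] - rsm[off].mean()) / rsm[off].std()

Repeating this for each hooked layer (conv1 through conv5, fc6, fc7) would yield the seven matrices shown in the panel.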
