Extended Data Fig. 9: Different random initialization methods yield similar encoding performance for the untrained convolutional neural network. | Nature Machine Intelligence

From: Convolutional architectures are cortex-aligned de novo

The effects of initializing the random features of our untrained convolutional network with different methods were explored for monkey IT (a) and the human ventral visual stream (b). Each panel shows encoding performance for identical architectures under different random initialization schemes. As in Fig. 2, the x-axis shows the number of random features in the output layer, and the y-axis shows the encoding score for predicting image-evoked cortical responses. Encoding performance varies across initialization methods at low dimensionality (left side of the x-axis), but these differences diminish as dimensionality increases. This indicates that the choice of initialization has minimal impact on encoding performance once the model is sufficiently expanded.
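The dimensionality effect described above can be illustrated with a minimal NumPy sketch: an untrained random-projection "layer" drawn under several initialization schemes, scored with a ridge-regression encoding model on synthetic responses. All function names, the particular initialization schemes, and the synthetic data here are illustrative assumptions, not the paper's actual architecture or evaluation pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

def init_weights(shape, method, rng):
    """Three common random-initialization schemes (illustrative choices,
    not necessarily the ones compared in the figure)."""
    fan_in = shape[0]
    if method == "gaussian":
        return rng.normal(0.0, 1.0, shape)
    if method == "uniform":
        return rng.uniform(-1.0, 1.0, shape)
    if method == "kaiming":  # He-style variance scaling for ReLU layers
        return rng.normal(0.0, np.sqrt(2.0 / fan_in), shape)
    raise ValueError(f"unknown method: {method}")

def random_features(X, n_features, method, rng):
    """One untrained layer: random linear projection followed by ReLU."""
    W = init_weights((X.shape[1], n_features), method, rng)
    return np.maximum(X @ W, 0.0)

def encoding_score(F, Y, alpha=1.0):
    """Ridge-regression encoding model; mean held-out Pearson r across units."""
    split = F.shape[0] // 2
    Ftr, Fte, Ytr, Yte = F[:split], F[split:], Y[:split], Y[split:]
    W = np.linalg.solve(Ftr.T @ Ftr + alpha * np.eye(F.shape[1]), Ftr.T @ Ytr)
    pred = Fte @ W
    rs = [np.corrcoef(pred[:, j], Yte[:, j])[0, 1] for j in range(Y.shape[1])]
    return float(np.mean(rs))

# Synthetic stimuli and toy "image-evoked responses" (purely illustrative).
X = rng.normal(size=(200, 64))
Y = np.tanh(X @ rng.normal(size=(64, 10)))

# Compare initialization schemes at low vs. high feature dimensionality.
scores = {
    (method, n_feat): encoding_score(random_features(X, n_feat, method, rng), Y)
    for method in ("gaussian", "uniform", "kaiming")
    for n_feat in (16, 256)
}
for key, s in scores.items():
    print(key, round(s, 3))
```

On synthetic data of this kind, the scores for the three schemes tend to be more spread out at 16 features than at 256, loosely mirroring the convergence the figure reports; the sketch is only meant to make the experimental setup concrete.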
