Fig. 4: Role of L5-to-L2/3 feedback connections in self-supervised predictive learning.
From: Self-supervised predictive learning accounts for cortical layer-specificity

a L2/3 learns to predict the input in the presence of random feedback (left) but fails to do so without L5-to-L2/3 feedback (right). b L5 learns to represent the inputs accurately with random feedback (left) but shows lower decoding accuracy without feedback (right). c First two principal components of L2/3 representations with random feedback (left) and without feedback (right), across different top-down contexts (symbols) and input Gabor orientations (colors). d First two principal components of L5 representations with random feedback (left) and without feedback (right), across different top-down contexts (symbols) and input Gabor orientations (colors). e L2/3 (top) and L5 (bottom) decoding accuracy for different degrees of L5-to-L2/3 feedback. f Explained variance of the learnt L2/3 (top) and L5 (bottom) representations. Error bars represent the standard error of the mean over five different initial conditions.
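The quantities summarised in the panels (decoding accuracy in b and e, principal components and explained variance in c, d and f) are standard analyses of learnt layer representations. As a minimal illustrative sketch only (not the authors' code), the snippet below shows one way such quantities could be computed from recorded layer activity with scikit-learn; all data and variable names (l23_activity, gabor_orientation, etc.) are hypothetical placeholders.

```python
# Hedged sketch: cross-validated decoding accuracy and 2-component PCA of a
# layer representation. Stand-in random data; names are illustrative only.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Hypothetical activity matrix (trials x units) of one layer, e.g. L2/3,
# and the Gabor orientation class shown on each trial.
n_trials, n_units, n_orientations = 500, 100, 8
l23_activity = rng.normal(size=(n_trials, n_units))
gabor_orientation = rng.integers(n_orientations, size=n_trials)

# Decoding accuracy (cf. panels b, e): cross-validated linear readout of the
# input orientation from the layer activity.
decoder = LogisticRegression(max_iter=1000)
accuracy = cross_val_score(decoder, l23_activity, gabor_orientation, cv=5).mean()

# First two principal components and their explained variance (cf. panels c, d, f).
pca = PCA(n_components=2)
pcs = pca.fit_transform(l23_activity)             # trials x 2, as scattered in c/d
explained = pca.explained_variance_ratio_.sum()   # fraction of variance, as in f

print(f"decoding accuracy: {accuracy:.2f}, explained variance (2 PCs): {explained:.2f}")
```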