Fig. 4: ROCN methodology and surface maps.

a, During the postlearning room–video object recall task, participants watched a video of a room and verbally recalled the object that was paired with it. In a leave-one-participant-out cross-validation procedure, the characteristic object patterns of the N − 1 group, evoked during a separate phase of the study in which participants viewed object videos, were used to train a multinomial logistic classifier. This classifier was then applied to each timepoint of the left-out participant's room–video object recall data. In the pictured example, the left-out participant, Fernando, is recalling the carrot object that was paired with the hexagon room currently being presented. The object classifier, trained on patterns evoked when other participants viewed the objects, was applied to each timepoint of Fernando's recall. We then measured the fraction of timepoints during the hexagon-room video that were classified as activating the carrot representation.
b, For each searchlight, each participant's object classification accuracies for the two room–video object recall videos were averaged together, then averaged across participants and z-scored relative to a null distribution. The 50 top-performing searchlights were selected to form the ROCN.
c, Average object classification accuracy during room–video object recall. The colour map shows the relative classification accuracy across all searchlights (thresholded to show only searchlights with above-chance accuracy).
d, ROCN. The top 50 searchlights most sensitive to object reinstatement (yellow) were defined as the ROCN for subsequent analyses.
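The decoding step in panel a can be summarized in a few lines. The following is a minimal sketch in Python using NumPy and scikit-learn, assuming the searchlight patterns have already been extracted as arrays; the function and variable names (recall_evidence, viewing_patterns, recall_timeseries and so on) are hypothetical and stand in for whatever the actual pipeline uses.

```python
# Minimal sketch of the leave-one-participant-out decoding in panel a.
# Assumptions: object-viewing patterns from the N - 1 group and the left-out
# participant's recall timeseries are available as NumPy arrays; all names
# here are hypothetical, not taken from the original pipeline.
import numpy as np
from sklearn.linear_model import LogisticRegression

def recall_evidence(viewing_patterns, viewing_labels, recall_timeseries,
                    paired_object):
    """Fraction of recall timepoints classified as the paired object.

    viewing_patterns  : (n_samples, n_voxels) patterns evoked while the
                        N - 1 group viewed the object videos
    viewing_labels    : (n_samples,) object identity of each pattern
    recall_timeseries : (n_timepoints, n_voxels) left-out participant's
                        data during one room-video recall
    paired_object     : label of the object paired with the presented room
    """
    # With multi-class labels and the default lbfgs solver, scikit-learn
    # fits a multinomial logistic model.
    clf = LogisticRegression(max_iter=1000)
    clf.fit(viewing_patterns, viewing_labels)

    # Classify every recall timepoint independently.
    predictions = clf.predict(recall_timeseries)

    # e.g. fraction of hexagon-room timepoints decoded as 'carrot'.
    return np.mean(predictions == paired_object)
```

In the caption's example, paired_object would be 'carrot' and recall_timeseries the data recorded while Fernando watched the hexagon-room video.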
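The searchlight selection in panels b and d can be sketched in the same spirit. In the snippet below, the shapes of the accuracy arrays and the permutation-based null are assumptions made for illustration, since the caption does not specify how the null distribution was generated; only the averaging order, the z-scoring and the top-50 cutoff come from the caption.

```python
# Minimal sketch of ROCN selection (panels b and d). Array shapes and the
# permutation null are assumptions; only the averaging order, z-scoring
# and top-50 cutoff are described in the caption.
import numpy as np

def define_rocn(accuracies, null_accuracies, n_top=50):
    """Indices of the top-performing searchlights forming the ROCN.

    accuracies      : (n_participants, 2, n_searchlights) classification
                      accuracy for each participant's two recall videos
    null_accuracies : (n_permutations, n_searchlights) accuracies obtained
                      under a permutation-based null
    """
    # Average the two videos within each participant, then across participants.
    group_acc = accuracies.mean(axis=1).mean(axis=0)

    # z-score each searchlight against its own null distribution.
    z = (group_acc - null_accuracies.mean(axis=0)) / null_accuracies.std(axis=0)

    # Keep the n_top top-performing searchlights.
    return np.argsort(z)[::-1][:n_top]
```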