Fig. 3

Design of the EEG experiment and time-resolved multivariate decoding. a In the EEG experiment, participants were asked to create word-object associations, to later reconstruct the object as vividly as possible when cued with the word, and to indicate with a button press when they had a vivid image back in mind. EEG was recorded during learning and recall, with the aim of performing time-series decoding analyses that detect at which moment, within a single trial, a classifier is most likely to categorise perceptual and semantic features correctly. Coloured time lines under the object and cue time windows represent our reversal hypothesis regarding the temporal order of maximum semantic (pink) and perceptual (blue) classification during the perception (encoding) and retrieval of an object. All EEG analyses were aligned to object onset during encoding and to the button press during retrieval. b Decoding analyses were performed independently for each participant at each time point. At each time point during a trial, two linear discriminant analysis (LDA)-based classifiers were trained on the EEG signal: one perceptual classifier discriminating photographs from line drawings, and one semantic classifier discriminating animate from inanimate objects. Classifiers were tested using a leave-one-out procedure, which yielded a time series of confidence values (d values, reflecting the distance from the separating hyperplane) for each individual trial. c Our main interest was to compare the time points of maximal fidelity of the perceptual (blue) and semantic (pink) classifiers on each trial, to test the hypothesis that the perceptual maximum precedes the semantic maximum during perception and, importantly, that this order is reversed during memory recall.
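To make the pipeline in panels b and c concrete, the sketch below (Python with scikit-learn, simulated data) trains a leave-one-out LDA independently at every time point, collects per-trial d values as signed distances from the separating hyperplane, and extracts each trial's latency of maximal perceptual and semantic fidelity. All names and dimensions (X, y_percept, y_semantic, timewise_lda_dvalues, 40 trials, 64 channels) are illustrative assumptions; this is not the authors' implementation, which may differ in toolbox, regularisation, and preprocessing.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import LeaveOneOut

# Toy dimensions and simulated single-trial EEG; the real experiment's trial
# counts, channel montage, and sampling rate differ.
rng = np.random.default_rng(0)
n_trials, n_channels, n_times = 40, 64, 50
X = rng.standard_normal((n_trials, n_channels, n_times))
y_percept = np.tile([0, 1], n_trials // 2)   # 0 = line drawing, 1 = photograph
y_semantic = rng.permutation(y_percept)      # 0 = inanimate,    1 = animate


def timewise_lda_dvalues(X, y):
    """Leave-one-out LDA at every time point; returns per-trial signed
    distances from the separating hyperplane (d values), shape (trials, times)."""
    n_trials, _, n_times = X.shape
    d_vals = np.zeros((n_trials, n_times))
    loo = LeaveOneOut()
    for t in range(n_times):
        Xt = X[:, :, t]                       # trials x channels pattern at time t
        for train_idx, test_idx in loo.split(Xt):
            clf = LinearDiscriminantAnalysis(solver="lsqr", shrinkage="auto")
            clf.fit(Xt[train_idx], y[train_idx])
            # decision_function gives the signed distance from the hyperplane
            d_vals[test_idx, t] = clf.decision_function(Xt[test_idx])
    return d_vals


# Time series of d values for the perceptual and semantic classifiers (panel b)
d_percept = timewise_lda_dvalues(X, y_percept)
d_semantic = timewise_lda_dvalues(X, y_semantic)

# Sign the d values so that larger values always mean more confident *correct*
# classification, then take each trial's latency of maximal fidelity (panel c).
fid_percept = d_percept * np.where(y_percept == 1, 1.0, -1.0)[:, None]
fid_semantic = d_semantic * np.where(y_semantic == 1, 1.0, -1.0)[:, None]
t_max_percept = fid_percept.argmax(axis=1)
t_max_semantic = fid_semantic.argmax(axis=1)

# The reversal hypothesis predicts perceptual maxima before semantic maxima
# during encoding and the opposite order during retrieval; this toy script
# does not separate the two task phases.
print("mean latency difference (samples):", np.mean(t_max_semantic - t_max_percept))
```

The shrinkage LDA here is one common regularisation choice when channels outnumber what a small leave-one-out training fold can support; the caption itself only specifies LDA with leave-one-out testing, so the solver and shrinkage settings should be read as assumptions.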