Fig. 2: Schematic illustration of our representational similarity analysis.

From: Fast hierarchical processing of orthographic and semantic parafoveal information during natural reading

1. For each trial, at each time point t (from −200 to 500 ms relative to fixation onset on the pre-target word, e.g., “clever”), we extracted the MEG signals across sensors to create a vector representing the brain activity pattern at time t.
2. We quantified representational similarity for each pair of trials by computing the Pearson correlation coefficient (R) between their corresponding vectors.
3. We then averaged the R-values across all pairs within each condition to obtain the average correlations for the three conditions at time t: Rorth(t), Rsema(t), and Rbetween(t).
4. We repeated this procedure at every millisecond after fixation onset on the pre-target word, yielding three time series of pairwise correlations: Rorth, Rsema, and Rbetween (see the sketch below).

Note that Rorth, Rsema, and Rbetween denote the average correlations for orthographic within pairs, semantic within pairs, and unrelated between pairs, respectively.
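A minimal sketch of this pairwise-correlation procedure, assuming MEG data stored as a NumPy array of shape (n_trials, n_sensors, n_times) and hypothetical pair definitions (the function name, array shapes, and example pairs are illustrative, not the authors' actual pipeline):

```python
import numpy as np
from itertools import combinations

def average_pairwise_similarity(meg, pair_index_lists):
    """Mean pairwise Pearson R of sensor patterns at each time point.

    meg              : ndarray (n_trials, n_sensors, n_times)
    pair_index_lists : dict mapping condition name -> list of (i, j) trial-index pairs
    returns          : dict mapping condition name -> ndarray (n_times,)
    """
    n_trials, n_sensors, n_times = meg.shape
    out = {}
    for name, pairs in pair_index_lists.items():
        r_series = np.empty(n_times)
        for t in range(n_times):
            patterns = meg[:, :, t]  # spatial pattern (across sensors) per trial at time t
            # Pearson R between the two trials' sensor vectors, averaged over all pairs
            rs = [np.corrcoef(patterns[i], patterns[j])[0, 1] for i, j in pairs]
            r_series[t] = np.mean(rs)
        out[name] = r_series
    return out

# Hypothetical usage with simulated data: 60 trials, 204 sensors, 701 samples (-200..500 ms at 1 kHz)
rng = np.random.default_rng(0)
meg = rng.standard_normal((60, 204, 701))

# Hypothetical pair assignments; in the actual analysis, pairs are defined by the
# orthographic or semantic relatedness of the parafoveal target words.
orth_pairs = [(i, i + 1) for i in range(0, 20, 2)]
sema_pairs = [(i, i + 1) for i in range(20, 40, 2)]
between_pairs = list(combinations(range(40, 50), 2))

result = average_pairwise_similarity(
    meg, {"R_orth": orth_pairs, "R_sema": sema_pairs, "R_between": between_pairs}
)
```

Each returned time series plays the role of Rorth, Rsema, or Rbetween described above; comparing the within-condition series against the unrelated between-condition series at each time point is what allows the timing of orthographic and semantic parafoveal processing to be assessed.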