Fig. 3: Linear encoding model used to predict the neural responses to each word in the narrative before and after word-onset. | Nature Neuroscience
From: Shared computational principles for language processing in humans and deep language models

a, Brain coverage consisted of 1,339 electrodes (across nine participants). The words are aligned with the neural signal; each word’s onset (moment of articulation) is designated as lag 0. Responses are averaged over a 200-ms window and provided as input to the encoding model. b, A series of 50 coefficients, corresponding to the features of the word embeddings, is learned using linear regression to predict the neural signal across words from the assigned embeddings. The model was evaluated by computing the correlation between the reconstructed signal and the actual signal for a held-out test word. This procedure was repeated for each lag and each electrode, using a 25-ms sliding window. The dashed horizontal line indicates the statistical threshold (q < 0.01, FDR corrected). Lags of −100 ms or earlier, preceding word onset, contained only neural information sampled before the word was perceived (yellow). c, Electrodes with a significant correlation between predicted and actual word responses at the peak lag for semantic embeddings (GloVe). LH, left hemisphere; RH, right hemisphere.
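The encoding procedure described in panel b can be sketched as follows. This is a minimal illustration, not the authors' code: the data here are simulated (hypothetical word count, noise level, and a single electrode at a single lag), but the structure matches the caption — 50 regression coefficients learned from word embeddings, evaluated by correlating predicted and actual responses for held-out words.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: 200 words with 50-dimensional embeddings (GloVe-sized),
# and one electrode's response per word, averaged over a 200-ms window
# at a single lag relative to word onset.
n_words, n_dims = 200, 50
embeddings = rng.standard_normal((n_words, n_dims))
true_coefs = rng.standard_normal(n_dims)          # assumed ground truth
neural = embeddings @ true_coefs + 0.5 * rng.standard_normal(n_words)

# Hold out each word in turn: learn the 50 coefficients by linear
# regression on the remaining words, then predict the held-out response.
predicted = np.empty(n_words)
for i in range(n_words):
    train = np.delete(np.arange(n_words), i)
    coefs, *_ = np.linalg.lstsq(embeddings[train], neural[train], rcond=None)
    predicted[i] = embeddings[i] @ coefs

# Encoding performance: correlation between predicted and actual responses
# across held-out words. In the full analysis this is repeated for every
# lag (25-ms steps) and every electrode.
r = np.corrcoef(predicted, neural)[0, 1]
print(f"encoding correlation: r = {r:.2f}")
```

In the actual analysis this single correlation becomes a curve over lags per electrode, and the FDR-corrected threshold (q < 0.01) is applied to identify the significant electrodes shown in panel c.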
