Extended Data Fig. 9: Figure S9. Comparison of GPT-2 and concatenation of static embeddings. | Nature Neuroscience

From: Shared computational principles for language processing in humans and deep language models

The improved performance of encoding models based on GPT-2 contextual embeddings may be attributable to the fact that these embeddings carry information about the identities of the preceding words. To examine this possibility, we concatenated the GloVe embeddings of the current word and the 10 preceding words and reduced their dimensionality to 50 features. GPT-2-based encoding outperformed this mere concatenation before word onset, suggesting that GPT-2's ability to compress contextual information improves the modeling of neural signals before word onset. Error bars indicate the standard error of the encoding models across electrodes.
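The concatenation baseline described above can be sketched as follows. This is a minimal illustration, not the authors' code: the caption does not state the dimensionality-reduction method, so PCA via SVD is an assumption here, as is zero-padding for the first words of the transcript, which lack a full 10-word history.

```python
import numpy as np

def concat_context_embeddings(embeddings, window=10, n_components=50):
    """Concatenate each word's static embedding with those of the
    `window` preceding words, then reduce to `n_components` features.

    embeddings: (n_words, d) array of static (e.g., GloVe) vectors
                in transcript order.
    """
    n_words, d = embeddings.shape
    # Zero-pad so the first words have a full (if partly empty) context.
    # Padding with zeros is an illustrative choice, not from the paper.
    padded = np.vstack([np.zeros((window, d)), embeddings])
    # Row t of the result holds words t-window, ..., t-1, t concatenated.
    concat = np.hstack([padded[i : i + n_words] for i in range(window + 1)])
    # PCA via SVD on the mean-centered matrix (assumed reduction method).
    centered = concat - concat.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return centered @ vt[:n_components].T

# Example: 200 words with 50-dimensional static embeddings
features = concat_context_embeddings(np.random.randn(200, 50))
print(features.shape)  # (200, 50)
```

The reduced features would then serve as regressors in the same linear encoding framework used for the GPT-2 embeddings, making the two models directly comparable at matched dimensionality.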