Fig. 1: Contextual processing in the brain vs. large language models. | Nature Communications

From: Incremental accumulation of linguistic context in artificial and biological neural networks

a The neuroanatomical hierarchical organization according to multiple timescales of processing. Partially adapted from ref. 18 with the authors’ permission. b Our proposed neural mechanism for integrating long-term contextual information at the top level of the timescale hierarchy. c The baseline implementation of contextual integration via Large Language Models (LLMs). The model is exposed to the entire incoming context window and processes it in parallel. d Our proposed alternative model of contextual integration via an LLM. Instead of processing the entire context window at once, the incremental-context LLM is applied sequentially along the story. The LLM accumulates long-term contextual information by generating a concise summary of the past; at each step, it integrates this summary with the incoming context window and updates the summary for use in the next step (see Fig. 3 and Fig. S3 for details).
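The incremental scheme in panel d can be illustrated with a minimal sketch. Here `incremental_context` and the `summarize` callable are hypothetical stand-ins introduced for illustration only; in the paper the summarization step is performed by an LLM, not by the toy function used below.

```python
def incremental_context(story_tokens, window_size, summarize):
    """Sketch of the incremental-context scheme (panel d): the story is
    processed window by window; at each step the running summary is
    concatenated with the incoming window and re-summarized, and the
    updated summary carries forward to the next step."""
    summary = []
    for start in range(0, len(story_tokens), window_size):
        window = story_tokens[start:start + window_size]
        # In the paper this step is an LLM call that produces a concise
        # summary; `summarize` here is an assumed stand-in.
        summary = summarize(summary + window)
    return summary

# Toy stand-in summarizer: keep only the last four tokens as the "summary".
story = list(range(10))
result = incremental_context(story, window_size=3,
                             summarize=lambda tokens: tokens[-4:])
# → [6, 7, 8, 9]
```

The design point is that each step sees only a bounded input (summary plus one window), so context far longer than the model's window can still influence the current step through the accumulated summary.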