
Extended Data Fig. 6: ANN and RSC coding transitions dynamically from an egocentric landmark-relative to an allocentric global reference frame based on phase in trial.

From: Spatial reasoning via recurrent neural dynamics in mouse retrosplenial cortex

Extended Data Fig. 6

(a) Top: Tuning curves (mean rate) for displacement from the last encountered landmark for LM1 and LM2 states in the ANN. Bottom: Same data, but shown as distributions of firing rates. The network discovers that displacement from the last landmark encounter in the LM1 period is a key latent variable, and its encoding is an emergent property. Intriguingly, a similar displacement-to-location coding switch has been observed in mouse CA1 (ref. 67), suggesting that the empirically observed switch may be related to the brain performing spatial reasoning to disambiguate between multiple location hypotheses. (b) Same as panel a, but for global location; ANN neurons became more strongly tuned to global location than to landmark-relative information after encountering the 2nd landmark. (c) Decoding of location, displacement, and separation between landmarks from the ANN in a 2-landmark environment by a linear decoder that remains fixed across trials and environments. Top: Squared population decoding error of location (green) and displacement (blue), as a function of the number of encountered landmarks. As suggested by the well-tuned activity of ANN neurons, location can be linearly decoded in the LM2 state, whereas displacement is best decoded in the LM1 state. Bottom: Squared decoding error of the distance between landmarks, as a function of the number of encountered landmarks. The representation is particularly accurate around the time just before and after the first landmark encounter, when location disambiguation takes place. Top: Performance was evaluated on 1000 trials from experiment configuration 2. For location, the decoder corresponded to the network's location estimate. For displacement, the linear decoder was trained on 4000 separate trials. Bottom: experiment configuration 1, with 4000 trials to train the linear decoder and 1000 trials to evaluate it. Thus, the network's encoding of these three critical variables is dynamic and tied to the different computational imperatives at each stage: displacement and landmark separation are not explicit inputs, but the network estimates them and represents them in a decodable way at LM1, the critical time when this information is essential to the computation. After LM2, decodability of landmark separation from the network drops, as it is no longer essential. (d) Neurons in RSC also became less well tuned to relative displacements from landmarks in LM2 than in LM1: histogram, across all RSC neurons, of the entropy of the tuning curve for angular displacement from the last seen landmark. Black: LM2 state; blue: LM1 state; red: histogram of pairwise differences. For this analysis, angular firing rate distributions were analyzed relative to either the global reference frame or the last seen landmark. (e) Same as d, but for global location. (f) The absolute change in landmark-relative displacement coding (d) is larger than that for allocentric location tuning (e), suggesting that the latter is less affected by task state.
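To illustrate the two basic computations this legend refers to, the minimal sketch below shows (i) fitting a single fixed linear decoder and reporting squared decoding error separately by trial phase (number of encountered landmarks), as in panel c, and (ii) computing the entropy of a binned tuning curve as a measure of tuning sharpness, as in panels d and e. This is not the authors' analysis code; the array names, bin count, and use of scikit-learn's LinearRegression are assumptions made for illustration.

```python
# Minimal sketch (not the authors' code) of the decoding and tuning-entropy analyses.
# Assumed inputs:
#   rates  : (n_samples, n_neurons) population activity
#   target : (n_samples,) latent variable to decode (e.g. displacement from last landmark)
#   phase  : (n_samples,) number of landmarks encountered so far on that sample

import numpy as np
from sklearn.linear_model import LinearRegression


def decode_and_score(rates, target, phase, n_train):
    """Fit one fixed linear decoder on training samples and return the
    mean squared decoding error separately for each trial phase."""
    dec = LinearRegression().fit(rates[:n_train], target[:n_train])
    pred = dec.predict(rates[n_train:])
    sq_err = (pred - target[n_train:]) ** 2
    test_phase = phase[n_train:]
    return {p: sq_err[test_phase == p].mean() for p in np.unique(test_phase)}


def tuning_curve_entropy(spike_rates, variable, n_bins=36):
    """Entropy (bits) of a neuron's normalized tuning curve over a binned
    variable (e.g. angular displacement from the last seen landmark);
    lower entropy indicates sharper tuning."""
    edges = np.linspace(variable.min(), variable.max(), n_bins + 1)
    idx = np.clip(np.digitize(variable, edges) - 1, 0, n_bins - 1)
    curve = np.array([spike_rates[idx == b].mean() if np.any(idx == b) else 0.0
                      for b in range(n_bins)])
    p = curve / curve.sum()
    p = p[p > 0]
    return -(p * np.log2(p)).sum()
```

Comparing the entropy of each neuron's tuning curve computed in the landmark-relative frame versus the global frame, separately for LM1 and LM2 samples, would yield histograms analogous to those shown in panels d and e.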
