Extended Data Fig. 4: Hypothesized relationship between population activity and movements in the low-dimensional polar state-space.

From: Population dynamics of head-direction neurons during drift and reorientation

Extended Data Fig. 4

a, b. The amount of change in neural activity during bump movement depends on the network gain. The x axis represents the neuronal space (assuming a uniform distribution of HD cells by PFD). Mathematically, the Euclidean distance between the representations of the internal HD at the start and end of a rotation is smaller at lower network gain. D: Euclidean distance; \({\boldsymbol{r}}_{t}^{\mathrm{activity}}\): N × 1 vector of firing rates from the N HD neurons at time t, for ‘high’ or ‘low’ activity levels. c. The concept of decreasing distance between internal HD representations at lower network gain is naturally captured in the 2D polar plane if we assume that the radius reflects the level of network activity. The distance travelled in the hypothetical state-space of the HD network is greater when the radius is larger as well as when the net gain is higher, which could be quantified by the total change in firing rate across the network. Thus, we hypothesize that the radius is correlated with overall population activity (that is, network gain) and that the decreasing distance facilitates rotations across the HD network. Assuming that the internal HD representation lives in a 2D polar state-space in which each state is defined by a phase and a radius, state transitions would be fastest at the lower end of the radial component because of the decreasing distance between states representing different angles near the centre of the baseline ring. Bar graphs are only indicative and not to scale. d. Diagram of the artificial neural network used to project high-dimensional neural activity onto the 2D polar space. Numbers inside each box correspond to the unit count. All activation functions are ‘relu’, except for nodes \(z_{1,t}\) and \(z_{2,t}\), where the activation function is ‘tanh’. In all layers, we apply L2 regularization with a regularization factor of 0.001. Input data are normalized.
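To make the geometric argument of panels a–c concrete, the sketch below (not from the paper; the Gaussian bump profile, the neuron count N = 100 and the gain values are illustrative assumptions) computes the Euclidean distance D between population vectors at the start and end of the same rotation under a ‘high’ and a ‘low’ gain, and the corresponding path length in the 2D polar plane at two radii.

```python
import numpy as np

# Minimal illustrative sketch: the Euclidean distance between population-vector
# representations of two internal HDs shrinks when the overall network gain is
# lower. Bump shape, N and the gain values are assumptions, not the paper's values.
N = 100
pfd = np.linspace(0, 2 * np.pi, N, endpoint=False)   # preferred firing directions

def bump(hd, gain, width=0.5):
    """Activity bump centred on the internal HD, scaled by the network gain."""
    ang = np.angle(np.exp(1j * (pfd - hd)))           # wrapped angular distance to PFD
    return gain * np.exp(-(ang ** 2) / (2 * width ** 2))

hd_start, hd_end = 0.0, np.pi / 2                     # start and end of a rotation
for gain in (1.0, 0.3):                               # 'high' vs. 'low' activity level
    d = np.linalg.norm(bump(hd_end, gain) - bump(hd_start, gain))
    print(f"gain={gain:.1f}  Euclidean distance D={d:.2f}")

# Same idea in the 2D polar state-space of panel c: for a fixed change of phase,
# the path length scales with the radius (radius ~ population activity), so
# transitions between angles are shorter near the centre of the baseline ring.
dphi = np.pi / 2
for radius in (1.0, 0.3):
    print(f"radius={radius:.1f}  arc length={radius * dphi:.2f}")
```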
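For panel d, a minimal sketch of such a network is given below, assuming an autoencoder-style layout in Keras. The hidden-layer unit counts and the input dimension N are placeholders (the actual counts appear in the boxes of the figure), while the details stated in the legend are reproduced: ‘relu’ activations everywhere except ‘tanh’ on \(z_{1,t}\) and \(z_{2,t}\), L2 regularization with factor 0.001 in all layers, and normalized input.

```python
from tensorflow.keras import layers, regularizers, Model

# Sketch only, not the authors' code: project N-dimensional population activity
# onto a 2D latent (z1, z2) read out in polar coordinates. Layer sizes 64 and 16
# and N = 100 are placeholder assumptions.
N = 100
l2 = regularizers.l2(0.001)                           # L2 regularization, factor 0.001

inputs = layers.Input(shape=(N,))                     # normalized firing-rate vector r_t
h = layers.Dense(64, activation="relu", kernel_regularizer=l2)(inputs)
h = layers.Dense(16, activation="relu", kernel_regularizer=l2)(h)
z = layers.Dense(2, activation="tanh", kernel_regularizer=l2, name="z")(h)  # (z1_t, z2_t)
h = layers.Dense(16, activation="relu", kernel_regularizer=l2)(z)
h = layers.Dense(64, activation="relu", kernel_regularizer=l2)(h)
outputs = layers.Dense(N, activation="relu", kernel_regularizer=l2)(h)

model = Model(inputs, outputs)
model.compile(optimizer="adam", loss="mse")

# The phase and radius of the hypothesized polar state-space are then read out
# from the latent as phase = atan2(z2, z1) and radius = sqrt(z1**2 + z2**2).
```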