Fig. 2: The network can forecast a simple video input many frames into the future.

From: Waves traveling over a map of visual space can ignite short-term predictions of sensory input

Fig. 2

a As in the classification example (Fig. 1), a video frame projects into the network in a spatially local manner, and recurrent network interactions generate internal wave activity on top of the projection. The network outputs an image from its state via a matrix of trainable readout weights. Training entails a one-shot linear regression between a set of network states and the corresponding desired output frames (the one-step-ahead next frames). Shown: a schematic of the one-shot linear regression for one time step. b Once training of the readout weights is complete, closed-loop forecasting begins. To properly test how well the network model has learned the underlying spatiotemporal process from the training data, it is deprived of ground-truth data of any kind during this step; instead, the frame forecast at one time step serves as the input frame for the following time step. c Video frames of the data: a bump tracing an orbit. d Corresponding closed-loop forecasts generated by the network model with optimal recurrence. e Network activity for the optimal-recurrence case; the cosine of the activation phase is shown. f Closed-loop forecast in the case without recurrence.
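The procedure in panels a and b follows the general reservoir-computing recipe: drive the network with the training video, collect its states, fit the readout in one shot by linear regression against the next-frame targets, and then run closed loop by feeding each predicted frame back as the next input. The sketch below is a minimal illustration of that recipe in Python/NumPy; the leaky-tanh state update, the network sizes, and the ridge parameter are illustrative assumptions and stand in for the paper's wave-propagating dynamics over a map of visual space, not a reproduction of them.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative recurrent network (assumption: a generic leaky-tanh reservoir
# stands in for the paper's wave dynamics; frames are flattened to vectors).
N_PIX, N_UNITS, LEAK = 16 * 16, 500, 0.3
W_in = rng.normal(0.0, 0.1, (N_UNITS, N_PIX))       # spatially local in the paper
W_rec = rng.normal(0.0, 1.0 / np.sqrt(N_UNITS), (N_UNITS, N_UNITS))

def step(state, frame):
    """One recurrent update driven by the current input frame."""
    return (1 - LEAK) * state + LEAK * np.tanh(W_in @ frame + W_rec @ state)

def train_readout(frames, ridge=1e-4):
    """Panel a: collect states while presenting the training frames, then fit
    the readout weights in one shot by (ridge-regularized) linear regression,
    mapping each network state to the one-step-ahead next frame."""
    state, states = np.zeros(N_UNITS), []
    for frame in frames[:-1]:
        state = step(state, frame)
        states.append(state)
    X = np.stack(states)            # (T-1, N_UNITS) network states
    Y = np.stack(frames[1:])        # (T-1, N_PIX) next-frame targets
    W_out = np.linalg.solve(X.T @ X + ridge * np.eye(N_UNITS), X.T @ Y).T
    return W_out, state

def forecast(W_out, state, seed_frame, n_steps):
    """Panel b: closed-loop forecasting without ground truth; each predicted
    frame serves as the input frame for the following time step."""
    frame, preds = seed_frame, []
    for _ in range(n_steps):
        state = step(state, frame)
        frame = W_out @ state       # readout produces the forecast next frame
        preds.append(frame)
    return np.stack(preds)
```

Only the readout matrix `W_out` is trained, which is what makes the one-shot regression possible; the recurrent interaction itself (the wave dynamics in the paper, the random reservoir in this sketch) is left fixed.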