Fig. 7: Hierarchical inference can be performed by a biologically realistic network model.
From: Visual motion perception as online hierarchical inference

a Network model implementing the online hierarchical inference model. Linear and quadratic interactions are indicated by direct arrows and Quad boxes, respectively. The variables represented by each population are given in parentheses. b Rotational stimulus in a location-indexed experiment. Besides translational (Cartesian) motion, the model also supports rotational motion, s_φ, and radial motion, s_r. c Tuning centers in a model of area MT. A local population of neurons, which shares the spatial receptive field highlighted in panel b, covers all directions and speeds with its velocity tuning centers. d Response function of the neuron highlighted in panel c. The neuron responds most strongly to local velocities toward the upper right at a speed of ~5°/s. Max. rate = 29.5 spikes/s. e Motion structure used for the network simulation in panels f–j, including simultaneous translational, rotational, and radial motion sources. f Illustration of the stimulus. After 1 s of counter-clockwise rotation around the fixation cross, the rotation switches to clockwise. At t = 2 s, rightward translation is superimposed on the rotation. g Motion sources inferred by the network (solid lines: distributed population readout; dotted lines: solution of the online model given by Eqs. (1)–(3)). Shown is μ_t for translational, rotational, radial, and individual motion. Only 4 individual components (2 x- and 2 y-directions) are shown for visual clarity. h Firing rates of the 1-to-1 population. Rates are in arbitrary units (a.u.) because the theory permits rescaling of firing rates by arbitrary factors. i Same as panel h, but for a random subset of 25 neurons of the distributed population. j Same as panel h, but for a random subset of 40 neurons of the input population, smoothed with a 50 ms box filter for plotting. k Stimulus of a proposed neuroscience experiment. Velocities in distributed apertures follow the generative model from Fig. 1, combining shared and individual motion. l Different trials feature different relative strengths of shared and individual motion, ranging from close-to-independent motion (left) to highly correlated motion (right). m Linear readout of the fraction of shared motion from neural activity. Seven different fractions of shared motion were presented (x-axis; noise in the x-direction was added for plotting only). A linear regression model was trained on the outermost conditions (blue dots). Intermediate conditions were decoded from the network using the trained readout (red dots). Only a subset of the 7 × 500 = 3500 points is shown for visual clarity. Source data are provided as a Source Data file.
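
The response function in panel d is described only qualitatively (strong responses near a preferred direction and speed). As a concrete illustration, the sketch below implements one common parameterization of MT-like velocity tuning: a von Mises profile in direction multiplied by a log-Gaussian profile in speed. The functional form and the parameters pref_dir_deg, kappa, and sigma_log are assumptions for illustration, not the paper's actual model; only the ~5°/s preferred speed and the 29.5 spikes/s peak rate come from the caption.

```python
import numpy as np

def mt_response(v_x, v_y, pref_dir_deg=45.0, pref_speed=5.0,
                kappa=2.0, sigma_log=0.8, max_rate=29.5):
    """Illustrative MT-like velocity tuning (hypothetical form):
    von Mises tuning in direction times log-Gaussian tuning in speed.

    v_x, v_y     : local velocity components (deg/s)
    pref_dir_deg : preferred direction in degrees (45 ~ upper right)
    pref_speed   : preferred speed (deg/s)
    """
    speed = np.hypot(v_x, v_y)
    direction = np.arctan2(v_y, v_x)
    pref_dir = np.deg2rad(pref_dir_deg)
    # Direction tuning: von Mises bump peaked at the preferred direction
    dir_tuning = np.exp(kappa * (np.cos(direction - pref_dir) - 1.0))
    # Speed tuning: Gaussian in log-speed, peaked at pref_speed
    eps = 1e-9  # avoid log(0) at zero speed
    speed_tuning = np.exp(
        -0.5 * ((np.log(speed + eps) - np.log(pref_speed)) / sigma_log) ** 2
    )
    return max_rate * dir_tuning * speed_tuning

# Velocity toward the upper right at ~5 deg/s, as for the neuron in panel d
print(mt_response(3.5, 3.5))  # ~29.5, close to max_rate
```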
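
Panel j mentions smoothing with a 50 ms box filter for plotting. A minimal sketch of such a moving-average filter follows, assuming rate traces sampled on a regular time grid; the sampling step dt_ms is an assumption, as the caption does not state it.

```python
import numpy as np

def box_smooth(rates, dt_ms=1.0, width_ms=50.0):
    """Smooth a firing-rate trace with a box (moving-average) filter.

    rates    : 1-D array of rate samples
    dt_ms    : sampling step of the trace in ms (assumed)
    width_ms : filter width in ms (50 ms in panel j)
    """
    n = max(1, int(round(width_ms / dt_ms)))
    kernel = np.ones(n) / n  # uniform (box) kernel, normalized to sum 1
    return np.convolve(rates, kernel, mode="same")
```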
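
Panel m describes training a linear regression readout on the two outermost shared-motion conditions and then decoding the intermediate conditions with that frozen readout. The sketch below reproduces this train/decode split on synthetic population activity, since the network's actual rates are not available here; the neuron count, the noise model, the encoding weights, and the choice of scikit-learn's LinearRegression are all assumptions. The 7 conditions and 500 trials per condition follow the caption.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)

# Hypothetical stand-in for recorded population activity:
# 7 shared-motion fractions x 500 trials, n_neurons rates per trial.
n_neurons, n_trials = 100, 500
fractions = np.linspace(0.0, 1.0, 7)   # 7 fractions of shared motion
w = rng.normal(size=n_neurons)         # fake encoding weights (assumption)

X, y = [], []
for f in fractions:
    # Mean activity scales with the shared-motion fraction, plus noise
    rates = f * w + rng.normal(scale=0.5, size=(n_trials, n_neurons))
    X.append(rates)
    y.append(np.full(n_trials, f))
X, y = np.vstack(X), np.concatenate(y)

# Train only on the two outermost conditions (blue dots in panel m)...
outer = (y == fractions[0]) | (y == fractions[-1])
readout = LinearRegression().fit(X[outer], y[outer])

# ...then decode the intermediate conditions with the frozen readout
# (red dots in panel m).
decoded = readout.predict(X[~outer])
print(decoded.mean(), decoded.std())
```

If the decoded values for the intermediate conditions fall near their true fractions, the population carries a linearly readable representation of the shared-motion fraction, which is the point panel m makes.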