Fig. 2: Recurrent motion integration. | Nature Machine Intelligence

From: Machine learning modelling for multi-order human visual motion processing

a, Tuning properties of model units for 1D gratings and 2D plaids. Partial correlations with the component and pattern tuning predictions are shown. Overall, the model units exhibit a trend similar to mammalian neural recordings: stage I consists predominantly of component-selective units, whereas stage II is dominated by pattern-selective units, corresponding to the V1–MT processing hierarchy. The animal data are from ref. 32. b, Response to global motion of Gabor patches (ref. 33). Local patches contain various motion directions and speeds, which are captured by stage I. Stage II then performs motion integration, linking local motion signals to resolve the aperture problem and infer the global (downward) motion. The model response aligns with human perception, exhibiting adaptive pooling. c, Motion integration is sensitive to higher-order pattern cues. We used the three scenarios A, B and C detailed in ref. 34. The extent of integration was quantified by correlating the directions of motion between adjacent segments across a single circular translation cycle. Compared with scenario C, scenario B—characterized by structural constraints and depth cues—led to an increased integration index in the model, similar to human perception (ref. 34). In the middle column of the right panels (for unit connections), we visualize the attention heat map derived from the motion graph, showing the connectivity of the unit (marked by a circle) with other units in stage II.
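The component/pattern classification in panel a rests on the standard partial-correlation analysis of plaid tuning curves. The sketch below is a minimal illustration of that computation; the synthetic Gaussian tuning curves, the ±60° plaid geometry and all parameter values are assumptions for demonstration, not taken from the paper:

```python
import numpy as np

def direction_tuning(dirs, mu, sigma=40.0):
    """Synthetic Gaussian tuning curve over direction (degrees)."""
    d = (dirs - mu + 180.0) % 360.0 - 180.0  # wrapped angular difference
    return np.exp(-0.5 * (d / sigma) ** 2)

def partial_correlations(resp, comp_pred, patt_pred):
    """Partial correlations of a plaid tuning curve with the component
    and pattern predictions, the standard quantities used to classify
    component- vs pattern-selective units."""
    r_p = np.corrcoef(resp, patt_pred)[0, 1]
    r_c = np.corrcoef(resp, comp_pred)[0, 1]
    r_pc = np.corrcoef(patt_pred, comp_pred)[0, 1]
    R_p = (r_p - r_c * r_pc) / np.sqrt((1 - r_c**2) * (1 - r_pc**2))
    R_c = (r_c - r_p * r_pc) / np.sqrt((1 - r_p**2) * (1 - r_pc**2))
    return R_p, R_c

# Toy plaid whose two components move at +/-60 deg around a 0 deg pattern direction.
dirs = np.arange(0.0, 360.0, 15.0)
patt_pred = direction_tuning(dirs, 0.0)              # single peak at the pattern direction
comp_pred = 0.5 * (direction_tuning(dirs, -60.0)
                   + direction_tuning(dirs, 60.0))   # two peaks at the component directions

rng = np.random.default_rng(0)
resp = patt_pred + 0.05 * rng.standard_normal(dirs.size)  # a noisy "pattern-selective" unit
R_p, R_c = partial_correlations(resp, comp_pred, patt_pred)  # expect R_p >> R_c
```

In practice the partial correlations are Fisher z-transformed and compared against a significance criterion, which yields the component, pattern and unclassified regions plotted in panel a.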
