Fig. 9: Simulation paradigm, estimation results, and performance metrics.
From: Flow parsing as causal source separation allows fast and parallel object and self-motion estimation

a The object’s location in the visual field is defined relative to the direction of the observer’s movement (black plus). Potential object positions vary evenly in eccentricity and placement direction. Although each simulated scene contains only one object, the panel shows three example objects that vary in size and location. The remaining potential object positions are indicated with green plusses. b The scene consists of the observer’s translation T (black arrow) toward a cloud of stationary dots (white dots), shown in a top-down perspective. The self-moving object (green bar) is opaque and occludes parts of the scene. Object movement consists of two components: a horizontal movement H (red dashed arrow) at different speeds and a movement along the direction of the observer’s translation (λ ⋅ T, colored dashed arrows). This second component defines the motion condition and ranges between the “receding” (blue) and the “approaching” (dark red) conditions. c The scene in an observer-centered coordinate frame, again in a top-down perspective. Observer and object motion are combined and converted into relative motion between the points of the scene and the observer to compute the flow fields. d Flow fields are calculated for each combination of object size, object location, motion condition, and horizontal object speed. The example flow field shows the result of a simulated scene in the “receding” condition, in retinal coordinates. The combined flow is horizontal because the object’s backward motion cancels any change in depth between the object and the observer. The black plus and the green plus indicate the observer’s translation direction and the object’s location, respectively. e For each flow field, the model estimates the scene parameters and whether an object is present.
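The conversion in (b–d), from relative 3D motion to retinal flow, can be sketched as follows. This is a minimal illustration under a pinhole projection; the function name, focal length, and example values are assumptions for illustration, not the paper’s implementation.

```python
import numpy as np

def retinal_flow(x, y, Z, v_rel, f=1.0):
    """Retinal velocity (u, v) of a point imaged at (x, y).

    Z     : depth of the point in observer coordinates
    v_rel : 3D velocity of the point relative to the observer
            (for a static background point this is -T, the negated
            observer translation)
    f     : focal length of the assumed pinhole projection
    """
    vx, vy, vz = v_rel
    u = (f * vx - x * vz) / Z
    v = (f * vy - y * vz) / Z
    return u, v

# Static background point while the observer translates forward
# (T = (0, 0, 1), so the relative velocity is (0, 0, -1)):
# the flow expands away from the heading direction.
u_bg, v_bg = retinal_flow(0.1, 0.0, 2.0, (0.0, 0.0, -1.0))

# "Receding" object: its motion component along T cancels the depth
# change, leaving only the horizontal component H, so the resulting
# flow is purely horizontal, as in the example in panel (d).
u_obj, v_obj = retinal_flow(0.1, 0.0, 2.0, (0.3, 0.0, 0.0))
```

Note that the flow of a static point vanishes exactly at the heading direction (the focus of expansion), which is what makes the black plus recoverable from the background flow.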
The estimates of the heading direction (black x), the object location (green x), and the object’s movement direction (dashed blue arrow) are shown alongside the true parameters, in a retinal coordinate frame. The green-black dashed line indicates the object’s offset relative to the heading direction. f Metrics used to measure model performance, based on the estimates shown in (e). Heading and localization error are the distances between estimate and true parameter, in degrees of visual angle (dva). Potential heading biases are indicated by the projections (solid white arrows) of the mis-estimation vector (black dashed line) onto the object direction and the offset vector. Here, the estimate was biased in the object’s movement direction, as the projection falls on the direction vector, and opposite to the object’s location, as the projection lands on the backward extension of the offset vector. The estimated object direction is measured relative to the object flow direction by calculating the angle between them.
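The metrics in (f) reduce to simple vector operations in the image plane: Euclidean distances, signed projections of the mis-estimation vector, and the angle between estimated and actual object flow direction. A minimal sketch, in which all variable names and example values are illustrative assumptions rather than the paper’s data:

```python
import numpy as np

def signed_projection(err, direction):
    """Length of the mis-estimation vector `err` projected onto
    `direction`; a negative value means the bias points opposite
    to that direction (onto its backward extension)."""
    d = np.asarray(direction, dtype=float)
    return float(np.dot(err, d / np.linalg.norm(d)))

def angle_deg(a, b):
    """Unsigned angle between two 2D vectors, in degrees."""
    cos = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return float(np.degrees(np.arccos(np.clip(cos, -1.0, 1.0))))

true_heading = np.array([0.0, 0.0])   # retinal coordinates, dva
est_heading  = np.array([-0.4, 0.3])  # hypothetical estimate
obj_location = np.array([5.0, 0.0])   # hypothetical object position
obj_flow_dir = np.array([-1.0, 0.0])  # hypothetical object flow direction

heading_error = float(np.linalg.norm(est_heading - true_heading))  # in dva
err = est_heading - true_heading
bias_offset = signed_projection(err, obj_location - true_heading)
bias_flow   = signed_projection(err, obj_flow_dir)
# bias_offset < 0 and bias_flow > 0: the estimate is shifted opposite to
# the object's location and along the object's flow direction, matching
# the example described for panel (f).
```

The localization error and the direction error follow the same pattern: the former is a Euclidean distance between estimated and true object location, the latter is `angle_deg` applied to the estimated and actual object flow vectors.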