Fig. 2: EyeReal approach to light-field generation.

a, Setup diagram for real-world ocular modelling in light-field space. This setup follows the general principles governing how humans perceive objects located at the centre of the light field. b, The EyeReal display prototype for light-field delivery. It consists simply of a stacked array of liquid-crystal panels, without additional tailored or complex optics. Each panel includes a colour filter, a liquid-crystal layer and a thin-film transistor. The entire stack, together with an RGB-D sensor, is positioned between orthogonally oriented polarizers and illuminated by a white light source. For clarity, the in-device 3D content is shown separately. c, Optical modulation based on multilayer liquid-crystal phase control. Polarized light passes through multiple liquid-crystal layers, each introducing a pixel-specific phase computed by EyeReal. The final emitted intensity follows Malus's law, enabling controlled light modulation in the ocular frustum. The orthogonally oriented polarizers are omitted here for simplicity. d, We reconstruct the spatial correspondence between the human eyes and the light field under real-world viewing conditions. This enables precise characterization of binocular geometric information and extraction of the target visual imagery for display. e, The retinal images from eye-camera imaging are decomposed into layered phase patterns by a lightweight fully convolutional network with multi-scale skip connections. Binocular poses are embedded using ocular geometric encoding. Trained with structured losses, the network outputs precise phase patterns, and their frustum aggregation under Malus's law yields the expected display result. Model of a rabbit created by Stanford University Computer Graphics Laboratory and adapted with permission.
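To make the frustum aggregation described in c and e concrete, the minimal sketch below assumes the additive polarization-rotation model commonly used for stacked liquid-crystal layers between crossed polarizers: each layer along a ray contributes a pixel-specific rotation, and the analyser transmits intensity according to Malus's law. The function name `malus_aggregate` and the sin² crossed-polarizer response are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def malus_aggregate(phase_layers, i0=1.0):
    """Aggregate per-pixel phase patterns from stacked LC layers along a ray.

    Assumption (hypothetical model): each layer k rotates the polarization by
    phi_k, rotations add along the ray, and the analyser, oriented orthogonally
    to the input polarizer, transmits I = I0 * sin^2(sum_k phi_k) by Malus's law.
    """
    total_rotation = np.sum(phase_layers, axis=0)  # sum rotations over layers
    return i0 * np.sin(total_rotation) ** 2

# Example: three stacked layers, each a 2x2 pixel phase pattern (radians).
layers = np.stack([
    np.full((2, 2), np.pi / 6),
    np.full((2, 2), np.pi / 6),
    np.full((2, 2), np.pi / 6),
])
print(malus_aggregate(layers))  # total rotation pi/2 -> full transmission (1.0)
```

In such a model, the network in e would be trained so that the per-layer phase patterns it outputs, once aggregated this way over the ocular frustum, reproduce the target retinal images.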