Fig. 3: Three-dimensional imaging of real scenes by pixelated colour conversion.
a, Schematic of the experimental set-up. Multiline structured light is incident on the object; lens 1 and lens 2 collect the reflected light and relay it to the perovskite nanocrystal arrays. A colour CCD then records the colour at each azimuthal detector element, from which the corresponding distance to the scene is calculated. b, Representative images of the perovskite nanocrystal arrays under incident light from different directions. c, Mean depth precision plotted as a function of scene depth and of radial position in the field of view. A movable, flat, white screen was used as the target object. Ten measurement trials were performed for each projection angle and 20 for each depth. Data are mean ± s.e.m. d,e, 3D images of scenes placed at 0.7 m and 1.5 m, respectively. f, 3D depth image of a keyboard captured with the 3D light-field sensor. The colour map indicates the distance from the imaged point to the z axis at the origin (x = 0, y = 0). Scale bar, 150 μm (b).
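
To make the depth-recovery step described in panel a concrete, the following is a minimal Python sketch under stated assumptions: it assumes the CCD-measured colour (reduced here to a single normalized hue value) maps monotonically to the incidence angle of the reflected structured light via a calibration table, and that depth is then recovered by triangulation against a known projector-to-sensor baseline. The calibration values, the baseline, and the function name `depth_from_hue` are illustrative placeholders, not quantities reported in the paper.

```python
import numpy as np

# Hypothetical sketch of the depth recovery implied by panel a: each
# azimuthal detector element converts the incidence angle of the
# reflected structured light into a colour, and depth follows from
# triangulation against the projector-to-sensor baseline.
# All numbers below are illustrative placeholders, not from the paper.

# Assumed calibration: measured hue (CCD) -> incidence angle.
calib_hue = np.array([0.10, 0.25, 0.40, 0.55, 0.70])     # normalized hue
calib_angle = np.radians([5.0, 10.0, 15.0, 20.0, 25.0])  # angle in rad

BASELINE = 0.30  # m, assumed projector-to-sensor separation


def depth_from_hue(hue: float) -> float:
    """Interpolate hue -> incidence angle, then triangulate depth.

    For a projector offset by BASELINE from the sensor axis, a ray
    arriving at angle theta from a point at depth z satisfies
    tan(theta) = BASELINE / z, so z = BASELINE / tan(theta).
    """
    theta = np.interp(hue, calib_hue, calib_angle)
    return BASELINE / np.tan(theta)


# Example: a pixel whose measured hue is 0.33 maps to roughly 1.3 m
# with these placeholder calibration values.
print(f"estimated depth: {depth_from_hue(0.33):.2f} m")
```

In this sketch the per-pixel colour measurement plays the role of the angle encoder, which is consistent with panel b (array images changing with incidence direction); any real implementation would instead use the device's measured colour-angle calibration.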