Fig. 1: The neural space–time model for dynamic imaging reconstruction.
From: Neural space–time model for dynamic multi-shot imaging

a, Multi-shot computational imaging systems capture a series of images under different conditions and then computationally reconstruct the final image. For example, differential phase contrast (DPC) captures four images with different illumination source patterns and then uses them to reconstruct quantitative phase. Sequential capture of the raw data results in motion artifacts for dynamic samples, because the reconstruction algorithm assumes a static scene. Our proposed neural space–time model (NSTM) extends such methods to dynamic scenes by modeling and reconstructing the motion at each time point. b, The NSTM consists of two coordinate-based neural networks: one for the motion and one for the scene. Once the networks have been trained on the dataset of raw measurements, we can give the NSTM any time point as input and it will generate the reconstruction at that time point. The network weights of the NSTM are trained so that the forward-model-rendered measurement matches the actual raw measurement at each time point. c, The coarse-to-fine process for the reconstruction of a live C. elegans worm imaged by DPC. d, Zoom-ins of the NSTM reconstruction at different time points with the recovered motion kernel overlaid, along with a comparison to the conventional reconstruction.
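The two-network structure described in panel b can be sketched in NumPy as follows. This is a minimal toy illustration, not the paper's implementation: the layer sizes, the ReLU activations, and the simple displacement-then-query composition are assumptions made for clarity, and the networks here are untrained (random weights), so the output is not a meaningful reconstruction.

```python
import numpy as np

rng = np.random.default_rng(0)

def init_mlp(sizes):
    """Random weights for a small fully connected network."""
    return [(0.1 * rng.standard_normal((m, n)), np.zeros(n))
            for m, n in zip(sizes[:-1], sizes[1:])]

def mlp(params, x):
    """Coordinate-based MLP: linear layers with ReLU on hidden layers."""
    for i, (W, b) in enumerate(params):
        x = x @ W + b
        if i < len(params) - 1:
            x = np.maximum(x, 0.0)
    return x

# Motion network: (x, y, t) -> 2D displacement at that point and time.
motion_net = init_mlp([3, 32, 32, 2])
# Scene network: motion-corrected (x, y) -> scene value (e.g. phase).
scene_net = init_mlp([2, 32, 32, 1])

def nstm_render(coords_xy, t):
    """Evaluate the model at time t: warp coordinates by the predicted
    motion, then query the static scene network at the warped points."""
    t_col = np.full((coords_xy.shape[0], 1), t)
    disp = mlp(motion_net, np.hstack([coords_xy, t_col]))  # motion kernel
    return mlp(scene_net, coords_xy + disp)                # warped scene query

# Query the model at an arbitrary time point, as described in panel b.
xy = np.stack(np.meshgrid(np.linspace(-1, 1, 8),
                          np.linspace(-1, 1, 8)), axis=-1).reshape(-1, 2)
frame = nstm_render(xy, t=0.5)  # one value per queried coordinate
```

In training, the weights of both networks would be optimized jointly so that the imaging system's forward model, applied to `nstm_render` at each capture time, matches the corresponding raw measurement; that loop is omitted here.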