Fig. 1: Principle and performance of FAST. | Nature Communications

From: Real-time self-supervised denoising for high-speed fluorescence neural imaging

a Training framework of FAST: the time-lapse raw stack acquired during imaging is divided into the network’s input and target through spatiotemporal random sampling. For example, X1 and Y1 represent the pair of images obtained after the first temporal subsampling of the raw stack. Temporal subsampling is achieved by sliding a window of width C (temporal window width) along the time axis with a shift step size S (temporal shift step). C and S act as trade-off factors, balancing temporal resolution against processing speed for optimal denoising performance. Spatial subsampling, denoted G(·), randomly divides spatially neighboring pixels; all possible spatial division patterns are shown in the mask pool. The snowflake symbol indicates that the random seed for spatiotemporal subsampling is fixed, which ensures consistent spatial adjacency within each sample pair while allowing the relationships between different pairs to vary. G1(·) and G2(·) denote the results of spatiotemporal subsampling, whose pixels satisfy the spatial adjacency relationships. The parameters of the denoising network f(·) are optimized using a scale self-constraining loss (L_SC) and a spatiotemporal self-supervising loss (L_ST).

b Impact of the temporal shift step S on FAST’s performance on simulated data. Blue dots show the peak signal-to-noise ratio (PSNR); orange boxes show the structural similarity index (SSIM). Processing speeds are indicated for three values of S. The noisy input has a PSNR of 12.34 dB and an SSIM of 0.03.

c FAST has substantially fewer parameters than other deep-learning models, reducing memory and computational requirements.

d For input image sequences of dimensions 512×192×5000 (x-y-t), processing speeds are measured on an NVIDIA RTX A6000 GPU and reported in frames per second. Under identical experimental conditions, FAST achieves a processing speed over 80 times faster than other real-time methods^16.

e, f Real-time, multi-threaded denoising pipeline. Three parallel threads manage image acquisition, denoising, and display. Acquired frames are buffered, processed by the trained FAST network in a first-in, first-out (FIFO) queue, and then displayed alongside the raw data for synchronized comparison and optional online analysis.

g The pipeline adapts to various imaging samples and speeds, and supports both 2D and 3D time-lapse imaging.
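The spatiotemporal sampling described in panel a can be sketched in NumPy: a window of width C slides along the time axis with step S, and a spatial subsampler draws two adjacent pixels from each 2×2 block, mimicking the mask pool. This is a minimal illustration under assumed conventions (function names, window averaging, and the 2×2 block size are illustrative assumptions, not the paper’s exact implementation):

```python
import numpy as np

def temporal_windows(stack, C=10, S=5):
    """Slide a window of width C along t with step S (illustrative averaging)."""
    T = stack.shape[0]
    return np.stack([stack[t:t + C].mean(axis=0) for t in range(0, T - C + 1, S)])

def spatial_subsample(frame, rng):
    """Split each 2x2 block into two half-resolution images of adjacent pixels."""
    H, W = frame.shape
    H2, W2 = H // 2, W // 2
    # blocks[i, j] holds the 4 pixels of the 2x2 block at (2i, 2j)
    blocks = (frame[:H2 * 2, :W2 * 2]
              .reshape(H2, 2, W2, 2)
              .transpose(0, 2, 1, 3)
              .reshape(H2, W2, 4))
    # pick two distinct positions per block (a stand-in for the mask pool);
    # a fixed rng seed plays the role of the "snowflake" in the figure
    idx1 = rng.integers(0, 4, size=(H2, W2))
    idx2 = (idx1 + rng.integers(1, 4, size=(H2, W2))) % 4
    g1 = np.take_along_axis(blocks, idx1[..., None], axis=2)[..., 0]
    g2 = np.take_along_axis(blocks, idx2[..., None], axis=2)[..., 0]
    return g1, g2
```

A pair (G1, G2) produced this way always draws its pixels from the same 2×2 neighborhood, which is the spatial adjacency property the self-supervised losses rely on.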

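The three-thread pipeline of panels e and f can be sketched with standard Python queues: one thread acquires frames, one runs inference in FIFO order, and one displays raw and denoised frames side by side. The `denoise` placeholder and queue sizes are illustrative assumptions; in the real pipeline the trained FAST network runs on the GPU:

```python
import queue
import threading

def denoise(frame):
    # placeholder for trained FAST network inference (assumption)
    return frame * 0.5

def run_pipeline(frames, display):
    """Three parallel threads: acquisition -> FIFO denoising -> display."""
    raw_q = queue.Queue(maxsize=64)   # buffer for acquired frames
    out_q = queue.Ueue if False else queue.Queue(maxsize=64)
    SENTINEL = None

    def acquire():
        for f in frames:              # stands in for the camera callback
            raw_q.put(f)
        raw_q.put(SENTINEL)

    def worker():
        while True:
            f = raw_q.get()           # FIFO order is preserved
            if f is SENTINEL:
                out_q.put(SENTINEL)
                break
            out_q.put((f, denoise(f)))  # keep raw alongside denoised

    def show():
        while True:
            item = out_q.get()
            if item is SENTINEL:
                break
            display(*item)            # synchronized raw/denoised comparison

    threads = [threading.Thread(target=t) for t in (acquire, worker, show)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
```

Because a single worker drains a single FIFO queue, denoised frames leave the pipeline in acquisition order, matching the synchronized display described in the legend.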