Extended Data Fig. 2: Data processing pipeline. | Nature Methods

From: Reinforcing neuron extraction and spike inference in calcium imaging using deep self-supervised denoising

a, The training process. Raw data captured by the imaging system are organized in 3D (x, y, t) and saved as a temporal stack. The original noisy stack is partitioned into thousands of 3D sub-stacks (64 × 64 × 600 pixels) with about 25% overlap in each dimension. For temporal stacks with a small lateral size or a short recording period, sub-stacks can be randomly cropped from the original stack to augment the training set. The interlaced frames of each sub-stack are then extracted to form two 3D tiles (64 × 64 × 300 pixels); one serves as the input and the other as the target for network training. b, Deployment of the pre-trained model. New recordings obtained with the imaging system are partitioned into 3D sub-stacks (64 × 64 × 300 pixels) with 25% overlap in each dimension. The pre-trained model is then loaded into memory and the sub-stacks are fed into it directly. Enhanced sub-stacks are output sequentially from the network, and the overlapping regions (both lateral and temporal) are cropped from each output sub-stack. The final enhanced stack is obtained by stitching all sub-stacks together.
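The training-side preparation in panel a can be sketched in NumPy. This is a minimal illustration, not the authors' implementation: it assumes stacks are arrays ordered (t, y, x), and the function names `crop_sub_stacks` and `interlace_split` are hypothetical. It shows the two operations the caption describes: tiling the noisy stack into overlapping 64 × 64 × 600 sub-stacks, and splitting each sub-stack's interlaced frames into an input tile and a target tile.

```python
import numpy as np

def crop_sub_stacks(stack, patch=(600, 64, 64), overlap=0.25):
    """Tile a (t, y, x) stack into overlapping 3D sub-stacks.

    `patch` follows the figure (64 x 64 x 600 pixels; here ordered t, y, x)
    and the stride leaves ~25% overlap between neighbors in each dimension.
    """
    strides = [max(1, int(p * (1 - overlap))) for p in patch]
    subs = []
    for t0 in range(0, max(1, stack.shape[0] - patch[0] + 1), strides[0]):
        for y0 in range(0, max(1, stack.shape[1] - patch[1] + 1), strides[1]):
            for x0 in range(0, max(1, stack.shape[2] - patch[2] + 1), strides[2]):
                subs.append(stack[t0:t0 + patch[0],
                                  y0:y0 + patch[1],
                                  x0:x0 + patch[2]])
    return subs

def interlace_split(sub_stack):
    """Split a (t, y, x) sub-stack into two temporally interlaced tiles.

    Even-indexed frames form one tile and odd-indexed frames the other;
    one tile is used as the network input and the other as its target.
    """
    return sub_stack[0::2], sub_stack[1::2]
```

Because the two interlaced tiles sample nearly the same underlying signal but carry independent noise realizations, one can serve as a training target for the other without any clean ground truth.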
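The stitching step in panel b can likewise be sketched under the same assumptions ((t, y, x) arrays; the function name `stitch` is illustrative). Each denoised sub-stack has half of its overlap margin cropped from every interior face before being written into the output array, so adjacent tiles meet edge to edge without seams; faces on the stack boundary are kept in full.

```python
import numpy as np

def stitch(sub_stacks, origins, out_shape, patch=(300, 64, 64), overlap=0.25):
    """Reassemble denoised (t, y, x) sub-stacks into one enhanced stack.

    `origins` holds each tile's (t0, y0, x0) corner in the output array.
    Half of the overlap (patch * overlap / 2) is cropped from every face
    that touches a neighboring tile, both laterally and temporally.
    """
    out = np.zeros(out_shape, dtype=np.float32)
    margin = [int(p * overlap / 2) for p in patch]  # half-overlap per face
    for sub, origin in zip(sub_stacks, origins):
        sl_out, sl_sub = [], []
        for d in range(3):
            o, p, m = origin[d], patch[d], margin[d]
            lo = m if o > 0 else 0                   # keep borders of edge tiles
            hi = m if o + p < out_shape[d] else 0
            sl_out.append(slice(o + lo, o + p - hi))
            sl_sub.append(slice(lo, p - hi))
        out[tuple(sl_out)] = sub[tuple(sl_sub)]
    return out
```

With a 25% overlap and a half-margin crop on each side, neighboring tiles partition the output exactly, so every voxel is written exactly once.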
