Extended Data Fig. 3: Interpretability of DeepCAD model.

From: Reinforcing neuron extraction and spike inference in calcium imaging using deep self-supervised denoising

To demonstrate the interpretability and reliability of our pre-trained DeepCAD model, a small 3D patch (64 × 64 × 300 pixels) was fed into the model and the feature maps of its convolutional layers were visualized (ref. 33). Example feature maps of three intermediate convolutional layers in the decoder module (Layer 10, Layer 12, and Layer 14) are shown here, displayed as the average intensity projection (AVG) of the original 3D feature maps. The feature representations learned by DeepCAD carry substantial semantic meaning, such as soma-like structures, cytoplasm-like structures, and vessel-like structures (or shadows). These interpretable semantic representations contribute to locating neurons, restoring cytoplasmic fluorescence, and avoiding unwanted intensity fluctuations in vascular regions. Scale bar, 20 μm.
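For readers who wish to reproduce this kind of inspection, the following is a minimal sketch (not the authors' released code) of how intermediate 3D feature maps can be captured with PyTorch forward hooks and displayed as average intensity projections. The toy two-layer model, the layer labels, and the channel count are illustrative assumptions; only the patch dimensions come from the legend above.

```python
import torch
import torch.nn as nn
import matplotlib.pyplot as plt

# Hypothetical stand-in for two decoder convolutions of a DeepCAD-like
# 3D denoising network; the real model and layer indices differ.
model = nn.Sequential(
    nn.Conv3d(1, 8, kernel_size=3, padding=1),   # stands in for "Layer 10"
    nn.ReLU(),
    nn.Conv3d(8, 8, kernel_size=3, padding=1),   # stands in for "Layer 12"
)

features = {}

def save_output(name):
    # Forward hook: store the layer's output tensor under a readable name.
    def hook(module, inputs, output):
        features[name] = output.detach()
    return hook

model[0].register_forward_hook(save_output("layer10"))
model[2].register_forward_hook(save_output("layer12"))

# Small 3D patch, shaped batch x channel x time x height x width
# (64 x 64 pixels, 300 frames, random data as a placeholder input).
patch = torch.randn(1, 1, 300, 64, 64)
with torch.no_grad():
    model(patch)

# Average intensity projection (AVG) along the temporal axis,
# yielding one 2D image per feature channel.
fmap = features["layer10"][0]   # (channels, T, H, W)
avg_proj = fmap.mean(dim=1)     # (channels, H, W)

fig, axes = plt.subplots(1, 4, figsize=(12, 3))
for ax, channel in zip(axes, avg_proj[:4]):
    ax.imshow(channel.numpy(), cmap="gray")
    ax.axis("off")
plt.show()
```

In practice, the hooks would be registered on the named decoder layers of the trained DeepCAD network and the patch would be a raw imaging volume rather than random noise; the projection and plotting steps are unchanged.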
