Fig. 1: SegCLR. | Nature Methods

From: Multi-layered maps of neuropil with segmentation-guided contrastive learning

a, In SegCLR, positive pairs (blue double-headed arrows) are chosen from proximal but not necessarily overlapping 3D views (small blue boxes) of the same segmented cell, while negative pairs (red double-headed arrows) are chosen from different cells. The SegCLR network is trained to produce an embedding vector for each local 3D view such that embeddings are more similar for positive pairs than for negative pairs (cartoon of clustered points). b, The input to the embedding network is a local 3D view (4.1 × 4.1 × 4.3 μm at 32 × 32 × 33 nm resolution for human data; 4.1 × 4.1 × 5.2 μm at 32 × 32 × 40 nm resolution for mouse) from the electron microscopy volume, masked by the segmentation for the object at the center of the field of view. An encoder network based on a ResNet-18 is trained to produce embeddings via projection heads and a contrastive loss, both of which are used only during training. c,d, Visualization via UMAP projection of the SegCLR embedding space for the human temporal cortex (c) and mouse visual cortex (d) datasets. Points for a representative sample of embeddings are shown, colored via 3D UMAP RGB, with the corresponding 3D morphology illustrated for six locations (network segmentation mask input in black, surrounded by 10 × 10 × 10 μm context in gray; masked electron microscopy input data not shown). e,f, Embeddings visualized along the extent of representative human (e) and mouse (f) cells. Each mesh rendering is colored according to the 3D UMAP RGB of the nearest embedding for the surrounding local 3D view. Some axons are cut off to fit. Scale bars: c,d, 5 μm; e,f, 100 μm.
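The training objective sketched in panel a — pulling together embeddings of positive pairs while pushing apart those of negative pairs — is an instance of a standard contrastive (InfoNCE-style) loss. The following NumPy sketch illustrates the idea on toy vectors; it is not the authors' implementation, and the temperature value, dimensions, and toy data are assumptions for illustration only.

```python
import numpy as np

def contrastive_loss(z_a, z_b, temperature=0.1):
    """InfoNCE-style contrastive loss.

    Row i of z_a and row i of z_b form a positive pair (e.g. embeddings
    of two nearby 3D views of the same cell); every other row pairing
    acts as a negative (views from different cells).
    """
    # L2-normalize so the dot product is cosine similarity.
    z_a = z_a / np.linalg.norm(z_a, axis=1, keepdims=True)
    z_b = z_b / np.linalg.norm(z_b, axis=1, keepdims=True)
    logits = z_a @ z_b.T / temperature            # (N, N) similarity matrix
    logits -= logits.max(axis=1, keepdims=True)   # numerical stability
    # Softmax cross-entropy per row; the positive for row i is column i.
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))

rng = np.random.default_rng(0)
z = rng.normal(size=(4, 8))                       # toy batch: 4 pairs, 8-dim
loss_aligned = contrastive_loss(z, z + 0.05 * rng.normal(size=(4, 8)))
loss_random = contrastive_loss(z, rng.normal(size=(4, 8)))
print(loss_aligned, loss_random)
```

With near-identical positive pairs the loss is low; with unrelated pairs it is high, which is the gradient signal that shapes the clustered embedding space visualized in panels c and d.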
