Fig. 1: LSD and network architecture overview.

a, EM data imaged with FIB-SEM at 8 nm isotropic resolution (FIB-25 dataset8). Arrows point to the plasma membranes of example individual neurons. Dark blobs are mitochondria. Scale bar, 300 nm. b, Label colors correspond to unique neurons. c, LSD mean offset schematic. A Gaussian (G) with fixed sigma (σ) is centered at voxel (v). The Gaussian is intersected with the underlying label (colored region) and the center of mass of the intersection (cm) is computed. The mean offset (mo) between the voxel and the center of mass is calculated (among several other statistics), yielding the first three components of the LSD for voxel (v). d, Predicted mean offset component of LSDs (LSD[0:3]) for all voxels. A smooth gradient is maintained within objects, while sharp contrasts are observed across boundaries. Three-dimensional vectors are RGB color encoded. e, Network architectures used. The ten-dimensional LSD embedding is used as an auxiliary learning task for improving affinities. In a multitask approach (MTLSD), LSDs and affinities are learnt directly. In an auto-context approach, the predicted LSDs are used as input to a second network to generate affinities, either without raw data (ACLSD) or with raw data (ACRLSD).
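To make the mean offset computation in panel c concrete, the following is a minimal numpy sketch for a single voxel, not the authors' implementation: the function name, arguments, and background handling are illustrative assumptions, and it only computes the first three LSD components (the mean offset), not the full ten-dimensional descriptor.

```python
import numpy as np

def mean_offset_lsd(labels, voxel, sigma, voxel_size=(1.0, 1.0, 1.0)):
    """Sketch of the mean offset LSD component for one voxel.

    A Gaussian with fixed sigma is centered at `voxel`, restricted to the
    voxels sharing that voxel's label, and the offset from `voxel` to the
    Gaussian-weighted center of mass of this intersection is returned.
    """
    voxel = np.asarray(voxel, dtype=float)
    voxel_size = np.asarray(voxel_size, dtype=float)
    label = labels[tuple(voxel.astype(int))]
    if label == 0:
        return np.zeros(3)  # assumption: background voxels get a zero offset

    # Physical coordinates of every voxel, shape (D, H, W, 3)
    coords = np.stack(
        np.meshgrid(
            *[np.arange(s) * vs for s, vs in zip(labels.shape, voxel_size)],
            indexing="ij",
        ),
        axis=-1,
    ).astype(float)

    center = voxel * voxel_size  # query voxel in physical units

    # Gaussian (G) centered at the query voxel
    sq_dist = np.sum((coords - center) ** 2, axis=-1)
    weights = np.exp(-0.5 * sq_dist / sigma**2)

    # Intersect the Gaussian with the underlying label
    weights *= (labels == label)

    # Gaussian-weighted center of mass (cm) of the intersection
    cm = np.sum(weights[..., None] * coords, axis=(0, 1, 2)) / np.sum(weights)

    # Mean offset (mo): first three LSD components for this voxel
    return cm - center
```

In practice this would be evaluated densely (and far more efficiently) for every voxel to produce maps like the one shown in panel d, where offsets vary smoothly inside an object and flip sharply across boundaries.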