Fig. 1: Key features of the Krakencoder architecture.

From: Krakencoder: a unified brain connectome translation and fusion tool

a, Autoencoder path\(_{ii}\): connectivity flavor \(i\) to connectivity flavor \(i\). This path begins by stacking the upper triangular portion of each subject’s connectivity matrix into input \(X_i \in \mathbb{R}^{n_{\mathrm{edges}} \times n_{\mathrm{subj}}}\). A precomputed, fixed PCA transformation normalizes the data and reduces dimensionality to \(X_i' \in \mathbb{R}^{256 \times n_{\mathrm{subj}}}\), equalizing the size of disparate input flavors. A single fully connected layer Encoder\(_i\), followed by L2 normalization, transforms \(X_i'\) onto a latent hypersphere surface \(z_i \in \mathbb{R}^{128 \times n_{\mathrm{subj}}}\). Batch-wise encoding loss \(L_z(z_i)\) controls inter-subject separation in latent space. A single fully connected layer Decoder\(_i\) transforms \(z_i\) to \(\hat{X}_i'\), and batch-wise reconstruction loss \(L_r(X_i', \hat{X}_i')\) and \(L_z(z_i)\) are backpropagated to optimize Encoder\(_i\) and Decoder\(_i\). b, Transcoder path\(_{ij}\): connectivity flavor \(i\) to connectivity flavor \(j\). This path, which converts input flavor \(i\) to output flavor \(j\), begins the same as path\(_{ii}\), transforming \(X_i \to X_i' \to z_i\); Decoder\(_j\) then transforms \(z_i \to \hat{X}_j'\), and reconstruction loss \(L_r(X_j', \hat{X}_j')\) is backpropagated to optimize Encoder\(_i\) and Decoder\(_j\). c, Cross-path latent-space similarity optimization. The latent similarity loss \(L_{z.\mathrm{sim}}(z_i, z_j, z_k, \ldots)\) provides explicit control to ensure that the latent representations of each subject are consistent across connectivity flavors. See Extended Data Table 1 for details about the loss terms. d, Multimodality fusion predictions from averaged latent-space vectors. For these predictions, we average the encoded latent vectors from all input flavors (as in the fusion model) or from a subset of input flavors (as in the fusionSC model, which averages latent vectors from only the SC inputs), and then decode this average vector to all output flavors. The fusion-parc model demonstrates cross-parcellation prediction by averaging inputs from only the other parcellations.
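The legend above specifies the encode/decode structure in enough detail to sketch it in code. The PyTorch sketch below is illustrative only, not the authors' released implementation: the single fully connected encoder/decoder layers, the 256-dimensional PCA-reduced inputs, the 128-dimensional L2-normalized latent space, transcoding by pairing Encoder\(_i\) with Decoder\(_j\), and fusion by latent-vector averaging are taken from the legend, while all class, method, and variable names are assumptions, and the PCA step and the loss terms (\(L_r\), \(L_z\), \(L_{z.\mathrm{sim}}\)) are omitted.

```python
# Minimal sketch of the architecture described in Fig. 1 (assumed PyTorch code,
# not the authors' implementation). Dimensions 256 and 128 come from the legend;
# everything else here is an illustrative assumption.
import torch
import torch.nn as nn
import torch.nn.functional as F


class FlavorCoder(nn.Module):
    """One connectivity flavor: a single-layer linear encoder and decoder."""

    def __init__(self, pca_dim=256, latent_dim=128):
        super().__init__()
        self.encoder = nn.Linear(pca_dim, latent_dim)   # Encoder_i
        self.decoder = nn.Linear(latent_dim, pca_dim)   # Decoder_i

    def encode(self, x_pca):
        # L2 normalization places each subject's latent vector z_i on the
        # surface of a latent_dim-dimensional hypersphere.
        return F.normalize(self.encoder(x_pca), p=2, dim=-1)


class KrakencoderSketch(nn.Module):
    """All flavors share one latent space, so any encoder can feed any decoder."""

    def __init__(self, n_flavors, pca_dim=256, latent_dim=128):
        super().__init__()
        self.flavors = nn.ModuleList(
            [FlavorCoder(pca_dim, latent_dim) for _ in range(n_flavors)]
        )

    def transcode(self, x_pca_i, i, j):
        # path_ij: encode with Encoder_i, decode with Decoder_j
        # (path_ii, the autoencoder path, is the special case i == j).
        z_i = self.flavors[i].encode(x_pca_i)
        return self.flavors[j].decoder(z_i)

    def fuse(self, x_pca_by_flavor, j):
        # Fusion prediction: average the latent vectors from a chosen set of
        # input flavors (all of them, only the SC flavors, or only the other
        # parcellations), then decode the averaged vector with Decoder_j.
        z_mean = torch.stack(
            [self.flavors[i].encode(x) for i, x in x_pca_by_flavor.items()]
        ).mean(dim=0)
        return self.flavors[j].decoder(z_mean)


# Example shapes (subjects-by-features, i.e., X_i' transposed relative to the
# legend's feature-by-subject convention): translate flavor 0 into flavor 2,
# then fuse flavors 0 and 1 before decoding to flavor 2.
model = KrakencoderSketch(n_flavors=3)
x0 = torch.randn(16, 256)                # 16 subjects, 256 PCA components
x_hat_2 = model.transcode(x0, i=0, j=2)  # predicted flavor-2 connectome (PCA space)
x_fused_2 = model.fuse({0: x0, 1: torch.randn(16, 256)}, j=2)
```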
