Fig. 1: Input, DeepTaskGen architecture, and model evaluation on training and independent samples.
From: Generating synthetic task-based brain fingerprints for population neuroscience using deep learning

a Computation of connectomes: We used voxel-to-ROI rs-fMRI connectomes as input. A connectome was constructed for each subject by computing the full correlation between the averaged time series of 50 ICA-based ROIs and the time series of individual voxels.

b DeepTaskGen architecture: Task-contrast maps for various tasks were predicted from rs-fMRI connectomes using our proposed DeepTaskGen architecture. DeepTaskGen is a volumetric U-Net with an attention mechanism that processes the input resting-state connectome through a series of convolutional blocks, each comprising a 3D convolution layer, batch normalization, and a non-linear activation function. Max pooling compresses the images while preserving task-relevant patterns, after which the model up-samples them to match the output task-contrast maps. The numbers below each block denote its output shape; the numbers above denote the number of feature maps. Details of the architecture are presented in Supplementary Table 26.

c Training sample: We trained and evaluated DeepTaskGen on the HCP Young Adult dataset (n = 958). The upper panel shows reconstruction performance, computed as Pearson's correlation between predicted and actual contrast maps for representative contrasts from seven distinct tasks. The lower panel displays the diagonality index (the difference between the mean on-diagonal and mean off-diagonal elements of a correlation matrix, normalized by the mean on-diagonal value) on a symmetrical log scale (symlog, threshold = 0.10), which quantifies each model's discriminability. We compared DeepTaskGen with group-averaged contrast maps, retest scans, and a linear model (each shown in a distinct color).

d Transfer sample: We further fine-tuned the trained DeepTaskGen model on the HCP Development dataset (n = 637) using one of two task contrasts (e.g., GAMBLING REWARD) and predicted the other (e.g., EMOTION FACES-SHAPES).
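The voxel-to-ROI connectome computation described in panel a can be sketched as follows. This is a minimal illustration with synthetic data in place of real rs-fMRI, and it assumes a simple hard voxel-to-ROI assignment to obtain the 50 ROI time series (the actual ROIs are ICA-based); all names and shapes are illustrative:

```python
import numpy as np

# Synthetic stand-ins: T time points, V voxels, R = 50 ROIs.
rng = np.random.default_rng(0)
T, V, R = 200, 1000, 50
voxel_ts = rng.standard_normal((T, V))    # voxel-level time series
roi_labels = rng.integers(0, R, size=V)   # hypothetical voxel-to-ROI assignment

# Average the time series of all voxels belonging to each ROI.
roi_ts = np.stack(
    [voxel_ts[:, roi_labels == r].mean(axis=1) for r in range(R)], axis=1
)

# Full (Pearson) correlation between every voxel and every ROI time series
# yields the voxel-to-ROI connectome, a V x R matrix per subject.
def column_corr(a, b):
    az = (a - a.mean(axis=0)) / a.std(axis=0)
    bz = (b - b.mean(axis=0)) / b.std(axis=0)
    return az.T @ bz / a.shape[0]

connectome = column_corr(voxel_ts, roi_ts)
print(connectome.shape)  # (1000, 50)
```

This V x 50 matrix is what the network in panel b consumes as input.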
The fine-tuned model was compared with the non-fine-tuned DeepTaskGen and a linear model (shown in distinct colors). Reconstruction performance and discriminability were again used to assess model performance for each task contrast. In boxplots, the box spans the first to the third quartile, with the line inside marking the median; the whiskers extend to the most extreme values within 1.5 times the interquartile range, and points beyond this range are plotted individually as outliers.
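The diagonality index used to score discriminability can be made concrete with a short sketch: correlate each subject's predicted map with every subject's actual map, then compare on-diagonal (same-subject) to off-diagonal (cross-subject) correlations. This is a minimal implementation on synthetic subject-by-voxel maps; array names are illustrative:

```python
import numpy as np

def diagonality_index(pred, actual):
    """Return (mean on-diagonal - mean off-diagonal) / mean on-diagonal
    for the subject-by-subject correlation matrix between predicted and
    actual maps. pred, actual: arrays of shape (n_subjects, n_voxels)."""
    pz = (pred - pred.mean(axis=1, keepdims=True)) / pred.std(axis=1, keepdims=True)
    az = (actual - actual.mean(axis=1, keepdims=True)) / actual.std(axis=1, keepdims=True)
    corr = pz @ az.T / pred.shape[1]   # subject-by-subject Pearson correlations
    on = np.diag(corr).mean()
    off = corr[~np.eye(len(corr), dtype=bool)].mean()
    return (on - off) / on

# Toy check: when predictions equal the actual maps, each subject matches
# itself (correlation 1) far better than anyone else, so the index is near 1.
rng = np.random.default_rng(0)
maps = rng.standard_normal((20, 5000))
print(diagonality_index(maps, maps))
```

A higher index thus means predicted maps are more subject-specific, which is the discriminability contrast drawn between models in panels c and d.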