Fig. 1: Training schematic for fMRI models of intent and inference.
From: Neural signatures of emotional intent and inference align during social consensus

a Targets recorded themselves telling emotionally significant personal stories, then rated their own videos on a continuous bivalent scale. These target self-ratings (red) served as the intent ratings for the socioemotional signals conveyed in the videos. Observers viewed the 24 videos in the MRI scanner and rated, moment by moment on the same scale, what they thought the target had felt; these constituted the inference ratings (blue). There were three trial types: audiovisual, audio-only, and visual-only; however, the 8 audiovisual trials were the trials of interest (see Fig. S3). Both the intent and inference ratings were transformed into valence-independent intensity quintiles to model the fMRI data.

b, c Participant-level GLMs modeling intent and inference quintiles (see the first code sketch following this legend). A predictor was constructed for each rating quintile (q = 5) for each stimulus (s = 24) for each participant (N = 100) and fit to each participant's voxel time series, yielding a set of five whole-brain coefficient maps per participant for each model type (intent and inference). Maps from audiovisual trials were used for model training; maps from unimodal trials were held out for external validation (see Figs. S4 and S5 for details).

d, e Multivariate model training (see the second sketch below). Two models were trained on the same audiovisual stimuli: one aimed to characterize signal intent and the other to characterize the observer's inferences. First, brain activity for each intensity quintile, within each participant, was averaged into a single beta map (whose voxels comprised the model's features). Next, two multivariate LASSO-PCR models were trained to predict intent and inference intensity quintiles (Y = 1 to 5, with 5 the highest intensity) from their corresponding coefficient maps (features) across all participants, using leave-one-participant-out cross-validation (LOO-CV). The surface maps show each model's unthresholded, normalized predictive Z-weights.

f External validation (see the third sketch below). Both models were applied to intent and inference coefficient maps computed from the held-out audio-only and visual-only trials (see Method details). The intent model was expected to accurately predict intent ratings but not inference ratings, and vice versa.
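
A minimal sketch of the participant-level GLM in panels b-c, assuming one indicator predictor per intensity quintile fit to each voxel's time series by ordinary least squares. This is not the authors' code: variable names, array shapes, and the omission of HRF convolution and nuisance regressors are illustrative assumptions.

```python
# Hypothetical sketch of the per-quintile GLM (panels b-c).
# Assumptions: 5 intensity quintiles, per-TR quintile labels,
# no HRF convolution or nuisance regressors (a real fMRI GLM
# would include both).
import numpy as np

def quintile_design(quintile_labels, n_quintiles=5):
    """Build a TR-by-quintile indicator design matrix from per-TR
    quintile labels (1..5; 0 = no stimulus on screen)."""
    n_tr = len(quintile_labels)
    X = np.zeros((n_tr, n_quintiles))
    for q in range(1, n_quintiles + 1):
        X[:, q - 1] = (quintile_labels == q).astype(float)
    return X

def fit_glm(X, Y):
    """OLS fit of design X (TRs x predictors) to voxel time series
    Y (TRs x voxels); returns one whole-brain coefficient map per
    quintile (predictors x voxels)."""
    betas, *_ = np.linalg.lstsq(X, Y, rcond=None)
    return betas

# Toy example: 200 TRs, 5000 voxels, random labels/data for shape-checking only.
rng = np.random.default_rng(0)
labels = rng.integers(0, 6, size=200)            # 0 = rest, 1-5 = quintiles
Y = rng.standard_normal((200, 5000))             # placeholder BOLD data
beta_maps = fit_glm(quintile_design(labels), Y)  # shape (5, 5000): five maps
```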
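A sketch of the LASSO-PCR training with LOO-CV in panels d-e: principal-component reduction of the per-quintile beta maps followed by LASSO regression, with participant IDs as cross-validation groups. The number of components, the LASSO penalty, and all variable names are assumptions; the published model's hyperparameters and the Z-normalization of the plotted weights are not reproduced here.

```python
# Hypothetical LASSO-PCR sketch (panels d-e): PCA + LASSO, evaluated with
# leave-one-participant-out CV via sklearn's LeaveOneGroupOut.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.decomposition import PCA
from sklearn.linear_model import Lasso
from sklearn.model_selection import LeaveOneGroupOut, cross_val_predict

rng = np.random.default_rng(1)
n_subj, n_quintiles, n_vox = 100, 5, 5000
X = rng.standard_normal((n_subj * n_quintiles, n_vox))  # placeholder beta maps
y = np.tile(np.arange(1, 6), n_subj)                    # quintile targets, 1-5
groups = np.repeat(np.arange(n_subj), n_quintiles)      # participant IDs

# n_components and alpha are illustrative, not the authors' settings.
model = make_pipeline(PCA(n_components=20), Lasso(alpha=0.1))
y_pred = cross_val_predict(model, X, y, groups=groups, cv=LeaveOneGroupOut())

# Refit on all data and project component weights back to voxel space,
# giving the kind of whole-brain predictive weight map shown on the surfaces.
model.fit(X, y)
pca, lasso = model.named_steps["pca"], model.named_steps["lasso"]
voxel_weights = pca.components_.T @ lasso.coef_         # shape (n_vox,)
```

Training one model per rating type (intent and inference) from the same audiovisual beta maps amounts to calling this pipeline twice with the two different target vectors.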
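Finally, a sketch of the external-validation logic in panel f, under the assumption that model specificity is summarized by comparing on-target and off-target prediction-rating correspondence (here via Pearson correlation; the paper's exact validation statistics may differ, and all names are hypothetical).

```python
# Hypothetical cross-prediction check (panel f): apply a trained model to
# coefficient maps from the held-out unimodal trials and correlate its
# predictions with both sets of quintile labels.
from scipy.stats import pearsonr

def specificity_check(model, heldout_maps, intent_q, inference_q):
    """Return (on-target r, off-target r) for a trained intent model;
    swap the label arguments to evaluate the inference model."""
    pred = model.predict(heldout_maps)
    r_on, _ = pearsonr(pred, intent_q)       # e.g., intent model vs intent ratings
    r_off, _ = pearsonr(pred, inference_q)   # e.g., intent model vs inference ratings
    return r_on, r_off

# Expected pattern: the intent model yields r_on >> r_off on intent ratings,
# and the inference model shows the mirror-image result.
```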