Fig. 1: Decoding autobiographical mental image features from fMRI data with a general semantic model.
From: Neural decoding of autobiographical mental image features with a general semantic model

A Autobiographical mental imagery data: fifty participants imagined themselves in 20 loosely defined common natural scenarios (e.g., “a dancing scenario”), mentally simulating their perception, action, and feelings. Participants then rated each imagined scenario on twenty sensory, motor, spatial, social, cognitive, and affective experiential features. They later underwent fMRI while reimagining the same 20 autobiographical scenarios, one by one, cued by written prompts. B Sentence semantics data: to build a general semantic decoding model, we used an fMRI dataset acquired from a different group of 14 participants as they read 240 third-party sentences. Sentence semantics were modeled from crowdsourced ratings of the individual words on the same twenty experiential features as above, with each sentence’s meaning represented as the feature-wise sum of its content word ratings. All fMRI data from A and B were represented in a common neuroanatomical space (Schaefer-1000). C An fMRI decoding model was pre-trained with ridge regression to map the sentence-reading fMRI data to the crowdsourced feature ratings. D The pre-trained decoder was transferred to reconstruct feature ratings from the mental imagery fMRI data (see results in Fig. 2A). To evaluate whether the reconstructed ratings reflected idiosyncratic mental image content rather than generic group-level semantics, an individual differences analysis decoded participant identity by matching each participant’s reconstructed feature ratings to the observed ratings provided by the same and different participants, with the expectation that the strongest match would be for the same participant (see Figs. 2B and 3B). Some icons were adapted from public domain resources on ClipArtMax (https://www.clipartmax.com/).
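The pre-train/transfer pipeline in panels C and D can be sketched in code. This is a minimal illustration only: the array shapes, the ridge penalty, the simulated data, and the `identity_accuracy` helper are assumptions for demonstration, not the authors' implementation or parameters.

```python
# Sketch of panels C-D: pre-train a ridge decoder on sentence data, transfer it
# to imagery data, then decode participant identity by profile matching.
# All shapes, alpha, and simulated data are illustrative assumptions.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
n_parcels, n_features = 1000, 20   # Schaefer-1000 parcels, 20 experiential features

# Simulated ground-truth mapping from brain activity to feature ratings
W = rng.normal(size=(n_parcels, n_features))

# C) Pre-train: sentence-reading fMRI (240 sentences) -> crowdsourced ratings
X_sent = rng.normal(size=(240, n_parcels))
Y_sent = X_sent @ W + 0.1 * rng.normal(size=(240, n_features))
decoder = Ridge(alpha=10.0).fit(X_sent, Y_sent)

# D) Transfer: reconstruct ratings for 50 participants x 20 imagined scenarios
X_img = rng.normal(size=(50 * 20, n_parcels))
Y_hat = decoder.predict(X_img).reshape(50, 20, n_features)

# Stand-in for each participant's self-reported ratings
Y_obs = (X_img @ W).reshape(50, 20, n_features)

def identity_accuracy(recon, observed):
    """Fraction of participants whose reconstructed profile correlates best
    with their OWN observed ratings (flattened over scenarios x features)."""
    r = recon.reshape(len(recon), -1)
    o = observed.reshape(len(observed), -1)
    rz = (r - r.mean(1, keepdims=True)) / r.std(1, keepdims=True)
    oz = (o - o.mean(1, keepdims=True)) / o.std(1, keepdims=True)
    corr = rz @ oz.T / r.shape[1]   # all reconstructed-vs-observed correlations
    return float(np.mean(corr.argmax(axis=1) == np.arange(len(recon))))

acc = identity_accuracy(Y_hat, Y_obs)
```

A chance-level result here would be 1/50 (2%); accuracies well above that indicate the reconstructed ratings carry participant-specific content, which is the logic of the individual differences analysis in panel D.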