Fig. 1: Task schematic, sample income distribution, and income–cognition relationships.

a During the Picture Sequence Memory Test, participants first encoded a series of thematically related images that appeared one at a time in the center of a computer screen (dark purple). After each image was displayed, it moved to a unique spatial location in the order in which it was presented. After all images were presented, the retrieval phase began (light purple). The images reappeared in scrambled positions, and participants were asked to move each image back to its original spatial location. This procedure was repeated three times. b The Picture Vocabulary Test was used to assess vocabulary. On each trial, participants listened to an audio recording of a word while viewing four pictures and were asked to select the picture that best matched the meaning of the word. c Income distribution of the sample (blue) versus the US population in 2012 (pink), retrieved from: https://www.census.gov/data/tables/time-series/demo/income-poverty/cps-finc/finc-01.2012.html. The y-axis reflects percentages. The solid line marks the United States poverty line for a family of four. The dashed line marks the average threshold for very-low-income status used to determine eligibility for assisted housing in 2012 within participants’ metropolitan areas, retrieved from: https://www.huduser.gov/portal/datasets/il/il2012/select_Geography.odn. d, e Linear regressions showed that log-transformed income was related to memory (p < 0.001) and vocabulary (p < 0.001) scores, n = 690 participants. Income is plotted on a log scale, reflecting our use of log-transformed income. f, g The relationships between income and memory and vocabulary scores plotted on a linear scale, separately for the lower- and higher-income subsamples. Linear regression interaction models showed that income in raw dollars had a stronger relationship with cognitive scores in the lower-income (≤$75k) than the higher-income subsample (>$75k; interaction for memory: p = 0.003; interaction for vocabulary: p < 0.001), n = 690 participants.
For (d–g), the residuals of cognitive scores were calculated by removing variance related to age and sex and were transformed to z-scores for plotting. Gray shading reflects 95% confidence intervals around the mean. False discovery rate-adjusted p values are reported in the main text.
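The analysis pipeline described in the caption (residualize cognitive scores on age and sex, z-score the residuals, regress on log income, and fit an interaction model of raw income by income subsample) can be sketched as below. This is a minimal illustration on simulated data, not the study's actual data or code; all variable names, coefficients, and the $75k split threshold reconstruction are assumptions for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 690  # matches the reported sample size; data here are simulated

# Simulated demographics, income, and a cognitive score that depends on them.
age = rng.uniform(20, 70, n)
sex = rng.integers(0, 2, n).astype(float)
income = rng.uniform(5_000, 200_000, n)
log_income = np.log(income)
score = 0.5 * log_income - 0.01 * age + 0.2 * sex + rng.normal(0.0, 1.0, n)

def residualize(y, covariates):
    """Remove variance explained by covariates (plus intercept) via OLS."""
    X = np.column_stack([np.ones(len(y))] + covariates)
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return y - X @ beta

# Residuals after removing age- and sex-related variance, then z-scored.
resid = residualize(score, [age, sex])
z = (resid - resid.mean()) / resid.std()

# Linear regression of z-scored residuals on log-transformed income (panels d, e).
X_log = np.column_stack([np.ones(n), log_income])
beta_log, *_ = np.linalg.lstsq(X_log, z, rcond=None)

# Interaction model on raw dollars: income x higher-income indicator (panels f, g).
high = (income > 75_000).astype(float)
X_int = np.column_stack([np.ones(n), income, high, income * high])
beta_int, *_ = np.linalg.lstsq(X_int, z, rcond=None)
```

In this sketch the interaction coefficient (the last entry of `beta_int`) captures how much the income slope differs between the subsamples; a significance test of that coefficient would correspond to the interaction p values reported in the caption.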