Abstract
We must often infer latent properties of the world from noisy and changing observations. Complex, probabilistic approaches to this challenge such as Bayesian inference are accurate but cognitively demanding, relying on extensive working memory and adaptive processing. Simple heuristics are easy to implement but may be less accurate. What is the appropriate balance between complexity and accuracy? Here we model a hierarchy of strategies of variable complexity and find a power law of diminishing returns: increasing complexity gives progressively smaller gains in accuracy. The rate of diminishing returns depends systematically on the statistical uncertainty in the world, such that complex strategies do not provide substantial benefits over simple ones when uncertainty is either too high or too low. In between, there is a complexity dividend. In two psychophysical experiments, we confirm specific model predictions about how working memory and adaptivity should be modulated by uncertainty.
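As a rough, illustrative sketch of the complexity–accuracy trade-off described above, the snippet below compares a simple fixed-learning-rate delta rule with a more complex particle-filter approximation to Bayesian change-point inference in a simulated changing environment. This is a minimal example under assumed parameters (hazard rate, observation noise, particle count), not the authors' model or the code in the linked repository.

```python
# Minimal sketch (not the paper's code): a simple delta rule versus a
# particle-filter approximation to Bayesian inference in a change-point
# environment. All parameter values below are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

# Generative environment: a latent mean that jumps at random change points.
T, hazard, obs_sd = 2000, 0.02, 1.0
mu = np.empty(T)
mu[0] = rng.normal(0, 3)
for t in range(1, T):
    mu[t] = rng.normal(0, 3) if rng.random() < hazard else mu[t - 1]
x = mu + rng.normal(0, obs_sd, size=T)  # noisy observations

def delta_rule(obs, lr):
    """Simple heuristic: constant-gain error-correcting update."""
    est = np.empty_like(obs)
    belief = 0.0
    for t, o in enumerate(obs):
        belief += lr * (o - belief)
        est[t] = belief
    return est

def particle_filter(obs, n_particles=200):
    """More complex strategy: sequential Monte Carlo over the latent mean."""
    particles = rng.normal(0, 3, n_particles)
    est = np.empty_like(obs)
    for t, o in enumerate(obs):
        # With probability `hazard`, each particle redraws a new mean.
        jump = rng.random(n_particles) < hazard
        particles[jump] = rng.normal(0, 3, jump.sum())
        # Reweight by the observation likelihood, then resample.
        w = np.exp(-0.5 * ((o - particles) / obs_sd) ** 2)
        w /= w.sum()
        particles = rng.choice(particles, size=n_particles, p=w)
        est[t] = particles.mean()
    return est

for name, est in [("delta rule (lr=0.1)", delta_rule(x, 0.1)),
                  ("particle filter", particle_filter(x))]:
    print(f"{name:22s} mean squared error: {np.mean((est - mu) ** 2):.3f}")
```

In this kind of simulation, the particle filter typically tracks the latent mean more accurately after change points, but at a much higher computational cost than the single-parameter delta rule, which is the trade-off the paper formalizes.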
Data availability
The experimental data used in Figs. 7 and 8 and Supplementary Figs. 6–10 are available on GitHub under the open-source license GNU-GPL-v3: https://github.com/gaiat/Modeling-Human-Inference. The data were collected using psiTurk (v.2.3.0; ref. 66) (http://psiturk.org/, https://github.com/NYUCCL/psiTurk), jsPsych (v.6.0; ref. 67) (https://www.jspsych.org/, https://github.com/jspsych/jsPsych/), chart.js (v.2.8.0) (https://www.chartjs.org/, https://github.com/chartjs/Chart.js) and papaparse.js (v.5.0) (https://www.papaparse.com/, https://github.com/mholt/PapaParse).
Code availability
The code is available on GitHub under the open-source license GNU-GPL-v3: https://github.com/gaiat/Modeling-Human-Inference.
References
Rao, R. P. N. Bayesian computation in recurrent neural circuits. Neural Comput. 16, 1–38 (2004).
Bogacz, R., Brown, E., Moehlis, J., Holmes, P. & Cohen, J. D. The physics of optimal decision making: a formal analysis of models of performance in two-alternative forced-choice tasks. Psychol. Rev. 113, 700–765 (2006).
Fearnhead, P. & Liu, Z. On-line inference for multiple changepoint problems. J. R. Stat. Soc. B 69, 589–605 (2007).
Shi, L. & Griffiths, T. L. Neural implementation of hierarchical Bayesian inference by importance sampling. Adv. Neural Inf. Process. Syst. 22, 1669–1677 (2009).
Brown, S. D. & Steyvers, M. Detecting and predicting changes. Cogn. Psychol. 58, 49–67 (2009).
Gigerenzer, G. & Gaissmaier, W. Heuristic decision making. Annu. Rev. Psychol. 62, 451–482 (2011).
Wilson, R., Nassar, M. & Gold, J. A mixture of delta-rules approximation to Bayesian inference in change-point problems. PLoS Comput. Biol. 9, e1003150 (2013).
Legenstein, R. & Maass, W. Ensembles of spiking neurons with noise support optimal probabilistic inference in a dynamically changing environment. PLoS Comput. Biol. 10, e1003859 (2014).
Gershman, S. J., Horvitz, E. J. & Tenenbaum, J. B. Computational rationality: a converging paradigm for intelligence in brains, minds, and machines. Science 349, 273–278 (2015).
Ortega, P. A. & Braun, D. A. Thermodynamics as a theory of decision-making with information-processing costs. Proc. R. Soc. A 469, 20120683 (2013).
Glaze, C. M., Filipowicz, A. L. S., Kable, J. W., Balasubramanian, V. & Gold, J. I. A bias–variance trade-off governs individual differences in on-line learning in an unpredictable environment. Nat. Hum. Behav. 2, 213–224 (2018).
Adams, R. & MacKay, D. Bayesian online changepoint detection. Preprint at https://doi.org/10.48550/arXiv.0710.3742 (2007).
Wilson, R. C., Nassar, M. R. & Gold, J. I. Bayesian online learning of the hazard rate in change-point problems. Neural Comput. 22, 2452–2476 (2010).
Nassar, M. R., Wilson, R. C., Heasly, B. & Gold, J. I. An approximately Bayesian delta-rule model explains the dynamics of belief updating in a changing environment. J. Neurosci. 30, 12366–12378 (2010).
Heilbron, M. & Meyniel, F. Confidence resets reveal hierarchical adaptive learning in humans. PLoS Comput. Biol. 15, e1006972 (2019).
Behrens, T. E. J., Woolrich, M. W., Walton, M. E. & Rushworth, M. F. S. Learning the value of information in an uncertain world. Nat. Neurosci. 10, 1214–1221 (2007).
Sutton, R. S. & Barto, A. G. Reinforcement Learning: An Introduction (MIT Press, 1998).
Balasubramanian, V. Statistical inference, Occam’s razor, and statistical mechanics on the space of probability distributions. Neural Comput. 9, 349–368 (1997).
Barron, A., Rissanen, J. & Yu, B. The minimum description length principle in coding and modeling. IEEE Trans. Inf. Theory 44, 2743–2760 (1998).
Gutenkunst, R. et al. Universally sloppy parameter sensitivities in systems biology models. PLoS Comput. Biol. 3, e189 (2007).
Transtrum, M. K. & Qiu, P. Model reduction by manifold boundaries. Phys. Rev. Lett. 113, 098701 (2014).
Fan, Y., Gold, J. I. & Ding, L. Ongoing, rational calibration of reward-driven perceptual biases. eLife 7, e36018 (2018).
Schwarz, G. Estimating the dimension of a model. Ann. Stat. 6, 461–464 (1978).
Zeng, X., Song, T., Zhang, X. & Pan, L. Performing four basic arithmetic operations with spiking neural P systems. IEEE Trans. Nanobiosci. 11, 366–374 (2012).
Shenhav, A. et al. Toward a rational and mechanistic account of mental effort. Annu. Rev. Neurosci. 40, 99–124 (2017).
Vul, E., Goodman, N., Griffiths, T. L. & Tenenbaum, J. B. One and done? Optimal decisions from very few samples. Cogn. Sci. 38, 599–637 (2014).
Schmidhuber, J. Formal theory of creativity, fun, and intrinsic motivation (1990–2010). IEEE Trans. Auton. Ment. Dev. 2, 230–247 (2010).
Gold, J. I. & Shadlen, M. N. Banburismus and the brain: decoding the relationship between sensory stimuli, decisions, and reward. Neuron 36, 299–308 (2002).
Krugel, L. K., Biele, G., Mohr, P. N. C., Li, S. C. & Heekeren, H. R. Genetic variation in dopaminergic neuromodulation influences the ability to rapidly and flexibly adapt decisions. Proc. Natl Acad. Sci. USA 106, 17951–17956 (2009).
Stephan, K. E., Penny, W. D., Daunizeau, J., Moran, R. J. & Friston, K. J. Bayesian model selection for group studies. NeuroImage 46, 1004–1017 (2009).
Mathys, C. & Weber, L. Hierarchical Gaussian filtering of sufficient statistic time series for active inference. In International Workshop on Active Inference (eds Verbelen, T. et al.) 52–58 (Springer, 2020).
Mathys, C. D. et al. Uncertainty in perception and the hierarchical Gaussian filter. Front. Hum. Neurosci. 8, 825 (2014).
Lee, S., Gold, J. I. & Kable, J. W. The human as delta-rule learner. Decision 7, 55–66 (2020).
Glaze, C. M., Kable, J. W. & Gold, J. I. Normative evidence accumulation in unpredictable environments. eLife 4, e08825 (2015).
Walton, M. E., Behrens, T. E. J., Buckley, M. J., Rudebeck, P. H. & Rushworth, M. F. S. Separable learning systems in the macaque brain and the role of orbitofrontal cortex in contingent learning. Neuron 65, 927–939 (2010).
Sul, J. H., Jo, S., Lee, D. & Jung, M. W. Role of rodent secondary motor cortex in value-based action selection. Nat. Neurosci. 14, 1202–1210 (2011).
Cover, T. M. & Thomas, J. A. Elements of Information Theory (John Wiley & Sons, 2012).
Tishby, N., Pereira, F. C. & Bialek, W. The information bottleneck method. Preprint at https://doi.org/10.48550/arXiv.physics/0004057 (2000).
Canziani, A., Paszke, A. & Culurciello, E. An analysis of deep neural network models for practical applications. Preprint at https://doi.org/10.48550/arXiv.1605.07678 (2016).
Cheeseman, P. C., Kanefsky, B. & Taylor, W. M. Where the really hard problems are. IJCAI (US) 91, 331–340 (1991).
Biroli, G., Cocco, S. & Monasson, R. Phase transitions and complexity in computer science: an overview of the statistical physics approach to the random satisfiability problem. Physica A 306, 381–394 (2002).
Mitchell, D., Selman, B. & Levesque, H. Hard and easy distributions of SAT problems. AAAI 92, 459–465 (1992).
Zdeborová, L. Statistical physics of hard optimization problems. Acta Physica Slovaca Rev. Tutor. 59, 169–303 (2009).
Wilson, R. C., Nassar, M. R., Tavoni, G. & Gold, J. I. Correction: a mixture of delta-rules approximation to Bayesian inference in change-point problems. PLoS Comput. Biol. 14, e1006210 (2018).
Gerstner, W., Kistler, W. M., Naud, R. & Paninski, L. Neuronal Dynamics: From Single Neurons to Networks and Models of Cognition (Cambridge Univ. Press, 2014).
Schultz, W., Dayan, P. & Montague, P. R. A neural substrate of prediction and reward. Science 275, 1593–1599 (1997).
Goldman-Rakic, P. S. Cellular basis of working memory. Neuron 14, 477–485 (1995).
Gläscher, J. & Büchel, C. Formal learning theory dissociates brain regions with different temporal integration. Neuron 47, 295–306 (2005).
Hasson, U., Yang, E., Vallines, I., Heeger, D. J. & Rubin, N. A hierarchy of temporal receptive windows in human cortex. J. Neurosci. 28, 2539–2550 (2008).
Bernacchia, A., Seo, H., Lee, D. & Wang, X. J. A reservoir of time constants for memory traces in cortical neurons. Nat. Neurosci. 14, 366–372 (2011).
Scott, B. B. et al. Fronto-parietal cortical circuits encode accumulated evidence with a diversity of timescales. Neuron 95, 385–398 (2017).
Runyan, C. A., Piasini, E., Panzeri, S. & Harvey, C. D. Distinct timescales of population coding across cortex. Nature 548, 92–96 (2017).
Meder, D. et al. Simultaneous representation of a spectrum of dynamically changing value estimates during decision making. Nat. Commun. 8, 1942 (2017).
Joshi, S. & Gold, J. I. Pupil size as a window on neural substrates of cognition. Trends Cogn. Sci. 24, 466–480 (2020).
Arnsten, A. F. T., Wang, M. J. & Paspalas, C. D. Neuromodulation of thought: flexibilities and vulnerabilities in prefrontal cortical network synapses. Neuron 76, 223–239 (2012).
Yerkes, R. M. & Dodson, J. D. The relation of strength of stimulus to rapidity of habit-formation. J. Comp. Neurol. Psychol. 18, 459–482 (1908).
Cools, R. & D’Esposito, M. Inverted-U-shaped dopamine actions on human working memory and cognitive control. Biol. Psychiatry 69, e113–e125 (2011).
Aston-Jones, G. & Cohen, J. D. An integrative theory of locus coeruleus–norepinephrine function: adaptive gain and optimal performance. Annu. Rev. Neurosci. 28, 403–450 (2005).
Griffiths, T. L., Vul, E. & Sanborn, A. N. Bridging levels of analysis for probabilistic models of cognition. Curr. Dir. Psychol. Sci. 21, 263–268 (2012).
Fusi, S., Asaad, W. F., Miller, E. K. & Wang, X. J. A neural circuit model of flexible sensorimotor mapping: learning and forgetting on multiple timescales. Neuron 54, 319–333 (2007).
Kalman, R. E. & Bucy, R. S. New results in linear filtering and prediction theory. J. Basic Eng. 83, 95–108 (1961).
Welch, G. & Bishop, G. An Introduction to the Kalman Filter https://perso.crans.org/club-krobot/doc/kalman.pdf (1997).
Mathys, C., Daunizeau, J., Friston, K. J. & Stephan, K. E. A Bayesian foundation for individual learning under uncertainty. Front. Hum. Neurosci. 5, 39 (2011).
Ossmy, O. et al. The timescale of perceptual evidence integration can be adapted to the environment. Curr. Biol. 23, 981–986 (2013).
Efron, B. & Tibshirani, R. J. An Introduction to the Bootstrap (CRC Press, 1994).
McDonnell, J. V. et al. psiTurk v.1.02 (New York University, 2012).
De Leeuw, J. R. jspsych: a JavaScript library for creating behavioral experiments in a Web browser. Behav. Res. Methods 47, 1–12 (2015).
Acknowledgements
We thank A. Filipowicz for sharing the code used to run the psychophysical experiment for the Bernoulli task. We also thank K. Krishnamurthy and E. Piasini for interesting discussions, and A. Cavagna and A. Ingrosso for pointing out a possible connection between one of our results and spin-glass systems. G.T. was supported by the Swartz Foundation (award no. 575556) and the Computational Neuroscience Initiative of the University of Pennsylvania, and is currently supported by Washington University in St. Louis. V.B. and J.I.G. are supported in part by NIH BRAIN Initiative grant no. R01EB026945. J.I.G. is also supported by grant nos. R01 MH115557 and NSF-NCS 1533623. The funders had no role in study design, data collection and analysis, decision to publish or preparation of the manuscript.
Author information
Contributions
G.T., V.B. and J.I.G. developed the ideas and wrote the paper. All the authors designed the psychophysics experiments. T.D. and C.P. performed the experiments. G.T. developed the theory and analysed the data.
Ethics declarations
Competing interests
The authors declare no competing interests.
Peer review
Peer review information
Nature Human Behaviour thanks the anonymous reviewer(s) for their contribution to the peer review of this work. Peer reviewer reports are available.
Additional information
Publisher’s note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Supplementary information
Supplementary Information
Supplementary Methods, Figs. 1–10 and Table 1.
About this article
Cite this article
Tavoni, G., Doi, T., Pizzica, C. et al. Human inference reflects a normative balance of complexity and accuracy. Nat Hum Behav 6, 1153–1168 (2022). https://doi.org/10.1038/s41562-022-01357-z
This article is cited by
- Confirmation bias through selective readout of information encoded in human parietal cortex. Nature Communications (2025)
- Understanding learning through uncertainty and bias. Communications Psychology (2025)
- Broadscale dampening of uncertainty adjustment in the aging brain. Nature Communications (2024)
- Individual differences in belief updating and phasic arousal are related to psychosis proneness. Communications Psychology (2024)