Algorithms are designed to learn user preferences by observing user behaviour. Because of this, when psychological biases affect user decision making, algorithms learn the biased behaviour rather than the user's actual preferences. For algorithms to enhance social welfare, algorithm design must be psychologically informed.
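The gap the abstract describes can be made concrete with a toy simulation (a hypothetical illustration, not taken from the article): a user genuinely values in-depth content, but their click behaviour is driven by short-term pull toward clickbait. An algorithm that infers preference purely from observed click shares recovers the bias, not the preference. All names and numbers below are invented for illustration.

```python
import random

random.seed(0)

# What the user actually values (hypothetical):
TRUE_PREFERENCE = {"in_depth": 0.7, "clickbait": 0.3}
# Short-term pull that drives clicks, regardless of considered values:
CLICK_PULL = {"in_depth": 0.4, "clickbait": 0.9}

def observed_clicks(n_impressions=10_000):
    """Simulate impressions: each click is governed by impulsive pull."""
    clicks = {"in_depth": 0, "clickbait": 0}
    for _ in range(n_impressions):
        item = random.choice(list(CLICK_PULL))
        if random.random() < CLICK_PULL[item]:
            clicks[item] += 1
    return clicks

def inferred_preference(clicks):
    """A behaviour-only algorithm estimates preference from click shares."""
    total = sum(clicks.values())
    return {k: v / total for k, v in clicks.items()}

estimate = inferred_preference(observed_clicks())
# The estimate tracks CLICK_PULL, not TRUE_PREFERENCE: the algorithm
# concludes the user "prefers" clickbait, inverting their true ranking.
```

Because the training signal (clicks) is itself biased, collecting more data only sharpens the wrong estimate; this is the sense in which the design, not the data volume, must be psychologically informed.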
Relevant articles
Open Access articles citing this article:
- High risk of political bias in black box emotion inference models. Scientific Reports (19 February 2025)
- How human–AI feedback loops alter human perceptual, emotional and social judgements. Nature Human Behaviour (18 December 2024)
- Balancing the scale: navigating ethical and practical challenges of artificial intelligence (AI) integration in legal practices. Discover Artificial Intelligence (15 April 2024)
References
Allcott, H., Braghieri, L., Eichmeyer, S. & Gentzkow, M. Am. Econ. Rev. 110, 629–676 (2020).
Agan, A. Y., Davenport, D., Ludwig, J. & Mullainathan, S. Automating Automaticity: How the Context of Human Choice Affects the Extent of Algorithmic Bias (No. w30981) (National Bureau of Economic Research, 2023).
Lee, D. & Hosanagar, K. Inf. Syst. Res. 30, 239–259 (2019).
GPAI. Transparency Mechanisms for Social Media Recommender Algorithms: From Proposals to Action. Tracking GPAI’s Proposed Fact Finding Study in This Year’s Regulatory Discussions (Global Partnership on AI, 2022).
Obermeyer, Z., Powers, B., Vogeli, C. & Mullainathan, S. Science 366, 447–453 (2019).
Beshears, J., Choi, J. J., Laibson, D. & Madrian, B. C. J. Public Econ. 92, 1787–1794 (2008).
Morewedge, C. K. & Kahneman, D. Trends Cogn. Sci. 14, 435–440 (2010).
Logg, J. M. Using algorithms to understand the biases in your organization. Harv. Bus. Rev., https://hbr.org/2019/08/using-algorithms-to-understand-the-biases-in-your-organization (9 August 2019).
Milkman, K. L., Rogers, T. & Bazerman, M. H. Manage. Sci. 55, 1047–1059 (2009).
Block, N. Philos. Rev. 90, 5–43 (1981).
Ray, P. P. Internet Things Cyber-Phys. Syst. 3, 121–154 (2023).
McKenna, N. et al. Preprint at arXiv, https://doi.org/10.48550/arXiv.2305.14552 (2023).
Kleinberg, J., Ludwig, J., Raghavan, M. & Mullainathan, S. Perspect. Psychol. Sci. (in the press).
Ethics declarations
Competing interests
The authors declare no competing interests.
About this article
Cite this article
Morewedge, C.K., Mullainathan, S., Naushan, H.F. et al. Human bias in algorithm design. Nat Hum Behav 7, 1822–1824 (2023). https://doi.org/10.1038/s41562-023-01724-4