Examining human reliance on artificial intelligence in decision making
  • Article
  • Open access
  • Published: 05 February 2026

  • Joe Pearson1,
  • Itiel E. Dror2,
  • Emma Jayes3,
  • Grace-Rose Whordley3,
  • Georgina Mason3 &
  • Sophie Nightingale1

Scientific Reports (2026)

We are providing an unedited version of this manuscript to give early access to its findings. Before final publication, the manuscript will undergo further editing. Please note there may be errors present which affect the content, and all legal disclaimers apply.

Subjects

  • Human behaviour
  • Psychology

Abstract

The use of Artificial Intelligence (AI) to effectively support human decision making depends on whether humans are willing to trust in, and thus rely on, AI. Understanding human reliance on AI is critical given controversial reports of AI inaccuracy and bias; furthermore, the erroneous belief that using technology removes biases may lead to overreliance on AI. To examine this reliance, participants (N = 295, mean age = 33.79) judged the authenticity of 80 faces (40 real, 40 AI-synthesized) presented alongside guidance supposedly from humans or from AI; this guidance was correct only half of the time. Participants rated their confidence in each judgement and completed measures of propensity to trust humans and of general attitudes towards AI. Among participants who received AI guidance, those with more positive attitudes towards AI discriminated between real and synthetic faces more poorly than those with less positive attitudes. For participants who received human guidance, level of trust in humans did not affect discriminability. AI-derived guidance may therefore be uniquely placed to engender biases in humans, leading to less effective decision making. To ensure successful human-AI decision-making partnerships, more research is needed to understand precisely how humans use AI guidance in various contexts.
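The discriminability reported above is a signal detection measure. The sketch below is a minimal illustration of how such a sensitivity index (d') could be computed from one participant's authenticity judgements; it is not the authors' analysis code (which is written in R and available at the OSF link in the Data availability section), and the example counts and the log-linear correction are assumptions for illustration only.

```python
# Illustrative only: one common way to compute the sensitivity index d'
# from counts of hits (synthetic faces correctly called synthetic) and
# false alarms (real faces incorrectly called synthetic).
from statistics import NormalDist

def d_prime(hits, misses, false_alarms, correct_rejections):
    """d' with a log-linear correction to avoid infinite z-scores at rates of 0 or 1."""
    z = NormalDist().inv_cdf
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    false_alarm_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    return z(hit_rate) - z(false_alarm_rate)

# Hypothetical participant: 28 of 40 synthetic faces flagged correctly,
# 12 of 40 real faces mistakenly flagged as synthetic.
print(round(d_prime(28, 12, 12, 28), 2))  # ≈ 1.02
```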

Data availability

The data collected during this research, and the full, anonymised, reproducible R data tidying and analysis code, are available at https://osf.io/2p3bf/?view_only=868c92c940c947b894d24ac4b4155607.

Funding

This research was funded by the UK Defence Science and Technology Laboratory (Dstl) through The Alan Turing Institute’s AI Research Centre for Defence (ARC-D). All views expressed in this report are those of the authors, and do not necessarily represent the views of Lancaster University, The Alan Turing Institute or any other organisation.

Author information

Authors and Affiliations

  1. Department of Psychology, Lancaster University, Lancaster, LA1 4YF, UK

    Joe Pearson & Sophie Nightingale

  2. Cognitive Consultants International (CCI-HQ), London, UK

    Itiel E. Dror

  3. Defence Science and Technology Laboratory, Wiltshire, UK

    Emma Jayes, Grace-Rose Whordley & Georgina Mason

Contributions

E.J. and S.N. conceived the experiment. E.J., I.D., G.R.W., G.M., and S.N. designed the experiment. J.P. conducted the experiment, analysed results, and wrote the manuscript. All authors reviewed the manuscript.

Corresponding author

Correspondence to Sophie Nightingale.

Ethics declarations

Competing interests

The authors declare no competing interests.

Additional information

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary Information

Below is the link to the electronic supplementary material.

Supplementary Material 1

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

About this article

Cite this article

Pearson, J., Dror, I., Jayes, E. et al. Examining human reliance on artificial intelligence in decision making. Sci Rep (2026). https://doi.org/10.1038/s41598-026-34983-y

  • Received: 31 March 2025

  • Accepted: 01 January 2026

  • Published: 05 February 2026

  • DOI: https://doi.org/10.1038/s41598-026-34983-y

Keywords

  • Computational social science
  • AI
  • Decision-making
  • Bias