Abstract
The use of Artificial Intelligence (AI) to effectively support human decision making depends on whether humans are willing to trust in, and thus rely on, AI. Understanding human reliance on AI is critical given controversial reports of AI inaccuracy and bias. Furthermore, the erroneous belief that using technology removes biases may lead to overreliance on AI. To examine humans' reliance on AI, human participants (N = 295, mean age = 33.79 years) judged the authenticity of 80 faces (40 real, 40 AI-synthesized) presented alongside guidance supposedly from humans or from AI. This guidance was correct only half of the time. Participants indicated their confidence in each judgement and completed measures of their propensity to trust humans and their general attitudes towards AI. Participants who received AI guidance and exhibited more positive attitudes towards AI showed poorer discriminability between real and synthetic faces than those with less positive attitudes towards AI. For participants who received human guidance, level of trust in humans did not affect discriminability. Therefore, AI-derived guidance may be uniquely placed to engender biases in humans, leading to less effective decision making. To ensure successful human-AI decision-making partnerships, more research is needed to understand precisely how humans use AI guidance in various contexts.
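The discriminability measure referred to above comes from signal detection theory. As an illustration only, and not the authors' archived analysis code (see the OSF link under Data availability), the short R sketch below shows how discriminability (d′) and response bias could be computed for a single participant from hit and false-alarm counts; the counts used here are hypothetical, and treating synthetic faces as the "signal" category is an assumption made for the example.

# Minimal illustrative sketch (hypothetical counts, not the authors' analysis code):
# signal detection estimates for one participant, treating AI-synthesized
# faces as the "signal" category.
n_signal <- 40                     # AI-synthesized faces presented
n_noise  <- 40                     # real faces presented
hits         <- 28                 # synthetic faces correctly judged synthetic
false_alarms <- 12                 # real faces incorrectly judged synthetic

# Log-linear correction keeps the z-scores finite when rates reach 0 or 1
hit_rate <- (hits + 0.5) / (n_signal + 1)
fa_rate  <- (false_alarms + 0.5) / (n_noise + 1)

d_prime   <- qnorm(hit_rate) - qnorm(fa_rate)           # discriminability (d')
criterion <- -0.5 * (qnorm(hit_rate) + qnorm(fa_rate))  # response bias (c)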
Data availability
The data collected during this research and the full, anonymised, reproducible R data tidying and analysis code are available at https://osf.io/2p3bf/?view_only=868c92c940c947b894d24ac4b4155607.
Funding
This research was funded by the UK Defence Science and Technology Laboratory (Dstl) through The Alan Turing Institute’s AI Research Centre for Defence (ARC-D). All views expressed in this report are those of the authors, and do not necessarily represent the views of Lancaster University, The Alan Turing Institute or any other organisation.
Author information
Authors and Affiliations
Contributions
E. J. and S. N. conceived the experiment. E. J., I. D., G. R. W., G. M., and S. N. designed the experiment. J. P. conducted the experiment, analysed the results, and wrote the manuscript. All authors reviewed the manuscript.
Corresponding author
Ethics declarations
Competing interests
The authors declare no competing interests.
Additional information
Publisher’s note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Supplementary Information
Below is the link to the electronic supplementary material.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
About this article
Cite this article
Pearson, J., Dror, I., Jayes, E. et al. Examining human reliance on artificial intelligence in decision making. Sci Rep (2026). https://doi.org/10.1038/s41598-026-34983-y
Received:
Accepted:
Published:
DOI: https://doi.org/10.1038/s41598-026-34983-y