In cooperative games, humans are biased against AI systems even when those systems behave more cooperatively than human counterparts do. This raises a question: should AI systems ever be allowed to conceal their true nature and lie to us for our own benefit?
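For readers unfamiliar with this literature, the canonical cooperative game in such experiments is the Prisoner's Dilemma (cf. Axelrod & Hamilton). The sketch below uses the standard textbook payoff values (temptation 5, reward 3, punishment 1, sucker 0); these numbers are illustrative assumptions, not figures taken from the study discussed here.

```python
# Illustrative sketch: one round of the Prisoner's Dilemma, the canonical
# cooperative game in this literature. Payoff values are the standard
# textbook choices (T=5 > R=3 > P=1 > S=0), not figures from the article.

PAYOFFS = {
    # (move_a, move_b) -> (payoff_a, payoff_b); 'C' = cooperate, 'D' = defect
    ("C", "C"): (3, 3),  # mutual cooperation: both earn the reward R
    ("C", "D"): (0, 5),  # A is exploited: sucker S vs. temptation T
    ("D", "C"): (5, 0),  # A exploits: temptation T vs. sucker S
    ("D", "D"): (1, 1),  # mutual defection: both earn the punishment P
}

def play_round(move_a: str, move_b: str) -> tuple[int, int]:
    """Return the (A, B) payoffs for one round; moves are 'C' or 'D'."""
    return PAYOFFS[(move_a, move_b)]
```

The tension the article describes arises because defection dominates in a single round, yet mutual cooperation pays more over repeated play — which is why a partner's perceived identity (human or machine) can shift whether people choose to cooperate.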
References
Ishowo-Oloko, F. et al. Nat. Mach. Intell. https://doi.org/10.1038/s42256-019-0113-5 (2019).
Chikaraishi, T., Yoshikawa, Y., Ogawa, K., Hirata, O. & Ishiguro, H. Future Internet 9, 75 (2017).
Crandall, J. W. et al. Nat. Commun. 9, 233 (2018).
Axelrod, R. & Hamilton, W. D. Science 211, 1390–1396 (1981).
Cite this article
Rovatsos, M. We may not cooperate with friendly machines. Nat Mach Intell 1, 497–498 (2019). https://doi.org/10.1038/s42256-019-0117-1