Showing 1–15 of 15 results
Filtered by author: Jean-François Bonnefon
  • People cheat more when they delegate tasks to artificial intelligence, and large language models are more likely than humans to comply with unethical instructions—a risk that can be minimized by introducing prohibitive, task-specific guardrails.

    • Nils Köbis
    • Zoe Rahwan
    • Iyad Rahwan
Research (Open Access)
    Nature
    Volume: 646, P: 126-134
  • Climate sceptics tend to avoid climate information, making it even harder to reduce scepticism. This study shows that generative AI can enhance sceptics’ engagement with climate news by tailoring headlines to their existing perspective and shift their beliefs towards the scientific consensus.

    • Bence Bago
    • Philippe Muller
    • Jean-François Bonnefon
    Research
    Nature Climate Change
    P: 1-7
  • Algorithms and bots are capable of performing some behaviours at human or super-human levels. Humans, however, tend to trust algorithms less than they trust other humans. The authors find that bots do better than humans at inducing cooperation in certain human–machine interactions, but only if the bots do not disclose their true nature as artificial.

    • Fatimah Ishowo-Oloko
    • Jean-François Bonnefon
    • Talal Rahwan
    Research
    Nature Machine Intelligence
    Volume: 1, P: 517-521
  • Governments use AI to speed up welfare decisions, raising concerns about fairness and accuracy. Here, the authors find that welfare claimants are more averse to AI and that their preferences are less well understood by others, suggesting a risk of biased systems that neglect vulnerable voices.

    • Mengchen Dong
    • Jean-François Bonnefon
    • Iyad Rahwan
Research (Open Access)
    Nature Communications
    Volume: 16, P: 1-11
  • The ‘science fiction science’ method simulates future technologies and collects quantitative data on the attitudes and behaviours of participants in various future scenarios, with the aim of predicting the impacts of future technologies before they arrive.

    • Iyad Rahwan
    • Azim Shariff
    • Jean-François Bonnefon
    Reviews
    Nature
    Volume: 644, P: 51-58
  • Responses from more than two million people to an internet-based survey of attitudes towards moral dilemmas that might be faced by autonomous vehicles shed light on similarities and variations in ethical preferences among different populations.

    • Edmond Awad
    • Sohan Dsouza
    • Iyad Rahwan
    Research
    Nature
    Volume: 563, P: 59-64
  • When an automated car harms someone, who is blamed by those who hear about it? Over five studies, Awad et al. find that drivers are blamed more than their automated cars when both make mistakes.

    • Edmond Awad
    • Sydney Levine
    • Iyad Rahwan
    Research
    Nature Human Behaviour
    Volume: 4, P: 134-143
  • Humans and machines are increasingly participating in mixed collectives in which they can help or hinder each other. Here the authors show how people treat machines differently from humans in a stylized society of beneficiaries, helpers, punishers, and trustors.

    • Kinga Makovi
    • Anahit Sargsyan
    • Talal Rahwan
Research (Open Access)
    Nature Communications
    Volume: 14, P: 1-12
  • Self-driving cars offer a bright future, but only if the public can overcome the psychological challenges that stand in the way of widespread adoption. We discuss three: ethical dilemmas, overreactions to accidents, and the opacity of the cars’ decision-making algorithms — and propose steps towards addressing them.

    • Azim Shariff
    • Jean-François Bonnefon
    • Iyad Rahwan
    Comments & Opinion
    Nature Human Behaviour
    Volume: 1, P: 694-696
  • Artificial intelligence is now superior to humans in many fully competitive games, such as chess, Go, and poker. Here the authors develop a machine-learning algorithm that can cooperate effectively with humans when cooperation is beneficial but nontrivial, something humans are remarkably good at.

    • Jacob W. Crandall
    • Mayada Oudah
    • Iyad Rahwan
Research (Open Access)
    Nature Communications
    Volume: 9, P: 1-12
  • Artificial intelligence tools and systems are increasingly influencing human culture. Brinkmann et al. argue that these ‘intelligent machines’ are transforming the fundamental processes of cultural evolution: variation, transmission and selection.

    • Levin Brinkmann
    • Fabian Baumann
    • Iyad Rahwan
    Reviews
    Nature Human Behaviour
    Volume: 7, P: 1855-1868
  • An experiment shows that complex technologies need not result from our superior causal reasoning abilities but, instead, can emerge from the accumulation of improvements across generations without requiring understanding of how these technologies work.

    • Maxime Derex
    • Jean-François Bonnefon
    • Alex Mesoudi
    Research
    Nature Human Behaviour
    Volume: 3, P: 446-452
  • Köbis et al. outline how artificial intelligence (AI) agents can negatively influence human ethical behaviour. They discuss how this capacity of AI agents can cause problems in the future and put forward a research agenda to gain behavioural insights for better AI oversight.

    • Nils Köbis
    • Jean-François Bonnefon
    • Iyad Rahwan
    Reviews
    Nature Human Behaviour
    Volume: 5, P: 679-685
  • Understanding the behaviour of the machines powered by artificial intelligence that increasingly mediate our social, cultural, economic and political interactions is essential to our ability to control the actions of these intelligent machines, reap their benefits and minimize their harms.

    • Iyad Rahwan
    • Manuel Cebrian
    • Michael Wellman
    Reviews
    Nature
    Volume: 568, P: 477-486