Fig. 5
From: Human cooperation with artificial agents varies across countries

People in Japan feel worse than people in the United States about exploiting an AI agent. The relative frequencies of participants’ reported levels of emotion—guilt, anger, disappointment, happiness, victoriousness, and relief—about the game outcome they achieved, measured on a 7-point Likert scale ranging from 0 (“not at all”) to 6 (“very”). The results shown are for participants who exploited their co-player in a game: those who defected in the Prisoner’s Dilemma game when their co-player cooperated, and those who defected in the Trust game when playing the role of player two. For each emotion, the distributions on the left are of reported levels among participants recruited in Japan; the distributions on the right are of reported levels among participants recruited in the United States. The distributions for interactions with AI and human (H) co-players are the top and bottom distributions, respectively. The triangular fans indicate a statistically significant proclivity to report a greater level of emotion: *, **, ***: p < 0.05, p < 0.01, p < 0.001 in two-sided Wilcoxon–Mann–Whitney tests for differences in reported levels of emotion, adjusted using the sequentially rejective Bonferroni procedure recommended by Holm [31] to account for multiple testing (namely, one test for each emotion). The number of responses (N) in each treatment, displayed in the top-left panel, is the same for all elicited emotions.
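The multiple-testing correction named in the legend can be illustrated in code. The sketch below implements Holm's sequentially rejective Bonferroni adjustment and applies it to a set of hypothetical per-emotion p-values (the study's actual p-values and response data are not reproduced here); it is a minimal illustration of the procedure, not the authors' analysis code.

```python
def holm_adjust(pvals):
    """Holm's sequentially rejective Bonferroni adjustment.

    Sort the m raw p-values in ascending order, multiply the k-th
    smallest by (m - k + 1), and enforce monotonicity so that adjusted
    p-values never decrease along the sorted sequence. Values are
    capped at 1. Returns adjusted p-values in the original input order.
    """
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    adjusted = [0.0] * m
    running_max = 0.0
    for rank, idx in enumerate(order):
        running_max = max(running_max, (m - rank) * pvals[idx])
        adjusted[idx] = min(1.0, running_max)
    return adjusted


# Hypothetical raw p-values for the six elicited emotions
# (guilt, anger, disappointment, happiness, victoriousness, relief).
raw = [0.004, 0.03, 0.2, 0.6, 0.01, 0.5]
print(holm_adjust(raw))
```

With six emotions, the smallest raw p-value is multiplied by 6, the next by 5, and so on, which controls the family-wise error rate without the full conservatism of a plain Bonferroni correction.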