Fig. 1
From: Human cooperation with artificial agents varies across countries

The Prisoner’s Dilemma game on the road. Left: Two vehicles (blue and red) enter a narrow section of a road caused by a broken-down truck. The section is wide enough for both cars to pass one another safely if they both proceed slowly. Both drivers have to decide quickly what to do without being able to explicitly communicate their intentions to one another. What one wants to do depends on how one thinks the other will react to the situation. If one expects the other to slow down, creating sufficient space on the road to push through, one could press on without reducing speed. This would force the other driver to hit the brakes. If both drivers think this way, the result is a stalemate: both have to hit the brakes and stop. The drivers will pass one another eventually, but the maneuver will take longer than it would have had they both proceeded slowly to begin with. Right: In the game matrix of this scenario, the driver of the blue car chooses between the two options identified by rows; the driver of the red car, by columns. Their choices jointly determine the outcome, and the numbers in each cell are payoffs to the row (blue) and the column (red) player, respectively. The cooperative outcome (identified with the white square) is for both drivers to slow down and pass one another safely. This constitutes a tacit compromise, whereby neither driver attempts to outsmart the other by exploiting a predicted cooperative maneuver (“the other will slow down and swerve”) for personal benefit (“I should push on”). One such exploitative outcome is identified with the white triangle.
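
The incentive structure the caption describes can be sketched as a two-player payoff matrix. The numeric payoffs below are illustrative placeholders satisfying the standard Prisoner’s Dilemma ordering (the actual values appear only in the figure, not in the text):

```python
# Illustrative payoffs for the road scenario; the values are
# assumed standard PD payoffs, not taken from the figure.
SLOW, PUSH = "slow down", "push on"

# payoffs[(blue_move, red_move)] -> (blue payoff, red payoff)
payoffs = {
    (SLOW, SLOW): (3, 3),   # tacit compromise: both pass safely (white square)
    (SLOW, PUSH): (0, 5),   # red exploits blue's cooperative maneuver
    (PUSH, SLOW): (5, 0),   # blue exploits red (white triangle)
    (PUSH, PUSH): (1, 1),   # stalemate: both hit the brakes and stop
}

def best_response(opponent_move):
    """Row (blue) player's payoff-maximizing reply to a fixed red move."""
    return max((SLOW, PUSH), key=lambda m: payoffs[(m, opponent_move)][0])

# Pushing on is the dominant strategy whatever the other driver does,
# yet mutual cooperation pays both players more than the stalemate.
assert best_response(SLOW) == PUSH and best_response(PUSH) == PUSH
assert payoffs[(SLOW, SLOW)][0] > payoffs[(PUSH, PUSH)][0]
```

The two assertions capture the dilemma in the caption: each driver is individually tempted to push on, but when both reason this way they land in the stalemate, which is worse for both than slowing down together.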