Figure 4 | Scientific Reports

From: A dual recurrent neural network model of human-like motion for artificial agents and its evaluation in a VR mirror game turing test

(A) Visual representation of the avatars performing the mirror game. The 3D visualization was generated using Unity 2022.3.30f1 (https://unity.com). (B) The experiment employed six conditions: leading (blue), following (red), and improvisation (white), each with either a human interaction partner (human-human interaction, HHI) or an artificial agent (human-robot interaction, HRI). The true nature of the interaction partner was unknown to the participants and had to be identified in a Turing-test-like setup. In addition, participants rated their subjective experience in terms of synchronicity, sympathy, creativity, and focus after each trial. (C) Typical profile of hand speed in 3D during HHI leading/following (L/F). The red (follower) curve tracks the blue (leader) curve with a delay. (D) The follow-net outputs joint orientations for the next frame (\(t+1\)) based on the most recent 1.3 s of the partner’s upper body motion (every fourth of 120 frames, i.e. 30 frames). Upper body motion is represented by quaternion orientations of 10 joints. LSTM cells feed their output into three feed-forward layers.
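The input and output shapes described in panel (D) can be sketched as follows. This is a minimal illustration of the data layout only, not the authors' implementation: the rolling-buffer variable names, the flattening order, and the use of a 120-frame buffer are assumptions inferred from the caption.

```python
import numpy as np

FPS = 120     # frames per second, per the caption
JOINTS = 10   # upper-body joints represented as quaternions
QUAT = 4      # quaternion components per joint

# Hypothetical rolling buffer holding the partner's most recent motion:
# the last 120 captured frames, one orientation quaternion per joint.
buffer = np.random.randn(FPS, JOINTS, QUAT)
buffer /= np.linalg.norm(buffer, axis=-1, keepdims=True)  # unit quaternions

# Keep every fourth frame -> 30 frames, matching the follow-net input window.
window = buffer[::4]                  # shape (30, 10, 4)
lstm_input = window.reshape(30, -1)   # shape (30, 40): one feature vector per frame

# The follow-net maps this sequence to the joint orientations at t+1,
# i.e. a single frame of 10 quaternions (network weights not shown here).
target_shape = (JOINTS, QUAT)
print(lstm_input.shape, target_shape)
```

Flattening each frame's 10 quaternions into a 40-dimensional vector is one common way to feed per-joint rotations into an LSTM; the caption does not specify the exact encoding, so treat this as one plausible choice.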
