Figure 3 | Scientific Reports

From: Emotion Perception in Hadza Hunter-Gatherers

Study 2 Task Conditions (a) and Performance for US (b,e), Hadza (c,f) and Hadza participants with minimized exposure to other cultural groups (Hadza-M; a subset defined by proxy variables of second-language skill and formal schooling) (d,g). (a) Examples of vignettes (for all scenarios, see Supplementary Information Table 6), targets and foils for the four trial types. The facial configurations shown are examples because the stimulus sets restrict publication of the actual photographs. Arousal-controlled trials: the foil face differed from the target only in depicted valence, i.e., positivity or negativity (e.g., a smiling facial configuration hypothesized to be the universal expression of happiness vs. a scowling facial configuration hypothesized to be the universal expression of anger). Valence is a descriptive feature of affect, along with a second feature, level of arousal. For example, some evidence suggests that perceivers may be able to distinguish scowling from pouting not because scowling is perceived as “anger” and pouting is perceived as “sadness” but because scowling is typically perceived as high arousal and pouting as low arousal. Valence-controlled trials: the foil face differed from the target only in depicted level of arousal (e.g., a scowling vs. a pouting configuration, hypothesized to be the universal expressions of anger and sadness, respectively). Affect-uncontrolled trials: the foil face differed from the target in both valence and level of arousal (e.g., a smiling vs. a pouting configuration). Affect-controlled trials: the foil face matched the target in both valence and level of arousal (e.g., a scowling vs. a wide-eyed gasping facial configuration, hypothesized to be the universal expressions of anger and fear, respectively). Performance for each of the 4 experimental conditions (x-axis) is plotted for US participants (b), Hadza participants (c) and Hadza-M participants (d).
Performance within the affect-controlled condition, for each of the 3 target facial configurations (x-axis), is plotted for US participants (e), Hadza participants (f) and Hadza-M participants (g). Individual data points represent mean proportion agreement (i.e., selecting the target matching the presumed universal model) for a given participant within a given condition. Contours of the violin plots represent the density of data points at a given agreement level. The horizontal red bar represents chance-level performance, and significance against chance-level responding is noted at the top of each violin plot: ***p < 0.001, **p < 0.01, *p < 0.05, p < 0.10. Means joined by brackets represent conditions that do not statistically differ in χ2 tests (ps > 0.25). Statistically significant differences between conditions based on follow-up χ2 tests are notated using the same conventions, with one exception: **(*) indicates that statistical significance for the individual tests ranged between p < 0.01 and p < 0.001.
