Fig. 5: Evaluating the shape bias of CNNs. | Nature Communications

From: Improved modeling of human vision by incorporating robustness to blur in convolutional neural networks

A Proportion of shape vs. shape-plus-texture classifications made by standard (red), weak-blur (blue), and strong-blur (purple) CNNs (8 per training condition) when tested with cue-conflict stimuli. Icons indicate the category of the shape cue tested, and the bar plots on the far right show the average shape bias across all 16 categories. Error bars indicate ±1 SEM. Gray dots indicate the shape bias scores of individual CNN models. Two-tailed paired t-tests were performed to assess statistical significance (*p < 0.05, **p < 0.01, ***p < 0.001, uncorrected; exact p values and raw values are provided in the Source Data). B Two examples of cue-conflict stimuli (a bottle or dog shape with a clock texture) from Geirhos et al., 2019 (reproduced with permission), shown with the corresponding layerwise relevance propagation maps depicting the image regions that were most heavily weighted by VGG-19 in determining its classification response.
