Extended Data Fig. 2: Win Rate and Convergence Time Constant Analysis. | Nature Neuroscience

From: Neuronal tuning aligns dynamically with object and texture manifolds across the visual hierarchy

A. Final activation comparison after optimization in DeePSim and BigGAN space, for units in ResNet50-robust. B. Win rate of BigGAN for ResNet50-robust units. Each thin black trace shows the win rate for one unit, averaged across 10 repetitions. The red curve shows the win rate for each layer, averaged across all its units. C. Evolution trajectories in DeePSim and BigGAN space for all CNN networks, in the same format as B. Top to bottom: ResNet50-robust, ResNet50, EfficientNet-B6, EfficientNet-B6 AdvProp. Fifty units were sampled from each major layer, and 10 repeated evolutions were conducted in both DeePSim and BigGAN space. Consistently, when driven by the same unit, DeePSim evolution reached higher activation than BigGAN evolution, with the gap narrowing for units deeper in the network. The shaded area shows the SEM across all runs, which is sometimes too small to be visible. D. Image similarity of paired DeePSim and BigGAN prototypes from in silico evolutions. Similarity between prototypes was higher when their synthesis was driven by the same driver unit than when pairs were shuffled. Driver units were from ResNet50. Error bars show the 95% confidence interval of the mean across pairs. N = 6400.
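The shuffled-pair control in panel D can be sketched as follows. This is a minimal illustration, not the authors' code: the feature vectors, similarity metric (cosine similarity on synthetic features), and the normal-approximation confidence interval are all assumptions standing in for the actual prototype images and similarity measure.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in data: one feature vector per prototype.
# In the actual analysis, each row would describe the DeePSim or BigGAN
# prototype evolved for one of the N = 6400 driver units.
n_pairs = 6400
deepsim_feats = rng.normal(size=(n_pairs, 128))
biggan_feats = deepsim_feats + rng.normal(scale=2.0, size=(n_pairs, 128))

def cosine_sim(a, b):
    """Row-wise cosine similarity between two feature arrays."""
    a_n = a / np.linalg.norm(a, axis=1, keepdims=True)
    b_n = b / np.linalg.norm(b, axis=1, keepdims=True)
    return np.sum(a_n * b_n, axis=1)

# Matched pairs: DeePSim and BigGAN prototypes driven by the same unit.
matched = cosine_sim(deepsim_feats, biggan_feats)

# Shuffled control: re-pair prototypes across randomly permuted drivers.
perm = rng.permutation(n_pairs)
shuffled = cosine_sim(deepsim_feats, biggan_feats[perm])

def ci95(x):
    """95% confidence interval of the mean (normal approximation)."""
    sem = x.std(ddof=1) / np.sqrt(len(x))
    return x.mean() - 1.96 * sem, x.mean() + 1.96 * sem

print("matched mean similarity:", matched.mean(), ci95(matched))
print("shuffled mean similarity:", shuffled.mean(), ci95(shuffled))
```

With correlated synthetic features, matched pairs score higher than shuffled pairs, mirroring the comparison the panel reports.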
