Fig. 3: Decoding accuracy in the video-watching task.
From: Voluntary control of semantic neural representations by imagery with conflicting visual stimulation

a \(\mathrm{PrjR}^{k}(V_{\mathrm{inferred}},\,V_{\mathrm{true}})\) was Fisher z-transformed and averaged across the 17 subjects (\(\overline{z(\mathrm{PrjR}^{k}(V_{\mathrm{inferred}},\,V_{\mathrm{true}}))}\)); the result is shown in order of principal components. For visibility, only the first 25 components are shown (for all components, see Supplementary Fig. 3b). Dots show individual subjects' values. Error bars denote 95% CIs across subjects. *\(P < 0.5 \times 10^{-4}\) (Bonferroni-adjusted α-level; 0.05/1000), two-sided permutation test. b Binary classification accuracies for the three pairs formed from the three categories (word, landscape, and human face) were averaged to give the subject-averaged binary classification accuracy (bars). Dots show individual subjects' values. Error bars denote 95% CIs across subjects. Accuracy was calculated from the semantic vectors inferred from high-γ features of the higher visual area (n = 17), the early visual area (n = 10), and all implanted electrodes (n = 17). There was no significant difference between the accuracies based on the higher and early visual areas (P = 0.5767, t(23.3) = −0.57, uncorrected two-sided Welch's t-test). *\(P < 1.7 \times 10^{-2}\) (Bonferroni-adjusted α-level; 0.05/3), one-sided one-sample t-test against the chance level (50%).
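The statistics described above (Fisher z-transforming per-subject correlations before averaging, and a one-sided one-sample t-test of pairwise classification accuracy against 50% chance with a Bonferroni-adjusted α) can be sketched as follows. This is a minimal illustration with synthetic random data, not the authors' analysis code; the array shapes (17 subjects, 25 components, 3 category pairs) follow the caption, but all values and variable names are hypothetical.

```python
import numpy as np
from scipy import stats

def fisher_z(r):
    # Fisher z-transform: z = arctanh(r); stabilizes the variance of
    # correlation coefficients before averaging across subjects
    return np.arctanh(r)

rng = np.random.default_rng(0)

# Synthetic stand-in for PrjR^k(V_inferred, V_true):
# one projected correlation per subject (17) and principal component (25)
n_subjects, n_components = 17, 25
prj_r = rng.uniform(-0.2, 0.6, size=(n_subjects, n_components))

# Subject-averaged Fisher z-transformed correlation per component
# (the quantity plotted as bars in panel a)
z_mean = fisher_z(prj_r).mean(axis=0)

# Panel b: binary classification accuracies for the three category
# pairs, averaged within subject, then tested one-sided against the
# 50% chance level (synthetic accuracies here)
acc = rng.uniform(0.45, 0.85, size=(n_subjects, 3)).mean(axis=1)
t_stat, p_one = stats.ttest_1samp(acc, 0.5, alternative="greater")

# Bonferroni adjustment over the three brain-area comparisons
alpha_bonf = 0.05 / 3
print(z_mean.shape, p_one < alpha_bonf)
```

Note that the caption's permutation test for panel a is not reproduced here; only the z-transform averaging and the panel-b t-test against chance are sketched.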