Fig. 6: Impact of pseudo-linear representation on the linear decodability for individual fingers. | Nature Communications


From: Pseudo-linear summation explains neural geometry of multi-finger movements in human premotor cortex


A Visualization of ‘population tuning’ vectors joining the points corresponding to flexion and extension movements of a target finger, with the movements of the other fingers held fixed. Three principal components capturing the average neural activity for flexion-extension movements across fingers are computed and visualized with a two-dimensional projection that aligns the flexion-extension movements of a particular finger along the y-axis. Colored lines indicate the marginalized (averaged across conditions) neural activity for flexion (dotted) and extension (solid) movements of each finger. Dots indicate the average neural activity during the hold period while the participant attempted a combinatorial finger movement. Gray (black) dots indicate conditions in which the target finger was flexed (extended). Lines join movement pairs from the same context (i.e., the other fingers have the same cued movement but the target finger has different cued movements). B Histogram of the alignment of population tuning directions (cosine of the angle) across pairs of contexts in the data (left) and in linear-nonlinear model fits (right). Population tuning directions are more aligned for the thumb and ring/little finger groups in both the data and the linear-nonlinear model. By construction, the linear model exhibits complete alignment across all contexts (all values at 1). C Effect of the movements of other fingers on the neural population tuning magnitude for a given finger group. Data are shown for 80 combinatorial movements of four finger groups. Contexts (specific movements of the other fingers) are sorted by tuning magnitude. Error bars indicate the standard deviation across 100 resamplings of the data. D Context dependence of the population tuning magnitude in the linear-nonlinear model from Fig. 5D. While the linear-nonlinear model captures the variations in tuning magnitude, a linear model has constant tuning magnitude by construction (arrows). Error bars as in (C).
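The alignment metric in panel B can be illustrated with a short sketch: each context contributes a "population tuning" vector (the difference between the neural states for flexion and extension of the target finger), and alignment is the cosine of the angle between these vectors for every pair of contexts. The data here are synthetic and all sizes are assumptions for illustration, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n_contexts, n_dims = 8, 3  # assumed sizes, not taken from the paper

flex = rng.normal(size=(n_contexts, n_dims))  # neural state, finger flexed
ext = rng.normal(size=(n_contexts, n_dims))   # neural state, finger extended
tuning = flex - ext                           # one tuning vector per context

# Cosine of the angle between tuning vectors for every pair of contexts.
unit = tuning / np.linalg.norm(tuning, axis=1, keepdims=True)
alignment = unit @ unit.T  # entry (i, j) = cos of angle between contexts i and j

print(alignment.shape)  # → (8, 8)
```

A purely linear summation code would make every tuning vector parallel, so all entries of `alignment` would equal 1, which is the degenerate histogram the caption describes for the linear model.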
E Performance of a linear decoder (support vector classifier) classifying between flexion and extension positions of a finger across contexts. The element at position (i, j) corresponds to training the classifier on context j and testing it on context i. Mean within-context (diagonal) accuracy was high for all finger groups, whereas across-context (off-diagonal) accuracy was low for the middle and index fingers compared to the thumb and ring/little group. F Examples of cross-context decoding performance for the thumb, little, and ring fingers. Two gesture pairs are shown for each finger. In the first pair, the target finger is either flexed or extended while the other fingers are at rest. In the second pair, the target finger is either flexed or extended while all other fingers make an identical movement. A linear classifier for thumb position trained on isolated movements showed 100% accuracy when tested on the second pair of movements (Thumbs Up vs the ASL sign for the letter S). This accuracy dropped to near-chance performance in the corresponding analysis for the little and ring fingers. Source data are provided in the Source Data file. Copyright © Meta Platforms Technologies, LLC and its affiliates. All rights reserved.
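The train-on-context-j, test-on-context-i matrix of panel E can be sketched as follows. This is a minimal illustration on synthetic firing rates, assuming a linear-kernel SVC from scikit-learn; the trial counts, unit counts, and the way context-specific tuning is simulated are all assumptions, not the authors' pipeline.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(1)
n_contexts, n_trials, n_units = 4, 20, 10  # assumed sizes for illustration

# Synthetic firing rates: label 0 = extension, 1 = flexion. Each context
# gets its own random tuning direction, so cross-context transfer can fail.
X, y = [], []
for _ in range(n_contexts):
    direction = rng.normal(size=n_units)
    labels = rng.integers(0, 2, size=n_trials)
    rates = rng.normal(size=(n_trials, n_units)) + np.outer(2 * labels - 1, direction)
    X.append(rates)
    y.append(labels)

# accuracy[i, j]: train on context j, test on context i (as in the figure).
accuracy = np.zeros((n_contexts, n_contexts))
for j in range(n_contexts):
    clf = SVC(kernel="linear").fit(X[j], y[j])
    for i in range(n_contexts):
        accuracy[i, j] = clf.score(X[i], y[i])
```

With independent tuning directions per context, the diagonal (within-context) entries are high while off-diagonal entries hover near chance, mirroring the qualitative pattern the caption reports for the middle and index fingers.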
