Fig. 5: Decoding eyes-open versus eyes-closed conditions in Parkinson’s disease and dystonia patients from STN and GPi targets. | npj Parkinson's Disease

From: Decoding the impact of visual states on adaptive deep brain stimulation feedback signals in movement disorders

A Combinations of frequency band features across the 4–400 Hz range significantly outperformed alpha frequency features alone. B No significant difference in decoding performance was found between Parkinson’s disease and dystonia patients. C STN channels significantly outperformed GPi channels in decoding accuracy (***P_Bonferroni < 0.001, Mann-Whitney U test). D Non-linear machine learning models (CatBoost) achieved better decoding performance than linear regression models using all features (***P_Bonferroni < 0.001, Monte Carlo permutation test, n = 5000). E Linear regression models indicated that the frequency bands contributing most were within the alpha (8–12 Hz), low-frequency (4–12 Hz), low beta (13–20 Hz), and high beta (20–35 Hz) ranges. F In a three-class classification problem comprising sleep, eyes-open, and eyes-closed conditions, significant decoding performance was achieved across all targets and disease conditions.