
Brain–computer interface control with artificial intelligence copilots

Abstract

Motor brain–computer interfaces (BCIs) decode neural signals to help people with paralysis move and communicate. Even with important advances in the past two decades, BCIs face a key obstacle to clinical viability: BCI performance should strongly outweigh costs and risks. To significantly increase BCI performance, we use shared autonomy, where artificial intelligence (AI) copilots collaborate with BCI users to achieve task goals. We demonstrate this AI-BCI in a non-invasive BCI system that decodes electroencephalography signals. We first contribute a hybrid adaptive decoding approach using a convolutional neural network and a ReFIT-like Kalman filter, enabling healthy users and a participant with paralysis to control computer cursors and robotic arms via decoded electroencephalography signals. We then design two AI copilots to aid BCI users in a cursor control task and a robotic arm pick-and-place task. We demonstrate AI-BCIs that enable a participant with paralysis to achieve a 3.9-times-higher target hit rate during cursor control and to control a robotic arm to sequentially move random blocks to random locations, a task they could not do without an AI copilot. As AI copilots improve, BCIs designed with shared autonomy may achieve higher performance.
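To make the shared-autonomy idea concrete, the sketch below shows one simple way a copilot's suggested action could be blended with a user's decoded cursor velocity. This is only an illustration under assumed names (`decoded_velocity`, `copilot_velocity`, `alpha`); it is not the paper's copilot, which is a learned policy acting on task context (see Methods).

```python
# Illustrative sketch only (not the authors' implementation): a convex blend
# of the user's decoded velocity with a copilot's suggested velocity.
# All names and the blending weight `alpha` are hypothetical.
import numpy as np

def blend_actions(decoded_velocity, copilot_velocity, alpha=0.4):
    """Shared-autonomy blend: alpha = 0 is pure user control, alpha = 1 is pure copilot."""
    return (1.0 - alpha) * np.asarray(decoded_velocity) + alpha * np.asarray(copilot_velocity)

# Example: a noisy decoded command is nudged toward the copilot's
# estimate of the direction to the inferred target.
decoded = np.array([0.9, 0.1])   # velocity from the EEG decoder (hypothetical)
copilot = np.array([0.6, 0.8])   # unit vector toward the inferred goal (hypothetical)
print(blend_actions(decoded, copilot, alpha=0.4))  # -> [0.78 0.38]
```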


Fig. 1: In an AI-BCI, AI copilots use task information to improve BCI performance.
Fig. 2: CNN-KF and AI-BCI decoding framework for centre-out 8 and robotic arm tasks.
Fig. 3: Performance of CNN-KF for the centre-out 8 task.
Fig. 4: An AI copilot improves centre-out 8 performance.
Fig. 5: An AI copilot improves the control of a robotic arm for pick-and-place tasks.


Data availability

Data related to this article are available via Zenodo at https://doi.org/10.5281/zenodo.15165133 (ref. 74). Source data are provided with this paper.

Code availability

Experimental and model training code is available via GitHub at https://github.com/kaolab-research/bci_raspy and via Zenodo at https://doi.org/10.5281/zenodo.15164641 (ref. 75). Plotting and analysis code is available via GitHub at https://github.com/kaolab-research/bci_plot and via Zenodo at https://doi.org/10.5281/zenodo.15164643 (ref. 76).

References

  1. Hochberg, L. R. et al. Neuronal ensemble control of prosthetic devices by a human with tetraplegia. Nature 442, 164–171 (2006).
  2. Gilja, V. et al. Clinical translation of a high-performance neural prosthesis. Nat. Med. 21, 1142–1145 (2015).
  3. Pandarinath, C. et al. High performance communication by people with paralysis using an intracortical brain-computer interface. eLife 6, e18554 (2017).
  4. Hochberg, L. R. et al. Reach and grasp by people with tetraplegia using a neurally controlled robotic arm. Nature 485, 372–375 (2012).
  5. Collinger, J. L. et al. High-performance neuroprosthetic control by an individual with tetraplegia. Lancet 381, 557–564 (2013).
  6. Wodlinger, B. et al. Ten-dimensional anthropomorphic arm control in a human brain-machine interface: difficulties, solutions, and limitations. J. Neural Eng. 12, 016011 (2015).
  7. Aflalo, T. et al. Decoding motor imagery from the posterior parietal cortex of a tetraplegic human. Science 348, 906–910 (2015).
  8. Edelman, B. J. et al. Noninvasive neuroimaging enhances continuous neural tracking for robotic device control. Sci. Robot. 4, eaaw6844 (2019).
  9. Reddy, S., Dragan, A. D. & Levine, S. Shared autonomy via deep reinforcement learning. In Proc. Robotics: Science and Systems https://doi.org/10.15607/RSS.2018.XIV.005 (RSS, 2018).
  10. Laghi, M., Magnanini, M., Zanchettin, A. & Mastrogiovanni, F. Shared-autonomy control for intuitive bimanual tele-manipulation. In 2018 IEEE-RAS 18th International Conference on Humanoid Robots (Humanoids) 1–9 (IEEE, 2018).
  11. Tan, W. et al. On optimizing interventions in shared autonomy. In Proc. AAAI Conference on Artificial Intelligence 5341–5349 (AAAI, 2022).
  12. Yoneda, T., Sun, L., Yang, G., Stadie, B. & Walter, M. To the noise and back: diffusion for shared autonomy. In Proc. Robotics: Science and Systems https://doi.org/10.15607/RSS.2023.XIX.014 (RSS, 2023).
  13. Peng, Z., Mo, W., Duan, C., Li, Q. & Zhou, B. Learning from active human involvement through proxy value propagation. Adv. Neural Inf. Process. Syst. 36, 20552–20563 (2023).
  14. McMahan, B. J., Peng, Z., Zhou, B. & Kao, J. C. Shared autonomy with IDA: interventional diffusion assistance. Adv. Neural Inf. Process. Syst. 37, 27412–27425 (2024).
  15. Shannon, C. E. Prediction and entropy of printed English. Bell Syst. Tech. J. 30, 50–64 (1951).
  16. Karpathy, A., Johnson, J. & Fei-Fei, L. Visualizing and understanding recurrent networks. In International Conference on Learning Representations https://openreview.net/pdf/71BmK0m6qfAE8VvKUQWB.pdf (ICLR, 2016).
  17. Radford, A. et al. Language models are unsupervised multitask learners. OpenAI https://cdn.openai.com/better-language-models/language_models_are_unsupervised_multitask_learners.pdf (2019).
  18. Gilja, V. et al. A high-performance neural prosthesis enabled by control algorithm design. Nat. Neurosci. 15, 1752–1757 (2012).
  19. Dangi, S., Orsborn, A. L., Moorman, H. G. & Carmena, J. M. Design and analysis of closed-loop decoder adaptation algorithms for brain-machine interfaces. Neural Comput. 25, 1693–1731 (2013).
  20. Orsborn, A. L. et al. Closed-loop decoder adaptation shapes neural plasticity for skillful neuroprosthetic control. Neuron 82, 1380–1393 (2014).
  21. Silversmith, D. B. et al. Plug-and-play control of a brain–computer interface through neural map stabilization. Nat. Biotechnol. 39, 326–335 (2021).
  22. Kim, S.-P., Simeral, J. D., Hochberg, L. R., Donoghue, J. P. & Black, M. J. Neural control of computer cursor velocity by decoding motor cortical spiking activity in humans with tetraplegia. J. Neural Eng. 5, 455 (2008).
  23. Sussillo, D. et al. A recurrent neural network for closed-loop intracortical brain–machine interface decoders. J. Neural Eng. 9, 026027 (2012).
  24. Sussillo, D., Stavisky, S. D., Kao, J. C., Ryu, S. I. & Shenoy, K. V. Making brain–machine interfaces robust to future neural variability. Nat. Commun. 7, 13749 (2016).
  25. Kao, J. C. et al. Single-trial dynamics of motor cortex and their applications to brain-machine interfaces. Nat. Commun. 6, 7759 (2015).
  26. Kao, J. C., Nuyujukian, P., Ryu, S. I. & Shenoy, K. V. A high-performance neural prosthesis incorporating discrete state selection with hidden Markov models. IEEE Trans. Biomed. Eng. 64, 935–945 (2016).
  27. Shenoy, K. V. & Carmena, J. M. Combining decoder design and neural adaptation in brain-machine interfaces. Neuron 84, 665–680 (2014).
  28. Lawhern, V. J. et al. EEGNet: a compact convolutional neural network for EEG-based brain–computer interfaces. J. Neural Eng. 15, 056013 (2018).
  29. Forenzo, D., Zhu, H., Shanahan, J., Lim, J. & He, B. Continuous tracking using deep learning-based decoding for noninvasive brain–computer interface. PNAS Nexus 3, pgae145 (2024).
  30. Pfurtscheller, G. & Da Silva, F. L. Event-related EEG/MEG synchronization and desynchronization: basic principles. Clin. Neurophysiol. 110, 1842–1857 (1999).
  31. Olsen, S. et al. An artificial intelligence that increases simulated brain–computer interface performance. J. Neural Eng. 18, 046053 (2021).
  32. Schulman, J., Wolski, F., Dhariwal, P., Radford, A. & Klimov, O. Proximal policy optimization algorithms. Preprint at https://arxiv.org/abs/1707.06347 (2017).
  33. Liu, S. et al. Grounding DINO: marrying DINO with grounded pre-training for open-set object detection. In Computer Vision – ECCV 2024: 18th European Conference 38–55 (Springer, 2024).
  34. Golub, M. D., Yu, B. M., Schwartz, A. B. & Chase, S. M. Motor cortical control of movement speed with implications for brain-machine interface control. J. Neurophysiol. 112, 411–429 (2014).
  35. Sachs, N. A., Ruiz-Torres, R., Perreault, E. J. & Miller, L. E. Brain-state classification and a dual-state decoder dramatically improve the control of cursor movement through a brain-machine interface. J. Neural Eng. 13, 016009 (2016).
  36. Kao, J. C., Nuyujukian, P., Ryu, S. I. & Shenoy, K. V. A high-performance neural prosthesis incorporating discrete state selection with hidden Markov models. IEEE Trans. Biomed. Eng. 64, 935–945 (2017).
  37. Stieger, J. R. et al. Mindfulness improves brain–computer interface performance by increasing control over neural activity in the alpha band. Cereb. Cortex 31, 426–438 (2021).
  38. Stieger, J. R., Engel, S. A. & He, B. Continuous sensorimotor rhythm based brain computer interface learning in a large population. Sci. Data 8, 98 (2021).
  39. Edelman, B. J., Baxter, B. & He, B. EEG source imaging enhances the decoding of complex right-hand motor imagery tasks. IEEE Trans. Biomed. Eng. 63, 4–14 (2016).
  40. Scherer, R. et al. Individually adapted imagery improves brain-computer interface performance in end-users with disability. PLoS ONE 10, e0123727 (2015).
  41. Millan, J. d. R. et al. A local neural classifier for the recognition of EEG patterns associated to mental tasks. IEEE Trans. Neural Netw. 13, 678–686 (2002).
  42. Huang, D. et al. Decoding subject-driven cognitive states from EEG signals for cognitive brain–computer interface. Brain Sci. 14, 498 (2024).
  43. Meng, J. et al. Noninvasive electroencephalogram based control of a robotic arm for reach and grasp tasks. Sci. Rep. 6, 38565 (2016).
  44. Jeong, J.-H., Shim, K.-H., Kim, D.-J. & Lee, S.-W. Brain-controlled robotic arm system based on multi-directional CNN-BiLSTM network using EEG signals. IEEE Trans. Neural Syst. Rehabil. Eng. 28, 1226–1238 (2020).
  45. Zhang, R. et al. NOIR: neural signal operated intelligent robots for everyday activities. In Proc. 7th Conference on Robot Learning 1737–1760 (PMLR, 2023).
  46. Jeon, H. J., Losey, D. P. & Sadigh, D. Shared autonomy with learned latent actions. In Proc. Robotics: Science and Systems https://doi.org/10.15607/RSS.2020.XVI.011 (RSS, 2020).
  47. Javdani, S., Bagnell, J. A. & Srinivasa, S. S. Shared autonomy via hindsight optimization. In Proc. Robotics: Science and Systems https://doi.org/10.15607/RSS.2015.XI.032 (RSS, 2015).
  48. Newman, B. A. et al. HARMONIC: a multimodal dataset of assistive human-robot collaboration. Int. J. Robot. Res. 41, 3–11 (2022).
  49. Jain, S. & Argall, B. Probabilistic human intent recognition for shared autonomy in assistive robotics. ACM Trans. Hum. Robot Interact. 9, 2 (2019).
  50. Losey, D. P., Srinivasan, K., Mandlekar, A., Garg, A. & Sadigh, D. Controlling assistive robots with learned latent actions. In 2020 IEEE International Conference on Robotics and Automation (ICRA) 378–384 (IEEE, 2020).
  51. Cui, Y. et al. No, to the right: online language corrections for robotic manipulation via shared autonomy. In Proc. 2023 ACM/IEEE International Conference on Human-Robot Interaction 93–101 (ACM, 2023).
  52. Karamcheti, S. et al. Learning visually guided latent actions for assistive teleoperation. In Proc. 3rd Conference on Learning for Dynamics and Control 1230–1241 (PMLR, 2021).
  53. Chi, C. et al. Diffusion policy: visuomotor policy learning via action diffusion. Int. J. Rob. Res. https://doi.org/10.1177/02783649241273668 (2024).
  54. Brohan, A. et al. RT-1: robotics transformer for real-world control at scale. In Proc. Robotics: Science and Systems https://doi.org/10.15607/RSS.2023.XIX.025 (RSS, 2023).
  55. Brohan, A. et al. RT-2: vision-language-action models transfer web knowledge to robotic control. In Proc. 7th Conference on Robot Learning 2165–2183 (PMLR, 2023).
  56. Nair, S., Rajeswaran, A., Kumar, V., Finn, C. & Gupta, A. R3M: a universal visual representation for robot manipulation. In Proc. 6th Conference on Robot Learning 892–909 (PMLR, 2023).
  57. Ma, Y. J. et al. VIP: towards universal visual reward and representation via value-implicit pre-training. In 11th International Conference on Learning Representations https://openreview.net/pdf?id=YJ7o2wetJ2 (ICLR, 2023).
  58. Khazatsky, A. et al. DROID: a large-scale in-the-wild robot manipulation dataset. In Proc. Robotics: Science and Systems https://doi.org/10.15607/RSS.2024.XX.120 (RSS, 2024).
  59. Open X-Embodiment Collaboration. Open X-Embodiment: robotic learning datasets and RT-X models. In 2024 IEEE International Conference on Robotics and Automation (ICRA) 6892–6903 (IEEE, 2024).
  60. Willett, F. R. et al. A high-performance speech neuroprosthesis. Nature 620, 1031–1036 (2023).
  61. Leonard, M. K. et al. Large-scale single-neuron speech sound encoding across the depth of human cortex. Nature 626, 593–602 (2024).
  62. Card, N. S. et al. An accurate and rapidly calibrating speech neuroprosthesis. N. Engl. J. Med. 391, 609–618 (2024).
  63. Sato, M. et al. Scaling law in neural data: non-invasive speech decoding with 175 hours of EEG data. Preprint at https://arxiv.org/abs/2407.07595 (2024).
  64. Kaifosh, P., Reardon, T. R. & CTRL-labs at Reality Labs. A generic non-invasive neuromotor interface for human–computer interaction. Nature https://doi.org/10.1038/s41586-025-09255-w (2025).
  65. Zeng, H. et al. Semi-autonomous robotic arm reaching with hybrid gaze-brain machine interface. Front. Neurorobot. 13, 111 (2019).
  66. Shafti, A., Orlov, P. & Faisal, A. A. Gaze-based, context-aware robotic system for assisted reaching and grasping. In 2019 International Conference on Robotics and Automation 863–869 (IEEE, 2019).
  67. Argall, B. D. Autonomy in rehabilitation robotics: an intersection. Annu. Rev. Control Robot. Auton. Syst. 1, 441–463 (2018).
  68. Nuyujukian, P. et al. Monkey models for brain-machine interfaces: the need for maintaining diversity. In Proc. 33rd Annual Conference of the IEEE EMBS 1301–1305 (IEEE, 2011).
  69. Suminski, A. J., Tkach, D. C., Fagg, A. H. & Hatsopoulos, N. G. Incorporating feedback from multiple sensory modalities enhances brain-machine interface control. J. Neurosci. 30, 16777–16787 (2010).
  70. Kaufman, M. T. et al. The largest response component in motor cortex reflects movement timing but not movement type. eNeuro 3, ENEURO.0085-16.2016 (2016).
  71. Dangi, S. et al. Continuous closed-loop decoder adaptation with a recursive maximum likelihood algorithm allows for rapid performance acquisition in brain-machine interfaces. Neural Comput. 26, 1811–1839 (2014).
  72. Fitts, P. M. The information capacity of the human motor system in controlling the amplitude of movement. J. Exp. Psychol. 47, 381 (1954).
  73. Gramfort, A. et al. MNE software for processing MEG and EEG data. NeuroImage 86, 446–460 (2014).
  74. Lee, J. Y. et al. Data: brain–computer interface control with artificial intelligence copilots. Zenodo https://doi.org/10.5281/zenodo.15165133 (2025).
  75. Lee, J. Y. et al. kaolab-research/bci_raspy. Zenodo https://doi.org/10.5281/zenodo.15164641 (2025).
  76. Lee, J. Y. et al. kaolab-research/bci_plot. Zenodo https://doi.org/10.5281/zenodo.15164643 (2025).


Acknowledgements

We thank J. Chan, R. Yu and V. DaSilva for participating in related pilot experiments and analyses. This work was supported by NIH DP2NS122037, NIH R01NS121097 and the UCLA-Amazon Science Hub (all to J.C.K.).

Author information

Authors and Affiliations

Authors

Contributions

J.Y.L., S.L., B.M. and J.C.K. conceived of the study. J.Y.L., S.L. and B.M. wrote code for the real-time system, training decoders and training copilots. J.Y.L., S.L., A.M., X.Y. and B.G. conducted the experiments. J.Y.L., S.L., A.M., X.Y., B.M. and B.G. performed the analyses. J.Y.L., B.M., C.K., M.Q. and C.X. wrote code to operate the robotic arm. J.Y.L., S.L., A.M., X.Y. and J.C.K. generated the figures and wrote the paper. All authors participated in the paper review. J.C.K. was involved with and oversaw all aspects of the work.

Corresponding authors

Correspondence to Johannes Y. Lee, Abhishek Mishra or Jonathan C. Kao.

Ethics declarations

Competing interests

J.C.K. is the inventor of intellectual property owned by Stanford University that has been licensed to Blackrock Neurotech and Neuralink Corp. J.Y.L., S.L., B.M. and J.C.K. have a provisional patent application related to AI-BCI owned by the Regents of the University of California. J.C.K. is a co-founder of Luke Health, is on its Board of Directors and has a financial interest in it. The other authors declare no competing interests.

Peer review

Peer review information

Nature Machine Intelligence thanks Matthew Perich and the other, anonymous, reviewer(s) for their contribution to the peer review of this work.

Additional information

Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Extended data

Extended Data Fig. 1 Within-day same-class CNN training confusion matrices.

a, Confusion matrix for decoding the left class in the first half (0–10 min) versus the second half (10–20 min) of the open-loop session on test data. Although the motor intent is the same, the two halves of the session can be readily discriminated from EEG activity, indicating decodable shifts in EEG activity over time. b, Same as a, but for the right class.

Source data

Extended Data Fig. 2 CNN training confusion matrices.

Confusion matrices for the seed decoder from the open-loop session that was used in the decorrelated session, for a–c, healthy participants (H1, H2, H4) and d, the SCI participant (S2). All results are computed on test-set data.

Source data

Extended Data Fig. 3 Eye gaze position was not reliably decoded from decoder hidden state replayed for the 1D decorrelated closed-loop task.

Prediction of eye-tracker gaze location from the decoder hidden state during the decorrelated closed-loop task for a range of temporal offsets. Data for each session were split into 5 folds without shuffling. We trained linear regression models to predict gaze location from the decoder hidden state, with one model trained on the data for each motor prompt (right, left, up, down), each offset and each data split. Lines show the maximum average coefficient of determination, obtained by averaging each value across all 5 folds and then taking the maximum over all prompts. A delay of zero means that the gaze data and hidden state are aligned as recorded, and a positive delay means that the gaze data at a given timestamp are aligned with the hidden state recorded at a future timestamp. Black lines show the maximum average values, with individual averages shown as lighter coloured lines. In general, the coefficients of determination were negative, meaning that these regression models could not predict the eye gaze data better than the mean on validation data. This provides strong evidence that the CNN hidden state did not contain decodable eye movement signals.
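For readers who want to reproduce the spirit of this control analysis, the following is a minimal sketch assuming a T × D array `hidden` of CNN hidden states and a T × 2 array `gaze` of eye-tracker positions sampled on the same clock; it uses 5 contiguous (non-shuffled) folds, fits one linear regression per fold and reports the mean held-out R² at a given temporal offset. The array names and the offset sweep are assumptions; the exact preprocessing and per-prompt splits are described above and in Methods.

```python
# Minimal sketch (assumptions: `hidden` is a T x D array of CNN hidden states
# and `gaze` a T x 2 array of eye-tracker positions on the same clock).
# Mirrors the analysis above: 5 contiguous folds without shuffling, one
# linear regression per fold, mean held-out R^2 at a given temporal offset.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score
from sklearn.model_selection import KFold

def mean_r2_at_delay(hidden: np.ndarray, gaze: np.ndarray, delay: int) -> float:
    # Positive delay: gaze at time t is paired with the hidden state at t + delay.
    if delay >= 0:
        X, y = hidden[delay:], gaze[:gaze.shape[0] - delay]
    else:
        X, y = hidden[:hidden.shape[0] + delay], gaze[-delay:]
    scores = []
    for train_idx, test_idx in KFold(n_splits=5, shuffle=False).split(X):
        model = LinearRegression().fit(X[train_idx], y[train_idx])
        scores.append(r2_score(y[test_idx], model.predict(X[test_idx])))
    return float(np.mean(scores))

# Hypothetical usage: sweep offsets and keep the maximum mean R^2.
# best = max(mean_r2_at_delay(hidden, gaze, d) for d in range(-10, 11))
```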

Source data

Extended Data Fig. 4 Fitts ITR for CNN-KF control.

a, Fitts ITR computed using acquisition time minus hold time (similar to Gilja et al., 2015), Fitts ITR computed using first touch time (as in Silversmith et al., 2021), and first touch time across days. Each circle represents one 8-trial block. b, Same as a, but for healthy participants.
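As a point of reference, the sketch below computes one common Fitts-law formulation of ITR used in BCI cursor studies: an index of difficulty log2(D/W + 1) in bits divided by the movement time. The specific distance, width and timing values here are illustrative assumptions; the paper's Methods define the exact variant reported in this figure.

```python
# Minimal sketch of a common Fitts-law ITR formulation (bits per second);
# the distance, width and timing definitions here are illustrative
# assumptions, not necessarily the exact variant used in this figure.
import numpy as np

def fitts_itr(distance: float, target_width: float, time_s: float) -> float:
    """Index of difficulty log2(D / W + 1), divided by the movement time."""
    index_of_difficulty = np.log2(distance / target_width + 1.0)
    return index_of_difficulty / time_s

# Example: a target 0.4 workspace units away, 0.1 units wide, acquired in 1.5 s
# (for example, acquisition time minus hold time) gives about 1.55 bits per second.
print(fitts_itr(distance=0.4, target_width=0.1, time_s=1.5))
```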

Source data

Extended Data Fig. 5 Cursor copilot learns the center-out 8 task during synthetic training.

a, PPO policy loss, value function loss and rewards over training for the cursor copilot. The copilot increases its reward over training, as well as b, in an evaluation test environment, in which the copilot was frozen every 8192 training steps and evaluated on the center-out-and-back task. c, Success percentage, d, trial time, e, target hit rate and f, Fitts ITR on the center-out 8 task over the course of training. Panels c–f show cumulative results from 8 copilots trained under identical hyperparameter settings. The dark gray line shows the mean and the light gray band shows the standard error of the mean (s.e.m.). These results demonstrate that the copilot learns to use the surrogate KF signals to perform the center-out 8 task. Note that these numbers are generally lower (for example, the success percentage does not reach 100%) because the copilot task was more challenging, with a 2-second target hold time to encourage goal-acquisition behavior (see Methods).
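As an illustration of this kind of training setup (not the authors' code), a PPO policy with a frozen evaluation every 8192 steps could be sketched with Stable-Baselines3 as follows; the environment used here is a standard Gymnasium task standing in for the paper's synthetic center-out-and-back copilot environment, and the hyperparameters are placeholders.

```python
# Illustrative sketch (not the authors' training code): PPO with periodic
# frozen evaluation every 8192 steps via Stable-Baselines3. "Pendulum-v1"
# is a stand-in environment; the paper's copilot environment is custom.
import gymnasium as gym
from stable_baselines3 import PPO
from stable_baselines3.common.callbacks import EvalCallback

train_env = gym.make("Pendulum-v1")  # stand-in for the synthetic copilot task
eval_env = gym.make("Pendulum-v1")

eval_callback = EvalCallback(
    eval_env,
    eval_freq=8192,        # evaluate a frozen copy of the policy every 8192 steps
    n_eval_episodes=8,     # for example, one episode per center-out target
    deterministic=True,
)

model = PPO("MlpPolicy", train_env, verbose=1)
model.learn(total_timesteps=1_000_000, callback=eval_callback)
model.save("cursor_copilot_ppo")
```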

Source data

Extended Data Table 1 Detailed hyperparameters of EEGNet
Extended Data Table 2 Within-day rank-sum comparisons of Fitts ITR show limited differences across sessions (days)
Extended Data Table 3 Comparison of Fitts ITR with prior work
Extended Data Table 4 Detailed hyperparameters of RL cursor copilot

Supplementary information

Supplementary Information

Supplementary Figs. 1 and 2, Tables 1 and 2 and Methods.

Reporting Summary

Supplementary Video 1

Paralysed participant S2 controls a robotic arm with EEG to perform a sequential pick-and-place task with the help of an AI copilot.

Source data

Source Data Fig. 3

Statistical source data.

Source Data Fig. 4

Statistical source data.

Source Data Fig. 5

Statistical source data.

Source Data Extended Data Fig. 1

Statistical source data.

Source Data Extended Data Fig. 2

Statistical source data.

Source Data Extended Data Fig. 3

Statistical source data.

Source Data Extended Data Fig. 4

Statistical source data.

Source Data Extended Data Fig. 5

Statistical source data.

Source Data Extended Data Table 2

Statistical source data.

Source Data Extended Data Table 3

Statistical source data.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Lee, J.Y., Lee, S., Mishra, A. et al. Brain–computer interface control with artificial intelligence copilots. Nat Mach Intell 7, 1510–1523 (2025). https://doi.org/10.1038/s42256-025-01090-y

