
  • Article
  • Open access
  • Published: 03 January 2026

Brain-aligning of semantic vectors improves neural decoding of visual stimuli

  • Shirin Vafaei (affiliation 1), ORCID: orcid.org/0009-0007-6925-9837
  • Ryohei Fukuma (affiliations 1,2), ORCID: orcid.org/0000-0002-3316-9561
  • Takufumi Yanagisawa (affiliations 1,2,3), ORCID: orcid.org/0000-0002-2057-0612
  • Huixiang Yang (affiliation 2)
  • Satoru Oshino (affiliations 1,3), ORCID: orcid.org/0000-0002-4248-2322
  • Naoki Tani (affiliations 1,3), ORCID: orcid.org/0000-0003-2169-135X
  • Hui Ming Khoo (affiliations 1,3), ORCID: orcid.org/0000-0002-4039-0520
  • Hidenori Sugano (affiliation 4)
  • Yasushi Iimura (affiliation 4)
  • Hiroharu Suzuki (affiliation 4)
  • Madoka Nakajima (affiliation 4), ORCID: orcid.org/0000-0002-8496-2973
  • Kentaro Tamura (affiliations 5,6)
  • Haruhiko Kishima (affiliation 1), ORCID: orcid.org/0000-0002-9041-2337

Communications Biology (2026)


We are providing an unedited version of this manuscript to give early access to its findings. Before final publication, the manuscript will undergo further editing. Please note there may be errors present which affect the content, and all legal disclaimers apply.

Subjects

  • Neural decoding
  • Object vision

Abstract

The development of algorithms that accurately decode neural information has long been a central goal of neuroscience. Brain decoding typically involves training machine learning models to map neural data onto a pre-established vector representation of stimulus features, usually derived from image- and/or text-based feature spaces. However, the intrinsic characteristics of these vectors may differ fundamentally from those encoded by the brain, limiting how accurately decoders can learn this mapping. To address this issue, we propose a framework, called brain-aligning of semantic vectors, that fine-tunes pretrained feature vectors to better match the structure of the neural representations of visual stimuli in the brain. We trained this model on functional magnetic resonance imaging (fMRI) data and then performed zero-shot brain decoding on fMRI, magnetoencephalography (MEG), and electrocorticography (ECoG) data. When decoding accuracy was measured as the correlation coefficient between true and predicted vectors, fMRI-based brain-aligned vectors improved performance across all three neuroimaging datasets. When accuracy was instead measured by stimulus identification, it increased for specific category types; these improvements varied with the original vector space used for brain-aligning but were consistent across all neuroimaging modalities.
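The decoding pipeline summarized above, a model mapping brain activity onto a semantic vector space and then evaluated by correlation-based zero-shot identification, can be sketched with synthetic data. This is a minimal illustration, not the authors' implementation: the dimensions, the noise level, the ridge penalty, and the linear encoding model are all assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions (not from the paper): 200 training stimuli,
# 500 voxels, 50-dimensional semantic vectors, 20 held-out candidates.
n_train, n_vox, n_dim, n_test = 200, 500, 50, 20

# Ground-truth semantic vectors and a synthetic linear voxel encoding.
V_train = rng.standard_normal((n_train, n_dim))
V_test = rng.standard_normal((n_test, n_dim))
W = rng.standard_normal((n_dim, n_vox))
X_train = V_train @ W + 0.5 * rng.standard_normal((n_train, n_vox))
X_test = V_test @ W + 0.5 * rng.standard_normal((n_test, n_vox))

# Ridge-regression decoder: map brain activity onto the vector space.
lam = 10.0
B = np.linalg.solve(X_train.T @ X_train + lam * np.eye(n_vox),
                    X_train.T @ V_train)
V_pred = X_test @ B

def corr(a, b):
    """Pearson correlation between two 1-D vectors."""
    a = a - a.mean()
    b = b - b.mean()
    return (a @ b) / (np.linalg.norm(a) * np.linalg.norm(b))

# Zero-shot identification: a test stimulus counts as identified when its
# predicted vector correlates most strongly with its own true vector
# among all held-out candidates.
C = np.array([[corr(V_pred[i], V_test[j]) for j in range(n_test)]
              for i in range(n_test)])
accuracy = (C.argmax(axis=1) == np.arange(n_test)).mean()
print(f"zero-shot identification accuracy: {accuracy:.2f}")
```

With a candidate set of this size, chance-level identification is 1/20 = 5%, so accuracy well above that indicates the decoder has learned a usable mapping; the brain-aligning idea in the abstract amounts to adjusting the target vectors themselves so that this mapping is easier to learn.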


Data availability

The datasets supporting the findings of this study include fMRI, MEG, and ECoG data. The fMRI dataset used in this study is available at ref. 15. Source data underlying the figures are available on Figshare under the identifier https://doi.org/10.6084/m9.figshare.30845336.

Code availability

The code used for data analysis in this study is available on our repository (https://github.com/yanagisawa-lab). For any inquiries, please contact the corresponding author.

References

  1. Stavisky, S. D. & Wairagkar, M. Listening in to perceived speech with contrastive learning. Nat. Mach. Intell. https://doi.org/10.1038/s42256-023-00742-1 (2023).
  2. Lebedev, M. A. & Nicolelis, M. A. L. Brain–machine interfaces: past, present and future. Trends Neurosci. 29, 536–546 (2006).
  3. Willett, F. R. et al. A high-performance speech neuroprosthesis. Nature 620, 1031–1036 (2023).
  4. Willsey, M. S. et al. Real-time brain-machine interface in non-human primates achieves high-velocity prosthetic finger movements using a shallow feedforward neural network decoder. Nat. Commun. 13, 6899 (2022).
  5. Haynes, J.-D. & Rees, G. Decoding mental states from brain activity in humans. Nat. Rev. Neurosci. 7, 523–534 (2006).
  6. Naselaris, T., Kay, K. N., Nishimoto, S. & Gallant, J. L. Encoding and decoding in fMRI. NeuroImage 56, 400–410 (2011).
  7. Haxby, J. V. et al. Distributed and overlapping representations of faces and objects in ventral temporal cortex. Science 293, 2425–2430 (2001).
  8. Yamins, D. L. K. et al. Performance-optimized hierarchical models predict neural responses in higher visual cortex. Proc. Natl. Acad. Sci. USA 111, 8619–8624 (2014).
  9. Kellis, S. et al. Decoding spoken words using local field potentials recorded from the cortical surface. J. Neural Eng. 7, 056007 (2010).
  10. Brouwer, G. J. & Heeger, D. J. Decoding and reconstructing color from responses in human visual cortex. J. Neurosci. 29, 13992–14003 (2009).
  11. Sitaram, R. et al. Closed-loop brain training: the science of neurofeedback. Nat. Rev. Neurosci. 18, 86–100 (2017).
  12. Fukuma, R. et al. Voluntary control of semantic neural representations by imagery with conflicting visual stimulation. Commun. Biol. 5, 214 (2022).
  13. Chaudhary, U. et al. Spelling interface using intracortical signals in a completely locked-in patient enabled via auditory neurofeedback training. Nat. Commun. 13, 1236 (2022).
  14. Cortese, A., Amano, K., Koizumi, A., Kawato, M. & Lau, H. Multivoxel neurofeedback selectively modulates confidence without changing perceptual performance. Nat. Commun. 7, 13669 (2016).
  15. Horikawa, T. & Kamitani, Y. Generic decoding of seen and imagined objects using hierarchical visual features. Nat. Commun. 8, 15037 (2017).
  16. Haynes, J.-D. & Rees, G. Predicting the orientation of invisible stimuli from activity in human primary visual cortex. Nat. Neurosci. 8, 686–691 (2005).
  17. Kamitani, Y. & Tong, F. Decoding the visual and subjective contents of the human brain. Nat. Neurosci. 8, 679–685 (2005).
  18. Thirion, B. et al. Inverse retinotopy: inferring the visual content of images from brain activation patterns. NeuroImage 33, 1104–1116 (2006).
  19. Cox, D. D. & Savoy, R. L. Functional magnetic resonance imaging (fMRI) "brain reading": detecting and classifying distributed patterns of fMRI activity in human visual cortex. NeuroImage 19, 261–270 (2003).
  20. Nakai, T., Koide-Majima, N. & Nishimoto, S. Correspondence of categorical and feature-based representations of music in the human brain. Brain Behav. 11, e01936 (2021).
  21. Koide-Majima, N., Nishimoto, S. & Majima, K. Mental image reconstruction from human brain activity: neural decoding of mental imagery via deep neural network-based Bayesian estimation. Neural Netw. 170, 349–363 (2024).
  22. Miyawaki, Y. et al. Visual image reconstruction from human brain activity using a combination of multiscale local image decoders. Neuron 60, 915–929 (2008).
  23. Shen, G., Dwivedi, K., Majima, K., Horikawa, T. & Kamitani, Y. End-to-end deep image reconstruction from human brain activity. Front. Comput. Neurosci. 13, 21 (2019).
  24. Shen, G., Horikawa, T., Majima, K. & Kamitani, Y. Deep image reconstruction from human brain activity. PLoS Comput. Biol. 15, e1006633 (2019).
  25. Liu, Y., Ma, Y., Zhou, W., Zhu, G. & Zheng, N. BrainCLIP: bridging brain and visual-linguistic representation via CLIP for generic natural visual stimulus decoding. Preprint at https://doi.org/10.48550/arXiv.2302.1297 (2023).
  26. Radford, A. et al. Learning transferable visual models from natural language supervision. In Proc. International Conference on Machine Learning 8748–8763 (PMLR, 2021).
  27. Pereira, F. et al. Toward a universal decoder of linguistic meaning from brain activation. Nat. Commun. 9, 963 (2018).
  28. Mikolov, T., Chen, K., Corrado, G. & Dean, J. Efficient estimation of word representations in vector space. Preprint at https://arxiv.org/abs/1301.3781 (2013).
  29. Pennington, J., Socher, R. & Manning, C. GloVe: global vectors for word representation. In Proc. 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP) 1532–1543 (Association for Computational Linguistics, 2014).
  30. Shirakawa, K. et al. Spurious reconstruction from brain activity. Neural Netw. 190, 107515 (2025).
  31. Federer, C., Xu, H., Fyshe, A. & Zylberberg, J. Improved object recognition using neural networks trained to mimic the brain's statistical properties. Neural Netw. 131, 103–114 (2020).
  32. Muttenthaler, L. et al. Improving neural network representations using human similarity judgments. Adv. Neural Inf. Process. Syst. 36, 50978–51007 (2023).
  33. Schneider, S., Lee, J. H. & Mathis, M. W. Learnable latent embeddings for joint behavioural and neural analysis. Nature 617, 360–368 (2023).
  34. Kay, K. N., Naselaris, T., Prenger, R. J. & Gallant, J. L. Identifying natural images from human brain activity. Nature 452, 352–355 (2008).
  35. Ogawa, S., Lee, T.-M., Kay, A. R. & Tank, D. W. Brain magnetic resonance imaging with contrast dependent on blood oxygenation. Proc. Natl. Acad. Sci. USA 87, 9868–9872 (1990).
  36. Penfield, W. & Jasper, H. Epilepsy and the Functional Anatomy of the Human Brain (Little, Brown & Co., 1954).
  37. Cohen, D. Magnetoencephalography: evidence of magnetic fields produced by alpha-rhythm currents. Science 161, 784–786 (1968).
  38. Deng, J. et al. ImageNet: a large-scale hierarchical image database. In 2009 IEEE Conference on Computer Vision and Pattern Recognition 248–255 (IEEE, 2009). https://doi.org/10.1109/CVPR.2009.5206848
  39. Kriegeskorte, N., Mur, M. & Bandettini, P. Representational similarity analysis—connecting the branches of systems neuroscience. Front. Syst. Neurosci. 2, 4 (2008).
  40. Kourtzi, Z. & Kanwisher, N. Cortical regions involved in perceiving object shape. J. Neurosci. 20, 3310–3318 (2000).
  41. Kanwisher, N., McDermott, J. & Chun, M. M. The fusiform face area: a module in human extrastriate cortex specialized for face perception. J. Neurosci. 17, 4302–4311 (1997).
  42. Epstein, R. & Kanwisher, N. A cortical representation of the local visual environment. Nature 392, 598–601 (1998).
  43. Gifford, A. T., Jastrzębowska, M. A., Singer, J. J. D. & Cichy, R. M. In silico discovery of representational relationships across visual cortex. Nat. Hum. Behav. https://doi.org/10.1038/s41562-025-02252-z (2025).
  44. Kobatake, E. & Tanaka, K. Neuronal selectivities to complex object features in the ventral visual pathway of the macaque cerebral cortex. J. Neurophysiol. 71, 856–867 (1994).
  45. Binder, J. R. et al. Toward a brain-based componential semantic representation. Cogn. Neuropsychol. 33, 130–174 (2016).
  46. Chersoni, E., Santus, E., Huang, C.-R. & Lenci, A. Decoding word embeddings with brain-based semantic features. Comput. Linguist. 47, 663–698 (2021).
  47. Li, Y., Yang, H. & Gu, S. Enhancing neural encoding models for naturalistic perception with a multi-level integration of deep neural networks and cortical networks. Sci. Bull. https://doi.org/10.1016/j.scib.2024.02.035 (2024).
  48. Haxby, J. V. et al. A common, high-dimensional model of the representational space in human ventral temporal cortex. Neuron 72, 404–416 (2011).
  49. Guntupalli, J. S. et al. A model of representational spaces in human cortex. Cereb. Cortex 26, 2919–2934 (2016).
  50. Cichy, R. M. & Pantazis, D. Multivariate pattern analysis of MEG and EEG: a comparison of representational structure in time and space. NeuroImage 158, 441–454 (2017).
  51. Salmela, V., Salo, E., Salmi, J. & Alho, K. Spatiotemporal dynamics of attention networks revealed by representational similarity analysis of EEG and fMRI. Cereb. Cortex 28, 549–560 (2018).
  52. Sereno, M. I. et al. Borders of multiple visual areas in humans revealed by functional magnetic resonance imaging. Science 268, 889–893 (1995).
  53. Tadel, F., Baillet, S., Mosher, J. C., Pantazis, D. & Leahy, R. M. Brainstorm: a user-friendly application for MEG/EEG analysis. Comput. Intell. Neurosci. 2011, 879716 (2011).
  54. Yoshioka, T. et al. Evaluation of hierarchical Bayesian method through retinotopic brain activities reconstruction from fMRI and MEG signals. NeuroImage 42, 1397–1413 (2008).
  55. Glasser, M. F. et al. A multi-modal parcellation of human cerebral cortex. Nature 536, 171–178 (2016).
  56. Dale, A. M., Fischl, B. & Sereno, M. I. Cortical surface-based analysis: I. Segmentation and surface reconstruction. NeuroImage 9, 179–194 (1999).
  57. Papademetris, X. et al. BioImage Suite: an integrated medical image analysis suite: an update. Insight J. 2006, 209 (2006).
  58. Groppe, D. M. et al. iELVis: an open source MATLAB toolbox for localizing and visualizing human intracranial electrode data. J. Neurosci. Methods 281, 40–48 (2017).
  59. Fukuma, R. et al. Image retrieval based on closed-loop visual–semantic neural decoding. Preprint at https://doi.org/10.1101/2024.08.05.606113 (2024).


Acknowledgements

We acknowledge the use of open-source code from the Kamitani Lab. Specifically, we used the Brain Decoding Toolbox (BDPy; https://github.com/KamitaniLab/bdpy) for neuroimaging data processing and analysis, and adapted decoding algorithms from the Generic Object Decoding repository (https://github.com/KamitaniLab/GenericObjectDecoding; Horikawa & Kamitani, 2017). We thank the Kamitani Lab for making these resources publicly available. We also thank all the subjects for their participation. This research was supported by the Japan Science and Technology Agency (JST) Moonshot R&D (JPMJMS2012), the JST Core Research for Evolutional Science and Technology (CREST) (JPMJCR18A5), the JST AIP Acceleration Research (JPMJCR24U2), K Program (JPMJKP25Y7), and the Japan Society for the Promotion of Science (JSPS) Grants-in-Aid for Scientific Research (KAKENHI) (JP26560467 and JP20H05705).

Author information

Authors and Affiliations

  1. Department of Neurosurgery, Graduate School of Medicine, The University of Osaka, Suita, Japan

    Shirin Vafaei, Ryohei Fukuma, Takufumi Yanagisawa, Satoru Oshino, Naoki Tani, Hui Ming Khoo & Haruhiko Kishima

  2. Department of Neuroinformatics, The University of Osaka Graduate School of Medicine, Suita, Japan

    Ryohei Fukuma, Takufumi Yanagisawa & Huixiang Yang

  3. Epilepsy Center, The University of Osaka Hospital, Suita, Japan

    Takufumi Yanagisawa, Satoru Oshino, Naoki Tani & Hui Ming Khoo

  4. Department of Neurosurgery, Juntendo University, Tokyo, Japan

    Hidenori Sugano, Yasushi Iimura, Hiroharu Suzuki & Madoka Nakajima

  5. Department of Neurosurgery, Nara Medical University, Kashihara, Japan

    Kentaro Tamura

  6. Department of Neurosurgery, National Hospital Organization, Nara Medical Center, Nara, Japan

    Kentaro Tamura


Contributions

S.V. and T.Y. conceptualized the project. S.V. developed the theory. S.V., R.F., and T.Y. designed the methodology. S.V. performed the analysis and investigation. R.F. and H.Y. conducted the MEG and ECoG experiments. S.V. performed data preprocessing and curation. S.V. wrote the original draft and created the figures. S.V. and T.Y. edited the final version of the article. S.O., N.T., H.M.K., H.S., Y.I., H.S., M.N., H.K., and K.T. performed the neurosurgeries for the ECoG experiments.

Corresponding author

Correspondence to Takufumi Yanagisawa.

Ethics declarations

Competing interests

The authors declare that they have no competing interests.

Peer review

Peer review information

Communications Biology thanks Marijn van Vliet and the other, anonymous, reviewer(s) for their contribution to the peer review of this work. Primary Handling Editors: Shenbing Kuang and Jasmine Pan. A peer review file is available.

Additional information

Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary information

Transparent Peer Review file

Supplementary information

Description of Additional Supplementary Files

Supplementary Data 1–56

Reporting summary

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License, which permits any non-commercial use, sharing, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if you modified the licensed material. You do not have permission under this licence to share adapted material derived from this article or parts of it. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by-nc-nd/4.0/.


About this article


Cite this article

Vafaei, S., Fukuma, R., Yanagisawa, T. et al. Brain-aligning of semantic vectors improves neural decoding of visual stimuli. Commun Biol (2026). https://doi.org/10.1038/s42003-025-09482-x


  • Received: 12 September 2024

  • Accepted: 23 December 2025

  • Published: 03 January 2026

  • DOI: https://doi.org/10.1038/s42003-025-09482-x


