Collection 

Hyperdimensional Computing and Vector Symbolic Architectures

Submission status: Open
Submission deadline:

This Collection supports and amplifies research related to SDG 9 - Industry, Innovation and Infrastructure

Hyperdimensional Computing (HD), or Vector Symbolic Architectures (VSA), refers to a family of computational approaches that combine high-dimensional vector representations with a small set of algebraic operations to solve computational problems. The ideas underlying HD/VSA emerged in the late 1980s and early 1990s, driven by the pioneering work of P. Kanerva, E. Kussul, E. Mizraji, T. Plate, D. Rachkovskij, and P. Smolensky. Among these efforts, T. Plate’s Holographic Reduced Representations were particularly influential within the machine learning community. HD/VSA is a growing area lying at the intersection of several disciplines, including computer science, artificial intelligence and machine learning (AI/ML), mathematics, electrical engineering, neuroscience, and cognitive science.

Departing from conventional scalar-number-based computing, HD/VSA defines computations over very large populations of neurons, represented as distributed, high-dimensional vectors. These representations, together with algebraic operations, enable computing in superposition and can provide elegant solutions to long-standing challenges in AI/ML, such as the variable binding problem: how to represent and manipulate relationships between variables in a flexible yet precise manner. The inherent robustness of distributed vector representations makes HD/VSA compatible with emerging types of stochastic hardware. Its algebraic operations make it possible to represent and manipulate fundamental data structures in a way that is both compact and scalable. This makes HD/VSA well-suited for reducing the computational cost of existing AI/ML systems and for use in settings such as resource-constrained devices or mobile robots, where traditional solutions might be too costly or power-hungry. Additionally, HD/VSA complements and integrates well with neural networks. Recent work in the area has shown promising results across various domains, suggesting that this computational paradigm could play a key role in the development of next-generation AI/ML systems.
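The variable binding idea can be made concrete with a minimal sketch, here assuming a simple bipolar, MAP-style (Multiply-Add-Permute) VSA in which binding is element-wise multiplication, superposition is an element-wise majority over summed vectors, and similarity is a normalized dot product; the dimensionality and symbol names below are illustrative only:

```python
import numpy as np

rng = np.random.default_rng(0)
D = 10_000  # illustrative dimensionality; HD/VSA typically uses thousands of dimensions

def random_hv():
    """Random bipolar hypervector; random vectors are nearly orthogonal in high dimensions."""
    return rng.choice([-1, 1], size=D)

def bind(a, b):
    """Binding via element-wise multiplication (self-inverse: bind(bind(a, b), b) == a)."""
    return a * b

def bundle(*vs):
    """Superposition via element-wise majority over the summed vectors."""
    return np.sign(np.sum(vs, axis=0))

def sim(a, b):
    """Normalized dot product; ~1 for identical vectors, ~0 for unrelated ones."""
    return float(a @ b) / D

# Encode the record {colour: red, shape: circle, size: small} as a single hypervector
# by binding each variable (role) to its value (filler) and superposing the bound pairs.
colour, shape, size = random_hv(), random_hv(), random_hv()
red, circle, small = random_hv(), random_hv(), random_hv()
record = bundle(bind(colour, red), bind(shape, circle), bind(size, small))

# Query the record: unbinding with `colour` recovers a noisy copy of `red`,
# which is identified by comparing against the known value vectors.
noisy_value = bind(record, colour)
print(sim(noisy_value, red), sim(noisy_value, circle))  # clearly positive vs. near zero
```

The same few operations compose into representations of sequences, sets, and other data structures held in a single vector, which is what underlies the compactness and scalability mentioned above.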

This collection invites contributions reporting both theoretical and practical advances in HD/VSA, particularly those aligned with the broad theme of unconventional computing. Topics of special interest include:

  1. Intersections between HD/VSA and other unconventional computing paradigms, such as coupled oscillations, cellular automata, reservoir computing, neuromorphic computing, dynamic neural fields, and others.
  2. Novel instances of the HD/VSA family or their adaptations leading to desired computational properties.
  3. Novel algorithms that leverage the core principles of HD/VSA and potentially capture interesting properties of neural circuits in biological systems.
  4. Hardware implementations of HD/VSA systems and algorithms, particularly those using emerging computing hardware.
  5. Hybrid systems combining HD/VSA with other AI/ML approaches, including deep learning, kernel methods, neuro-symbolic AI, and others.
  6. Innovative applications of HD/VSA, supported by rigorous evaluation, benchmarking (where applicable), and comparison to state-of-the-art methods.

Editors

Abbas Rahimi, PhD, IBM Research Zurich, Switzerland

Abbas Rahimi received the BS degree in computer engineering from the University of Tehran in 2010 and the MS and PhD degrees in computer science and engineering from the University of California San Diego in 2015, and was subsequently a postdoctoral fellow at the University of California, Berkeley and at ETH Zürich. In 2020, he joined the IBM Research Zurich laboratory as a Research Staff Member. His main research focuses on sample efficiency, enabling machine learning and reasoning to generalize reliably from as little data as possible. He is also interested in co-designing algorithms alongside emerging hardware technologies, with a strong emphasis on reducing computational complexity and energy consumption by exploiting approximation opportunities across computation, communication, sensing, and storage systems. He received the 2015 Outstanding Dissertation Award in the area of "New Directions in Embedded System Design and Embedded Software" from the European Design and Automation Association, and the ETH Zürich Postdoctoral Fellowship in 2017. He was a co-recipient of Best Paper Nominations at DAC (2013) and DATE (2019), and of Best Paper Awards at BICT (2017), BioCAS (2018), and IBM's Pat Goldberg Memorial (2020).

Peer Neubert, PhD, University of Koblenz, Germany

Peer Neubert is Professor of Robot Vision at the University of Koblenz, where he leads the Intelligent Autonomous Systems group. Prior to this, he held academic positions at Chemnitz University of Technology, where he also earned his degree in computer science with a specialization in artificial intelligence. He conducted research at LAAS-CNRS in Toulouse (France) and Numenta, Inc. (USA), and received his PhD for work on machine vision and learning for camera-based localization in changing environments. His research centers on sensor data processing and interpretation, autonomous systems, and applied methods of artificial intelligence. He employs methods from the fields of algorithmic and probabilistic sensor data fusion, machine learning, and vector symbolic AI. He has particular experience in place recognition in challenging and changing environments, hand-crafted and deep-learned visual features, mobile robot navigation, biologically inspired perception and navigation approaches, and hyperdimensional computing.

Denis Kleyko, PhD, Örebro University, Sweden

Denis Kleyko is an Associate Professor in the Department of Computer Science at Örebro University. He received his PhD in dependable communication and computation systems from Luleå University of Technology in 2018. Following his doctoral studies, he was awarded the Marie Skłodowska-Curie Global Fellowship. As part of the fellowship, he was a postdoctoral researcher at the Redwood Center for Theoretical Neuroscience at the University of California, Berkeley (2020-2022) and subsequently joined the Intelligent Systems Laboratory at the RISE Research Institutes of Sweden (2022-2023). His primary research focuses on hyperdimensional computing, also known as vector symbolic architectures, a computational framework that exploits randomness for knowledge representation, learning, reasoning, and computation. He seeks to understand how this framework could be connected to emerging computing hardware and how it could enable the design of novel methods for neural computation. Broadly, his research interests also include a range of neuro- and physics-inspired information processing methods, such as reservoir computing, associative memories, prototype-based learning, cellular automata, sparse coding, kernel-based methods, Ising machines, sketching algorithms, and similarity-preserving embeddings.

Edward Raff, PhD, University of Maryland, Baltimore County, USA & Booz Allen Hamilton Inc., USA

Edward Raff is the Director of Emerging AI at Booz Allen Hamilton, where he leads the firm's AI research team, and a Visiting Professor at the University of Maryland, Baltimore County. A Senior Member of the IEEE and the ACM, Dr. Raff conducts highly interdisciplinary work spanning multiple domains and technologies, including cybersecurity, healthcare, computer vision, natural language processing, adversarial learning, privacy, and neuro-symbolic methods. His record includes over 140 published articles, six best paper awards, and two books.