Inspired by the human brain, neuromorphic computing technologies have made important breakthroughs in recent years as alternatives that overcome the power and latency shortfalls of traditional digital computing. An interdisciplinary approach is being taken to address the challenge of creating more efficient and intelligent computing systems that can perform diverse tasks, to design hardware with increasing complexity from the single-device to the system-architecture level, and to develop new theories and brain-inspired algorithms for future computing.
Edge and High-Performance Computing, Bio-Signal Processing and Brain-Computer Interface
We welcome submissions of primary research that fall into any of the above-mentioned categories. All submissions will be subject to the same peer review process and editorial standards as regular Nature Communications, Nature Computational Science, Communications Engineering, Communications Materials, and Communications Physics articles.
A recent study demonstrates the efficiency of quantum-mechanical modeling of material properties by mapping the problem onto neuromorphic device architectures.
Strong barriers remain between neuromorphic engineering and machine learning, especially with regard to recent large language models (LLMs) and transformers. This Comment makes the case that neuromorphic engineering may hold the keys to more efficient inference with transformer-like models.
Physical computing, particularly photonic computing, offers a promising alternative by directly encoding data in physical quantities, enabling efficient probabilistic computing. This Perspective discusses the challenges and opportunities in photonic probabilistic computing and its applications in artificial intelligence.
For commercial success, neuromorphic processors must overcome challenges in programming neuromorphic applications and deploying them at scale. Here, the authors discuss the pathways towards widespread consumer adoption of neuromorphic technology in relation to academia and industry.
Brain-inspired neuromorphic algorithms and systems have shown substantial advances in the efficiency and capabilities of AI applications. In this Perspective, the authors introduce NeuroBench, a benchmark framework for neuromorphic approaches, collaboratively designed by researchers across industry and academia.
By combining several probabilistic AI algorithms, a recent study demonstrates experimentally that the inherent noise and variation in memristor nanodevices can be exploited as features for energy-efficient on-chip learning.
A recent study demonstrates through numerical simulations that implementing large language models based on sparse mixture-of-experts architectures on 3D in-memory computing technologies can substantially reduce energy consumption.
To achieve an advanced neuromorphic computing system with brain-like energy efficiency and generalization capabilities, we propose a hardware–software co-design of in-memory reservoir computing. This co-design integrates a liquid state machine-based encoder with artificial neural network projections on a hybrid analog–digital system, demonstrating zero-shot learning for multimodal event data.
Today’s high-performance computing systems are nearing the ability to simulate the human brain at scale. This raises a new question: going forward, will the bigger challenge be the brain’s size or its complexity?
The newly developing area of NeuroAI, at the intersection of neuroscience and artificial intelligence, has many open challenges, one of which is training the new generation of experts. In this Comment, the authors provide resources and outline training needs and recommendations for junior researchers working across artificial intelligence and neuroscience.
There is still a wide variety of challenges that restrict the rapid growth of neuromorphic algorithmic and application development. Addressing these challenges is essential for the research community to be able to effectively use neuromorphic computers in the future.
One of the ambitions of computational neuroscience is that we will continue to make improvements in the field of artificial intelligence that will be informed by advances in our understanding of how the brains of various species evolved to process information. To that end, here the authors propose an expanded version of the Turing test that involves embodied sensorimotor interactions with the world as a new framework for accelerating progress in artificial intelligence.
The current gap between computing algorithms and neuromorphic hardware to emulate brains is an outstanding bottleneck in developing neural computing technologies. Aimone and Parekh discuss the possibility of bridging this gap using theoretical computing frameworks from a neuroscience perspective.
Learning from human brains to build powerful computers is attractive, yet extremely challenging due to the lack of a guiding computing theory. Jaeger et al. give a perspective on a bottom-up approach to engineer unconventional computing systems, which is fundamentally different to the classical theory based on Turing machines.
Dmitri Strukov (an electrical engineer, University of California at Santa Barbara), Giacomo Indiveri (an electrical engineer, University of Zurich), Julie Grollier (a material physicist, Unite Mixte de Physique CNRS) and Stefano Fusi (a neuroscientist, Columbia University) talked to Nature Communications about the opportunities and challenges in developing brain-inspired computing technologies, namely neuromorphic computing, and advocated effective collaborations crossing multidisciplinary research areas to support this emerging community.
Lockdowns during the pandemic over the last two years forced a critical number of chip-making facilities across the world to shut down, giving rise to chip shortages. Prof. Meng-Fan (Marvin) Chang (National Tsing Hua University, TSMC—Taiwan), Prof. Huaqiang Wu (Tsinghua University—China), Dr. Elisa Vianello (CEA-Leti—France), Dr. Sang Joon Kim (Samsung Electronics—South Korea) and Dr. Mirko Prezioso (Mentium Techn.—US) talked to Nature Communications to better understand whether and to what extent this crisis has impacted the development of in-memory/neuromorphic chips, an emerging technology for future computing.
A grand challenge in robotics is realising intelligent agents capable of autonomous interaction with the environment. In this Perspective, the authors discuss the potential, challenges and future direction of research aimed at demonstrating embodied intelligent robotics via neuromorphic technology.
Among the existing machine learning frameworks, reservoir computing demonstrates fast and low-cost training, and its suitability for implementation in various physical systems. This Comment reports on how aspects of reservoir computing can be applied to classical forecasting methods to accelerate the learning process, and highlights a new approach that makes the hardware implementation of traditional machine learning algorithms practicable in electronic and photonic systems.
Neuroprosthetic devices have recently emerged as promising solutions to restore sensory-motor functions lost due to injury or neurological diseases. In this perspective, Donati and Valle propose to combine neuroprostheses with neuromorphic technologies for designing more natural human-machine interfaces with possible improvements in device performance, acceptability, and embeddability.
Reservoir Computing has shown advantageous performance in signal processing and learning tasks due to compact design and ability for fast training. Here, the authors discuss the parallel progress of mathematical theory, algorithm design and experimental realizations of Reservoir Computers, and identify emerging opportunities as well as existing challenges for their large-scale industrial adoption.
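For readers unfamiliar with the paradigm, the sketch below shows reservoir computing in its simplest software form: a fixed random recurrent reservoir whose states are read out by a trained linear layer. The reservoir size, spectral radius, leak rate, and the sine-prediction task are illustrative assumptions, not settings from any of the works above.

```python
# A minimal echo state network: fixed random reservoir, trained linear readout.
# All hyperparameters and the toy task are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
n_res, spectral_radius, leak = 200, 0.9, 0.3

W_in = rng.uniform(-0.5, 0.5, (n_res, 1))              # input weights (fixed)
W = rng.normal(0, 1, (n_res, n_res))                   # recurrent weights (fixed)
W *= spectral_radius / max(abs(np.linalg.eigvals(W)))  # enforce echo-state property

u = np.sin(0.2 * np.arange(2001))[:, None]             # toy signal
inputs, targets = u[:-1], u[1:]                        # one-step-ahead prediction

states = np.zeros((len(inputs), n_res))
x = np.zeros(n_res)
for t, ut in enumerate(inputs):
    x = (1 - leak) * x + leak * np.tanh(W_in @ ut + W @ x)  # leaky reservoir update
    states[t] = x

# Only the readout is trained, via ridge regression; a washout discards transients.
washout = 100
A, y = states[washout:], targets[washout:]
W_out = np.linalg.solve(A.T @ A + 1e-6 * np.eye(n_res), A.T @ y)
print("training MSE:", np.mean((A @ W_out - y) ** 2))
```

Only `W_out` is trained, which is what makes the approach cheap to train and attractive for the physical implementations discussed above.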
Memristors hold promise for massively-parallel computing at low power. Aguirre et al. provide a comprehensive protocol of the materials and methods for designing memristive artificial neural networks with the detailed working principles of each building block and the tools for performance evaluation.
The study presents a generative spike-based framework to re-establish functional connectivity across pathway-damaged brain regions, enabling biomimetic neural prostheses and closed-loop brain stimulation.
Alex Vicente Sola and authors introduce a new high-speed LiDAR-based imaging modality driven by the time-of-arrival of individually detected photons and reconstructed by a spiking convolutional neural network, allowing the system to run fully asynchronously.
Gattaux et al. propose an ant-inspired neural framework for a car-like robot that learns low-resolution panoramic routes in one shot and can repeat, shuttle, or home along them, offering insights into insect navigation and frugal robotic systems.
This study presents a co-designed neuromorphic computing and neural network system for efficient and affordable modeling of ionic and electronic interactions in large-scale material systems.
The Equilibrium Propagation algorithm is a physics-inspired learning algorithm that exploits a physical system whose evolution tends towards minimising an energy function. Here, the authors propose an extension to quantum systems, drawing a parallel between the fundamental requirements of the original EP and Onsager reciprocity, and enabling the efficient training of quantum simulation platforms to solve genuine quantum tasks.
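As a rough illustration of the classical algorithm being extended, here is a toy Equilibrium Propagation loop on a small symmetric network: a free relaxation, a weakly nudged relaxation, and a local contrastive weight update. The network size, quadratic energy, targets, and hyperparameters are illustrative assumptions; the quantum extension proposed in the paper is not modelled.

```python
# Toy Equilibrium Propagation on a tiny symmetric network with a quadratic energy
# E(s) = 0.5*||s||^2 - 0.5*s^T W s - s^T W_in x. Everything here is illustrative.
import numpy as np

rng = np.random.default_rng(1)
n_in, n_hid = 3, 8
W = 0.1 * rng.normal(size=(n_hid, n_hid))
W = (W + W.T) / 2                      # symmetric couplings, as EP requires
np.fill_diagonal(W, 0)
W_in = 0.1 * rng.normal(size=(n_hid, n_in))
out = slice(0, 2)                      # the first two units act as outputs

def relax(x, y=None, beta=0.0, steps=300, dt=0.1):
    """Gradient descent on the energy, optionally nudged toward the target."""
    s = np.zeros(n_hid)
    for _ in range(steps):
        grad = s - W @ s - W_in @ x
        if beta:
            grad[out] += beta * (s[out] - y)   # weak clamping of the outputs
        s -= dt * grad
    return s

x, y = rng.normal(size=n_in), np.array([0.5, -0.5])
beta, lr = 0.5, 0.05
for _ in range(200):
    s_free = relax(x)                          # free phase
    s_nudged = relax(x, y, beta)               # weakly clamped phase
    # Local, contrastive update: difference of correlations at the two equilibria.
    W += lr / beta * (np.outer(s_nudged, s_nudged) - np.outer(s_free, s_free))
    np.fill_diagonal(W, 0)
print("free-phase outputs after training:", relax(x)[out], "target:", y)
```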
Artificial sensory nervous system with habituation is an important feature for robots that ignore unimportant stimuli and save energy. Here, the authors demonstrate a memristor performing habituation, allowing a robot to filter insignificant stimuli without extra processors or software.
In-sensor computing minimizes latency by directly processing data at the point of capture. Here, authors optimize this process by integrating polarization-sensitive detectors into the computing framework, enabling the superposition of two filtering operations within a single circuit.
Liu et al. report a multi-site chelate effect in which quercetin binds Sn2+ and retards crystallisation in a FASnI3-based optoelectronic synapse. A 12 × 12 real-time NIR imaging array enables spatiotemporal information fusion for object recognition, enhancement, and motion perception in complex conditions.
Here, authors develop an in-memory differentiator using a 40×40 array of ferroelectric capacitors. This device efficiently performs real-time differential computation and motion extraction, demonstrating low energy consumption and high operational frequency, with potential applications in edge computing.
Non-volatile memristor-based memories with resistive switching materials are promising for next-generation data storage and neuromorphic technologies. Here, the importance of precise tuning of TiNx bottom electrodes in HfO2-based memristive devices is demonstrated.
Kwak et al. report AC magnetic parallel dipole line Hall measurements on electrochemical random-access memory based on WO3-x, which determine the oxygen donor level and reveal that conductance potentiation even at low temperature is caused by an increase in both mobility and carrier density.
The authors synthesise a Bi-based halide and use it as a photosensitive control gate in a floating-gate transistor, enabling a non-volatile optoelectronic memory with ultra-low energy consumption and large resistive state numbers, for high-accuracy machine learning.
Inspired by the primate visual system, this work implements an event-driven, bio-inspired architecture for figure-ground segmentation on the neuromorphic robot iCub, bridging neuromorphic algorithms and software. Its performance is benchmarked on the Berkeley Segmentation Data Set and validated in real-world scenarios.
Sound source localisation is used in many consumer devices, to isolate audio from individual speakers and reject noise. Saeid Haghighatshoar and Dylan Richard Muir demonstrate a sound source localisation method from microphone arrays, using Hilbert-Transform-based audio-to-signed-event encoding and spiking neural networks.
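The encoding idea can be sketched in a few lines: take the analytic signal via the Hilbert transform and emit sparse, signed events at phase landmarks. This is only a rough software illustration under assumed parameters (16 kHz sample rate, a single 440 Hz tone); the paper's actual encoder and spiking network are not reproduced here.

```python
# Rough sketch: signed event encoding from the analytic (Hilbert) signal.
# Illustrative only; the published method may differ in its event criteria.
import numpy as np
from scipy.signal import hilbert

fs = 16_000                                   # assumed sample rate
t = np.arange(0, 0.05, 1 / fs)
audio = np.sin(2 * np.pi * 440 * t) * np.hanning(len(t))   # toy 440 Hz burst

analytic = hilbert(audio)                     # audio + j * (Hilbert transform)
phase = np.angle(analytic)                    # instantaneous phase in (-pi, pi]

# Emit a +1 event when the phase rises through 0 and a -1 event when the
# principal value wraps (phase passes +/- pi), i.e. one of each per cycle.
rising = (phase[:-1] < 0) & (phase[1:] >= 0)
falling = np.abs(np.diff(phase)) > np.pi
events = np.zeros(len(audio), dtype=int)
events[1:][rising] = 1
events[1:][falling] = -1
print("number of +1 / -1 events:", (events == 1).sum(), (events == -1).sum())
```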
Processing heterogeneous visual data in edge-side intelligent machines is complex and inefficient. Here, the authors propose a hardware-software co-designed system using random resistive memory, achieving significant energy efficiency and training cost reductions.
This study shows a viable pathway to the efficient deployment of state-of-the-art large language models using mixture of experts on 3D analog in-memory computing hardware.
Neural networks on analog physical devices often struggle with low computational precision. Here, authors developed an approach that enables neural networks to effectively exploit highly stochastic systems, achieving high performance even under an extremely low signal-to-noise ratio.
The authors present characterisations of highly-structured visual receptive fields of any phase symmetry, implemented in an event-based spiking neuromorphic processor. They specifically demonstrate emerging properties of recurrent inhibitory circuits that reduce the drain of hardware resources.
Complex-valued neural networks can recognize phase-sensitive data in wave-related phenomena. Here, authors report a complex-valued optical convolution accelerator operating at over 2 TOPS for the recognition of radar images, representing an advance towards real-time analysis of complex satellite data.
Physical unclonable functions (PUFs) are important in authentication applications for Internet of Things devices. This work reports a ferroelectric field-effect transistor-based strong PUF utilizing the transistor cycle-to-cycle variation and verifies its feasibility using 28 nm HKMG technology.
This study introduces a reconfigurable MoS2-based neuromorphic hardware that integrates synaptic, heterosynaptic, and somatic functionalities. It adapts to diverse tasks like medical image enhancement and smart perception, advancing flexible, general-purpose computing solutions.
Proper exposure settings are crucial for modern machine vision cameras. This work develops neuromorphic exposure control using peripheral-vision inspired processing to solve the problem, enhancing performance in applications like autonomous driving and medical imaging.
Existing image-activated cell sorting tools suffer from the challenges of 3D information loss and processing latency in real-time sorting operations. Here, the authors propose a neuromorphic-enabled video-activated cell sorter (NEVACS) framework, which achieves high-dimensional spatiotemporal characterization content and high-throughput sorting of particles.
Functional devices based on sliding ferroelectrics remain elusive. This work demonstrates rewritable, non-volatile memory devices at room temperature with two-dimensional sliding ferroelectric rhombohedral-stacked bilayer MoS2. The device shows good overall performance and can be made flexible.
Reservoir computing (RC) is a powerful computing paradigm for machine learning and temporal data processing. Authors present a compact and versatile silicon photonic RC processor that delivers ultrafast speed of 200 TOPS and energy efficiency two orders of magnitude greater than digital processors.
Read-in and read-out of data limit the overall performance of optical computing methods. This work introduces a multilayer optoelectronic framework that alternates between optical and optoelectronic layers to implement matrix-vector multiplications and rectified linear functions experimentally.
Debi Pattnaik and co-authors present a flexible Ag nanoparticle-based diffusive memristor that generates electric spikes in response to both voltage and mechanical impact. Their approach is suitable for touch-sensitive sensors with neural network-based processing.
This study demonstrates how point defects in 2D semiconductors can be harnessed for neuromorphic computing. By using random telegraph noise in WSe2 field-effect transistors, the researchers improve inference accuracy of noise-inflicted medical images.
Physical reservoir computing allows real-time low power information processing. Here, the authors report reservoir computing with magnetic skyrmions able to detect millisecond time-scale hand gestures, matching software neural networks’ performance.
Integrating chemical-electric behaviors into optoelectronic synapses holds promise for several applications. Here, the authors report a photoelectrochemical synapse with dual-modal plasticity and chemically-regulated neuromorphic functions.
The authors realize the generation and electrical detection of nonlinear magnons in a ferrimagnetic insulator, giving rise to secondary nonlinear magnons with fine frequency structures on the order of a few MHz and specific propagation characteristics.
Intelligent artificial tactile system for neurorobotics remains challenging. Here, Chen et al. developed an artificial organic afferent nerve to implement slip recognition and prevention actions by learning the real-time spatial information of directional touch.
The implementation of polarimetric functionalities in machine vision is beneficial for real-time navigation. Here, the authors report an optically-controlled polarimetry memtransistor with polarization sensitivity and synaptic functions.
Steady state visually evoked potentials enable EEG vision-based brain-computer interfaces. We show that it is possible to encode information distributed across 100+ frequencies. We encode images and perform simple physical computing classification tasks. Connecting more than one brain in series improves the classification capability.
Visual adaptive devices show promise for simplifying circuits and algorithms in machine vision systems. Here, the authors report a visual adaptive transistor with tunable avalanche effects and microsecond-level bionic vision capabilities, recognizing images in dim and bright conditions with over 98% accuracy.
Beaubois et al. introduce a real-time biomimetic neural network for biohybrid experiments, providing a tool to study closed-loop applications for neuroscience and neuromorphic-based neuroprostheses.
Demirkiran et al. explore the use of the residue number system to overcome precision challenges in analog computing, paving the way for unleashing its full potential as next-generation AI hardware for advanced tasks.
Perception methods that enable control systems to understand and adapt to unstructured environments are desired. Wang et al. develop a memristor-based differential neuromorphic computing, perceptual signal processing, and online adaptation method providing neuromorphic-style adaptation to external sensory stimuli.
Krauhausen et al. developed a robotic system that learned how to avoid dangerous objects. This behavioural conditioning was demonstrated by forming multimodal associative connections of various sensors, using an integrated organic neuromorphic circuit.
The authors proposed a strategy for sensorimotor control using memristive H-H neurons, integrating bio-inspired neural circuits and computational capabilities of neurons’ firing features with a robot for avoidance control.
The distinctive interdependence in mixed ionic-electronic conductors emulates retinal pathway. Here, the authors develop a modular organic neuromorphic spiking circuit to replicate the interdependent functions of receptors, neurons and synapses that are chemically modulated by neurotransmitters.
Costa et al. designed a modular spiking neural network in a neuromorphic device with heterogeneous silicon neurons that remotely detects epileptiform discharges and High Frequency Oscillations in intra-operative EEG during epilepsy surgery in real-time.
Optical recurrent neural networks present a unique challenge for photonic machine learning. Here, the authors experimentally show the first optoacoustic recurrent operator based on stimulated Brillouin scattering which may unlock a new class of optical neural networks with recurrent functionality.
Designing bio-inspired multisensory neurons remains a challenge. Here, the authors develop an artificial visuotactile neuron based on the integration of a photosensitive monolayer MoS2 memtransistor and a triboelectric tactile sensor capable of super-additive response, inverse effectiveness effect, and temporal congruency.
Next-generation human-machine interfaces require efficient physiological signal processing systems. Here, the authors propose a hardware system that uses VO2 memristors to perform brain-like encoding and analysis of physiological signals, and is capable of identifying arrhythmia and epileptic seizures.
Neuro-inspired vision systems hold great promise to address the growing demands of mass data processing for edge computing. Here, the authors develop a neuro-inspired optical sensor based on NbS2/MoS2 films that can operate with monolithically integrated functions of static image enhancement and dynamic trajectory registration.
Implementing emotional aspects like physiology and psychology in decision-making remains a challenge. Here, the authors propose a bio-inspired gustatory circuit based on 2D materials that mimics adaptive feeding behavior in humans, considering both physiological states (hunger) and psychological states (appetite).
Designing optoelectronic synapses having a multispectral color-discriminating ability is crucial for neuromorphic visual systems. Here, the authors propose a strategy to introduce RGB color-discriminating synaptic functionality into a two-terminal memristor regardless of the switching medium, and design a color image-recognizing CNN and light-programmable reservoir computing.
Designing full-color spherical artificial eyes remains a challenge. Here, Long et al. report a bionic eye where each pixel on the hemispherical retina can recognize different colors based on a unique bidirectional photoresponse, with optical adaptivity and neuromorphic preprocessing ability.
Designing efficient photonic neuromorphic systems remains a challenge. Here, the authors develop an in-sensor Reservoir Computing system for multi-tasked pattern classification based on a light-responsive semiconducting polymer (p-NDI) with efficient exciton dissociations, charge trapping capability, and through-space charge-transport characteristics.
Developing an artificial olfactory system that can mimic the biological functions remains a challenge. Here, the authors develop an artificial chemosensory synapse based on a flexible organic electrochemical transistor gated by the potential generated by the interaction of gas molecules with ions in a chemoreceptive ionogel.
Designing machine learning hardware on flexible substrates is promising for several applications. Here, the authors propose an integrated smart system built with low-cost flexible electronic components for classifying human malodour, and demonstrate that the proposed system scores malodour as well as expert human assessors.
Information-based search strategies are relevant for the learning of interacting agents dynamics and usually need predefined data. The authors propose a method to collect data for learning a predictive sensor model, without requiring domain knowledge, human input, or previously existing data.
Wearable sensors with edge computing are desired for human motion monitoring. Here, the authors demonstrate a topographic design for wearable MXene sensor modules with wireless streaming or in-sensor computing models for avatar reconstruction.
Designing a wearable invasive neural electrical stimulation system remains a challenge. Here, researchers provide an effective technology platform for eliminating problematic neural stimulus inertia using bionic electronic modulation, a significant step forward for long-lasting treatment of nervous system diseases.
Designing efficient bio-inspired vision systems remains a challenge. Here, the authors report a bio-inspired striate visual cortex with binocular and orientation-selective receptive fields based on a self-powered memristor, enabling machine vision with brisk edge and corner detection in future applications.
Designing an efficient platform that enables verbal communication without vocalization remains a challenge. Here, the authors propose a silent speech interface by utilizing a deep learning algorithm combined with strain sensors attached near the subject’s mouth, able to collect 100 words and classify at a high accuracy rate.
With advances in robotic technology, the complexity of robot control has been increasing owing to the fundamental von Neumann bottleneck. Here, we demonstrate coordinated movement by a fully parallel-processable synaptic array with reduced control complexity.
Adopting photonic synapses with biological similarity to realize analog signal transmission is significant for artificial illuminance-modulation responses. Here, the authors report a biomimetic ocular prosthesis system based on quantum-dot-embedded photonic synapses with improved depression properties achieved through mid-gap traps.
Tactile sensors in human-machine interaction systems can provide precise input signals and the necessary feedback between humans and machines. Here, the authors developed a black phosphorus-based tactile sensor array system that can turn touch into audio feedback.
Designing efficient brain-inspired electronics remains a challenge. Here, Liu et al. develop a flexible perovskite-based artificial synapse with low energy consumption and fast response frequency and realize an artificial neuromuscular system with muscular-fatigue warning.
Neuromorphic computing memristors are attractive for constructing low-power-consumption electronic textiles. Here, authors report an ultralow-power textile memristor network of Ag/MoS2/HfAlOx/carbon nanotube with reconfigurable characteristics and a firing energy consumption of 1.9 fJ/spike.
Designing efficient sensing-memory-computing systems remains a challenge. Here, the authors propose a self-powered vertical tribo-transistor based on MXenes to implement the multi-sensing-memory-computing function and the interaction of multisensory integration.
While great progress has been made in object recognition, implementations typically rely on conventional electronic hardware. Here the authors introduce a concept of neuro-metamaterials that enables dynamic, entirely optical object recognition and mirage.
Real-world object localization applications need a low-latency, power-efficient computing system. Here, Moro et al. demonstrate a neuromorphic in-memory event-driven system, inspired by the barn owl’s neuroanatomy, which is orders of magnitude more energy efficient than microcontrollers.
The scalability of neuromorphic devices depends on eliminating capacitors and additional circuits. Here Liu et al. report an artificial neuron based on the polarization and depolarization of an anti-ferroelectric film, avoiding additional elements and reaching 37 fJ/spike of power consumption.
Traditional learning procedures for artificial intelligence rely on digital methods not suitable for physical hardware. Here, Nakajima et al. demonstrate gradient-free physical deep learning by augmenting a biologically inspired algorithm, accelerating the computation speed on optoelectronic hardware.
Circularly polarized light adds a unique dimension to optical information processing and communication. Here, the authors present a development of a photonic artificial synapse device using chiral perovskite hybrid materials and carbon nanotubes. The heterostructure exhibits efficient synaptic and neuromorphic behaviors, enabling accurate recognition of circularly polarized images.
Existing artificial corneas can assume partial functions of the human cornea, but sense reconstruction remains a challenge. Qu et al. develop an artificially-intelligent cornea with tactile sensation that enables sensory expansion and interaction.
Existing solutions based on the Advanced Encryption Standard to address the security issues of nonvolatile memories incur significant performance and power overhead. Here, the authors propose a lightweight XOR-gate-based encryption/decryption technique that exploits in-situ array operations, achieving significant area/latency/power reductions compared to conventional designs.
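The underlying arithmetic is simple enough to show directly: XOR-ing data with a keystream encrypts it, and XOR-ing the ciphertext with the same keystream recovers it, so a single in-array XOR primitive can serve both directions. The snippet below is a plain software stand-in and does not model the in-situ memory-array operations themselves.

```python
# XOR encryption/decryption in software: the same array operation serves both
# directions because (p ^ k) ^ k == p. Key generation here is illustrative.
import numpy as np

rng = np.random.default_rng(42)
plaintext = np.frombuffer(b"memristor array", dtype=np.uint8)
key = rng.integers(0, 256, size=plaintext.size, dtype=np.uint8)  # keystream

ciphertext = plaintext ^ key        # encrypt: one bitwise XOR over the array
recovered = ciphertext ^ key        # decrypt: XOR again with the same key
assert recovered.tobytes() == b"memristor array"
print(ciphertext)
```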
Ferroelectric transistors are promising building blocks for developing energy-efficient memory and logic applications. Here, the authors report a record high 300 K resistance on-off ratio achieved in ferroelectric-gated Mott transistors by exploiting a charge transfer layer to tailor the channel carrier density and mitigate the ferroelectric depolarization effect.
Designing efficient nanoscale and adaptable bioinspired memristors remains a challenge. Here, the authors develop a bioinspired hydrophobically gated memristive nanopore capable of learning, forgetting, and retaining memory through an electrowetting mechanism.
The ability of living systems to process signals and information is of vital importance. Inspired by nature, Wang and Cichos show an experimental realization of a physical reservoir computer using self-propelled active microparticles to predict chaotic time series such as the Mackey–Glass and Lorenz series.
Bao et al. report a neuromorphic bionic electro-stimulation solution based on atomic-scale semiconductor floating-gate memory circuit, which enables efficient inhibition of acute inflammation with low stimulation currents that are damage-free to neurons.
Parallel information transmission components and hardware strategies are still lacking in neural networks. Here, the authors propose a strategy to use light emitting memristors with negative ultraviolet photoconductivity and intrinsic parallelism to construct direct information cross-layer modules.
Artificial sensory systems are often limited in structure and functionality. Here, Jiang et al. report a neuromorphic antennal sensory system that achieves spatiotemporal perception of vibrotactile and magnetic stimuli, showcasing biomimetic perceptual intelligence.
All-in-one multi-task photoperception is desirable for artificial vision systems. Wen et al. present wafer-scale, high-density integration of artificial photoreceptors that combine photoadaptation and circularly polarized light vision, enabled by chiral-nanocluster-conjugated molecule heterostructures.
In this work, a nanoscale light-emitting diode with memory-electroluminescence is demonstrated, which is used for mimicking the generation of multiple action-potentials and their combinations in bio-inspired afferent nerves.
The communication of colour information stands as one of the most immediate and widespread methods of interaction among biological entities. Xu et al. report an electrochromic neuromorphic transistor employing color updates to represent synaptic weight for real-time visualised in-sensor computing.
Photonic Stochastic Emergent Storage is a neuromorphic photonic device for image storage and classification based on scattering-intrinsic patterns. Here, the authors show that emergent storage employs stochastic, scattering-induced prototype light patterns to generate categories corresponding to emergent archetypes.
This work addresses the challenges of radio frequency interference (RFI) in radio astronomy. The authors train spiking neural networks on synthetic and real data, demonstrating a viable path for real-time, energy-efficient RFI detection.
It has recently been shown that synaptic transmission delays enhance the computational capabilities of spiking neural networks. In this manuscript, the authors introduce an exact, event-based training method for various types of delays and benchmark it on mixed-signal neuromorphic hardware.
Here, the authors analyse spiking neural networks with adaptive leaky integrate-and-fire neurons and demonstrate a discretization method that improves stability and performance. The models excel in spatio-temporal tasks like speech recognition and ECG classification without normalization techniques.
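For context, a discrete-time adaptive leaky integrate-and-fire neuron can be written in a few lines. The decay constants, soft reset, and threshold adaptation below are generic textbook choices rather than the specific discretization analysed in the paper.

```python
# A minimal discrete-time adaptive leaky integrate-and-fire (ALIF) neuron.
# Parameters and the constant-drive demo are illustrative assumptions.
import numpy as np

def alif(inputs, tau_mem=20.0, tau_adapt=200.0, dt=1.0, v_th0=1.0, beta=0.2):
    alpha = np.exp(-dt / tau_mem)       # membrane decay per step
    rho = np.exp(-dt / tau_adapt)       # threshold-adaptation decay per step
    v, a, spikes = 0.0, 0.0, []
    for x in inputs:
        v = alpha * v + (1 - alpha) * x          # leaky integration of the input
        threshold = v_th0 + beta * a             # adaptive threshold
        s = float(v >= threshold)
        v -= s * threshold                       # soft reset on spike
        a = rho * a + s                          # each spike raises future threshold
        spikes.append(s)
    return np.array(spikes)

spikes = alif(np.full(500, 1.5))                 # constant drive: rate adapts down
print("spikes in first vs last 100 steps:", spikes[:100].sum(), spikes[-100:].sum())
```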
Many recent advances in reservoir computing utilize inherently stochastic dynamics and can be designed so that the number of readouts scales exponentially with device size. Here, authors prove the universality of stochastic echo state networks and test the performance of two practical examples.
Selective listening in noisy environments is challenging for individuals with hearing loss. Alexander Boyd and colleagues evaluate a brain-inspired sound segregation algorithm that enhances speech intelligibility in competing-talker situations.
Energy-efficient, task-agnostic continual learning is a key challenge in Artificial Intelligence frameworks. Here, authors propose a hybrid neural network that emulates dual representations in corticohippocampal circuits, reducing the effect of catastrophic forgetting.
The authors combine biologically-inspired learning techniques with neuromorphic hardware to implement an energy-efficient system that demonstrates rapid learning of unseen tasks across various domains.
Artificial neural networks, central to deep learning, are powerful but energy-consuming and prone to overfitting. The authors propose a network design inspired by biological dendrites, which offers better robustness and efficiency, using fewer trainable parameters, thus enhancing precision and resilience in artificial neural networks.
Reservoir computing designs recurrent networks that simultaneously buffer inputs and form nonlinear features. Here, authors propose a configurable scheme with better scaling where memory buffer and nonlinear features are in separate circuits. It can be efficiently implemented in neuromorphic hardware.
The authors propose a model for the process of one-shot learning in the brain. They show that it reproduces the repulsion effect of human memory and provides a blueprint for content-addressable in-memory computing with binary weights.
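A classical point of reference for such content-addressable memories is a Hopfield-style network with sign-clipped, binary weights, sketched below. It is a toy stand-in under assumed sizes and noise levels, not the authors' model.

```python
# Minimal content-addressable memory with binary (+/-1) weights obtained from a
# single Hebbian pass; recall retrieves a stored pattern from a corrupted cue.
import numpy as np

rng = np.random.default_rng(7)
n, n_patterns = 64, 3
patterns = rng.choice([-1, 1], size=(n_patterns, n))

W = np.sign(patterns.T @ patterns).astype(int)   # binary weights from one pass
np.fill_diagonal(W, 0)

def recall(cue, steps=10):
    s = cue.copy()
    for _ in range(steps):
        s = np.sign(W @ s)                        # synchronous update
        s[s == 0] = 1
    return s

noisy = patterns[0].copy()
flip = rng.choice(n, size=8, replace=False)
noisy[flip] *= -1                                 # corrupt 8 of 64 bits
print("overlap with stored pattern:", int(recall(noisy) @ patterns[0]), "/", n)
```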
Existing training algorithms for deep neural networks are not suitable for energy-efficient analog hardware. Here, the authors propose and experimentally demonstrate an alternative training algorithm based on reservoir computing, which improves training efficiency in optoelectronic implementations.
This study introduces an in-memory deep Bayesian active learning framework that uses the stochastic properties of memristors for in situ probabilistic computations. This framework can greatly improve the efficiency and speed of artificial intelligence learning tasks, as demonstrated with a robot skill-learning task.
The extent to which structural modularity in neural networks ensures functional specialization remains unclear. Here the authors show that specialization can emerge in neural modules placed under resource constraints but varies dynamically and is influenced by network architecture and information flow.
The Digital Brain platform is capable of simulating spiking neuronal networks at the neuronal scale of the human brain. The platform is used to reproduce blood-oxygen-level-dependent signals in both the resting state and action, thereby predicting the visual evaluation scores.
Neuromorphic computing has shown the capability of low-power real-time parallel computations, however, implementing the backpropagation algorithm entirely on a neuromorphic chip has remained challenging. The authors propose a spiking neural network implementation of the exact backpropagation algorithm fully on-chip without a computer in the loop.
Neuromorphic software and hardware solutions vary widely, challenging interoperability and reproducibility. Here, authors establish a representation for neuromorphic computations in continuous time and demonstrate support across 11 platforms.
Markus Graber and Klaus Hofmann present a coupled oscillator network, fabricated on a 4.6 mm2 silicon chip with 1440 oscillators and routable connections, designed to solve Ising and other optimization problems efficiently. Their circuit offers a scalable and practical approach for complex optimization problems.
Machine learning and neuromorphic computing network models have distinct strengths in processing spatiotemporal data. Here, authors propose hybrid spatiotemporal neural networks that combine these models, achieving better accuracy, robustness, and efficiency in varied environments across various benchmarks and real-world tasks.
Thermal neuristors based on VO2 have been suggested for neuromorphic computing. Here, authors show that neuristor arrays exhibit long-range order without criticality, revealing that it is not necessary for effective information processing in such systems, and challenging the critical brain hypothesis.
Recent hardware implementations of analog in-memory computing have focused mainly on accelerating inference. In this work, to improve the training process, the authors propose algorithms for supervised training of deep neural networks on analog in-memory AI accelerator hardware.
To address challenges of training spiking neural networks (SNNs) at scale, the authors propose a scalable, approximation-free training method for deep SNNs using time-to-first-spike coding. They demonstrate enhanced performance and energy efficiency for neuromorphic hardware.
An automatic framework, SNOPS, is developed for configuring a spiking network model to reproduce neuronal recordings. It is used to discover previously unknown limitations of spiking network models, thereby guiding model development.
Artificial associative memories in neural network models have shown ability to store and retrieve static patterns of complex systems, however analysis of dynamic patterns remains challenging. The authors develop a reservoir computing based memory approach for complex multistable dynamical systems.
Intelligent agents can perform two types of behavior, habitual and goal-directed. The authors propose a deep learning framework using a variational Bayes approach, which computationally explains many aspects of the interaction between the two types of behaviors in sensorimotor tasks.
Oscillating neural networks promise ultralow power consumption and rapid computation for tackling complex optimization problems. Here, the authors demonstrate VO2 oscillators to solve NP-complete problems with projected power consumption of 13 µW/oscillator.
Combinatorial optimization problems can be solved on parallel hardware called Ising machines. Most studies have focused on the use of second-order Ising machines. Compared to second-order Ising machines, the authors show that higher-order Ising machines realized with coupled-oscillator networks can be more resource-efficient and provide superior solutions for constraint satisfaction problems.
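For orientation, the snippet below is a plain software stand-in for a second-order Ising machine: simulated annealing on the energy E(s) = -½ Σᵢⱼ Jᵢⱼ sᵢ sⱼ with random couplings. Oscillator hardware performs this relaxation physically, and the higher-order machines discussed above generalize the energy to products of more than two spins; the couplings and cooling schedule here are arbitrary.

```python
# Simulated annealing on a random second-order Ising problem (software stand-in).
import numpy as np

rng = np.random.default_rng(3)
n = 20
J = np.triu(rng.choice([-1.0, 0.0, 1.0], size=(n, n)), k=1)   # random couplings
J = J + J.T

def energy(s):
    return -0.5 * s @ J @ s

s = rng.choice([-1, 1], size=n)
for T in np.geomspace(2.0, 0.05, 5000):          # geometric cooling schedule
    i = rng.integers(n)
    dE = 2 * s[i] * (J[i] @ s)                   # energy change from flipping spin i
    if dE <= 0 or rng.random() < np.exp(-dE / T):
        s[i] = -s[i]
print("final spins:", s, "energy:", energy(s))
```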
Visual oddity tasks delve into the visual analytic intelligence of humans, which remained challenging for artificial neural networks. The authors propose here a model with biologically inspired neural dynamics and synthetic saccadic eye movements with improved efficiency and accuracy in solving the visual oddity tasks.
Inspired by human analogical reasoning in cognitive science, the authors propose an approach combining deep learning systems with an analogical reasoning mechanism, to detect abstract similarity in real-world images without intensive training in reasoning tasks.
The biological plausibility of backpropagation and its relationship with synaptic plasticity remain open questions. The authors propose a meta-learning approach to discover interpretable plasticity rules to train neural networks under biological constraints. The meta-learned rules boost the learning efficiency via bio-inspired synaptic plasticity.
Biologically inspired spiking neural networks are highly promising, but remain simplified omitting relevant biological details. The authors introduce here theoretical and numerical frameworks for incorporating dendritic features in spiking neural networks to improve their flexibility and performance.
Muscle electrophysiology is a promising tool for human-machine approaches in medicine and beyond clinical applications. The authors propose here a model simulating electric signals produced during human movements and apply this data for training of deep learning algorithms.
Hybrid neural networks combine advantages of spiking and artificial neural networks in the context of computing and biological motivation. The authors propose a design framework with hybrid units for improved flexibility and efficiency of hybrid neural networks, and modulation of hybrid information flows.
Reservoir computing has demonstrated high-level performance, however efficient hardware implementations demand an architecture with minimum system complexity. The authors propose a rotating neuron-based architecture for physically implementing all-analog resource efficient reservoir computing system.
Artificial neural networks are known to perform well on recently learned tasks, at the same time forgetting previously learned ones. The authors propose an unsupervised sleep replay algorithm to recover old tasks synaptic connectivity that may have been damaged after new task training.
Tasks involving continual learning and adaptation to real-time scenarios remain challenging for artificial neural networks in contrast to real brain. The authors propose here a brain-inspired optimizer based on mechanisms of synaptic integration and strength regulation for improved performance of both artificial and spiking neural networks.
Based on fundamental thermodynamics, traditional electronic computers, which operate serially, require more energy per computation the faster they operate. Here, the authors show that the energy cost per operation of a parallel computer can be kept very small.
Brain-inspired neural generative models can be designed to learn complex probability distributions from data. Here the authors propose a neural generative computational framework, inspired by the theory of predictive processing in the brain, that facilitates parallel computing for complex tasks.
Self-organizing maps are data mining tools for unsupervised learning algorithms dealing with big data problems. The authors experimentally demonstrate a memristor-based self-organizing map that is more efficient in computing speed and energy consumption for data clustering, image processing and solving optimization problems.
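The underlying algorithm is the standard self-organizing map update sketched below: find the best-matching unit, then pull it and its map neighbours toward the input. Map size, learning rate, and neighbourhood schedule are illustrative; the memristor implementation itself is not modelled.

```python
# Minimal 1-D self-organizing map clustering two toy Gaussian blobs.
import numpy as np

rng = np.random.default_rng(5)
data = np.concatenate([rng.normal(-2, 0.3, (200, 2)),
                       rng.normal(+2, 0.3, (200, 2))])        # two toy clusters
n_nodes = 10
weights = rng.normal(0, 1, (n_nodes, 2))                      # one prototype per node

for epoch in range(20):
    lr = 0.5 * (1 - epoch / 20)                               # decaying learning rate
    sigma = max(0.5, 3.0 * (1 - epoch / 20))                  # shrinking neighbourhood
    for x in rng.permutation(data):
        bmu = np.argmin(np.linalg.norm(weights - x, axis=1))  # best-matching unit
        dist = np.abs(np.arange(n_nodes) - bmu)               # distance along the map
        h = np.exp(-dist**2 / (2 * sigma**2))[:, None]        # neighbourhood function
        weights += lr * h * (x - weights)                     # pull prototypes toward x
print("learned prototypes:\n", np.round(weights, 2))
```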
Deep learning techniques usually require a large quantity of training data and may be challenging for scarce datasets. The authors propose a framework that involves contrastive and transfer learning and reduces data requirements for training while keeping the prediction accuracy.
Dynamics of neural circuits mapping brain functions such as sensory processing and decision making, can be characterized by probabilistic representations and inference. The authors elaborate the role of spatiotemporal neural dynamics for more efficient performance of probabilistic computations.
Better understanding of a trade-off between the speed and accuracy of decision-making is relevant for mapping biological intelligence to machines. The authors introduce a brain-inspired learning algorithm to uncover dependencies in individual fMRI networks with features of neural activity and predict inter-individual differences in decision-making.
The modelling of human-like behaviours is one of the challenges in the field of Artificial Intelligence. Inspired by experimental studies of cultural evolution, the authors propose a reinforcement learning approach to generate agents capable of real-time third-person imitation.
Brain-inspired spiking neural networks have shown their capability for effective learning, however current models may not consider realistic heterogeneities present in the brain. The authors propose a neuron model with temporal dendritic heterogeneity for improved neuromorphic computing applications.
Brain connectivity patterns shape the computational capacity of biological neural networks, yet mapping empirically measured connectivity to artificial networks remains challenging. The authors present a toolbox for implementing biological neural networks as artificial reservoir networks. The toolbox supports a variety of empirically measured connectomes and is equipped with various dynamical systems and cognitive tasks.
Brains and neuromorphic systems learn with local learning rules in online-continual learning scenarios. Designing neural networks that learn effectively under these conditions is challenging. The authors introduce a neural network that implements an effective, principled approach to local, online-continual learning on associative memory tasks.
Ising machines have been usually applied to predefined combinatorial problems due to their distinct physical properties. The authors introduce an approach that utilizes equilibrium propagation for the training of Ising machines and achieves high accuracy performance on classification tasks.
The task of planning a sequence of actions, and dynamically adjusting the plan in dependence of unforeseen circumstances, remains challenging for artificial intelligence frameworks. The authors introduce a learning approach inspired by cognitive functions, that demonstrates high flexibility and generalization capability in planning tasks, suitable for on-chip learning.
This study reports a complete photonic neuron integrated on a silicon-nitride chip, enabling ultrafast all-optical computing with nonlinear multi-kernel convolution for image recognition and motion generation.
Leveraging in-memory computing with emerging gain-cell devices, the authors accelerate attention—a core mechanism in large language models. They train a 1.5-billion-parameter model, achieving up to a 70,000-fold reduction in energy consumption and a 100-fold speed-up compared with GPUs.
Edge AI systems require high-frequency, temperature-stable entropy sources, which traditional sources fail to provide. Here, the authors experimentally demonstrate a 3D 16-layer Fe-diode array that achieves high efficiency, low energy consumption, and high recognition accuracy.
Bayesian neural networks are a machine learning architecture designed to better capture the uncertainty of predictions. Here, the authors developed a 3D ferroelectric NAND-based Bayesian neural network system for enhanced efficiency and robustness.
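The core idea can be sketched in software: weights are distributions rather than point values, and predictions are averaged over sampled weight realisations so that the spread across samples quantifies uncertainty. The tiny network and Gaussian "posterior" below are illustrative assumptions only, not the ferroelectric NAND implementation.

```python
# Toy Bayesian neural network: sample weights, average predictions, report spread.
import numpy as np

rng = np.random.default_rng(11)

# Posterior approximated (for illustration) by independent Gaussians per weight.
w_mu, w_sigma = rng.normal(0, 1, (2, 8)), 0.3 * np.ones((2, 8))
v_mu, v_sigma = rng.normal(0, 1, (8, 1)), 0.3 * np.ones((8, 1))

def predict(x, n_samples=200):
    outs = []
    for _ in range(n_samples):
        W = rng.normal(w_mu, w_sigma)          # sample one weight realisation
        V = rng.normal(v_mu, v_sigma)
        outs.append(np.tanh(x @ W) @ V)        # tiny 2-8-1 network
    outs = np.stack(outs)
    return outs.mean(axis=0), outs.std(axis=0) # predictive mean and uncertainty

mean, std = predict(np.array([[0.5, -1.0]]))
print("prediction:", mean.ravel(), "+/-", std.ravel())
```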
The authors developed a neuromorphic chip with on-chip learning and support for diverse memory devices. It bridges brain-inspired computing and emerging tech, enabling efficient, flexible testing and advancing next-gen neuromorphic architectures.
Achieving the same robustness of biological networks in neuromorphic systems remains a challenge due to the variability in their analogue components. Here, the authors apply a biologically-plausible cross-homeostatic rule to balance neuromorphic implementations of spiking recurrent networks.
Solving optimization problems efficiently is a critical challenge, yet traditional physical minimizers that map the problem to a spin Hamiltonian suffer from limitations in their spin dynamics. We overcome this issue by introducing the Vector Ising Spin Annealer, which operates in a higher-dimensional space and demonstrates superior performance.
Aris Tsirigotis and colleagues propose a photonic neuromorphic accelerator using optical spectrum slicing in a reconfigurable processor. Their approach enables optical-domain preprocessing, achieving 97.7% accuracy on MNIST with up to 30% lower power consumption than digital systems.
Combinatorial Optimization problems can be solved by investigating the ground states of particular Ising models. Here, the authors developed a neuromorphic architecture to ensure asymptotic convergence to the ground state of an Ising problem and to consistently produce high-quality solutions.
Adversarial attacks threaten deep neural networks. Here, authors show analog in-memory computing chips enhance robustness, attributed to stochastic noise properties. This is validated experimentally and in simulations with larger transformer models.
Ising Machines are domain-specific computers tailored to solve hard combinatorial optimization and probabilistic sampling problems. Here, the authors augment an earlier Ising machine concept that combines billiard dynamics with latches, so-called chaotic bits, with stochasticity, improving performance to rival probabilistic bits.
Clement Turck and colleagues present an alternative computing platform that leverages the properties of logarithms to turn multiplication into addition. They demonstrate the energy efficiency and superior performance of the prototype on gesture recognition and sleep-stage recognition benchmark tasks.
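The arithmetic the platform exploits is log(a·b) = log(a) + log(b): multiplications become additions in the logarithmic domain, at the cost of quantisation error in the log values. The snippet below illustrates that trade-off with an assumed 8-bit quantiser; it does not model the authors' hardware.

```python
# Log-domain multiplication: add the (quantised) logs, then exponentiate.
import numpy as np

rng = np.random.default_rng(2)
a, b = rng.uniform(0.1, 10, 1000), rng.uniform(0.1, 10, 1000)

log_a, log_b = np.log2(a), np.log2(b)
step = (log_a.max() - log_a.min()) / 255                    # crude 8-bit quantiser
q = lambda v: np.round(v / step) * step

approx = 2.0 ** (q(log_a) + q(log_b))                       # add in the log domain
exact = a * b
print("median relative error:", np.median(np.abs(approx - exact) / exact))
```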
Integration of perception and action is still difficult for current physically separated designs of neural-imitating electronics. This work presents a flexible device which can emulate essential synaptic functions and replicate muscle actuation in response to efferent neuromuscular commands.
Fault tolerance is essential for reliable AI acceleration using novel memristive hardware. Yousuf et al. developed a training-free fault tolerance scheme and demonstrated on a 20,000-memristor prototyping platform that it outperforms other solutions.
This study presents a neuromorphic computing platform capable of learning cross-modal, event-driven signals for efficient real-time knowledge generalization. It also achieves zero-shot transfer learning for multimodal data.
High voltages/light intensities are typically needed to mimic human visual adaptability. Here, the authors present an image sensor array with low operation voltage that mimics synaptic functions with ultraweak light stimulation and performs image processing tasks accurately.
The reliability of memristors has always been a major obstacle in neuromorphic computing. This work reports a negative differential resistance memristor based on a quantum well structure for neuron circuit implementation, which shows low performance variation (0.264%) and high-temperature tolerance up to 400 °C.
The authors numerically investigate the reservoir computing performance of vertical emitting two-mode semiconductor lasers and show the crucial impact of dynamic coupling, injection schemes and system timescales. A central finding is that high dimensional internal dynamics can only be utilized if an appropriate perturbation via the input is chosen.
Sen et al. report the stacking of a perovskite incipient ferroelectric nanomembrane with an atomically thin 2D material for back-end-of-line-compatible ferroelectric-like field-effect transistors, functioning as a cryogenic memory at 15 K and as an inference engine at room temperature.
The authors propose vertical tunneling ferroelectric field-effect transistors based on asymmetric MoS2/h-BN/metal tunnel junction as channel. The Fermi level of MoS2 is bipolarly tuned by ferroelectric domains and detected by the quantum tunneling strength across the junction.
Hardware implementation of analog reservoir computing is a challenge. The analog reservoir system in this work contains mixed-phase-boundary-based transistors with nonlinear short-term memory as physical reservoirs and artificial neurons, and nonvolatile ferroelectric transistors as readout networks.
The switching dynamics in resistive memory hardware enables efficient computations, such as stateful logic and temporal information processing. The authors show that a passive resistive switching circuit represented as an attractor network model provides high-capacity, compact and efficient solution for associative memory.
Physical reservoir computing systems often possess a single set of internal dynamics, limiting their computational capabilities. Here, Stenning et al. create hierarchical neural networks with distinct physical reservoirs, enabling diverse computational performance and learning of small datasets.
Developing efficient real-time closed-loop interfacing with neuromorphic processors is a challenge. The authors report a GAIA sensor, a 4096-channel event-based MEA that encodes biopotentials in event-based pulses, reducing data transmission and power consumption.
Biological neural networks demonstrate complex memory and plasticity functions. This work proposes a single memristor based on SrTiO3 that emulates six synaptic functions for energy efficient operation. The bio-inspired deep neural network is trained to play Atari Pong, a complex reinforcement learning task in a dynamic environment.
Mauricio Velazquez Lopez and colleagues fabricate a neuromorphic node with a response time that spans a range of 7 orders of magnitude. Their technology is compatible with complementary metal-oxide semiconductors, which makes it suitable for a variety of machine learning tasks.
Stable anti-ambipolar organic materials are limited, thus preventing the design of integrated, tunable, and multifunctional neuromorphic systems. Here, the authors report a small form factor neuromorphic circuit based on organic anti-ambipolar materials, mimicking the pre-processing functions of the retina.
Bahadır Utku Kesgin and Uğur Teğin propose using a Lorenz attractor as a nonlinear transfer function for neural network nodes. They design a power-efficient electrical circuit and use them for regression and classification test tasks.
Combining experiments, numerical non-linear simulations, and analytical tools, the authors here unravel the operation of organic artificial neurons in liquid environment, crucial components in neuromorphic bioelectronics, neuronal networks, and neuromorphic electronics.
Probabilistic computing demands low power and high quality random number generation. Woo et al. demonstrate the use of a spin crossover in LaCoO3 to generate random numbers that outperform software-generated random numbers in probabilistic computing.
Nonlinear optical computations have been essential yet challenging for developing optical neural networks with appreciable expressivity. In this paper, light scattering is combined with optical nonlinearity to empower a high-performance, large-scale nonlinear photonic neural system.
The authors demonstrate all-spin synapses and neurons using domain wall-magnetic tunnel junctions, utilizing synergistic spin-orbit torque and Dzyaloshinskii-Moriya interaction. The intrinsic linearity is required for compact and energy-efficient bio-inspired hardware for neuromorphic computing.
Reconfigurable neuromorphic transistors are important for creating compact and efficient neuromorphic computing networks. Here, Li et al. introduce an optoelectronic electrolyte-gated transistor to perform multimodal recognition.
Integrating security, computing and memory capabilities in ion-migration-driven memristors is challenging. Here, Woo et al. experimentally demonstrate a single system that performs cryptographic key generation, universal Boolean logic operations, and encryption/decryption.
Photonic integrated circuits have grown as potential hardware for neural networks and quantum computing, yet the tuning speed and large power consumption limited the application. Here, authors introduce the memresonator, a memristor heterogeneously integrated with a microring resonator, as a non-volatile silicon photonic phase shifter to address these limitations.
Creating accurate digital twins and controlling nonlinear systems displaying chaotic dynamics is challenging due to high system sensitivity to initial conditions and perturbations. The authors introduce a nonlinear controller for chaotic systems, based on next-generation reservoir computing, with improved accuracy, energy cost, and suitable for implementation with field-programmable gate arrays.
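Next-generation reservoir computing replaces the recurrent reservoir with features built from a short delay embedding plus their polynomial products, feeding a linear ridge readout. The sketch below fits a one-step predictor for the chaotic logistic map under that scheme; the task, embedding length, and regularisation are illustrative assumptions, and the controller described in the paper is not reproduced.

```python
# Minimal next-generation reservoir computing: delay embedding + quadratic
# monomials as features, linear ridge readout. Task and sizes are illustrative.
import numpy as np
from itertools import combinations_with_replacement

r, N, k = 3.9, 2000, 2                        # chaotic logistic map, 2-step embedding
x = np.empty(N); x[0] = 0.4
for n in range(N - 1):
    x[n + 1] = r * x[n] * (1 - x[n])

def features(window):                          # [1, delays, all quadratic products]
    quad = [window[i] * window[j]
            for i, j in combinations_with_replacement(range(k), 2)]
    return np.concatenate(([1.0], window, quad))

X = np.array([features(x[n:n + k]) for n in range(N - k)])
y = x[k:]                                      # predict the next value
W = np.linalg.solve(X.T @ X + 1e-8 * np.eye(X.shape[1]), X.T @ y)
print("one-step prediction error:", np.max(np.abs(X @ W - y)))
```

Because the logistic map is itself quadratic in the delayed state, this feature set fits it almost exactly, which is the point of the example: the "reservoir" is just an explicit nonlinear feature map.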
Designing efficient AI hardware capable of creating artificial general intelligence remains a challenge. Here, the authors present an approach for the on-demand generation of complex networks within a single memristor by harnessing device dynamics with intrinsic cycle-to-cycle variability and demonstrate the effectiveness of memristive complex network-based reservoirs.
Reconfigurable logic is desirable for high-density information processing. Here, the authors demonstrate a binary/ternary logic conversion-in-memory, which can operate in both binary and ternary logic systems to implement various types of logic gates.
While reservoir computing can process temporal information efficiently, its hardware implementation remains a challenge due to the lack of robust and energy efficient hardware. Here, the authors develop an all-ferroelectric reservoir computing system, showing high accuracies and low power consumptions in various tasks like the time-series prediction.
Dendritic computing is a promising approach to enhance the processing capability of artificial neural networks. Here, the authors report the development of a neurotransistor based on a vertical dual-gate electrolyte-gated transistor with short-term memory characteristics, a 30 nm channel length, a low read power of ~3.16 fW and read energy of ~30 fJ for dendritic computing.
Designing efficient in-memory-computing architectures remains a challenge. Here the authors develop a multi-level FeFET crossbar for multi-bit MAC operations encoded in activation time and accumulated current, with experimental validation at 28 nm achieving 96.6% accuracy and a high performance of 885 TOPS/W.
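The following behavioral sketch illustrates, in the abstract, how a multiply-accumulate can be encoded in pulse duration (input) and device conductance (weight), with the result read out as accumulated charge. The quantization levels, conductance and timing ranges, and the assumption of ideal, noise-free devices are hypothetical and are not taken from the 28 nm demonstration.

```python
import numpy as np

def time_domain_mac(inputs, weights, g_max=1e-6, t_max=1e-7, levels=8):
    """Behavioral sketch of a time/charge-encoded multiply-accumulate.

    Inputs in [0, 1] are quantized to activation-pulse durations, weights
    in [0, 1] to device conductances; the accumulated charge on a shared
    line is the sum of conductance x pulse-width products (unit read
    voltage assumed). All ranges and the ideal behavior are illustrative
    assumptions only."""
    t = np.round(np.clip(inputs, 0, 1) * (levels - 1)) / (levels - 1) * t_max
    g = np.round(np.clip(weights, 0, 1) * (levels - 1)) / (levels - 1) * g_max
    charge = np.sum(g * t)                      # accumulated charge ~ dot product
    return charge / (g_max * t_max)             # normalize back to a unitless MAC

x = np.array([0.2, 0.9, 0.5, 0.7])
w = np.array([0.6, 0.1, 0.8, 0.3])
print("analog MAC:", round(float(time_domain_mac(x, w)), 3),
      "ideal:", round(float(x @ w), 3))
```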
The elementary excitations of magnets are known as magnons. Like photons, they can carry information, but unlike photons, the interactions of magnons are intrinsically nonlinear, making them particularly promising for physical reservoir computing, where the nonlinear response of a dynamical system is used as a computational resource. Here, Körber et al. demonstrate physical reservoir computing using the magnon eigenmodes of a permalloy disc.
Hardware architectures based on self-organized memristive networks of nano-objects have attracted growing attention. Here, nanowire connectomes are experimentally shown to translate spatially correlated short-term plasticity effects into long-lasting topological changes, thus emulating both the information encoding and memory consolidation of the human brain.
Designing a high-density memory array to effectively manage large data volumes remains a challenge. Here, the authors introduce a stacked ferroelectric memory array composed of laterally gated ferroelectric field-effect transistors with high vertical scalability and efficient memory properties, making it suitable for 3D in-memory computing structures.
Combinatorial optimization problems have various important applications but are notoriously difficult to solve. Here, the authors propose a quantum inspired algorithm and apply it to classical analog memristor hardware, demonstrating an efficient solution for intricate problems.
Designing efficient neuromorphic systems based on nanowire networks remains a challenge. Here, Zhu et al. demonstrate brain-inspired learning and memory of spatiotemporal features using nanowire networks capable of MNIST handwritten digit classification and a novel sequence memory task performed in an online manner.
Memory devices with open-loop analog programmability are highly desired for training tasks. Here, the authors develop an electrochemical memory array that can be accurately programmed without any feedback, offering unique capabilities for training.
Progress in high-performance oxide-based transistors is essential for seamlessly integrating monolithic 3D circuits into the CMOS backend. The authors propose using atomic layer deposition for ZnO owing to its compatibility with low-temperature backend integration. They also successfully integrate ZnO TFTs with HfO2 RRAM in a 1 kbit 1T1R array, showcasing RRAM switching capabilities.
Designing a monolithic 3D structure with interleaved logic and high-density memory layers has been difficult to achieve due to challenges in managing the thermal budget. Here, the authors demonstrate a 3D integration of monolayer MoS2 transistors with 3D vertical RRAMs through a low-temperature fabrication process whose 1T–nR structure shows high promise for low-power and high-density memory applications.
Spin defects in semiconductors are promising for quantum technologies but understanding of defect formation processes in experiment remains incomplete. Here the authors present a computational protocol to study the formation of spin defects at the atomic scale and apply it to the divacancy defect in SiC.
Designing highly efficient optoelectronic memory remains a challenge. Here, the authors report a novel optoelectronic memory device based on a photosensitive dielectric that is an insulator in the dark and a semiconductor under irradiation, with multilevel storage ability, low energy consumption and good compatibility.
Dense random access memory is required for building future generations of superconducting computers. Here the authors study vortex-based memory cells, demonstrate their scalability to submicron sizes and robust word and bit-line operation at zero magnetic field.
Designing efficient multistate resistive switching devices is promising for neuromorphic computing. Here, the authors demonstrate a reversible hydrogenation in WO3 thin films at room temperature with an electrically-biased scanning probe. The associated insulator to metal transition offers the opportunity to precisely control multistate conductivity at nanoscale.
Designing efficient optoelectronic synaptic devices with advanced light-responsive multimodal platforms remains a challenge. Here, the authors report an organic optoelectronic neuromorphic platform based on conductive polymers and light-sensitive molecules that imitates the retina, including visual pathways and typical neuronal memory processes.
Image reconstruction algorithms raise critical challenges in massive data processing for medical diagnosis. Here, the authors propose a solution to significantly accelerate medical image reconstruction on memristor arrays, showing 79× faster speed and 153× higher energy efficiency than a state-of-the-art graphics processing unit.
Analog in-memory computing promises efficient DNN inference acceleration but suffers from nonidealities. Here, hardware-aware training methods are improved so that various larger DNNs of diverse topologies nevertheless achieve iso-accuracy.
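A common software-level idea behind hardware-aware training is to inject device-like noise into the weights during every forward pass so that the learned solution tolerates programming errors at inference time. The toy example below applies this idea to a linear classifier with a Gaussian weight-perturbation model; the noise model, its magnitude, and the training setup are illustrative assumptions rather than the methods used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def forward(w, x, noise_std=0.0):
    """Linear scorer whose weights are perturbed as if read from noisy
    analog devices. The Gaussian perturbation model and its magnitude
    are illustrative assumptions, not the paper's device model."""
    w_noisy = w + noise_std * rng.standard_normal(w.shape)
    return x @ w_noisy

# Toy hardware-aware training: learn y = sign(x . w_true) with noise injected
# into every forward pass so the learned weights tolerate programming errors.
w_true = np.array([1.5, -2.0, 0.5])
X = rng.standard_normal((500, 3))
y = np.sign(X @ w_true)

w, lr, noise_std = np.zeros(3), 0.1, 0.2
for _ in range(200):
    scores = forward(w, X, noise_std)            # noisy forward pass
    grad = -(y * (y * scores < 1)) @ X / len(X)  # hinge-loss (sub)gradient
    w -= lr * grad

acc = np.mean(np.sign(forward(w, X, noise_std)) == y)
print("accuracy under weight noise:", round(float(acc), 3))
```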
Designing efficient selector devices remains a challenge. Here, the authors propose a CuAg alloy-based selector with excellent ON/OFF ratio and thermal stability. It can effectively suppress the sneak-path current in 1S1R arrays, making it suitable for storage class memory and neuromorphic computing applications.
Sensing and processing UV light is essential for advanced artificial visual perception systems. Here, the authors report a controllable UV-ultrasensitive neuromorphic vision sensor using organic phototransistors to integrate sensing, memory and processing functions, and perform static image and dynamic movie recognition.
Dynamic machine vision requires recognizing the past and predicting the future of moving objects. Here, the authors demonstrate retinomorphic photomemristor networks with inherent dynamic memory for accurate motion recognition and prediction.
Designing an infrared machine vision system that can efficiently perceive, convert, and process a massive amount of data remains a challenge. Here, the authors present a retina-inspired 2D optoelectronic device based on van der Waals heterostructure that can perform the data perception and spike-encoding simultaneously for night vision, sensing, spectroscopy, and free-space communications.
A big challenge for artificial intelligence is to gain the ability to learn from experience like biological systems. Here, Bianchi et al. propose a hardware neural network based on resistive-switching synaptic arrays that dynamically adapts to the environment for autonomous exploration.
Designing scaled electronic devices for neuromorphic applications remains a challenge. Here, Zhang et al. develop an artificial molecular synapse based on a self-assembled peptide monolayer whose conductance can be dynamically modulated and used for waveform recognition.
Hardware-based neural networks can provide a significant breakthrough in artificial intelligence. Here, the authors demonstrate an integrated 3-dimensional ferroelectric array with a layer-by-layer computation for area-efficient neural networks.
Designing bio-inspired artificial neurons within a single device is challenging. Here, the authors demonstrate a spintronic neuron with leaky-integrate-and-fire and self-reset characteristics, pointing toward a holistic, all-spin implementation of neuromorphic computing hardware.
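For context, the leaky-integrate-and-fire behavior referred to here can be summarized in a few lines of discrete-time pseudocode: the membrane state leaks, integrates the input current, emits a spike when it crosses a threshold, and resets itself. The parameters below are arbitrary textbook values and do not model the spintronic device.

```python
def lif_neuron(current, leak=0.9, threshold=1.0, v_reset=0.0):
    """Discrete-time leaky-integrate-and-fire neuron with self-reset.
    Generic textbook behavior for illustration; parameters are arbitrary
    and not those of the spintronic device in the paper."""
    v, spikes = 0.0, []
    for i in current:
        v = leak * v + i              # leaky integration of the input current
        if v >= threshold:            # fire when the membrane crosses threshold
            spikes.append(1)
            v = v_reset               # self-reset back to rest
        else:
            spikes.append(0)
    return spikes

print(lif_neuron([0.3, 0.4, 0.5, 0.1, 0.6, 0.7, 0.2]))
```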
Inspired by multisensory cue integration in the macaque brain for spatial perception, the authors develop a neuromorphic motion-cognition nerve that achieves cross-modal perceptual enhancement for robotics and wearable applications.
Analog–digital hybrid computing based on SnS2 memtransistors is demonstrated for low-power sensor fusion in drones, where a drone with hybrid computing performs sensor fusion with higher energy efficiency than one with only a digital processor.
A highly efficient hardware element capable of sensing and encoding multiple physical signals is still lacking. Here, the authors report a spike-based neuromorphic perception system consisting of tunable and highly uniform artificial sensory neurons based on epitaxial VO2 capable of hand gesture classification.
Designing energy-efficient computing solutions for implementing AI algorithms on edge devices remains a challenge. Yang et al. propose a decentralized brain-inspired computing method enabling multiple edge devices to collaboratively train a global model without a fixed central coordinator.
Designing biocompatible and flexible electronic devices for neuromorphic applications remains a challenge. Here, Kireev et al. propose graphene-based artificial synaptic transistors with low-energy switching, long-term potentiation, and metaplasticity for future bio-interfaced neural networks.
Designing an efficient multi-agent hardware system to solve large-scale computational problems through high-parallelism processing with nonlinear interactions remains a challenge. Here, the authors demonstrate that a multi-agent hardware system deploying distributed Ag nanoclusters as physical agents enables parallel, complex computing.
Layered heterostructures are promising photosensitive materials for advanced optoelectronics. Here, the authors introduce an interfacial co-assembly method to construct large-scale perylene/graphene oxide (GO) heterobilayers for broadband photoreception and efficient neuromorphics.
Designing in-sensor computing systems remains a challenge. Here, the authors demonstrate artificial optical neurons based on the in-sensor computing architecture that fuses sensory and computing nodes into a single platform capable of reducing data transfer time and energy for encoding and classification.
Designing a computing scheme to solve complex tasks as the big-data field proliferates remains a challenge. Here, the authors present probabilistic bit generation hardware built using the random nature of CuxTe1−x/HfO2/Pt memristors, capable of performing logic gates in an invertible mode and showing expandability to complex logic circuits.
Memory-augmented neural networks for lifelong on-device learning are bottlenecked by limited bandwidth in conventional hardware. Here, the authors demonstrate an efficient in-memristor realization with close-to-software accuracy, supported by hashing and similarity search in crossbars.
Designing efficient Bayesian neural networks remains a challenge. Here, the authors use cycle-to-cycle variation in the programming of 2D memtransistors to realize Gaussian random number generator-based synapses, and combine them with a complementary 2D memtransistor-based tanh function to implement a Bayesian neural network.
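Conceptually, such a Bayesian layer treats each synapse as a Gaussian random variable and averages predictions over repeated weight samples to obtain both a mean output and an uncertainty estimate. The sketch below emulates this in software with a pseudorandom generator standing in for device programming variability; the tanh activation, layer shape, and noise level are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

def bayesian_layer(x, w_mean, w_std, n_samples=50):
    """Monte-Carlo forward pass of a layer whose synapses are Gaussian
    random variables (drawn in software here; in hardware they would come
    from device programming variability). tanh is the assumed activation."""
    outs = []
    for _ in range(n_samples):
        w = w_mean + w_std * rng.standard_normal(w_mean.shape)  # sample synapses
        outs.append(np.tanh(x @ w))
    outs = np.stack(outs)
    return outs.mean(axis=0), outs.std(axis=0)   # prediction and uncertainty

x = np.array([0.5, -1.0, 2.0])
w_mean = rng.standard_normal((3, 2))
w_std = 0.1 * np.ones((3, 2))
mean, std = bayesian_layer(x, w_mean, w_std)
print("mean:", np.round(mean, 3), "std:", np.round(std, 3))
```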
The separation of sensor, memory, and processor in a recognition system deteriorates the latency of decision-making and increases the overall computing power. Here, Zhang et al. develop a photoelectronic reservoir computing system, consisting of DUV photo-synapses and a memristor array, to detect and recognize the latent fingerprint with in-sensor and parallel in-memory computing.
Magnetic skyrmions, owing to their strong nonlinearity and multiscale dynamics, are promising for implementing reservoir computing. Here, the authors experimentally demonstrate skyrmion-based spatially multiplexed reservoir computing able to perform Boolean logic operations, using thermally and current-driven dynamics of spin structures.
Retrieving the pupil phase of an optical beam path is a central problem for imaging systems across scales. The authors use diffractive neural networks to directly extract pupil phase information with a single, compact optoelectronic device.
Existing memristors cannot be reconfigured to meet the diverse switching requirements of various computing frameworks, limiting their universality. Here, the authors present a nanocrystal memristor that can be reconfigured on demand to address these limitations.
The integration of artificial neuromorphic devices with biological systems plays a fundamental role for future brain-machine interfaces, prosthetics, and intelligent soft robotics. Harikesh et al. demonstrate all-printed organic electrochemical neurons on a Venus flytrap, which is controlled to open and close.
Synaptic plasticity and neuronal intrinsic plasticity are both involved in the learning process of hardware artificial neural networks. Here, Lee et al. integrate a threshold switch and a phase change memory in a single device, which emulates biological synaptic and intrinsic plasticity simultaneously.
Neuromorphic computing requires the realization of high-density and reliable random-access memories. Here, Thean et al. demonstrate wafer-scale integration of solution-processed 2D MoS2 memristor arrays which show long endurance, long memory retention, low device variations, and high on/off ratio.
Designing energy-efficient, uniform and reliable memristive devices for neuromorphic computing remains a challenge. By leveraging the self-rectifying behavior arising from a gradual oxygen-concentration profile in titanium dioxide, Choi et al. develop a transistor-free 1R crossbar array with good uniformity and high yield.
Device-level complexity represents a big shortcoming for the hardware realization of analogue memory-based deep neural networks. Mackin et al. report a generalized computational framework, translating software-trained weights into analogue hardware weights, to minimise inference accuracy degradation.
Conventional filamentary memristors are limited in dynamics by the high electric-field dependence of the conductive filament. Here, Jeong et al. present a method that creates a cluster-type memristor, enabling a large conductance range and long data retention.
Silicon is an abundant element on earth and is perfectly compatible with the well-established CMOS processing industry. Here, Sun et al. demonstrate multifunctional neuromorphic devices based on silicon nanosheet stacks, bringing silicon back as a potential material for neuromorphic devices.
Intelligent materials change their properties under external stimuli, integrating functionalities at the matter level. Here, Guo et al. report an artificial vision system based on the memory effect produced by sliding ferroelectricity in multiwalled tungsten disulfide nanotubes.
The challenge of high-speed and high-accuracy coherent photonic neurons for deep learning applications lies in solving noise-related issues. Here, Mourgias-Alexandris et al. address this problem by introducing a noise-resilient hardware architecture and a deep-learning training platform.
One gap between neuro-inspired computing and its applications lies in the intrinsic variability of the devices. Here, Payvand et al. suggest a technologically plausible co-design of the hardware architecture that takes into account and exploits the physics behind memristors.
Ising machines are accelerators for computing difficult optimization problems. In this work, Böhm et al. demonstrate a method that extends their use to perform statistical sampling and machine learning orders of magnitude faster than digital computers.
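As a point of reference, the statistical-sampling mode of an Ising machine corresponds to drawing spin configurations from the Boltzmann distribution of a coupling matrix. The sketch below does this in plain software with Metropolis updates on a small ferromagnetic ring; it is a slow digital stand-in meant only to illustrate the sampling task that the optoelectronic hardware accelerates.

```python
import numpy as np

rng = np.random.default_rng(2)

def ising_sample(J, beta=1.0, sweeps=2000):
    """Metropolis sampling from the Boltzmann distribution of an Ising
    model with coupling matrix J — a plain software stand-in for the
    statistical-sampling mode of an Ising machine."""
    n = J.shape[0]
    s = rng.choice([-1, 1], size=n)
    samples = []
    for _ in range(sweeps):
        for i in range(n):
            dE = 2 * s[i] * (J[i] @ s)          # energy change of flipping spin i
            if dE <= 0 or rng.random() < np.exp(-beta * dE):
                s[i] = -s[i]
        samples.append(s.copy())
    return np.array(samples)

# Toy ferromagnetic ring of 8 spins
n = 8
J = np.zeros((n, n))
for i in range(n):
    J[i, (i + 1) % n] = J[(i + 1) % n, i] = 1.0
samples = ising_sample(J, beta=0.8)
print("mean |magnetization|:", round(float(np.abs(samples.mean(axis=1)).mean()), 3))
```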
Large-scale silicon-based integrated artificial neural networks lack silicon-integrated optical neurons. Here, Yu et al. report a self-monitored all-optical neural network enabled by nonlinear germanium-silicon photodiodes, making the photonic neural network more versatile and compact.
Developing molecular electronics is challenged by integrating fragile organic molecules into modern micro/nanoelectronics based on inorganic semiconductors. Li et al. apply rolled-up nanotechnology to assemble on-chip molecular devices, which can be switched between photodiodes and volatile memristors.
Bioinspired neuromorphic vision components are highly desired for the emerging in-sensor computing technology. Here, Ge et al. develop an array of optoelectronic synapses capable of memorizing and processing ultraviolet images facilitated by photo-induced non-volatile phase transition in VO2 films.
Some types of machine learning rely on the interaction between multiple signals, which requires new devices for efficient implementation. Here, Sarwat et al. demonstrate a memristor that is both optically and electronically active, enabling computational models such as three-factor learning.
Spin-torque nano-oscillators have sparked interest for their potential in neuromorphic computing, but concrete demonstrations are limited. Here, Romera et al. show how spin-torque nano-oscillators can mutually synchronize and recognize temporal patterns, much like neurons, illustrating their potential for neuromorphic computing.
The conventional von Neumann computing architecture is ill suited to data-intensive tasks, as data must be repeatedly moved between the separate processing and memory units. Here, Seo et al. propose a CMOS-compatible, highly linear gate-injection field-effect transistor in which data can be both stored and processed.
Selective attention is an efficient processing strategy to allocate computational resources to pivotal optical information. Here, the authors propose bionic vision hardware to emulate this behavior, showing potential in image classification.
Computational properties of neuronal networks have been applied to computing systems using simplified models comprising repeated connected nodes. Here the authors create layered assemblies of genetically encoded devices that perform non-binary logic computation and signal processing using combinatorial promoters and feedback regulation.
Designing a full-memristive circuit for different algorithms remains a challenge. Here, the authors propose a recirculated logic operation scheme using memristive hardware and 2D transistors for cellular automata, supporting multiple algorithms with a 79-fold cost reduction compared to an FPGA.
Multimodal cognitive computing is an important research topic in AI. Here, the authors propose an efficient sensory memory processing system that can process sensory information and generate synapse-like, multiwavelength light-emitting output for efficient multimodal information recognition.
Designing efficient photonic neuromorphic systems remains a challenge. Here, the authors develop a new class of memristor sensitive to its dual electro-optical history, obtained by exploiting electrochemical, photovoltaic and photo-assisted oxygen-ion-motion effects at a high-temperature superconductor/semiconductor interface.
Designing efficient neuromorphic systems remains a challenge. Here, the authors develop a system based on multi-terminal floating-gate memristors that mimics the temporal and spatial summation of multi-neuron connections through leaky-integrate-and-fire functionality, achieving high learning accuracy on the unlabeled MNIST handwritten-digit dataset.
Artificial spin ices consist of small magnets arranged in a lattice. Their simplicity belies their rich behaviour; they have allowed for the investigation of effective magnetic monopoles, and more recently have been suggested as promising platforms for neuromorphic computing. For this latter function, efficient readout of the artificial spin ice state is critical. In this manuscript, Hu et al. succeed in distinguishing artificial spin ice states using simple transport measurements.
Future intelligent vision systems need efficient capacitor-free spiking photoreceptors for color perception. Here, Wang et al. report a metal oxide-based vertically integrated spiking cone photoreceptor array that transduces light into spike trains with a power consumption of less than 400 picowatts.
Arranging nanomagnets into a two-dimensional lattice provides access to a rich landscape of magnetic behaviours. Controlling the interactions between the nanomagnets after fabrication is a challenge. Here, Yun et al. demonstrate all-electrical control of magnetic couplings in a two-dimensional array of nanomagnets using ionic gating.
Molecular electronics holds promise for building memristors at the nanoscale for in-memory computing. Li et al. design tailored foldamers with furan-benzene and thiophene-benzene stacking to achieve voltage-triggered quantum interference switching for potential random number generator applications.
Data-centric applications benefit from dense, low-power memory. Here the authors use a combination of chalcogenide superlattices and nanocomposites to achieve low switching voltage (0.7 V) and fast speed (40 ns) in 40-nm-scale phase-change memory.
Optoelectronic neural networks are a promising avenue in AI computing for parallelization, power efficiency, and speed. Here, the authors present a dual-neuron optical-artificial learning approach for training large-scale diffractive neural networks, achieving VGG-level performance on ImageNet in simulation with a network that is 10 times larger than existing ones.
Designing efficient 3D artificial neural network chips remains a challenge. Here, the authors report an M3D-LIME chip with monolithic three-dimensional integration of a hybrid memory architecture based on resistive random-access memory, which achieves a high classification accuracy of 96% in a one-shot learning task while exhibiting 18.3× higher energy efficiency than a GPU.
Physical reservoirs that contain intrinsic nonlinear dynamic processes could serve as next-generation dynamic computing systems. Here, Liu et al. introduce an interface-type transistor based on oxygen-ion dynamics to perform reservoir computing.
Designing an efficient activation function for optical neural networks remains a challenge. Here, the authors demonstrate modulator-detector-in-one graphene/silicon heterojunction ring resonators, enabling on-chip reconfigurable activation function devices with phase activation capability for optical neural networks.
Probabilistic computing has recently emerged as a promising energy-based computing approach for solving non-deterministic polynomial-time-hard (NP-hard) problems. Here the authors develop a novel p-bit unit, using an NbOx volatile memristor, in which a self-clocking oscillator harnesses a noise-induced metal-insulator transition, enabling high-performance probabilistic computing.
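For orientation, a probabilistic bit (p-bit) is a binary element whose output fluctuates randomly with a bias set by its input, so that its time average follows a sigmoidal (tanh) law. The snippet below emulates a single p-bit with a pseudorandom number generator standing in for the noise-driven NbOx oscillator; the biasing law shown is the standard p-bit model, not device-extracted behavior.

```python
import numpy as np

rng = np.random.default_rng(3)

def p_bit(activation):
    """Probabilistic bit: outputs +1 with probability sigmoid(I), else -1.
    In hardware this tunable randomness would come from a noise-driven
    oscillator; here it is emulated with a pseudorandom number generator."""
    return 1 if rng.random() < 1.0 / (1.0 + np.exp(-activation)) else -1

# Sweep the input and check that the time-averaged output follows tanh(I/2)
for I in (-4.0, -1.0, 0.0, 1.0, 4.0):
    avg = np.mean([p_bit(I) for _ in range(5000)])
    print(f"I={I:+.1f}  <m>={avg:+.3f}  tanh(I/2)={np.tanh(I / 2):+.3f}")
```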
Bayesian networks are gaining importance in safety-critical applications. The authors conduct experiments with a memristor-based Bayesian network trained by variational inference with a technological loss, achieving accurate heartbeat classification and prediction certainty.
Designing high-performance organic neuromorphic devices remains a challenge. Here, Liu et al. report an organic synapse based on the semicrystalline polymer PBFCL10, with a device dimension of 50 nm and an integration size of 1 kb, and a mixed-signal neuromorphic hardware system based on the organic neuromatrix and an FPGA controller for decision-making tasks.
Layered thio- and seleno-phosphate ferroelectrics show promise for next-generation memory but have thermal stability issues. Using the electric field-driven phase transition in antiferroelectric CuCrP2S6, the authors introduce a robust memristor, emphasizing the potential of van der Waals antiferroelectrics in advanced neuromorphic computing.
Neural networks are powerful tools for solving complex problems, but finding the right network topology for a given task remains an open question. Here, the authors propose a bio-inspired artificial neural network hardware able to self-adapt to solve new complex tasks, by autonomously connecting nodes using electropolymerization.
Developing efficient reservoir computing hardware that combines optically excited acoustic and spin waves with high spatial density remains a challenge. In this work, the authors propose a design capable of recognizing visual shapes drawn by a laser within remarkably confined spaces, down to 10 square microns.
In-sensor and near-sensor computing are emerging as the next-generation computing paradigm, for high-density and low-power sensory processing. Here, the authors report a fully hardware-implemented artificial visual system for versatile image processing based on multimodal-multifunctional optoelectronic resistive memory devices with optical and electrical resistive switching modes.
Designing memristor-integrated passive crossbar arrays to accelerate artificial neural networks with high reliability remains a challenge. Here, the authors propose a self-rectifying resistive switching device incorporated into a crossbar array with a density of 1 kb, whose operational performance is assessed in terms of defective-cell proportion, reading margin, and selection functionality.
Efficient, high-density crossbar arrays are in high demand for many artificial intelligence applications. Here, the authors propose a two-terminal ferroelectric fin diode non-volatile memory in which a ferroelectric capacitor and a fin-like semiconductor channel are combined to share both top and bottom electrodes, offering high performance and an easy fabrication process.
Designing efficient artificial neural network circuit architectures for optimal information routing remains a challenge. Here, the authors propose "Mosaic", the first demonstration of on-chip in-memory spike routing using memristors, optimized for the small-world graphs prevalent in mammalian brains, offering an orders-of-magnitude reduction in routing events compared to current approaches.
Frequency converters for wireless internet of things applications typically require separate circuits for different functions, causing energy and performance inefficiencies. Using an epitaxially grown VO2 memristor array, Liu et al. present a frequency converter with in-situ frequency synthesis and mix functionality.
Dealing with the explosive growth of diverse image data in the era of big data poses challenges for storage. Feng et al. propose a memristor-based near-storage in-memory processing system to boost the energy and storage efficiency.
Existing neuromorphic hardware, focusing mainly on shallow-reservoir computing, is challenged in providing adequate spatial and temporal scales characteristic for effective computing. Here, Gao et al. report an ultra-short channel organic neuromorphic vertical transistor with distributed reservoir states.
Filamentary RRAM technologies suffer from variations and noise, leading to computational accuracy loss and increased energy consumption. Park et al. create a trilayer metal-oxide bulk-switching RRAM technology without filament formation and show edge computing for an autonomous navigation task.
Probabilistic inference hardware prevents overconfidence. Lee et al. report a Gaussian-like memory transistor using a p-n junction coupled with a separate floating gate, offering precise control of the Gaussian outputs, simplified circuit design, and low power consumption for inference computing.
A wide reservoir computing system is an advanced architecture, yet its hardware implementation remains elusive due to the lack of a 3D architecture framework. Choi et al. demonstrate such hardware made of a multilayered 3D stacked memristive crossbar array for efficient learning and forecasting.
The authors present DenRAM, a hardware realization of a spiking neural network with a dendritic architecture. It utilizes memristive devices to implement both delay and weight parameters, enhancing low-power signal processing with reduced memory use.
Mimicking the high-level abstraction of the brain to achieve energy advantages is a fundamental issue in neuromorphic computing. Here, the authors fabricate an asynchronous chip and demonstrate a high-accuracy neuromorphic system with a power consumption of 0.7 mW.
Processing spatiotemporal information calls for the construction of hardware systems with computing capability comparable to biological neural networks. Inspired by the human cochlea, Milozzi et al. develop neuromorphic circuits for memristive tonotopic mapping via volatile resistive switching memory devices.