The search for new energy-efficient electronic hardware for machine learning and artificial intelligence (AI) tasks is central to much of electronics research today. One such approach involves the use of two-terminal memory devices known as memristors. These devices can provide both information processing and memory in a single unit, and can be used to create different forms of neuromorphic computing. When built into large-scale crossbar arrays, they can, for instance, be used to implement parallel matrix–vector multiplication operations — a core computation in many AI models.
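As a rough illustration of why crossbars map naturally onto this operation, the sketch below models an idealized array in which each device's conductance stores one matrix weight: applying the input vector as voltages and summing the resulting currents yields a matrix–vector product in a single parallel step. The numbers are placeholders, and real arrays face non-idealities (wire resistance, conductance drift) that this abstraction ignores.

```python
import numpy as np

# Idealized memristor crossbar: the conductance G[i, j] (in siemens) of the
# device at row i, column j stores one matrix weight. Values are illustrative.
G = np.array([[1.0e-4, 2.0e-4, 0.5e-4],
              [3.0e-4, 1.5e-4, 2.5e-4]])

# Input vector encoded as voltages applied to the columns (in volts).
v = np.array([0.2, 0.1, 0.3])

# Ohm's law gives a current G[i, j] * v[j] through each device, and Kirchhoff's
# current law sums these currents along each row wire, so the row currents
# read out the matrix-vector product G @ v in one parallel step.
i_out = G @ v
print(i_out)  # output currents, one per row
```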

Photograph of the packaged cellular neural network chip developed by Qiangfei Xia and colleagues. Credit: Ali Abdel-Maksoud and Qiangfei Xia, University of Massachusetts Amherst

While questions remain about the endurance and retention of memristors [1], as well as the industrial viability of the technology [2,3], the capabilities and applications of the devices continue to expand. And in this issue of Nature Electronics, we highlight some of the latest advances.

We begin with work on cellular neural networks. The cellular neural network is a computing architecture that resembles the structure of the human retina, and could provide massively parallel analogue computation, which is valuable in applications such as high-speed image processing. However, hardware implementations of such networks are typically bulky and power hungry. Qiangfei Xia and colleagues now show that low-power cellular neural networks can be created with memristors.

The researchers — who are based at the University of Massachusetts, Amherst, Rensselaer Polytechnic Institute, Millburn High School, and the University of Southern California, Los Angeles — first create a Python-based digital twin for hardware control and network simulation. They then build a version of the network with transistors, followed by one with multilevel non-volatile memristors, which they show can be used for image processing tasks such as edge and horizontal line detection. (See also the Research Briefing on the work.)
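As a loose software analogue of the kind of image processing mentioned above, the sketch below applies generic 3×3 edge and horizontal-line templates by convolution; in a cellular neural network, such weights would sit in the conductances of devices coupling each cell to its neighbours. The kernels are illustrative assumptions, not the templates or the digital twin used by the authors.

```python
import numpy as np
from scipy.signal import convolve2d

# Generic 3x3 templates; these are textbook-style examples, not the
# templates implemented on the reported chip.
edge_template = np.array([[-1, -1, -1],
                          [-1,  8, -1],
                          [-1, -1, -1]])
horizontal_line_template = np.array([[-1, -1, -1],
                                     [ 2,  2,  2],
                                     [-1, -1, -1]])

image = np.random.rand(8, 8)  # toy grayscale image

# Each output pixel is a weighted sum of its neighbourhood, the same
# local operation a cellular neural network performs in parallel.
edges = convolve2d(image, edge_template, mode='same')
horizontal_lines = convolve2d(image, horizontal_line_template, mode='same')
```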

While the potential of crossbar arrays of memristors to perform parallel matrix–vector multiplication has been demonstrated, inverse matrix–vector multiplication, which is a harder task for conventional computers, has remained challenging. In another Article in this issue, Piergiulio Mannocci, Daniele Ielmini and colleagues report an in-memory computing accelerator for inverse matrix–vector multiplication.
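To make the distinction concrete, the sketch below contrasts the two operations on a toy matrix: forward matrix–vector multiplication computes y = Ax directly, whereas inverse matrix–vector multiplication recovers x from y, which on a conventional computer amounts to solving a linear system and typically requires a costly matrix factorization.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4)) + 4 * np.eye(4)  # toy, well-conditioned matrix
x = rng.standard_normal(4)

# Forward matrix-vector multiplication: O(n^2) multiply-accumulate
# operations, and a single read step on a memristor crossbar.
y = A @ x

# Inverse matrix-vector multiplication: given y and A, recover x, i.e.
# solve the linear system A x = y. On a conventional computer this
# generally requires a factorization costing O(n^3) operations.
x_recovered = np.linalg.solve(A, y)
assert np.allclose(x_recovered, x)
```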

The researchers — who are based at Politecnico di Milano, Hewlett Packard Labs, and Peking University — use static random-access memory and fabricate their chip in 90-nm complementary metal–oxide–semiconductor (CMOS) technology. They show that the chip can be used to find solutions to systems of differential equations by recursive block inversion, as well as for sounding rocket trajectory tracking via a Kalman filter and for the acceleration of inverse kinematics in robotic arms.
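As context for the Kalman-filter demonstration, the generic update step below shows where a matrix inversion enters the computation of the Kalman gain; this is the kind of inverse operation that an in-memory accelerator could take on. The matrices are placeholders, and the code is not the tracking model used in the Article.

```python
import numpy as np

def kalman_update(x, P, z, H, R):
    """One correction step of a generic linear Kalman filter.

    x: state estimate, P: state covariance, z: measurement,
    H: measurement matrix, R: measurement noise covariance.
    """
    S = H @ P @ H.T + R                    # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)         # Kalman gain: requires inverting S
    x_new = x + K @ (z - H @ x)            # corrected state estimate
    P_new = (np.eye(len(x)) - K @ H) @ P   # corrected covariance
    return x_new, P_new
```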

Various neuromorphic systems with spike-timing-dependent plasticity have been developed. Integrating synaptic fatigue dynamics, which resemble biological short-term plasticity, into such systems could improve their capabilities, but hardware implementation is non-trivial. In a further Article in this issue, Yuchao Yang and colleagues report the development of an interfacial dynamic memristor that offers high endurance and cycle-to-cycle uniformity, and can exhibit short-term synaptic fatigue plasticity.
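For readers unfamiliar with the combination, the sketch below pairs a conventional pair-based spike-timing-dependent plasticity rule with an illustrative fatigue variable that suppresses further weight change after repeated activity and then recovers. The parameters and the fatigue dynamics are assumptions for illustration only, not the device model reported in the Article.

```python
import numpy as np

# Pair-based STDP with an illustrative short-term fatigue variable.
A_PLUS, A_MINUS = 0.01, 0.012   # potentiation / depression amplitudes (assumed)
TAU = 20.0                      # STDP time constant (ms, assumed)
TAU_FATIGUE = 100.0             # recovery time constant of fatigue (ms, assumed)

def stdp_update(delta_t, fatigue):
    """Weight change for a spike-time difference delta_t = t_post - t_pre."""
    if delta_t > 0:
        dw = A_PLUS * np.exp(-delta_t / TAU)    # pre before post: potentiation
    else:
        dw = -A_MINUS * np.exp(delta_t / TAU)   # post before pre: depression
    dw *= (1.0 - fatigue)                       # fatigue scales down the update
    fatigue = min(1.0, fatigue + 0.2)           # each pairing adds fatigue
    return dw, fatigue

def recover(fatigue, dt):
    """Exponential recovery of the fatigue variable between pairings."""
    return fatigue * np.exp(-dt / TAU_FATIGUE)
```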

The researchers — who are based at Peking University, the Chinese Academy of Sciences in Beijing, and the Chinese Institute for Brain Research in Beijing — couple the interfacial dynamic memristor to a one-transistor–one-non-volatile memristor cell with tunable spike-timing-dependent plasticity dynamics. This creates a hybrid synaptic element with fatigue spike-timing-dependent plasticity. Using these elements, the researchers build a spiking neural network circuit that can be used in speech recognition tasks, where it outperforms conventional spike-timing-dependent plasticity approaches.

Finally, there is work on human–machine interfaces, where Yuchao Yang and colleagues report a memristor-based single-spike coding system. The approach relies on vanadium oxide memristors for the accurate encoding of information as single spikes, and on hafnium oxide/tantalum oxide memristors with programming strategies that limit conductance drift.
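The contrast with rate coding can be sketched in a few lines: rate coding represents a value with the number of spikes emitted in a time window, whereas single-spike (time-to-first-spike) coding represents it with the latency of one spike, so far fewer spike events, and hence less energy, are needed per value. The encoder below is a simple abstraction and not the scheme implemented with the authors' vanadium oxide devices.

```python
import numpy as np

def rate_code(value, window_ms=100.0, max_rate_hz=200.0, rng=None):
    """Encode a value in [0, 1] as a Poisson spike train: many spikes per window."""
    rng = rng or np.random.default_rng(0)
    n_expected = value * max_rate_hz * window_ms / 1000.0
    n_spikes = rng.poisson(n_expected)
    return np.sort(rng.uniform(0.0, window_ms, n_spikes))

def single_spike_code(value, window_ms=100.0):
    """Encode the same value as the latency of one spike: larger value fires earlier."""
    return np.array([(1.0 - value) * window_ms])

# A strong input produces many spikes under rate coding but exactly one,
# earlier, spike under single-spike coding.
print(len(rate_code(0.8)), len(single_spike_code(0.8)))
```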

The researchers — who are based at Peking University, Southwest University in Chongqing, and the Chinese Institute for Brain Research in Beijing — show that the system can be used for real-time vehicle control from surface electromyography signals. They also show, via simulations, that it uses around 38 times less energy and has around 6.4 times lower latency than a conventional rate coding system.