Extended Data Fig. 6: Schematic diagram of the experimental setup.

From: Deep learning incorporating biologically inspired neural dynamics and in-memory computing

a, The network architecture is designed using SNUs and standard tools from the TensorFlow framework. Training proceeds as in any other network: a loss function is defined and an optimizer is configured to minimize it using gradient descent. b, A wrapper provides functions to read and write the weights in the same way as any regular TensorFlow variable. These functions manage the communication with the hardware through a Python-MATLAB interface that translates the read or write requests into FPGA commands and converts the conductance values obtained from the FPGA board back into TensorFlow values. Writing can be performed without rereading the updated values from the hardware: steps 3 and 4 are optional. c, The FPGA board interacts with the prototype chip holding the PCM devices (not to scale): indirectly, through the Analog Front-End Board, to provide the power supply and clock signals and to generate the current pulses; and directly, to control the chip operation and to read conductance values from the on-chip analog-to-digital converter. d, An inference example: information about a chord propagates through the network as spikes to the sigmoidal output layer, which generates the probabilities of the next notes. e, At each layer, the weights of the activated 2-PCM synapses are constructed from f, the conductance values returned by the on-chip analog-to-digital converter.
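For panel a, the following is a minimal sketch of how an SNU-style recurrent cell can be expressed as an ordinary Keras layer and trained with a standard loss and optimizer, as the caption describes. The state update s_t = g(W x_t + l(τ) · s_{t−1} ⊙ (1 − y_{t−1})), y_t = h(s_t + b) follows the SNU formulation; the layer sizes (an 88-note piano-roll input, 128 units), the leak factor and the sigmoidal surrogate output are illustrative assumptions, not the paper's exact configuration.

```python
import tensorflow as tf

class SNUCell(tf.keras.layers.Layer):
    """Sketch of a spiking neural unit (SNU) as a custom Keras RNN cell."""

    def __init__(self, units, decay=0.8, **kwargs):
        super().__init__(**kwargs)
        self.units = units
        self.decay = decay                 # l(tau): membrane leak factor (assumed value)
        self.state_size = [units, units]   # membrane potential s, previous output y

    def build(self, input_shape):
        self.w = self.add_weight(name="w", shape=(input_shape[-1], self.units))
        self.b = self.add_weight(name="b", shape=(self.units,), initializer="zeros")

    def call(self, inputs, states):
        s_prev, y_prev = states
        # Integrate the input; reset the membrane wherever the cell spiked last step.
        s = tf.nn.relu(tf.matmul(inputs, self.w)
                       + self.decay * s_prev * (1.0 - y_prev))
        # Sigmoidal (soft) spike output so gradients can flow during training.
        y = tf.sigmoid(s + self.b)
        return y, [s, y]

# Standard TensorFlow training setup: define a loss, configure an optimizer.
model = tf.keras.Sequential([
    tf.keras.layers.RNN(SNUCell(128), input_shape=(None, 88)),
    tf.keras.layers.Dense(88, activation="sigmoid"),  # next-note probabilities
])
model.compile(optimizer="adam", loss="binary_crossentropy")
```

Calling model(x) on a batch of spike sequences then mirrors panel d: spikes propagate through the SNU layer, and the sigmoidal output layer produces the probabilities of the next notes.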
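Panel b's wrapper can be pictured as below. This is a hedged sketch rather than the authors' code: the engine methods read_conductances and program_conductances are hypothetical placeholders for the Python-MATLAB interface that translates read/write requests into FPGA commands, and the linear conductance-to-weight scaling is an assumption.

```python
import numpy as np
import tensorflow as tf

class HardwareWeights:
    """Sketch: expose PCM hardware weights like a regular TensorFlow variable."""

    def __init__(self, engine, shape, scale=1.0):
        self.engine = engine   # e.g. a matlab.engine session (injected dependency)
        self.scale = scale     # assumed conductance-to-weight scaling
        self.var = tf.Variable(tf.zeros(shape), trainable=True)

    def read(self):
        # Request conductances from the FPGA via MATLAB and convert them
        # into the TensorFlow variable (hypothetical MATLAB-side call).
        g = np.asarray(self.engine.read_conductances())
        self.var.assign(self.scale * g)
        return self.var

    def write(self, reread=False):
        # Translate the desired weights into programming requests; the
        # weight-to-conductance mapping lives on the MATLAB/FPGA side
        # (hypothetical call).
        self.engine.program_conductances(self.var.numpy() / self.scale)
        # Rereading the programmed values (steps 3 and 4) is optional, as
        # the caption notes: only needed when later computation must use
        # the actual, noisy hardware state.
        if reread:
            self.read()
```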
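For panels e and f, a 2-PCM synapse is commonly realized as a differential pair whose effective weight is the scaled difference of two device conductances read back through the on-chip analog-to-digital converter; the scale factor beta below is an illustrative assumption.

```python
import numpy as np

def weights_from_conductances(g_plus, g_minus, beta=1.0):
    """Construct synaptic weights from ADC conductance readouts of a 2-PCM pair."""
    return beta * (np.asarray(g_plus) - np.asarray(g_minus))

# Example: two synapses read back from the on-chip ADC (conductances in uS).
w = weights_from_conductances([12.0, 3.5], [4.0, 3.0])  # -> array([8. , 0.5])
```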
