
Fig. 1: Limitation of current In-memory computing (IMC) for Recurrent Neural Networks and our proposed solution.

From: Efficient nonlinear function approximation in analog resistive crossbars for recurrent neural networks


a A survey of DNN accelerators shows the improvement in energy efficiency offered by IMC over digital architectures. However, the improvement does not extend to recurrent neural networks (RNNs) such as the LSTM, and a gap in energy efficiency remains between RNNs and feedforward architectures. Details of the surveyed papers are available in ref. 66. b Architecture of an LSTM cell, showing a large number of nonlinear (NL) activations such as the sigmoid and hyperbolic tangent, which are absent in feedforward architectures that mostly use simple nonlinearities like the rectified linear unit (ReLU). c Digital implementation of the NL operations creates a bottleneck in latency and energy efficiency, since the linear operations are highly efficient in time and energy owing to the inherent parallelism of IMC. For an LSTM layer with 512 hidden units and k = 32 parallel digital processors for the NL operations, the NL operations still take 2–5× longer to execute because each NL activation requires multiple clock cycles (Ncyc). d Our proposed solution creates an in-memory analog-to-digital converter (ADC) that combines the NL activation with digitization of the dot product between input and weight vectors.
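For reference on panel b, a sketch of the standard LSTM cell equations is given below; the three sigmoid gates and the two hyperbolic tangents are the NL activations the caption refers to (the paper's exact parameterization may differ):

```latex
\begin{align}
 i_t &= \sigma\!\left(W_i x_t + U_i h_{t-1} + b_i\right), &
 f_t &= \sigma\!\left(W_f x_t + U_f h_{t-1} + b_f\right), \\
 o_t &= \sigma\!\left(W_o x_t + U_o h_{t-1} + b_o\right), &
 g_t &= \tanh\!\left(W_g x_t + U_g h_{t-1} + b_g\right), \\
 c_t &= f_t \odot c_{t-1} + i_t \odot g_t, &
 h_t &= o_t \odot \tanh\!\left(c_t\right).
\end{align}
```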
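A minimal back-of-the-envelope sketch of panel c's latency argument follows. The hidden size H = 512 and k = 32 parallel NL processors come from the caption; the values of Ncyc and the assumed ~100-cycle latency of the IMC linear path are illustrative choices made here, not figures from the paper:

```python
import math

H = 512        # hidden units (from the caption)
k = 32         # parallel digital NL processors (from the caption)
N_imc = 100    # assumed cycles for the IMC linear path (analog MVM + ADC readout)

# A standard LSTM step needs 3 sigmoid evaluations (input, forget, output gates)
# and 2 tanh evaluations (candidate cell state, cell output) per hidden unit.
nl_ops = 5 * H

for N_cyc in (3, 4, 5, 6):  # assumed clock cycles per sigmoid/tanh evaluation
    nl_cycles = math.ceil(nl_ops / k) * N_cyc
    print(f"N_cyc = {N_cyc}: NL cycles = {nl_cycles}, "
          f"~{nl_cycles / N_imc:.1f}x slower than the IMC linear path")
```

Under these assumed values, the digital NL path comes out roughly 2.4–4.8× slower than the IMC linear path, consistent with the 2–5× range quoted in the caption; the exact ratio depends on Ncyc and on how many cycles the analog matrix-vector multiply and ADC readout actually take.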
