Fig. 4: Simulation of a DRNN for keyword recognition. | Nature Communications

From: Optogenetics inspired transition metal dichalcogenide neuristors for in-memory deep recurrent neural networks

A deep neural network with 12 trainable layers, comprising convolutional, fully connected, and recurrent LSTM layers, is used to classify 12 different spoken digits. a The detailed network architecture, with all filter sizes and dimensions indicated. b The highly linear weight update of PENs allows high-precision weights from offline-learnt DRNNs (trained with sophisticated learning schemes using mini-batch averaging for smooth convergence) to be transferred, and also supports online learning with blind updates for simpler feed-forward networks. c The accuracy obtained in electrical inference with a two-shot opto-electronic write of weights is within 2.5% of the floating-point software simulation. The two-shot write scheme limits the linear dynamic range of the PEN to ~600 levels. Simulating only one layer type at a time (convolutional, fully connected, or LSTM) shows the recurrent LSTM layers to be the most sensitive to a low weight dynamic range. d The PENs used in our work exhibit an order of magnitude higher linear dynamic range than other recent reports, enabling simulation of DRNNs for speech recognition with an order of magnitude more weights than previously reported simulations for handwritten-digit recognition.
