
Fig. 1

From: Probabilistic metaplasticity for continual learning with memristors in spiking networks


Probabilistic metaplasticity with error threshold-based training. (a) Spiking network trained with error threshold-based learning, where the network weights are realized with a 1T1R memristor crossbar array. The dendritic compartments of the hidden- and output-layer neurons integrate the error. When the error reaches the threshold \(U_{\text {th}}\), the memristor weights are updated to the next higher (negative error) or lower (positive error) conductance level with probability \(p_{\text {update}}\). (b) Mean and standard deviation of resistance levels versus programming compliance current for the 1T1R memristor device (shown in the inset) adopted in this work\(^{20}\). (c) Update probability of the weights for different values of the metaplasticity coefficient \(m\) and the weight \(w\). \(m\) is positively associated with the activity level of the adjacent neurons, so connections to highly active adjacent neurons (high \(m\)) and connections with large weight magnitude (high \(|w|\)) have low update probability.
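The update rule summarized in panels (a) and (c) can be illustrated with a minimal sketch. The exponential form of \(p_{\text {update}}\) and the names below (`u_th`, `n_levels`, `probabilistic_metaplastic_update`) are assumptions chosen to reproduce the qualitative trend in panel (c), not the authors' exact implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def probabilistic_metaplastic_update(g_idx, error, m, w, u_th=1.0, n_levels=8):
    """Probabilistically update memristor conductance-level indices in place.

    g_idx : integer conductance level per synapse (0 .. n_levels-1)
    error : error integrated by the dendritic compartment of the adjacent neuron
    m     : metaplasticity coefficient (grows with adjacent-neuron activity)
    w     : current weight mapped from the conductance level
    """
    # Only synapses whose integrated error has reached the threshold are updated.
    eligible = np.abs(error) >= u_th

    # Assumed form: update probability decays with m and |w|, matching the
    # qualitative trend in panel (c) (high m, high |w| -> low p_update).
    p_update = np.exp(-m * np.abs(w))

    fired = eligible & (rng.random(g_idx.shape) < p_update)

    # Negative error -> next higher conductance level,
    # positive error -> next lower level, clipped to the available range.
    step = np.where(error < 0, 1, -1)
    g_idx[fired] = np.clip(g_idx[fired] + step[fired], 0, n_levels - 1)
    return g_idx
```

In this sketch each weight moves by at most one conductance level per update, reflecting the level-to-level programming of the 1T1R device shown in panel (b).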
