Fig. 5: Simulation on deep networks. | Nature Communications

From: Error-aware probabilistic training for memristive neural networks

a Training results of ResNet34 and SRResNet with different εcell, and the improvement from the proposed EaPU. As εcell increases, the performance of the original algorithm declines dramatically, whereas the proposed EaPU maintains near-invariant performance. b Training process of ResNet34 and SRResNet with different Rwg. A larger Rwg indicates a narrower conductance range, and Rwg = 0 corresponds to the noiseless training result. With \({{\mbox{SD}}}_{{\varepsilon }_{{\mbox{cell}}}}\) held constant, the performance of the original algorithm degrades rapidly as Rwg increases, whereas training with EaPU remains stable. c Training results of simulated ResNet152 and Swin Transformer. Although the results of the nonideal model (\({{\mbox{SD}}}_{{\varepsilon }_{{\mbox{cell}}}}\) = 2.4 μS and Rwg = 1/80 μS−1) decline by around 3–4% when using EaPU, they far exceed those of the original algorithm under the same nonideal model (61.33% vs 1.1% and 90.22% vs 1.44%). d Update ratio of ResNet152 during the training process; the update ratio stays below 1.4‰ throughout training. e Update ratio of every layer in ResNet152; the legend gives the number of training iterations. f Reduction in Rup and Nup achieved by EaPU. Adopting EaPU reduces Nup by over three orders of magnitude (with pre-trained weights, a further reduction of four orders of magnitude can be achieved), which helps lower training energy consumption.
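The nonideal device model behind panels a–c can be illustrated with a minimal NumPy sketch. It assumes the interpretation suggested by the caption: each weight is mapped to a cell conductance (in μS) scaled by Rwg (in μS−1, so a larger Rwg means a narrower conductance range), Gaussian write noise with standard deviation \({{\mbox{SD}}}_{{\varepsilon }_{{\mbox{cell}}}}\) is added per cell, and the update ratio of panels d–e is the fraction of cells actually reprogrammed in a step. The function names and the exact weight-to-conductance mapping are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)


def write_with_cell_noise(weights, r_wg, sd_cell):
    """Simulate programming weights onto memristor cells.

    Assumed mapping: conductance G = W / r_wg (uS), so a larger r_wg
    compresses the conductance range and the fixed cell-level noise
    sd_cell (uS) perturbs the effective weight by sd_cell * r_wg.
    """
    g_target = weights / r_wg                       # target conductances (uS)
    g_noisy = g_target + rng.normal(0.0, sd_cell, size=g_target.shape)
    return g_noisy * r_wg                           # effective weights after write


def update_ratio(prev_weights, new_weights, tol=0.0):
    """Fraction of cells whose weight change exceeds tol (panels d-e)."""
    return float(np.mean(np.abs(new_weights - prev_weights) > tol))
```

With the caption's values (sd_cell = 2.4 μS, r_wg = 1/80 μS−1), the effective weight perturbation has a standard deviation of about 2.4 × 1/80 = 0.03, which is the regime where the original algorithm collapses while EaPU remains stable.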