Figure 8

Pretraining approach for FeCap-based neural networks. (a) Mapping of a convolution layer onto FeCap crossbars using the Im2Col transform. (b) Bit-slicing of input activations and stored weights for FeCap-based crossbars. (c) Architectural layouts of the LeNet and ResNet-20 models. (d) Simulation results on the MNIST and CIFAR-10 datasets for LeNet and ResNet-20 models composed of FeCap and FeFET crossbars. FeCap crossbars trained with Quantization-Aware Training (QAT) show lower accuracy, whereas our pretraining approach yields a significant accuracy improvement for the FeCap-based networks, comparable to the FeFET baseline. Note that QAT here denotes training that accounts only for quantization, without modeling FeCap leakage.
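The Im2Col mapping and bit-slicing illustrated in panels (a) and (b) can be sketched as follows. This is a minimal NumPy illustration under assumed parameters (stride 1, no padding, 4-bit unsigned weights, 1-bit slices); the paper's actual crossbar dimensions and bit widths may differ.

```python
import numpy as np

def im2col(x, k):
    """Unroll k x k patches of an H x W input into columns (stride 1, no padding)."""
    H, W = x.shape
    out_h, out_w = H - k + 1, W - k + 1
    cols = np.empty((k * k, out_h * out_w))
    for i in range(out_h):
        for j in range(out_w):
            cols[:, i * out_w + j] = x[i:i + k, j:j + k].ravel()
    return cols

def bit_slices(w_int, n_bits=4, slice_bits=1):
    """Split non-negative integer weights into bit slices (LSB first)."""
    return [(w_int >> s) & ((1 << slice_bits) - 1)
            for s in range(0, n_bits, slice_bits)]

# Toy example: one 3x3 kernel on a 4x4 input, 4-bit weights.
rng = np.random.default_rng(0)
x = rng.integers(0, 8, (4, 4)).astype(float)
w = rng.integers(0, 16, (1, 9))          # one output channel, flattened 3x3 kernel

cols = im2col(x, 3)                      # shape (9, 4): one column per patch
# Each crossbar holds one weight slice; shift-and-add recombines the partial sums.
acc = sum((s.astype(float) @ cols) * (1 << i)
          for i, s in enumerate(bit_slices(w)))
assert np.allclose(acc, w.astype(float) @ cols)   # matches the full-precision MVM
```

With convolution rewritten as a matrix-vector multiplication via Im2Col, each bit slice of the weight matrix maps onto a separate crossbar whose analog dot products are recombined digitally by shift-and-add, which is the standard way low-precision memory cells realize higher-precision weights.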