Extended Data Fig. 1: Experiment platform used to assess the AI inference operations of the nvCIM macro with a ResNet-20 model.
From: A CMOS-integrated spintronic compute-in-memory macro for secure AI edge devices

a, The experiment platform comprises the nvCIM test chip, an FPGA board acting as system controller and intermediate data processor, and a PC that displays the classification results on an LCD screen. The nvCIM test chip incurred no drop in inference accuracy relative to software-based inference, regardless of dataset (SVHN, CIFAR-10, BraTS or CIFAR-100). b, Flow chart of the inference process implemented on the experiment platform. The nvCIM macro performed the full-channel dot-product operations (convolutions), while the FPGA fed inputs to the nvCIM macro, collected the full-channel dot-product values (Dot-PVFC) from it, performed the pooling operations, and executed the 1st-layer convolution. The PC presented the intermediate data generated during inference and the final inference results on the LCD screen.
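The partitioning described in panel b can be illustrated with a minimal software sketch. This is a hypothetical model only: all function and variable names are illustrative, the macro's analogue dot products are stood in for by exact arithmetic, and each layer is reduced to a flat list of full-channel dot products followed by pooling.

```python
# Illustrative sketch of the partitioned inference flow (assumed structure):
# - the FPGA executes the 1st-layer convolution and all pooling;
# - the nvCIM macro evaluates later convolutions as full-channel
#   dot products, returning the Dot-PVFC values to the FPGA;
# - the PC reports the final classification result.

def nvcim_dot_product(inputs, weights):
    """Stand-in for one full-channel dot product computed inside the macro."""
    return sum(x * w for x, w in zip(inputs, weights))

def max_pool(values, stride=2):
    """FPGA-side pooling over a flat list of dot-product values."""
    return [max(values[i:i + stride]) for i in range(0, len(values), stride)]

def run_inference(inputs, layer_weights):
    # FPGA: 1st-layer convolution (modelled here as dot products as well).
    activations = [nvcim_dot_product(inputs, w) for w in layer_weights[0]]
    activations = max_pool(activations)
    # nvCIM macro: remaining layers as full-channel dot products;
    # the FPGA feeds inputs, collects the Dot-PVFC values, and pools.
    for weights in layer_weights[1:]:
        activations = [nvcim_dot_product(activations, w) for w in weights]
        activations = max_pool(activations)
    # PC: classification result = index of the largest final value.
    return max(range(len(activations)), key=lambda i: activations[i])
```

In this toy version every layer collapses to dot products over a flat vector; in the real system the FPGA also handles the data reshaping between the macro's column-wise dot products and the convolutional feature maps.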