Fig. 2: Precision and accuracy comparison of the RNS-based analog core against a regular fixed-point analog core.

From: A blueprint for precise and fault-tolerant analog neural networks

a The distribution of average error observed at the output of a dot product performed with the RNS-based analog approach (pink) and the regular low-precision (LP) fixed-point analog approach (cyan). Error is defined as the distance from the result calculated in FP32. The experiments are repeated for 10,000 randomly generated vector pairs with a vector size of h = 128. The center lines of the boxes represent the median. The boxes extend between the first and the third quartiles of the data, while the whiskers extend to 1.5× the interquartile range from the box. b Inference accuracy of the regular fixed-point (LP) and RNS-based cores (see Table 1) on the MLPerf (Inference: Datacenters) benchmarks. The accuracy numbers are normalized by the accuracy achieved in FP32. The bottom three plots show the training loss for FP32 and for the RNS-based approach with varying moduli bit-widths. c ResNet-50 is trained from scratch for 90 epochs using the SGD optimizer with momentum. d BERT-Large and e OPT-125M are fine-tuned from pre-trained models using the Adam optimizer with a linear learning-rate scheduler for 2 and 3 epochs, respectively. All inference and training experiments use FP32 for all non-GEMM operations. See "Accuracy modeling" under Methods for details.
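For readers who want to probe the contrast in panel a numerically, the sketch below simulates the dot-product error experiment. It is a minimal illustration, not the authors' code: the moduli set {251, 241, 239}, the 8-bit input quantization, and the reduced trial count are all assumptions made for the example. The point it demonstrates is the one the figure makes: an RNS dot product is exact whenever the integer result fits within the dynamic range M = ∏ mᵢ, so its residual error comes only from input quantization, whereas a truncating fixed-point core also accumulates rounding error.

```python
import numpy as np

# Illustrative reproduction of the panel-(a) experiment (assumed parameters).
MODULI = (251, 241, 239)     # pairwise-coprime moduli (assumed choice)
M = int(np.prod(MODULI))     # RNS dynamic range, ~14.5e6 here
H = 128                      # vector size h = 128, as in the caption
BITS = 8                     # assumed input quantization bit-width

def quantize(x, bits=BITS):
    """Symmetric uniform quantization of x in [-1, 1) to signed integers."""
    scale = 2 ** (bits - 1)
    q = np.clip(np.round(x * scale), -scale, scale - 1).astype(np.int64)
    return q, scale

def rns_dot(a_int, b_int):
    """Dot product computed residue-wise, then reconstructed via the CRT."""
    residues = [int(np.sum((a_int % m) * (b_int % m)) % m) for m in MODULI]
    x = 0
    for m, r in zip(MODULI, residues):
        Mi = M // m
        x = (x + r * Mi * pow(Mi, -1, m)) % M   # CRT recombination
    return x - M if x > M // 2 else x            # map back to signed range

rng = np.random.default_rng(0)
errors = []
for _ in range(1000):        # 10,000 trials in the paper; fewer here for speed
    a = rng.uniform(-1, 1, H).astype(np.float32)
    b = rng.uniform(-1, 1, H).astype(np.float32)
    a_int, s = quantize(a)
    b_int, _ = quantize(b)
    ref = np.dot(a, b)                         # FP32 reference result
    out = rns_dot(a_int, b_int) / s**2         # de-quantize exact integer result
    errors.append(abs(out - ref))
print("median |error| (RNS):", np.median(errors))
```

With 8-bit inputs and h = 128, the integer dot product is bounded by 128 · 128² ≈ 2.1 × 10⁶, well below M/2 ≈ 7.2 × 10⁶, so the CRT reconstruction is exact and the printed error reflects input quantization alone; this is the gap panel a visualizes against the LP fixed-point core, whose partial-sum truncation adds further error.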
