Fig. 3: Weight assignment using uniform distribution and k-means clustering.
From: Graphene memristive synapses for high precision neuromorphic computing

a Data structure showing vectors A and B of sizes 1 × 2 and 2 × 1, respectively, and their product, C. Elements of A and B are drawn randomly from (b) uniform and (c) Gaussian (normal) weight distributions over the range [−1, 1]. d Uniform quantization, where the data range [−1, 1] is divided into N equally spaced bins; any weight that falls in a given bin is assigned the analog memory value associated with that bin. Error histograms of (CQ − C) as a function of N, where the elements of CQ are the product of the quantized elements of A and B (i.e., AQ and BQ), when weights are drawn from the (e) uniform and (f) normal distributions in (b) and (c), respectively. g Box plot of the errors in (e) and (f), showing a monotonic decrease in error as N increases. For a given N, the error is significantly higher for normally distributed weights than for uniformly distributed weights. h Schematic of k-means clustering, an unsupervised learning algorithm that divides n data samples into k clusters, with k ≤ n. The algorithm initializes the centroids randomly, computes the distance of each point to the centroids, and iteratively minimizes the within-cluster variance to identify the final centroids, which lie near the means of their clusters. In k-means clustering quantization, the weights in a given cluster are quantized to that cluster's centroid. Error histograms as a function of N when the weights are drawn from the (i) uniform and (j) normal distributions in (b) and (c), respectively. k Box plot of the errors in (i) and (j), showing a significant reduction in error for both distributions relative to uniform quantization in (g), especially for normally distributed weights.
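The two quantization schemes the caption compares can be sketched in a few lines of NumPy. This is our own minimal reconstruction, not the authors' code: the level count N = 8, the trial count, and the restriction to uniformly drawn weights (panels b, e, i) are assumptions made for illustration, and the function names are ours.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 8            # assumed number of analog memory levels (the figure sweeps N)
TRIALS = 10_000  # assumed number of random (A, B) vector pairs

def uniform_quantize(w, n_levels):
    """Panel (d): split [-1, 1] into N equal bins; snap each weight to its bin center."""
    edges = np.linspace(-1.0, 1.0, n_levels + 1)
    centers = 0.5 * (edges[:-1] + edges[1:])
    idx = np.clip(np.digitize(w, edges) - 1, 0, n_levels - 1)
    return centers[idx]

def kmeans_quantize(w, n_levels, n_iter=50):
    """Panel (h): 1-D Lloyd's k-means; weights in a cluster snap to its centroid."""
    flat = w.ravel()
    centroids = rng.choice(flat, size=n_levels, replace=False)
    for _ in range(n_iter):
        # Assign each weight to the nearest centroid, then recompute centroids
        # as cluster means, which iteratively reduces within-cluster variance.
        assign = np.argmin(np.abs(flat[:, None] - centroids[None, :]), axis=1)
        for k in range(n_levels):
            members = flat[assign == k]
            if members.size:
                centroids[k] = members.mean()
    return centroids[assign].reshape(w.shape)

# Panels (a, b): random A (1 x 2) and B (2 x 1) pairs with uniform weights.
A = rng.uniform(-1.0, 1.0, size=(TRIALS, 2))
B = rng.uniform(-1.0, 1.0, size=(TRIALS, 2))
C = np.sum(A * B, axis=1)                  # C = A @ B for each pair

AQ, BQ = uniform_quantize(A, N), uniform_quantize(B, N)
err_uniform = np.sum(AQ * BQ, axis=1) - C  # CQ - C, as in panel (e)

AK, BK = kmeans_quantize(A, N), kmeans_quantize(B, N)
err_kmeans = np.sum(AK * BK, axis=1) - C   # CQ - C, as in panel (i)

print(f"mean |CQ - C|  uniform: {np.abs(err_uniform).mean():.4f}  "
      f"k-means: {np.abs(err_kmeans).mean():.4f}")
```

For uniformly drawn weights the two schemes give similar errors, since k-means centroids settle near the uniform bin centers; the figure's larger gap appears for normally distributed weights, where k-means places more levels where the weights actually concentrate.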