Table 5 Comparison of the accuracies obtained with different memristor-based neural network types and learning algorithms, both from simulation and experimental approaches
From: Hardware implementation of memristor-based artificial neural networks
| Neural Network type | Learning algorithm | Database | Size | Training | Accuracy (Sim.) | Accuracy (Exp.) | Platform | Ref. |
|---|---|---|---|---|---|---|---|---|
| Single-Layer Perceptron (SLP) | Backpropagation (Scaled Conjugate Gradient) | MNIST (n × n px.) | 1 layer (n² × 10) | Ex-situ | ∼91% | | SPICE sim. QMM model | |
| SLP | Manhattan update rule | Custom pattern | 1 layer (10 × 3) | In-situ | | ND | Exp. (TaOx/Al2O3) | |
| SLP | Manhattan update rule | Yale-Face | 1 layer (320 × 3) | In-situ | | ∼91.7% | Exp. (TaOx) | |
| Multi-Layer Perceptron (MLP) | Backpropagation (Stochastic Gradient Descent) | MNIST (8 × 8 px.) | 2 layers (64 × 54 × 10) | In-situ | ∼91.7% | ∼91.7% | Exp. (HfO2) | |
| MLP | Backpropagation (Scaled Conjugate Gradient) | MNIST (n × n px.) | k layers (n² × m × … × k × 10) | Ex-situ | ∼96% | | SPICE sim. QMM model | |
| MLP | Backpropagation | MNIST (14 × 14 px.) | 2 layers (196 × 20 × 10) | Ex-situ | ∼92% | ∼82.3% | Software/Exp. (HfO2) | |
| MLP | Backpropagation | MNIST (22 × 24 px.) | 2 layers (528 × 250 × … × 125 × 10) | In-situ | ∼83% | ∼81% | Software/Exp. (PCM) | |
| MLP | Backpropagation | MNIST (28 × 28 px.) | 2 layers (784 × 100 × … × 10) | Ex-situ | ∼97% | | Software (Python) | |
| MLP | Sign-Backpropagation | MNIST (28 × 28 px.) | 2 layers (784 × 300 × … × 10) | In-situ | ∼94.5% | | Software (MATLAB) | |
| Convolutional Neural Network (CNN) | Backpropagation | MNIST (28 × 28 px.) | 2 layers (1st Conv., 2nd FC) | In-situ | ∼94% | | Software | |
| Spiking Neural Network (SNN) | Spike-Timing-Dependent Plasticity (Unsupervised) | MNIST (28 × 28 px.) | 2 layers (784 × 300 × … × 10) | In-situ | ∼93.5% | | Software (C++ Xnet) | |
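Two of the learning algorithms above, the Manhattan update rule and Sign-Backpropagation, use only the sign of the gradient, so every weight moves by a fixed increment per step; on a memristive crossbar this corresponds to applying one identical SET or RESET pulse per cell. A minimal NumPy sketch of one such update for a single-layer perceptron (illustrative sizes and activation, not the cited implementations; the 10 × 3 shape mirrors the custom-pattern SLP row):

```python
import numpy as np

def manhattan_update(W, x, y_target, delta=0.01):
    """One Manhattan-rule step for a single-layer perceptron.

    Instead of scaling the update by the gradient magnitude, every
    weight moves by a fixed increment `delta` in the direction
    opposite to the gradient's sign -- in hardware, one identical
    programming pulse per crossbar cell.
    """
    y = np.tanh(W @ x)                      # forward pass
    err = y - y_target                      # output error (MSE loss)
    grad = np.outer(err * (1.0 - y**2), x)  # dL/dW through tanh
    return W - delta * np.sign(grad)        # fixed-amplitude update

rng = np.random.default_rng(0)
W = rng.normal(scale=0.1, size=(3, 10))  # 10 inputs -> 3 outputs
x = rng.normal(size=10)
t = np.array([1.0, -1.0, -1.0])
W_new = manhattan_update(W, x, t)
```

Because each step changes every weight by exactly ±delta (or leaves it unchanged where the gradient is zero), the rule tolerates the device-to-device variability that would corrupt analog, magnitude-scaled updates.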