Table 5 Comparison of the accuracies obtained with different memristor-based neural network types and learning algorithms, both from simulation and experimental approaches

From: Hardware implementation of memristor-based artificial neural networks

| Neural network type | Learning algorithm | Database | Size | Training | Accuracy (Sim.) | Accuracy (Exp.) | Platform | Ref. |
|---|---|---|---|---|---|---|---|---|
| Single-Layer Perceptron (SLP) | Backpropagation (Scaled Conjugate Gradient) | MNIST (n × n px) | 1 layer (n² × 10) | Ex-situ | 91% | — | SPICE sim., QMM model | 253 |
| SLP | Manhattan update rule | Custom pattern | 1 layer (10 × 3) | In-situ | — | ND | Exp. (TaOₓ/Al₂O₃) | 105 |
| SLP | Manhattan update rule | Yale-Face | 1 layer (320 × 3) | In-situ | — | 91.7% | Exp. (TaOₓ) | 194 |
| Multi-Layer Perceptron (MLP) | Backpropagation (Stochastic Gradient Descent) | MNIST (8 × 8 px) | 2 layers (64 × 54 × 10) | In-situ | 91.7% | 91.7% | Exp. (HfO₂) | 54 |
| MLP | Backpropagation (Scaled Conjugate Gradient) | MNIST (n × n px) | k layers (n² × m × … × k × 10) | Ex-situ | 96% | — | SPICE sim., QMM model | 253 |
| MLP | Backpropagation | MNIST (14 × 14 px) | 2 layers (196 × 20 × 10) | Ex-situ | 92% | 82.3% | Software/Exp. (HfO₂) | 196 |
| MLP | Backpropagation | MNIST (22 × 24 px) | 2 layers (528 × 250 × … × 125 × 10) | In-situ | 83% | 81% | Software/Exp. (PCM) | 267 |
| MLP | Backpropagation | MNIST (28 × 28 px) | 2 layers (784 × 100 × … × 10) | Ex-situ | 97% | — | Software (Python) | 288 |
| MLP | Sign-Backpropagation | MNIST (28 × 28 px) | 2 layers (784 × 300 × … × 10) | In-situ | 94.5% | — | Software (MATLAB) | 289 |
| Convolutional Neural Network (CNN) | Backpropagation | MNIST (28 × 28 px) | 2 layers (1st Conv., 2nd FC) | In-situ | 94% | — | Software | 268 |
| Spiking Neural Network (SNN) | Spike Timing Dependent Plasticity (Unsupervised) | MNIST (28 × 28 px) | 2 layers (784 × 300 × … × 10) | In-situ | 93.5% | — | Software (C++ Xnet) | 269 |

  1. Note that in all cases the synaptic layers are implemented with CPAs, and the simulations are performed without taking into account line parasitics or realistic memristor models. Given that the CPA is a building block in these complex neural networks, realistic SPICE simulations of the CPA are still required.
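The ideal, parasitic-free crossbar read that these simulations assume can be sketched in a few lines: each output column current is the voltage-weighted sum of the memristor conductances (Kirchhoff's current law with zero wire resistance), which is exactly a vector-matrix multiply. The function name and numeric values below are illustrative, not taken from the cited works.

```python
import numpy as np

def ideal_crossbar_mvm(G, v_in):
    """Ideal crossbar read: I_j = sum_i V_i * G[i, j].

    G    : (inputs x outputs) conductance matrix, one memristor per cell (S)
    v_in : read voltages applied to the rows (V)
    Assumes zero line resistance, i.e. no parasitic voltage drops.
    """
    return v_in @ G

# Hypothetical 4-input, 3-output synaptic layer.
G = np.array([[1.0, 0.5, 0.2],
              [0.3, 1.2, 0.7],
              [0.9, 0.1, 0.4],
              [0.6, 0.8, 1.1]]) * 1e-6   # conductances in siemens
v = np.array([0.2, 0.0, 0.1, 0.3])       # read voltages in volts
i_out = ideal_crossbar_mvm(G, v)         # column currents in amperes
```

In a realistic SPICE model of the same array, the wire resistance between cells makes each cell see a slightly different voltage, so the output currents deviate from this ideal product; that deviation is precisely what the footnoted simulations leave out.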