Fig. 4: Fashion-MNIST classification.
From: Multilayer nonlinear diffraction neural networks with programmable and fast ReLU activation function

a The 10-class fashion-item dataset used for recognition. b Comparison of two network architectures: the nonlinear network, with alternating linear and nonlinear layers, and the linear network, with linear layers only. c Sampled output-field distributions generated by the linear and nonlinear networks for two simple images; both networks successfully focus on the target regions. d Sampled output-field distributions from both networks for three complex images; the nonlinear network focuses on the correct regions, whereas the linear network focuses on incorrect ones. e Recognition accuracy increases with the number of nonlinear layers. f Effect of nonlinear-layer position on accuracy: front and back placements outperform central insertion. g Impact of activation-function type: the ReLU function yields the largest performance improvement.
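The alternating architecture of panel b can be sketched numerically: each linear layer is a phase mask followed by free-space diffraction, and each nonlinear layer applies an amplitude ReLU to the optical field. The sketch below is a minimal illustration, not the paper's implementation; the wavelength, pixel pitch, layer spacing, threshold value, random untrained phase masks, the angular-spectrum propagator, and the amplitude-thresholding form of the ReLU are all assumptions introduced here.

```python
import numpy as np

# Hypothetical optical parameters (assumptions, not from the paper).
WAVELENGTH = 0.75e-3   # 0.75 mm, a THz-regime guess
PIXEL = 0.4e-3         # pixel pitch of each diffractive layer
N = 64                 # simulation grid size
Z = 30e-3              # spacing between layers

def propagate(field, z=Z, dx=PIXEL, wl=WAVELENGTH):
    """Free-space diffraction via the angular-spectrum method (FFT-based)."""
    fx = np.fft.fftfreq(field.shape[0], d=dx)
    fx2 = fx[:, None] ** 2 + fx[None, :] ** 2
    arg = np.maximum(1.0 / wl**2 - fx2, 0.0)   # clip evanescent components
    H = np.exp(2j * np.pi * z * np.sqrt(arg))  # transfer function
    return np.fft.ifft2(np.fft.fft2(field) * H)

def linear_layer(field, phase_mask):
    """Linear layer: phase modulation by the mask, then diffraction."""
    return propagate(field * np.exp(1j * phase_mask))

def relu_layer(field, threshold=0.05):
    """Nonlinear layer: amplitude ReLU that clips the field amplitude
    below a threshold while keeping the phase (a simplified stand-in
    for the paper's programmable fast ReLU)."""
    amp = np.abs(field)
    return np.maximum(amp - threshold, 0.0) * np.exp(1j * np.angle(field))

def forward(image, phase_masks):
    """Alternate linear diffraction layers with ReLU layers (panel b)."""
    field = image.astype(complex)
    for k, mask in enumerate(phase_masks):
        field = linear_layer(field, mask)
        if k < len(phase_masks) - 1:    # nonlinearity between linear layers
            field = relu_layer(field)
    return np.abs(field) ** 2           # detected output intensity

def class_scores(intensity, n_classes=10):
    """Sum intensity over 10 detector regions, one per fashion class."""
    regions = np.array_split(intensity, n_classes, axis=0)
    return np.array([r.sum() for r in regions])

# Demo with random (untrained) phase masks and a random input image.
rng = np.random.default_rng(0)
masks = [rng.uniform(0.0, 2.0 * np.pi, (N, N)) for _ in range(3)]
img = rng.random((N, N))
scores = class_scores(forward(img, masks))
```

The class with the brightest detector region (`scores.argmax()`) would be the prediction; training would adjust the phase masks so that the correct region receives the most intensity, which is the focusing behavior compared in panels c and d.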