Fig. 3: Functional learning (FL) paradigm.
From: Optical neural network via loose neuron array and functional learning

This figure illustrates the network design and training process.

a The first part of the FNN is a physically inspired functional basis block placed before the convolutional neural network (CNN) layers (ref. 32). It reflects the physical structure of the device, in which each input neuron can affect each output neuron through light propagation, forming a four-dimensional light field whose number of connections equals the number of input neurons times the number of output neurons. The pixels of the LC panel, represented as LC neurons, can attenuate any individual input-to-output connection and are modeled mathematically as attenuations applied to the input-to-output data flow. To capture features at multiple resolutions, we downsample the input and LC neurons by half and merge the two resolutions through trainable parameters. The second part of the FNN consists of five subsequent CNN layers with 3 × 3 × 64 kernels and rectified linear unit (ReLU) activations (ref. 2), which nonlinearly mix the outputs of the functional basis block. More details on the connections are given in the supplementary document.

b The process of testing LC neuron connections with the quad-tree search method; the first four steps of a simple case are illustrated.

c By training the LC neurons' parameters, a neuron in the input plane can either activate or deactivate a neuron in the next layer. For example, neuron A equals neuron a minus neuron d. If the trained network produces a strong connection (marked in blue) and a weak connection (marked in orange), the yellow neuron activates neuron A and deactivates neuron D, and vice versa. The output of the X-activation is the input of the next LFNN layer.
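To make panel a concrete, the following is a minimal PyTorch sketch of such a functional basis block, under our own assumptions: a fixed (here random) propagation tensor stands in for the physical light field, trainable LC attenuations modulate every input-to-output connection, and two trainable scalars merge the full- and half-resolution branches. All names, shapes, and initializations are hypothetical, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FunctionalBasisBlock(nn.Module):
    """Sketch of panel a: every input neuron may reach every output neuron
    (a 4-D light field), each connection carrying a trainable LC attenuation."""

    def __init__(self, in_hw=(16, 16), out_hw=(16, 16)):
        super().__init__()
        hi, wi = in_hw
        ho, wo = out_hw
        # Fixed, physics-inspired propagation tensors; random here for illustration.
        self.register_buffer("prop_full", torch.randn(ho, wo, hi, wi))
        self.register_buffer("prop_half", torch.randn(ho, wo, hi // 2, wi // 2))
        # Trainable LC attenuations, squashed to [0, 1] in forward().
        self.lc_full = nn.Parameter(torch.zeros(ho, wo, hi, wi))
        self.lc_half = nn.Parameter(torch.zeros(ho, wo, hi // 2, wi // 2))
        # Trainable parameters that merge the two resolutions.
        self.merge = nn.Parameter(torch.tensor([0.5, 0.5]))

    def forward(self, x):  # x: (batch, hi, wi)
        x_half = F.avg_pool2d(x.unsqueeze(1), 2).squeeze(1)  # downsample by half
        full = torch.einsum("opij,bij->bop",
                            self.prop_full * torch.sigmoid(self.lc_full), x)
        half = torch.einsum("opij,bij->bop",
                            self.prop_half * torch.sigmoid(self.lc_half), x_half)
        return self.merge[0] * full + self.merge[1] * half  # (batch, ho, wo)
```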
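The caption's second FNN part (five CNN layers with 3 × 3 × 64 kernels and ReLU) could then look roughly like the stack below; the input channel count is an assumption, since the caption does not specify it.

```python
import torch.nn as nn

# Hypothetical stand-in for the five-layer CNN part: 3 x 3 kernels,
# 64 channels, ReLU after each layer. Only the kernel size, depth, and
# channel count come from the caption; everything else is assumed.
cnn_part = nn.Sequential(
    nn.Conv2d(1, 64, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(64, 64, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(64, 64, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(64, 64, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(64, 64, kernel_size=3, padding=1), nn.ReLU(),
)
```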
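Panel b's quad-tree search can be summarized as: probe a square region of LC neurons, prune it if its connections have no measurable effect on the output, and otherwise subdivide it into four quadrants and recurse. A minimal sketch, where `measure_effect`, the threshold, and the power-of-two region size are all hypothetical placeholders rather than the paper's procedure:

```python
def quadtree_search(region, measure_effect, threshold=0.05, min_size=1):
    """Sketch of a quad-tree search over the LC panel (panel b).

    region: (row, col, size) square of LC neurons to test; size is
        assumed to be a power of two for clean subdivision.
    measure_effect: callable returning how strongly this region's
        connections influence the output (a hypothetical probe).
    Returns the minimal regions found to have a significant effect.
    """
    row, col, size = region
    if measure_effect(region) < threshold:
        return []                        # whole region is irrelevant; prune it
    if size <= min_size:
        return [region]                  # significant leaf region found
    half = size // 2
    quads = [(row, col, half), (row, col + half, half),
             (row + half, col, half), (row + half, col + half, half)]
    hits = []
    for q in quads:                      # recurse only into promising quadrants
        hits.extend(quadtree_search(q, measure_effect, threshold, min_size))
    return hits
```

Pruning whole quadrants at once is what would let such a search settle on the relevant LC neurons in far fewer probes than testing every pixel individually, consistent with the four-step example illustrated in the panel.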
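The differencing example in panel c ("neuron A equals neuron a minus neuron d") can be written out numerically; the weights below are invented to mimic one strong (blue) and one weak (orange) trained connection and are not values from the paper.

```python
# Sketch of panel c: a next-layer neuron as a weighted difference of two
# input-plane neurons. w_strong / w_weak stand in for the trained strong
# (blue) and weak (orange) connections; all values here are made up.
a, d = 0.9, 0.2                  # input-plane neuron activities
w_strong, w_weak = 1.0, 0.1      # trained connection strengths

A = w_strong * a - w_weak * d    # "neuron A equals neuron a minus neuron d"
D = w_weak * a - w_strong * d    # the complementary neuron
print(A, D)                      # A comes out large (activated), D small (deactivated)
```

Swapping which connection is strong would flip the outcome, which is the "and vice versa" in the caption; the resulting X-activation output then feeds the next LFNN layer.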