Fig. 4: AsyT for scaled-up DPNNs.
From: Asymmetrical estimator for training encapsulated deep photonic neural networks

a Schematic of scaled-up DPNN systems, where either spatial or temporal copies of the PPM are used to construct a hidden layer with dimensionality higher than that of the hardware. Each additional representation of the PPM introduces further error accumulation. For training, AsyT requires only the output-neuron information and treats the multiple PPMs collectively as if they were one larger PPM. b AsyT is unaffected by the error accumulation and achieves near-ideal BP performance, whereas in-silico BP degrades to random guessing. c Confusion matrix on the test data for AsyT on the handwritten-digit task.
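The composition described in panel a, where several hardware-sized PPM copies collectively form one larger hidden layer, can be sketched as a block-diagonal assembly of the individual transfer matrices. This is an illustrative sketch only: the function name `compose_ppm_layer`, the use of real-valued random matrices as stand-ins for PPM transfer matrices, and the block-diagonal layout are assumptions for illustration, not the paper's implementation.

```python
import numpy as np

def compose_ppm_layer(blocks):
    """Assemble K x K PPM copies into one block-diagonal (mK x mK) layer.

    Each block stands in for one spatial or temporal copy of the PPM;
    together they act as a single larger-dimensional hidden layer,
    which is how AsyT treats them during training (output-side only).
    """
    n = sum(b.shape[0] for b in blocks)
    layer = np.zeros((n, n), dtype=blocks[0].dtype)
    offset = 0
    for b in blocks:
        k = b.shape[0]
        # Place each PPM copy on the diagonal; off-diagonal entries
        # stay zero because the copies act on disjoint neuron groups.
        layer[offset:offset + k, offset:offset + k] = b
        offset += k
    return layer

# Three 4x4 "PPM" stand-ins form one 12x12 hidden-layer transform.
rng = np.random.default_rng(0)
blocks = [rng.standard_normal((4, 4)) for _ in range(3)]
layer = compose_ppm_layer(blocks)
```

The point of the sketch is that the training algorithm never needs to address the copies individually; it only sees the composed layer's output, matching the caption's statement that AsyT treats the multiple PPMs as one larger PPM.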