Fig. 5: The repeatability of AsyT’s performance.
From: Asymmetrical estimator for training encapsulated deep photonic neural networks

a–c Training on three datasets (MNIST, FMNIST, and KMNIST) with the AsyT method for PNNs. The test accuracy is significantly improved over in-silico BP (grey line) in all three cases. d AsyT (blue line) is stress-tested under varying levels of information mismatch, where the standardized distortion is defined relative to the experimental-level error. e AsyT (blue line) performs well across different neuron counts in a hidden layer, keeping the difference from the ideal BP maximum (grey line) below 2%. f AsyT (blue line) maintains a test-accuracy difference below 2.5% relative to the error-free BP maximum (grey line) across different network sizes. g AsyT's (blue line) forward pass through the digital parallel model protects training from unexpectedly strong perturbations, whereas PAT (orange line) degrades owing to the mismatch in its differentiable-model description. h For very noisy physical systems, AsyT's (blue line) digital update via the parallel model is unaffected (orange line: PAT), granting resistance to noise in the system.