Fig. 1: Concept of the AsyT method.
From: Asymmetrical estimator for training encapsulated deep photonic neural networks

a AsyT requires neuron information only at the output layer, allowing physical information to propagate continuously through the entire DPNN structure. For IP-BP methods, the analogue information propagation is truncated at each hidden layer, so the advantage of fast PNN processing is not fully exploited. Moreover, the AD interface bottleneck time introduced by IP-BP methods scales as \(O\left(2M-1\right)\), where \(M\) is the number of DPNN layers, but only as \(O\left(1\right)\) for AsyT. b Computation confined to either the analogue or the digital domain is efficient; it is the information traffic across the AD interface that forms the bottleneck. AsyT uses an additional digital parallel pass to avoid traffic at the interface while compensating for the reduced number of physical information accesses. c Workflow of the AsyT method. AsyT uses both the forward and the backward pass of the digital parallel model. The digital model's operation generalizes across a set of statistically varied PNNs, so it needs to be computed only once per task, lowering the distributed overhead (see “Discussion” section).
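
To make the interface scaling in panel a concrete, the following minimal Python sketch (ours, not the authors' code) counts AD interface readouts per training step. It assumes IP-BP digitizes neuron states at all \(M\) layers in the forward pass and at the \(M-1\) hidden layers in the backward pass, giving \(2M-1\) readouts, whereas AsyT reads out only the output layer:

```python
# Sketch: per-step analogue-digital (AD) interface readouts for a depth-M DPNN,
# under the layer-by-layer readout assumption stated above.

def ad_readouts_ip_bp(num_layers: int) -> int:
    """IP-BP: digitize every layer forward (M) and every hidden layer backward (M - 1)."""
    forward = num_layers
    backward = num_layers - 1
    return forward + backward          # O(2M - 1)

def ad_readouts_asyt(num_layers: int) -> int:
    """AsyT: only the output layer is read out, independent of depth."""
    return 1                           # O(1)

for M in (2, 4, 8, 16):
    print(f"M={M:2d}  IP-BP={ad_readouts_ip_bp(M):2d}  AsyT={ad_readouts_asyt(M)}")
```

As the depth \(M\) grows, the IP-BP readout count grows linearly while AsyT's stays constant, which is the scaling gap the panel depicts.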
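
The amortization in panel c can be illustrated with a similar hypothetical sketch: one shared digital parallel model's forward/backward pass is computed once per task, and the resulting update is reused by a set of statistically varied PNN instances, so the digital overhead does not grow with the number of deployed devices. All names and the toy linear model below are our own illustrative assumptions, not the paper's method or API:

```python
import random

def digital_forward_backward(weights, x, target):
    """One forward + backward pass of the shared digital model (toy linear model)."""
    y = sum(w * xi for w, xi in zip(weights, x))
    err = y - target
    return [err * xi for xi in x]      # gradient w.r.t. each weight

digital_weights = [0.1, -0.2, 0.3]
# A fleet of PNNs whose parameters deviate statistically from the digital model.
pnn_fleet = [[w + random.gauss(0, 0.05) for w in digital_weights]
             for _ in range(8)]

# Computed once per task on the digital model ...
grad = digital_forward_backward(digital_weights, x=[1.0, 0.5, -1.0], target=0.2)

# ... then reused by every physical PNN instance.
lr = 0.1
for pnn_weights in pnn_fleet:
    for i, g in enumerate(grad):
        pnn_weights[i] -= lr * g
```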