Fig. 6: AsyT’s efficient training for reproducible DPNNs.
From: Asymmetrical estimator for training encapsulated deep photonic neural networks

a A truncated PNN’s number of access points scales as \(O\left(2M-P\right)\) (red line), with \(T_{\mathrm{extract}}\) determined by the specific model’s complexity. In comparison, the number of access points in AsyT’s encapsulated PNN structure is determined by the task rather than the model’s complexity (\(P < 2M-P\)) (blue line). b Owing to the sequential nature of propagation through layers, a truncated PNN’s access timestep (orange line) grows with network depth, whereas the encapsulated PNN’s (blue line) does not (under the assumption that \(T_{\mathrm{prop}}\ll T_{\mathrm{interface}}\); see Supplementary Note 10). c By re-applying a general parallel model to different copies of the PNN device, the computational overhead is distributed across the copies and reduced for each one. This keeps the computational overhead comparable to that of standard BP without sacrificing the ability to construct the encapsulation.
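As a rough illustration of the scaling contrast in panel a, the sketch below (not from the paper; the function names and the sample values of M and P are hypothetical) tabulates the caption's two access-point counts, assuming M is the model-size parameter and P the task-defined number of ports, both following the caption's notation.

```python
# Minimal sketch: access-point scaling from panel a, under the assumption
# that M denotes the model-size parameter and P the task-defined port count.

def truncated_access_points(M: int, P: int) -> int:
    """Truncated PNN: access points grow on the order of 2M - P."""
    return 2 * M - P

def encapsulated_access_points(M: int, P: int) -> int:
    """Encapsulated (AsyT) PNN: access points stay fixed at the task-defined P."""
    return P

if __name__ == "__main__":
    P = 10  # hypothetical task-defined port count
    for M in (16, 64, 256, 1024):  # increasing model complexity
        print(f"M={M:5d}  truncated={truncated_access_points(M, P):5d}  "
              f"encapsulated={encapsulated_access_points(M, P):5d}")
```

The printed table simply shows the truncated count growing linearly with M while the encapsulated count remains P, which is the behaviour plotted by the red and blue lines in panel a.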