Fig. 2: Pre-training and training quantum Boltzmann machines.

From: On the sample complexity of quantum Boltzmann machine learning

a Quantum relative entropy \(S(\eta \,\|\, \rho_{\theta^{\mathrm{pre}}})\) obtained after various pre-training strategies. We compare a mean-field (MF) model, one-dimensional and two-dimensional geometrically local (GL) models, and a Gaussian fermionic (GF) model against no pre-training, i.e., the maximally mixed state. For the GL models, pre-training is stopped once the pre-training gradient drops below 0.01. We consider two 8-qubit targets η: the Gibbs state \(e^{\mathcal{H}_{\mathrm{XXZ}}}/Z\) of a one-dimensional XXZ model (Quantum Data), and a state that coherently encodes the binary salamander retina data set (Classical Data). b Quantum relative entropy versus number of iterations for the Quantum Data target. The t < 0 iterations (gray area) show the reduction in relative entropy during GL 2D pre-training (red line). The t = 0 iteration corresponds to the pre-training result in panel (a). The t > 0 iterations show training in the absence of gradient noise, i.e., κ = ξ = 0.
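For concreteness, the following is a minimal NumPy/SciPy sketch (not the authors' code) of the quantities plotted in panel (a): it builds the Gibbs state of a one-dimensional XXZ chain as the target η and evaluates its quantum relative entropy to the maximally mixed state, i.e., the no-pre-training baseline. The chain length n = 4, anisotropy delta = 0.5, open boundaries, and the function names are illustrative assumptions; the sign convention \(e^{\mathcal{H}}/Z\) follows the caption.

```python
import numpy as np
from scipy.linalg import expm, logm

# Single-qubit Pauli matrices and identity.
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

def two_site_op(op_a, op_b, i, n):
    """Place op_a on site i and op_b on site i+1 of an n-qubit chain."""
    ops = [I2] * n
    ops[i], ops[i + 1] = op_a, op_b
    out = ops[0]
    for o in ops[1:]:
        out = np.kron(out, o)
    return out

def xxz_hamiltonian(n, delta=0.5):
    """1D XXZ chain with open boundaries (delta = 0.5 is an illustrative choice):
    H = sum_i (X_i X_{i+1} + Y_i Y_{i+1} + delta * Z_i Z_{i+1})."""
    H = np.zeros((2**n, 2**n), dtype=complex)
    for i in range(n - 1):
        H += two_site_op(X, X, i, n)
        H += two_site_op(Y, Y, i, n)
        H += delta * two_site_op(Z, Z, i, n)
    return H

def gibbs_state(H):
    """Gibbs state e^H / Tr[e^H], with the sign convention absorbed into H as in the caption."""
    rho = expm(H)
    return rho / np.trace(rho).real

def relative_entropy(eta, rho):
    """Quantum relative entropy S(eta || rho) = Tr[eta (log eta - log rho)], in nats.
    Assumes both states are full rank, as is the case for Gibbs states."""
    return np.trace(eta @ (logm(eta) - logm(rho))).real

n = 4  # small chain for illustration; the paper's target uses 8 qubits
eta = gibbs_state(xxz_hamiltonian(n))
maximally_mixed = np.eye(2**n) / 2**n
print(relative_entropy(eta, maximally_mixed))
```

For the maximally mixed baseline this reduces to \(S(\eta \,\|\, I/2^n) = n \ln 2 - S(\eta)\), which is why any pre-trained model with nonzero overlap in structure with η can only lower the starting relative entropy relative to this baseline.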
