Fig. 1: The training protocol of the quantum end-to-end learning framework. | npj Quantum Information


From: Experimental quantum end-to-end learning on a superconducting processor


In the k-th iteration, a randomly selected image of a handwritten digit from the MNIST dataset is converted to a vector x(k) and then transformed by a matrix W(k) into the control variables \({\bf{\uptheta}}_{\rm{En}}^{(k)}\) that steer the qubits of the QNN to the quantum state \(\left\vert {\psi }^{(k)}({t}_{E})\right\rangle\); this process encodes x(k) into \(\left\vert {\psi }^{(k)}\right\rangle\). Subsequent inference control pulses \({\bf{\uptheta}}_{\rm{In}}^{(k)}\) drive \(\left\vert {\psi }^{(k)}({t}_{E})\right\rangle\) to the state \(\left\vert {\psi }^{(k)}({t}_{E+I})\right\rangle\), which is then measured. The parameters in W(k) and \({\bf{\uptheta}}_{\rm{In}}^{(k)}\) are updated for the next iteration according to the loss function \({\mathcal{L}}\) and its gradient obtained from the measurement. The circled numbers mark specific points in the data flow; the learning performance at these points is shown in Fig. 4. The top right shows a false-colored optical image of the six-qubit device used in our experiment.
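The iteration described above can be illustrated with a minimal numerical sketch. Everything here is an illustrative assumption, not the experimental protocol: the encoding is reduced to single-qubit Y-rotations with angles W·x, the inference pulses to two further rotation angles, the measurement to the probability of outcome |1⟩, and the measured gradient to a finite-difference estimate. The names `forward`, `theta_in`, and the learning rate are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def ry(theta):
    """Single-qubit rotation about the Y axis (real-valued matrix)."""
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]])

def forward(x, W, theta_in):
    # Encoding: W maps the image vector x to control angles theta_En,
    # which steer |0> to the encoded state |psi(t_E)>.
    theta_en = W @ x
    psi = np.array([1.0, 0.0])
    for t in theta_en:
        psi = ry(t) @ psi
    # Inference: theta_In drives |psi(t_E)> to |psi(t_{E+I})>.
    for t in theta_in:
        psi = ry(t) @ psi
    # Measurement: probability of outcome |1> serves as the output.
    return abs(psi[1]) ** 2

def loss(x, y, W, theta_in):
    # Quadratic loss between measured output and target label.
    return (forward(x, W, theta_in) - y) ** 2

# Toy data standing in for a flattened MNIST image and its label.
x = rng.normal(size=8)
y = 1.0
W = rng.normal(scale=0.1, size=(3, 8))    # trainable encoding matrix W(k)
theta_in = rng.normal(scale=0.1, size=2)  # trainable inference angles

# One iteration: estimate gradients by finite differences (the experiment
# obtains gradients from measurements) and update both parameter sets.
eps, lr = 1e-5, 0.05
l0 = loss(x, y, W, theta_in)
gW = np.zeros_like(W)
for i in np.ndindex(W.shape):
    Wp = W.copy(); Wp[i] += eps
    gW[i] = (loss(x, y, Wp, theta_in) - l0) / eps
gt = np.zeros_like(theta_in)
for j in range(theta_in.size):
    tp = theta_in.copy(); tp[j] += eps
    gt[j] = (loss(x, y, W, tp) - l0) / eps

W -= lr * gW
theta_in -= lr * gt
print(l0, loss(x, y, W, theta_in))  # the loss shrinks on this toy instance
```

The key point the sketch mirrors is that both the classical encoding matrix W(k) and the quantum inference parameters are trained jointly against the same measured loss, which is what makes the framework end-to-end.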
