Fig. 2 | npj Quantum Information

From: Quantum generalisation of feedforward neural networks

Neural network implementations of a classical autoencoder (a) and a quantum autoencoder (b). The blue boxes represent the data-compression devices after the training procedure. a A classical autoencoder taking two inputs \(\mathrm{in}_1 = a_1^{(0)}\) and \(\mathrm{in}_2 = a_2^{(0)}\) and compressing them to a single hidden-layer output \(a_1^{(l)}\). The final output layer is used in training and is trained to reconstruct the inputs. The notation here follows ref. 1. b A quantum autoencoder that can accommodate two input qubits that are entangled. c A plot of the quantum autoencoder cost function as a function of the number of training steps. In this example the input state is picked uniformly at random from \((1/\sqrt{2})\left\{\left|00\right\rangle + \left|11\right\rangle,\ \left|00\right\rangle - \left|11\right\rangle\right\}\). The cost function converges to zero, showing that the network has learned to compress the input state onto one qubit and later recreate it. The non-monotonic decrease is to be expected because the input state varies from step to step. Qualitatively identical plots of the cost function converging to zero were also obtained for other examples of two orthogonal input states, including the case of three input qubits and one bottleneck qubit.
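The training summarised in c can be illustrated numerically. The following is a minimal sketch, not the paper's actual implementation: it assumes the encoder is a generic parameterised two-qubit unitary \(U(\theta)\) built from the 15 Pauli generators of SU(4), resets the trash qubit to \(\left|0\right\rangle\), decodes with \(U(\theta)^\dagger\), and minimises one minus the input-output fidelity by finite-difference gradient descent over the two Bell-like input states from the caption. All variable names and the optimisation scheme are illustrative assumptions.

```python
# Sketch of a quantum autoencoder compressing two qubits to one (assumed
# scheme, not the paper's training procedure): encode with U(theta), reset
# the trash qubit to |0>, decode with U(theta)^dagger, minimise 1 - fidelity.
import numpy as np
from scipy.linalg import expm
from itertools import product

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.diag([1.0, -1.0]).astype(complex)
# 15 traceless generators of SU(4): all Pauli tensor products except I (x) I.
gens = [np.kron(a, b) for a, b in product([I2, X, Y, Z], repeat=2)][1:]

def encoder(theta):
    """Parameterised two-qubit unitary U(theta) = exp(-i sum_k theta_k G_k)."""
    return expm(-1j * sum(t * g for t, g in zip(theta, gens)))

def cost(theta, inputs):
    """Average of 1 - <psi| rho_out |psi> over the training states."""
    U = encoder(theta)
    total = 0.0
    for psi in inputs:
        rho = U @ np.outer(psi, psi.conj()) @ U.conj().T
        # Trace out qubit 2 (the trash qubit) and re-prepare it in |0>.
        rho1 = rho.reshape(2, 2, 2, 2).trace(axis1=1, axis2=3)
        rho_reset = np.kron(rho1, np.diag([1.0, 0.0]))
        out = U.conj().T @ rho_reset @ U          # decode with U^dagger
        total += 1.0 - np.real(psi.conj() @ out @ psi)
    return total / len(inputs)

# The two input states from the caption: (|00> +/- |11>) / sqrt(2).
bell_plus = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)
bell_minus = np.array([1, 0, 0, -1], dtype=complex) / np.sqrt(2)
inputs = [bell_plus, bell_minus]

rng = np.random.default_rng(0)
theta = rng.normal(scale=0.1, size=len(gens))
eps, lr = 1e-4, 0.1
for step in range(500):
    # Central finite-difference gradient, one component per parameter.
    grad = np.array([(cost(theta + eps * e, inputs)
                      - cost(theta - eps * e, inputs)) / (2 * eps)
                     for e in np.eye(len(gens))])
    theta -= lr * grad
    if step % 100 == 0:
        print(f"step {step}: cost = {cost(theta, inputs):.6f}")
```

Unlike the procedure in c, which draws one input state uniformly at random per step (hence the non-monotonic cost curve), this sketch descends on the cost averaged over both states, so the printed cost typically decreases smoothly toward zero. Zero cost is achievable here because a single unitary can rotate the two-dimensional span of the Bell states onto states with the trash qubit in \(\left|0\right\rangle\).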
