Fig. 1: The generative modeling framework using quantum circuit Born machines.

From: Trainability barriers and opportunities in quantum generative modeling

Given a training dataset \(\tilde{P}\) with empirical distribution \(\tilde{p}({\boldsymbol{x}})\) over discrete data samples \({\boldsymbol{x}}\), the goal of a QCBM is to learn a distribution \(q_{{\boldsymbol{\theta }}}({\boldsymbol{x}})\) that models the real-world distribution \(p({\boldsymbol{x}})\) from which the training data were sampled. This is done by tuning the parameters \({\boldsymbol{\theta }}\) of a parametrized quantum circuit so that the QCBM minimizes a loss function estimating the distance between the model and the training distribution. Because the QCBM is an implicit model, it cannot in general be paired with an explicit loss function, but it may be trainable using an implicit loss. In contrast to the conventional loss-estimation strategy (solid lines) of generating a set of samples \({\tilde{Q}}_{{\boldsymbol{\theta }}}\) and forming an empirical distribution \({\tilde{q}}_{{\boldsymbol{\theta }}}({\boldsymbol{x}})\), strategies that are 'more quantum' (dashed lines) can be employed with the aim of allowing QCBMs to be trained with loss functions that conventionally appear explicit.
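To make the conventional (solid-line) strategy concrete, the following is a minimal sketch, not the paper's implementation: a toy two-qubit QCBM is simulated as a statevector, bitstrings sampled from its Born distribution form the empirical distribution \({\tilde{q}}_{{\boldsymbol{\theta }}}({\boldsymbol{x}})\), and an implicit, sample-based loss (here, squared MMD with a Gaussian kernel) is minimized. The RY-CZ ansatz, shot count, target distribution, and finite-difference gradient are all illustrative assumptions.

```python
# Sketch of the conventional QCBM training loop described in the caption.
# All circuit and optimizer details are illustrative choices, not the
# method of the paper.
import numpy as np

rng = np.random.default_rng(0)
N_QUBITS, DIM = 2, 4  # 2 qubits -> 4 computational basis states

def born_probs(theta):
    """Born distribution |<x|psi(theta)>|^2 of a toy RY-CZ circuit."""
    def ry(a):
        c, s = np.cos(a / 2), np.sin(a / 2)
        return np.array([[c, -s], [s, c]])
    cz = np.diag([1.0, 1.0, 1.0, -1.0])
    state = np.zeros(DIM)
    state[0] = 1.0
    # two layers: single-qubit RY rotations followed by an entangling CZ
    for layer in range(2):
        u = np.kron(ry(theta[2 * layer]), ry(theta[2 * layer + 1]))
        state = cz @ (u @ state)
    probs = np.abs(state) ** 2
    return probs / probs.sum()  # guard against floating-point drift

def empirical_dist(probs, n_shots):
    """Sample bitstrings and return the empirical distribution q~_theta."""
    counts = rng.multinomial(n_shots, probs)
    return counts / n_shots

def mmd2(p_emp, q_emp, sigma=1.0):
    """Squared MMD between two distributions over integers 0..DIM-1."""
    x = np.arange(DIM)
    k = np.exp(-((x[:, None] - x[None, :]) ** 2) / (2 * sigma**2))
    d = p_emp - q_emp
    return d @ k @ d

# toy training distribution p~(x), concentrated on |00> and |11>
p_train = np.array([0.5, 0.0, 0.0, 0.5])

theta = rng.uniform(0, 2 * np.pi, size=4)
for step in range(200):
    # finite-difference gradient of the shot-based loss (parameter-shift
    # rules exist, but this keeps the sketch self-contained)
    grad = np.zeros_like(theta)
    eps = 0.1
    for i in range(len(theta)):
        tp, tm = theta.copy(), theta.copy()
        tp[i] += eps
        tm[i] -= eps
        lp = mmd2(p_train, empirical_dist(born_probs(tp), 2000))
        lm = mmd2(p_train, empirical_dist(born_probs(tm), 2000))
        grad[i] = (lp - lm) / (2 * eps)
    theta -= 0.5 * grad

print("final MMD^2:", mmd2(p_train, empirical_dist(born_probs(theta), 2000)))
print("model probs:", np.round(born_probs(theta), 3))
```

Note that the loss is computed only from samples of the two distributions, which is what makes it usable with an implicit model; the dashed-line strategies in the figure instead aim to estimate quantities that would conventionally require explicit access to \(q_{{\boldsymbol{\theta }}}({\boldsymbol{x}})\).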