Fig. 2: Study of loss concentration with the exact KLD loss function.
From: Trainability barriers and opportunities in quantum generative modeling

Numerical evidence that the exact KLD loss can have a non-vanishing loss variance even when the model probabilities exhibit exponential concentration. We study the loss concentration of randomly initialized line-topology circuits for various datasets, increasing numbers of qubits n, and increasing circuit depths. We emphasize that the model probabilities qθ(x) were evaluated exactly, i.e., in the absence of shot noise. Beyond 6 qubits we also show the infinite-layer results, generated using Eq. (20). The GHZ dataset consists of the all-0 and all-1 bitstrings (\({\mathcal{O}}(1)\) support), the \({\mathcal{O}}(n)\) and \({\mathcal{O}}({n}^{2})\) datasets consist of n and \({n}^{2}\) random bitstrings, respectively, and the cardinality dataset contains all bitstrings with cardinality \(\frac{n}{2}\) (\({\mathcal{O}}({2}^{n})\) support). The magnitude of the loss variance shows a strong dependence on the dataset, which could lead to exponential concentration.
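As an illustration of the setup described in the caption, the following minimal sketch constructs the training datasets and evaluates the exact (shot-noise-free) KLD loss between a target distribution and exact model probabilities. The function names, the uniform target distribution over the training bitstrings, and the random-distribution stand-in for the circuit output qθ(x) are assumptions for illustration only, not the paper's implementation.

```python
import itertools
import numpy as np

def ghz_dataset(n):
    """O(1)-support dataset: the all-0 and all-1 bitstrings."""
    return ["0" * n, "1" * n]

def random_dataset(n, size, rng):
    """O(n)- or O(n^2)-support dataset: `size` distinct random n-bit strings."""
    ints = rng.choice(2 ** n, size=size, replace=False)
    return [format(int(i), f"0{n}b") for i in ints]

def cardinality_dataset(n):
    """O(2^n)-support dataset: all n-bit strings with Hamming weight n/2."""
    return ["".join(bits) for bits in itertools.product("01", repeat=n)
            if bits.count("1") == n // 2]

def target_distribution(dataset, n):
    """Uniform target p(x) over the training bitstrings (an assumption here)."""
    p = np.zeros(2 ** n)
    for bitstring in dataset:
        p[int(bitstring, 2)] = 1.0 / len(dataset)
    return p

def exact_kld(p, q, eps=1e-12):
    """Exact KL divergence sum_x p(x) log(p(x)/q_theta(x)),
    computed from full probability vectors, i.e. without shot noise."""
    mask = p > 0
    return float(np.sum(p[mask] * np.log(p[mask] / np.maximum(q[mask], eps))))

def loss_variance(sample_model_probs, p, num_inits):
    """Variance of the exact KLD loss over random circuit initializations.
    `sample_model_probs` is a hypothetical callable returning the exact
    output distribution q_theta(x) of a freshly initialized circuit."""
    losses = [exact_kld(p, sample_model_probs()) for _ in range(num_inits)]
    return float(np.var(losses))

if __name__ == "__main__":
    n = 6
    rng = np.random.default_rng(0)
    p = target_distribution(cardinality_dataset(n), n)
    # Stand-in for a line-topology circuit: random normalized distributions.
    sample_q = lambda: rng.dirichlet(np.ones(2 ** n))
    print(loss_variance(sample_q, p, num_inits=200))
```

In this sketch the loss variance is estimated by repeatedly drawing a fresh model distribution and recomputing the exact KLD, mirroring the averaging over random circuit initializations described in the caption.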