Fig. 2: The model families in quantum machine learning. | Nature Communications

From: Quantum machine learning beyond kernel methods

a While data re-uploading models are by definition a generalization of linear quantum models, our exact mappings demonstrate that any polynomial-size data re-uploading model can be realized by a polynomial-size explicit linear model. b Kernelizing an explicit model corresponds to turning its observable into a linear combination of feature states \(\rho(x)\), for \(x\) in a dataset \(\mathcal{D}\). The representer theorem guarantees that, for any dataset \(\mathcal{D}\), the implicit model \(f^{*}_{\boldsymbol{\alpha},\mathcal{D}}\) minimizing the training loss associated with \(\mathcal{D}\) outperforms any explicit minimizer \(f^{*}_{\boldsymbol{\theta}}\) from the same Reproducing Kernel Hilbert Space (RKHS) with respect to this same training loss. However, depending on the feature encoding \(\rho(\cdot)\) and the data distribution, a restricted dataset \(\mathcal{D}\) may cause the implicit minimizer \(f^{*}_{\boldsymbol{\alpha},\mathcal{D}}\) to overfit the dataset severely and to generalize dramatically worse than \(f^{*}_{\boldsymbol{\theta}}\).
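The implicit model of panel b can be sketched classically: by the representer theorem, the minimizer of a regularized training loss over an RKHS is a linear combination of kernel evaluations on the dataset, \(f^{*}_{\boldsymbol{\alpha},\mathcal{D}}(x)=\sum_i \alpha_i\, k(x_i, x)\). The sketch below is a hedged illustration, not code from the article: it uses a Gaussian kernel as a stand-in for the quantum fidelity kernel \(k(x,x')=\mathrm{Tr}[\rho(x)\rho(x')]\), and a ridge (regularized squared) loss; all function names and parameter values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(20, 1))   # training inputs
y = np.sin(np.pi * X[:, 0])            # training labels

def kernel(a, b, gamma=1.0):
    # Gaussian kernel as a classical placeholder for the quantum
    # fidelity kernel k(x, x') = Tr[rho(x) rho(x')]
    d = a[:, None, :] - b[None, :, :]
    return np.exp(-gamma * np.sum(d**2, axis=-1))

lam = 1e-3                             # ridge regularization strength
K = kernel(X, X)                       # Gram matrix on the dataset D
# Representer-theorem coefficients alpha minimizing the regularized
# squared training loss: (K + lam*I) alpha = y
alpha = np.linalg.solve(K + lam * np.eye(len(X)), y)

def f_implicit(x_new):
    # Implicit model: a linear combination of kernel evaluations
    # on the dataset, i.e. the observable expanded in feature states
    return kernel(np.atleast_2d(x_new), X) @ alpha

train_mse = float(np.mean((f_implicit(X) - y) ** 2))
print(train_mse)  # near-zero: the implicit minimizer fits D closely
```

Note the trade-off the caption warns about: a small `lam` drives the training loss of the implicit minimizer below that of any explicit model in the same RKHS, but on a restricted dataset this same flexibility is what enables severe overfitting.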
