Fig. 1: Overview of results.
From: Quantum advantage for learning shallow neural networks with natural data distributions

a Target function and input distributions. Given an input vector \(x\in {{\mathbb{R}}}^{d}\), we consider learning functions of the form \({g}_{{w}^{\star }}(x)=\cos ({x}^{\top }{w}^{\star })\), where \({w}^{\star }\in {{\mathbb{R}}}^{d}\) is an unknown weight vector. Our illustration emphasizes their connection with classical deep learning, where such functions are called cosine neurons. We also consider more general periodic neurons, which can be viewed as linear combinations of cosine neurons with unknown coefficients. We consider input distributions such as the uniform distribution, Gaussians, and more general sufficiently flat distributions, as characterized by the technical conditions specified in Supplementary Note 5. b Classical hardness. We strengthen the arguments of ref. 82 to show that classical gradient and correlational statistical query (SQ) methods require a number of iterations (i.e., a number of gradient samples) that is exponential in both the dimension of the problem and the norm \({R}_{w}\) of \({w}^{\star }\) to learn these functions. c Quantum algorithm. In contrast, our new quantum algorithm using quantum statistical queries (QSQs) is exponentially more efficient in both time and sample complexity.
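The target functions in panel a can be sketched in a few lines of NumPy. This is a minimal illustration, not the paper's learning algorithm: the hidden vector `w_star`, the Fourier coefficients in `periodic_neuron`, and the Gaussian input sampling are all illustrative assumptions standing in for the unknown quantities the learner must recover.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8  # input dimension (illustrative choice)

# Hidden weight vector w* that the learner does not know.
w_star = rng.normal(size=d)

def cosine_neuron(X, w):
    """Cosine neuron g_{w}(x) = cos(x^T w), applied row-wise to X."""
    return np.cos(X @ w)

def periodic_neuron(X, w, coeffs):
    """A periodic neuron sketched as a linear combination of cosine
    neurons cos(k * x^T w) with (here, made-up) coefficients coeffs."""
    return sum(a * np.cos(k * (X @ w)) for k, a in enumerate(coeffs, start=1))

# Sample inputs from a standard Gaussian, one of the flat input
# distributions considered in the paper.
X = rng.normal(size=(5, d))
y = cosine_neuron(X, w_star)          # labels of the cosine neuron
z = periodic_neuron(X, w_star, [0.5, 0.25])  # a toy periodic neuron
```

Since each output is a cosine (or a bounded combination of cosines), the labels `y` lie in [-1, 1], which is the bounded-label setting the hardness and quantum results above refer to.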