Fig. 2: Caricature of main results: how to lift an i.i.d. learning algorithm \({{{\mathcal{A}}}}\) beyond the i.i.d. setting.



Left: the performance of general learning algorithms is covered by our first main result (Theorem 1). Right: the performance of non-adaptive and incoherent learning algorithms is covered by our second main result (Theorem 3). Restricting to non-adaptive and incoherent measurements \(\mathcal{M}_{\mathbf{r}}\) leads to much better theoretical performance guarantees. Here \(\mathcal{M}_{\mathrm{dist}}\) is a measurement device with low distortion, w is the calibration, p is the prediction, \(\mathcal{A}\) is the data processing of the i.i.d. algorithm, and \(\mathcal{M}_{\mathbf{r}}^{\mathcal{A}}\) is a measurement device chosen uniformly at random from \(\mathcal{A}\)'s set of measurements. Success occurs if p is (approximately) compatible with the remaining post-measurement test copies \(\rho_{l,\mathbf{w},p}^{A_N}\) or \(\rho_{l,\mathbf{r},\mathbf{w},p}^{A_N}\).
