Fig. 3: Optimal hyperparameter settings.
From: Practical Hamiltonian learning with unitary dynamics and Gibbs states

Settings for N, L, and A as a function of the desired error ϵ. These settings are found by minimizing the upper bound on N ⋅ L (in practice, L can only take integer values, so the values shown would be rounded to the nearest integer). For arbitrary Hamiltonians, we observe \(L \sim \mathcal{O}\left(\log \epsilon^{-1}\right)\), \(A \sim \mathcal{O}\left(1\right)\), and \(N \sim \mathcal{O}\left(\mathrm{polylog}(1/\epsilon)\,\epsilon^{-2}\right)\). We find similar scaling for the commuting Hamiltonian in every variable except A, which also scales as \(\mathcal{O}\left(\log \epsilon^{-1}\right)\). Despite this, the overall sample complexity improves on the general case only by a constant factor.
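The asymptotic forms above can be sketched numerically. In this illustrative snippet, the prefactors `C_L`, `C_A`, `C_N` and the polylog exponent are hypothetical placeholders (the paper's minimization would fix them), so only the growth rates, not the values, reflect the caption:

```python
import math

# Hypothetical prefactors and polylog degree -- NOT values from the paper;
# chosen only to illustrate the scaling of L, A, and N with epsilon.
C_L, C_A, C_N = 1.0, 1.0, 1.0
POLYLOG_POWER = 2  # assumed degree of the polylog(1/eps) factor

def settings(eps, commuting=False):
    """Return (N, L, A) following the caption's asymptotic forms.

    L ~ O(log 1/eps), rounded to the nearest integer as in the caption;
    A ~ O(1) for arbitrary Hamiltonians, O(log 1/eps) if commuting;
    N ~ O(polylog(1/eps) * eps^-2).
    """
    log_inv = math.log(1 / eps)
    L = max(1, round(C_L * log_inv))            # integer-valued L
    A = C_A * (log_inv if commuting else 1.0)   # constant unless commuting
    N = math.ceil(C_N * log_inv ** POLYLOG_POWER / eps ** 2)
    return N, L, A

for eps in (1e-1, 1e-2, 1e-3):
    print(f"eps={eps:g}: N, L, A = {settings(eps)}")
```

Tightening ϵ by a factor of 10 grows L and A additively but N roughly quadratically, which is why the N ⋅ L bound is dominated by the ϵ⁻² factor in N.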