Fig. 2: Adaptive cost function.

a Schematic representation of the function f(t) and of the cost landscape of C(t, θ) versus t. We choose f(t) as a slowly growing function of t. When the optimization starts at t = 0, the cost function does not exhibit a barren plateau, as the Hamiltonian is local: H(0) = HL. As t increases, H(t) becomes a linear combination of HL and a global Hamiltonian HG(t), which is adaptively updated using information gained from measurements on V(θ)ρV†(θ). As shown in the insets, this procedure allows local minima to become global minima. Finally, when the algorithm ends at t = 1, the Hamiltonian is global: H(1) = HG(1). b Schematic representation of the eigenenergies of H(t) versus t. For small t the Hamiltonian is local, and hence its spectrum contains non-degeneracies that reduce the space of solutions. At t = 1, H(t) becomes a global Hamiltonian, and the spectrum has m non-degenerate levels and a (2^n − m)-fold degenerate level.
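
For concreteness, the adaptive cost sketched in panel a can be summarized as follows; the explicit interpolation coefficient (1 − f(t)) is an assumption chosen only to be consistent with the boundary conditions H(0) = HL and H(1) = HG(1) stated in the caption:

\[
H(t) = \bigl(1 - f(t)\bigr)\, H_L + f(t)\, H_G(t), \qquad
C(t, \theta) = \mathrm{Tr}\!\left[ H(t)\, V(\theta)\, \rho\, V^{\dagger}(\theta) \right],
\]

with f(0) = 0 and f(1) = 1, so that the cost interpolates from the local Hamiltonian landscape at t = 0 to the global one at t = 1.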