Abstract
We generalize the projection–based quantum measurement–driven k–SAT algorithm of Benjamin, Zhao, and Fitzsimons1 to arbitrary strength quantum measurements, including the limit of continuous monitoring. In doing so, we clarify that this algorithm is a particular case of the measurement–driven quantum control strategy elsewhere referred to as “Zeno dragging”. We argue that the algorithm is most efficient with finite time and measurement resources in the continuum limit, where measurements have an infinitesimal strength and duration. Moreover, for solvable k-SAT problems, the dynamics generated by the algorithm converge deterministically towards target dynamics in the long–time (Zeno) limit, implying that the algorithm can successfully operate autonomously via Lindblad dissipation, without detection. We subsequently study both the conditional and unconditional dynamics of the algorithm implemented via generalized measurements, quantifying the advantages of detection for heralding errors. These strategies are investigated first in a computationally–trivial 2-qubit 2-SAT problem to build intuition, and then we consider the scaling of the algorithm on 3-SAT problems encoded with 4–10 qubits. We numerically investigate the scaling of 3-SAT with respect to algorithmic runtime and find that the optimized time to solution scales with qubit number n as \({\lambda }^{n}\), where λ is slightly larger than \(\sqrt{2}\) for unconditional dynamics and less than \(\sqrt{2}\) for conditional dynamics. We assess the implications for using this analog measurement–driven approach to quantum computing in practice.
Introduction
Since the remarks by Feynman2, and algorithms by Shor3 and Grover4, quantum computation5,6 has drawn increasing research interest, spurring the rapid development of quantum technologies across a variety of experimental platforms. Even independent of any particular hardware implementation, a variety of approaches have been developed, including circuit–based quantum computers6, adiabatic–7,8 or annealing–based9,10 quantum computing, and measurement–based quantum computing11, in addition to many flavors of quantum simulators. Below we will explore an instance of a different approach: In contrast with “measurement–based” computation, where a highly entangled state is prepared and then measured, we will study here an example of what may instead be called “measurement–driven”1,12,13,14,15,16,17 computation. Measurement–driven computation here refers to an approach in which the fundamental invasiveness of quantum measurements or dissipators is used to create the quantum dynamics that solve a computational problem. Related approaches to computation involving or simulating open quantum systems have also been explored18,19,20,21.
Our point of departure in this manuscript is the quantum measurement–driven approach to solving k–SAT problems proposed by Benjamin, Zhao, and Fitzsimons1 (henceforth BZF). k–SAT problems involve n Boolean variables and require satisfaction of m clauses each containing k Boolean variables. The complexity of k-SAT is well understood, and Boolean satisfiability problems were among the first to be classified as NP-complete22,23,24. Many rigorous and heuristic classical algorithms have been proposed to solve k-SAT, particularly 3-SAT. While not the fastest classical algorithm, Schöning’s celebrated algorithm25, which repeatedly checks clauses and randomly corrects violated ones, is probably the best known. It has a provable runtime upper bound that scales as \({(4/3)}^{n}\) for 3-SAT25.
BZF proposed a method by which a k–SAT problem may be solved through repeated cycles of k–qubit projective measurements, with a measurement representing each clause of the logical proposition. Through gradual adjustment of the measurement axes defining true or false on a given qubit, the clause measurements are able to negotiate a solution. This may be regarded as a form of analog quantum computation (most similar to adiabatic computation), where open system dynamics (here, the invasiveness of measurement) are used to create the solution dynamics, instead of the closed system (unitary) dynamics that are more often considered for such purposes. Our aim in this manuscript is to generalize the BZF scheme to the cases of sequential general–strength measurements, and weak measurements or dissipators.
This approach rests on the enormous progress made in continuous quantum monitoring over the past few decades. Generalized quantum measurements are well understood theoretically26 in the language of Positive Operator–Valued Measures. Generalized measurements include not only projectors, but also minimally invasive (and minimally informative) weak quantum measurements. Continuous quantum monitoring arises in the limit of continuous infinitesimal–strength quantum measurements27,28,29,30,31, and has been extensively explored in experiments, primarily on superconducting qubit platforms32,33,34,35. The “quantum trajectories” conditioned on sequences of measurement readouts are accessible in real-time in such experiments, and the ensemble average of such trajectories reduces to Lindbladian dissipation of the monitored channel(s). Continuous monitoring enables such features as simultaneous monitoring of non-commuting qubit observables36,37,38,39,40,41,42,43, and measurement–dependent feedback control28,30,44,45,46,47,48,49 that enables such diverse capabilities as measurement–driven entanglement generation50,51,52,53,54,55, quantum state or subspace stabilization46,56,57,58,59,60,61,62,63,64,65,66, dissipative control via continuously–modified measurements (i.e., “Zeno Dragging”)67,68,69,70, or real–time error detection and correction71,72,73,74,75,76,77,78,79,80,81,82,83. The quantum Zeno effect84 describes the inhibition of quantum dynamics that are slow compared to measurement or dissipation. This too has been examined for weak measurements and/or continuous dissipation56,85,86,87,88,89,90,91,92, and has moreover found a number of applications as a form of dissipation engineering93,94, for quantum control of logical operations on qubits95,96,97, and for error correction57,58,59,60,61,62,63,64,65,66,69,97,98,99,100,101,102,103,104,105.
We highlight two main motivations for generalizing the BZF algorithm with such tools:
1. Projective measurements imply idealized and discrete quantum operations that exist at the limit of infinite resource consumption106, in contrast with laboratory situations, which typically involve continuous dissipation of a system via imperfectly–monitored channels.
2. We will be able to formally connect the BZF k–SAT algorithm to “Zeno Dragging”67,68,70, which is a method for measurement–driven quantum control based on the quantum Zeno effect. Importantly, while Zeno dragging is improved by true measurement (dissipation leading to detection), it is a protocol that is able to function autonomously (with dissipation alone).
As we formally develop these ideas, we will be able to quantify some of the algorithm’s convergence properties as a function of the quantum measurement resources it requires. The extensive literature concerning quantum measurement contributes to the understanding of the generalized BZF algorithm that we develop below because (i) the algorithm of interest requires monitoring non-commuting observables corresponding to different clauses of a k–SAT problem, and (ii) the observables are changed over time as in the Zeno dragging approach to control, such that a solution state or subspace is dissipatively stabilized by the set of clause measurements. In particular, whenever a solution exists, the measurement–driven algorithm generates dynamics following a pure–state kernel of the Liouvillian in the adiabatic regime, similar to refs. 86,88.
The plan and aims of this paper are as follows: In section “Reformulating the BZF scheme with continuous measurement” we review the construction of k–SAT problems, and the construction of quantum measurements used in the BZF projective algorithm. In section “Generalizing the measurement strength”, we describe how the projectors in the BZF algorithm may be generalized to finite strength and/or continuous monitoring. Specifically, we write down Kraus operators and state update rules for generalized clause measurements in section “Conditional and un-conditional dynamics under generalized measurement and dissipation”, and then write the corresponding time–continuous version of the dynamics in section “Weak continuous limit”. In section “Convergence in the Zeno limit” we then formalize the parallel between these continuous dynamics and Zeno dragging, and describe the convergence properties of the time–continuous algorithm in the Zeno limit (analogous to the adiabatic limit). These convergence properties imply that the algorithm is capable of functioning autonomously. We are then in a position to begin putting forward several generalized BZF algorithms: In section “Un-conditional BZF algorithm with finite measurement strength” we describe a dissipative (autonomous) algorithm, and in section “Heralding BZF algorithm with finite measurement strength via filtering” we describe a heralded algorithm that explicitly uses the clause measurement records. Finally, in section “Readout scheme and time-to-solution”, we describe the final (local) qubit readout, used to turn a state in the qubit register into a candidate solution bitstring to a k-SAT proposition, and describe how we quantify the time-to-solution (TTS).
The methods of section “Generalizing the measurement strength” are applied first for the simple case of a two–qubit 2-SAT problem in section “Two–qubit 2-SAT: from discrete to continuous measurement”, with the aim of building intuition, and then applied to larger–scale 3-SAT problems in section “Scaling the problem up”. As we investigate larger 3-SAT problems, we comment increasingly on the TTS and its scaling with regard to qubit number. Concluding remarks are offered in section “Discussion and outlook”.
Reformulating the BZF scheme with continuous measurement
SAT problems
In this subsection, we briefly introduce the Boolean satisfiability (SAT) problem. A SAT problem involves n Boolean variables \({\{{b}_{j}\}}_{j = 1}^{n}\), where each variable bj takes the value true (denoted by 0 or +) or false (denoted by 1 or −). A literal xj can take values from \(\{{b}_{j},{\bar{b}}_{j}\}\), where \({\bar{b}}_{j}\) is the negation of bj. In k-SAT, a clause Ci contains k literals connected by logical OR (∨). A k-SAT instance is then defined by a Boolean formula F in the conjunctive normal form (CNF), which involves m clauses connected by logical AND (∧)

\(F={C}_{1}\wedge {C}_{2}\wedge \cdots \wedge {C}_{m},\qquad (1)\)
where Ci can be, for example, \({C}_{1}={b}_{2}\vee {b}_{3}\vee {\bar{b}}_{5}\) for k = 3. A clause is satisfied when at least one of the literals is true, and we say a given SAT instance is satisfiable iff there exists an assignment of the n Boolean variables \({b}_{soln}\in {\{0,1\}}^{n}\) such that all m clauses are satisfied simultaneously. A SAT problem is then a decision problem, where the goal is to decide whether a given SAT instance is satisfiable or not (A summary of index variables is provided in Table 1).
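The clause and formula structure above can be sketched in a few lines of code. The representation below (a clause as a tuple of signed 1-indexed literals) and the brute-force decision procedure are illustrative choices of ours, not part of the BZF construction:

```python
import itertools

# A clause is a tuple of signed 1-indexed literals,
# e.g. (2, 3, -5) encodes C1 = b2 v b3 v (not b5).
def clause_satisfied(clause, assignment):
    """assignment[j] is True/False for Boolean variable b_{j+1}."""
    return any(assignment[abs(lit) - 1] == (lit > 0) for lit in clause)

def is_satisfiable(n, clauses):
    """Brute-force CNF decision procedure (exponential; for intuition only)."""
    return any(
        all(clause_satisfied(c, bits) for c in clauses)
        for bits in itertools.product([True, False], repeat=n)
    )

# Example: F = (b1 v b2) ^ (~b1 v b2) ^ (b1 v ~b2) is satisfied by b1 = b2 = true.
print(is_satisfiable(2, [(1, 2), (-1, 2), (1, -2)]))  # True
```

This makes explicit what each clause measurement must test: a clause fails only on the single assignment of its k variables in which every literal is false.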
Note that, while 2-SAT can be solved by classical algorithms such as that of ref. 107 in polynomial time, k-SAT for k ≥ 3 is NP-complete. However, these classical complexity arguments are based on worst-case analysis. In practice, the difficulty of solving random SAT instances is not uniform in the clause density α = m/n. It has been shown that, as α increases from 0, random SAT exhibits a phase transition with easy-hard-easy behavior108.
The BZF algorithm
We now describe the way we encode the classical Boolean variables into qubits, following the scheme due to Benjamin, Zhao, and Fitzsimons (BZF)1. We will then briefly review the BZF algorithm for 3-SAT based on projective measurements before proceeding to section “Generalizing the measurement strength” where we present the extended algorithm based on generalized measurement schemes that constitute the focus of this work.
For a classical Boolean variable bj, the two possible values (true and false) are represented by two θ-dependent pure states of a qubit j as per

\(\left\vert \theta \right\rangle =\hat{R}(\theta )\left\vert +\right\rangle ,\qquad \left\vert \bar{\theta }\right\rangle =\hat{R}(-\theta )\left\vert +\right\rangle ,\qquad (2)\)

where θ is a control parameter taking its value from [0, π/2]. Here \(\left\vert +\right\rangle\) is the equal superposition state \(\left\vert +\right\rangle =\frac{1}{\sqrt{2}}(\left\vert 0\right\rangle +\left\vert 1\right\rangle )\) that lies on the equator in the xz-plane of each qubit’s Bloch sphere. The θ-dependence is obtained by rotating \(\left\vert +\right\rangle\) around the y-axis of the Bloch sphere by an angle ± θ, with the sign corresponding to true or false. The rotation operator is given by

\(\hat{R}(\theta )=\exp \left(i\theta {\hat{\sigma }}_{y}/2\right)=\left(\begin{array}{cc}\cos (\theta /2)&\sin (\theta /2)\\ -\sin (\theta /2)&\cos (\theta /2)\end{array}\right),\qquad (3)\)

such that \(\left\vert \theta \right\rangle \to \left\vert 0\right\rangle\) and \(\left\vert \bar{\theta }\right\rangle \to \left\vert 1\right\rangle\) as θ → π/2.
See Fig. 1 for an illustration.
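The encoding states and their orthogonal complements are easy to tabulate numerically. The explicit amplitudes below fix one sign convention (the true state tilts toward \(\left\vert 0\right\rangle\)); that convention is our assumption for illustration:

```python
import numpy as np

def theta_states(theta):
    """Return |θ⟩ (true), |θ̄⟩ (false), and their orthogonal complements.

    Convention (our choice): |+⟩ sits at Bloch polar angle π/2 in the
    xz great circle; true/false tilt it toward |0⟩ / |1⟩ by θ.
    """
    a = (np.pi / 2 - theta) / 2   # polar half-angle of |θ⟩
    b = (np.pi / 2 + theta) / 2   # polar half-angle of |θ̄⟩
    t  = np.array([np.cos(a), np.sin(a)])    # |θ⟩
    f  = np.array([np.cos(b), np.sin(b)])    # |θ̄⟩
    tp = np.array([-np.sin(a), np.cos(a)])   # |θ⊥⟩
    fp = np.array([-np.sin(b), np.cos(b)])   # |θ̄⊥⟩
    return t, f, tp, fp

t, f, tp, fp = theta_states(np.pi / 2)
print(np.isclose(abs(np.dot(t, f)), 0.0))   # True: θ=π/2 makes true/false orthogonal
t0, f0, *rest = theta_states(0.0)
print(np.allclose(t0, f0))                  # True: θ=0 makes them indistinguishable
```

The overlap \(\langle \bar{\theta }| \theta \rangle =\cos \theta\) interpolates between the two regimes discussed below.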
A θ-dependent projection operator \({\hat{P}}_{i}(\theta )\) is assigned to each clause Ci. Specifically, for k-SAT, the projector is given by

\({\hat{P}}_{i}(\theta )={\bigotimes }_{q = 1}^{k}{\left\vert {l}_{{i}_{q}}{\theta }^{\perp }\right\rangle }_{({i}_{q})}\left\langle {l}_{{i}_{q}}{\theta }^{\perp }\right\vert ,\qquad (4)\)
where the integer \({l}_{{i}_{q}}\) is + 1 if the qth literal in the ith clause \({x}_{{i}_{q}}\) is equal to the Boolean variable \({b}_{{i}_{q}}\) itself, and \({l}_{{i}_{q}}\) is − 1 if \({x}_{{i}_{q}}\) is equal to the negated corresponding Boolean variable \({\bar{b}}_{{i}_{q}}\). The states \(\vert {\theta }^{\perp }\rangle\) and \(\vert -{\theta }^{\perp }\rangle\) are defined by

\(\left\vert {\theta }^{\perp }\right\rangle =\hat{R}(\theta )\left\vert -\right\rangle ,\qquad \left\vert -{\theta }^{\perp }\right\rangle =\hat{R}(-\theta )\left\vert -\right\rangle ,\qquad (5)\)
which are orthogonal to \(\left\vert \theta \right\rangle\) and \(\left\vert \bar{\theta }\right\rangle\) defined in Eq. (2) respectively, i.e., \(\langle {\theta }^{\perp }| \theta \rangle =0=\langle {\bar{\theta }}^{\perp }| \bar{\theta }\rangle\). As an example, the projector of the clause \({C}_{1}={b}_{2}\vee {b}_{3}\vee {\bar{b}}_{5}\) is given by

\({\hat{P}}_{1}(\theta )={\left\vert {\theta }^{\perp }\right\rangle }_{(2)}\left\langle {\theta }^{\perp }\right\vert \otimes {\left\vert {\theta }^{\perp }\right\rangle }_{(3)}\left\langle {\theta }^{\perp }\right\vert \otimes {\left\vert -{\theta }^{\perp }\right\rangle }_{(5)}\left\langle -{\theta }^{\perp }\right\vert .\qquad (6)\)
Essentially, the projector \({\hat{P}}_{i}(\theta )\) checks whether the clause Ci is violated: \({\hat{P}}_{i}(\theta )\) defines a measurement of a Hermitian observable

\({\hat{X}}_{i}(\theta )=\hat{{\mathbb{1}}}-2{\hat{P}}_{i}(\theta ),\qquad (7)\)
where \(\hat{{\mathbb{1}}}\) is the identity operator and \({\hat{X}}_{i}(\theta )\) has two eigenvalues ± 1. If the measurement gives +1 (Success), then Ci is not violated, while if the measurement gives −1 (Fail), then Ci is violated and the k-qubit subsystem of the quantum state is projected into the state \({\bigotimes }_{q = 1}^{k}{\vert {l}_{{i}_{q}}{\theta }^{\perp }\rangle }_{({i}_{q})}\langle {l}_{{i}_{q}}{\theta }^{\perp }\vert\), which corresponds to the unique assignment of the relevant k Boolean variables that violates the clause Ci.
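The clause observable \({\hat{X}}_{i}(\theta )=\hat{{\mathbb{1}}}-2{\hat{P}}_{i}(\theta )\) can be built explicitly as a tensor product. The sketch below uses our own sign convention for the encoding states and a signed-literal clause representation; it is a minimal illustration, not the paper's implementation:

```python
import numpy as np
from functools import reduce

def rot_state(sign, theta):
    """|θ⟩ (sign=+1, true) or |θ̄⟩ (sign=-1, false); one consistent convention."""
    a = (np.pi / 2 - sign * theta) / 2
    return np.array([np.cos(a), np.sin(a)])

def perp(v):
    """Orthogonal complement of a real single-qubit state vector."""
    return np.array([-v[1], v[0]])

def clause_observable(n, clause, theta):
    """X̂_i(θ) = 1 - 2 P̂_i(θ), where P̂_i projects onto the unique violating
    assignment ⊗|l θ⊥⟩ of the clause (clause = signed 1-indexed literals)."""
    ops = [np.eye(2)] * n
    for lit in clause:
        v = perp(rot_state(np.sign(lit), theta))
        ops[abs(lit) - 1] = np.outer(v, v)
    return np.eye(2 ** n) - 2 * reduce(np.kron, ops)

X = clause_observable(3, (1, 2, -3), np.pi / 5)
print(sorted({int(round(e)) for e in np.linalg.eigvalsh(X)}))  # → [-1, 1]
```

Since \({\hat{P}}_{i}\) is rank one on the clause's k qubits, the −1 (Fail) eigenspace singles out exactly one assignment, and any state built from satisfying assignments is a +1 eigenstate.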
The BZF algorithm1 can be summarized as follows. The input quantum state is the equal superposition of all computational basis states \({\left\vert +\right\rangle }^{\otimes n}\). In the cth cycle of clause measurements, one sequentially checks all the clauses \({\{{C}_{i}\}}_{i = 1}^{m}\) via projectors \({\{{\hat{P}}_{i}({\theta }_{c})\}}_{i = 1}^{m}\) in a pre-determined order. At the end of each cycle of clause measurements, θc is updated according to some schedule moving from θ = 0 to θ = π/2 over the course of the algorithm. The result is that after many clause measurement cycles, with the true/false measurement angles getting further apart, the qubits’ states “fan out”, and eventually arrive at z = ±1 in the computational basis when θ reaches π/2. All of the qubits are then individually measured in the computational basis to give the assignment of Boolean variables as the output, at which point the algorithm has terminated. During the entire process, if any clause measurement has failed, then one restarts the algorithm from the very beginning.
Intuitively speaking, when one fixes θ = 0, the quantum state cannot distinguish true and false of the Boolean variable, i.e., \(\vert \theta \rangle =\vert \bar{\theta }\rangle\) and \(\vert {\theta }^{\perp }\rangle =\vert {\bar{\theta }}^{\perp }\rangle\), but the clause checks will never fail. On the other hand, when θ = π/2, the quantum states corresponding to true and false are orthogonal \(\langle \bar{\theta }| \theta \rangle =\langle {\bar{\theta }}^{\perp }| {\theta }^{\perp }\rangle =0\) and thus can be perfectly distinguished. However, implementing the algorithm at fixed θ = π/2 is equivalent to a randomized classical brute force search. This is the advantage of the BZF algorithm, where θ is varied from 0 to π/2 so that the success probability throughout the algorithm is higher than classical brute force searching, while the value θ = π/2 at the end of the algorithm ensures the complete information of the solution assignment can be obtained. In particular, BZF showed that for θ ∈ (0, π/2), the number of quantum states that can satisfy all clause checks is equal to the number of solution assignments of the Boolean variables in the SAT problem. Furthermore, if there is a solution assignment (s1, …, sn) with sj ∈ {±1} satisfying the CNF Boolean formula, then there is a corresponding quantum solution given by

\(\left\vert {\phi }_{soln}(\theta )\right\rangle ={\bigotimes }_{j = 1}^{n}{\left\vert {s}_{j}\theta \right\rangle }_{(j)}.\qquad (8)\)
We note that this solution state is both pure and separable, and necessarily also a +1 simultaneous eigenstate of all of the clause observables \({\hat{X}}_{j}(\theta )\) defined in Eq. (7). BZF also showed that the fidelity of the instantaneous quantum state to the solution is monotonically increasing as one continues to implement clause check cycles that herald success. BZF’s numerical results indicated that the running time of their projection–based quantum algorithm measured in terms of the number of projective measurements scales as \({(1.19)}^{n}\), outperforming the classical Schöning algorithm which is well known for its provable upper bound of \({(1.334)}^{n}\)25.
Generalizing the measurement strength
In this section, we introduce a measurement model that will allow us to scale between the limits of projective measurement and weak continuous measurement, in the context of the BZF algorithm for solving k-SAT. As we generalize the measurement strength below, we will be moving the discrete and projective scheme of BZF towards a scheme of generalized measurements, including the limit of weak continuous measurement, in which the measurement or dissipation is composed of a sequence of infinitesimal–strength open–system processes occurring on infinitesimal timesteps. With finite measurement strength, perfect projectors exist only in the limit of operations that take an infinitely long time to complete; such ideal operations do not really exist in any laboratory. Explicitly considering tradeoffs between measurement quality and the time expended to carry them out will later prove to be important in quantifying the performance and speed of the measurement–driven algorithm. As part of this exploration, we will show that the continuous version of the BZF algorithm is a version of what we have elsewhere called Zeno Dragging (see ref. 70 and references therein). Zeno dragging involves monitoring or dissipating observable(s) so as to induce the quantum dynamics to follow an eigenspace of the observable(s) over time. Here, we define clause measurements such that the k-SAT solution is the mutual +1 eigenspace of many observables, that we Zeno drag as θ is varied. This allows for high–probability/high–fidelity control as long as the observable is moved slowly compared to the strength with which it is measured14,88. Moreover, such Zeno dragging schemes work autonomously (via dissipation without detection). We will ultimately define two main variants of the generalized algorithm, involving (i) the average (dissipative) dynamics, or (ii) the true measurement dynamics in which errors can be heralded. These two approaches are presented in four algorithmic subroutines below.
Conditional and un-conditional dynamics under generalized measurement and dissipation
In general, a measurement process can be described by a set of Kraus operators \(\{{\hat{M}}_{r}\}\) satisfying \({\sum }_{r}{\hat{M}}_{r}^{\dagger }{\hat{M}}_{r}=\hat{{\mathbb{1}}}\), where r is the label for the measurement record30,31. Given the pre-measurement state described by a density matrix ρ0, the probability of obtaining measurement result r is given by Born’s rule \({\mathbb{P}}(r)={\rm{Tr}}({\hat{M}}_{r}{\rho }_{0}{\hat{M}}_{r}^{\dagger })\), and the post-measurement state ρr conditioned on the readout r is

\({\rho }_{r}=\frac{{\hat{M}}_{r}{\rho }_{0}{\hat{M}}_{r}^{\dagger }}{{\rm{Tr}}({\hat{M}}_{r}{\rho }_{0}{\hat{M}}_{r}^{\dagger })}.\qquad (9)\)
We will consider here Kraus operators for generalized clause checks of the form

\({\hat{M}}_{r}={\left(\frac{\Delta t}{2\pi }\right)}^{1/4}\exp \left(-\frac{\Delta t}{4}{\left(r-\frac{\hat{X}(\theta )}{\sqrt{\tau }}\right)}^{2}\right),\qquad (10)\)
where \(\hat{X}(\theta )\) is defined in Eq. (7). Here, τ is the “characteristic measurement time” representing the measurement strength and Δt is the duration of the measurement. Short τ corresponds to fast “collapse” (strong measurement), while large τ corresponds to slow “collapse” (weak measurement). The particular form of these Kraus operators assumes a measurement apparatus generating a continuous–valued readout r, and is based on, e.g., the Kraus operators derived in quantum optical contexts109,110. In particular, in the limit Δt/τ ≫ 1 the above–defined generalized measurement is effectively projective, while Δt/τ ≪ 1 is the limit of weak measurement. The latter leads to diffusive quantum trajectories when infinitesimal–strength measurements are continuously made over time (i.e., in the limit of continuous monitoring)28,29,30,31. When Δt/τ is between the two limits, the Kraus operators describe a generalized discrete measurement with finite strength.
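A single finite-strength clause check can be simulated directly. The sketch below assumes a Gaussian Kraus form \({\hat{M}}_{r}\propto \exp (-\frac{\Delta t}{4}{(r-\hat{X}/\sqrt{\tau })}^{2})\) with \({\hat{X}}^{\dagger }=\hat{X}\), \({\hat{X}}^{2}=\hat{{\mathbb{1}}}\); the normalization and the single-qubit σz example are our illustrative choices:

```python
import numpy as np
rng = np.random.default_rng(0)

def kraus(X, r, dt, tau):
    # X̂² = 1 gives exp(aX̂) = cosh(a)1 + sinh(a)X̂, so no matrix exponential needed.
    a = dt * r / (2 * np.sqrt(tau))
    I = np.eye(len(X))
    return (dt / (2 * np.pi)) ** 0.25 * np.exp(-dt / 4 * (r**2 + 1 / tau)) * (
        np.cosh(a) * I + np.sinh(a) * X)

def measure(rho, X, dt, tau):
    """Sample a readout r from P(r) = Tr(M_r ρ M_r†), then update the state."""
    # P(r) is a Gaussian mixture centred on the eigenvalues ±1/√τ, variance 1/Δt.
    p_plus = np.trace((np.eye(len(X)) + X) / 2 @ rho).real
    x = 1.0 if rng.random() < p_plus else -1.0
    r = rng.normal(x / np.sqrt(tau), 1 / np.sqrt(dt))
    M = kraus(X, r, dt, tau)
    post = M @ rho @ M.conj().T
    return r, post / np.trace(post).real

Z = np.diag([1.0, -1.0])
rho = np.full((2, 2), 0.5)                    # |+⟩⟨+|
r, rho = measure(rho, Z, dt=5.0, tau=0.1)     # Δt/τ = 50: effectively projective
print(np.round(rho, 3))                       # ≈ |0⟩⟨0| or |1⟩⟨1|
```

Shrinking Δt/τ instead leaves the state almost unchanged per step, which is the weak-measurement regime used below.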
When one considers the measurement process without measurement records, the dynamics are described by the average over all the possible trajectories, weighted by their probabilities. In this case, given the density operator ρ(t) at t, the average post-measurement density operator \(\bar{\rho }(t+\Delta t)\) is given by

\(\bar{\rho }(t+\Delta t)=\frac{1+\beta }{2}\,\rho (t)+\frac{1-\beta }{2}\,\hat{X}(\theta )\,\rho (t)\,\hat{X}(\theta ),\qquad (11)\)

where \(\beta ={e}^{-\Delta t/2\tau }\).
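The unconditional update is simply dephasing in the instantaneous eigenbasis of \(\hat{X}(\theta )\). A minimal sketch, assuming the two-term form \(\bar{\rho }\to \frac{1+\beta }{2}\rho +\frac{1-\beta }{2}\hat{X}\rho \hat{X}\) with \(\beta ={e}^{-\Delta t/2\tau }\) and an illustrative σz observable:

```python
import numpy as np

def average_update(rho, X, dt, tau):
    """Average (record-free) update for one clause check of duration dt."""
    beta = np.exp(-dt / (2 * tau))
    return 0.5 * (1 + beta) * rho + 0.5 * (1 - beta) * (X @ rho @ X)

Z = np.diag([1.0, -1.0])
rho = np.full((2, 2), 0.5)                     # |+⟩⟨+|
rho = average_update(rho, Z, dt=1.0, tau=0.25)
print(np.round(rho, 3))   # populations unchanged; coherences shrink by β = e^-2
```

Eigenstates of \(\hat{X}\) are exact fixed points of this map, which is the property the Zeno-dragging argument below relies on.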
Weak continuous limit
In this work, we will pay particular attention to the limit where Δt/τ ≪ 1, which is the limit of weak continuous measurement. In this case, the measurement performed in each infinitesimal time interval Δt has vanishing strength. By expanding Eq. (11) to first order in Δt, we can derive the Lindblad master equation (LME) for the average dynamics

\(\dot{\bar{\rho }}=\frac{1}{4\tau }{\mathcal{L}}[\hat{X}(\theta )]\bar{\rho },\qquad (12)\)
where \({\mathcal{L}}[\hat{X}]\) is the Lindbladian generator \({\mathcal{L}}[\hat{X}]\rho =\hat{X}\rho {\hat{X}}^{\dagger }-\frac{1}{2}({\hat{X}}^{\dagger }\hat{X}\rho +\rho {\hat{X}}^{\dagger }\hat{X})\), and \(\bar{\rho }\) denotes the average state. This is an expected and general property of Markovian quantum trajectories28,29,30,31,111,112. Note that we will be able to assume \(\hat{X}{(\theta )}^{\dagger }=\hat{X}(\theta )\) and \(\hat{X}{(\theta )}^{2}=\hat{{\mathbb{1}}}\) throughout this work, which is guaranteed for \(\hat{X}\) of the form \(\hat{{\mathbb{1}}}-2\,\hat{P}\) in Eq. (7).
The individual trajectories conditioned on the measurement record, which can be expressed as \(r\,dt=\langle \hat{X}\rangle dt/\sqrt{\tau }+dW\), are also of interest. From Eq. (9) we can derive an Itô stochastic master equation (SME) describing such dynamics under the weak continuous limit28,29,30

\(d\rho =\frac{dt}{4\tau }{\mathcal{L}}[\hat{X}]\rho +\frac{dW}{2\sqrt{\tau }}{\mathcal{H}}[\hat{X}]\rho ,\qquad (13)\)
where \({\mathcal{H}}[\hat{X}]\rho =\rho \hat{X}+\hat{X}\rho -2\langle \hat{X}\rangle \rho\) is the measurement backaction with \(\langle \hat{X}\rangle ={\rm{Tr}}(\hat{X}\rho )\) the expectation value of \(\hat{X}\). Here dW is the Wiener increment satisfying \(d{W}^{2}=dt\) by the rules of Itô calculus. Notice that the dynamics satisfying Eq. (12) are recovered by averaging over all possible trajectories in Eq. (13), consistent with the fact that dW has zero mean.
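A single diffusive trajectory can be integrated with an Euler–Maruyama step. The sketch below assumes the prefactor conventions \(d\rho =\frac{dt}{4\tau }{\mathcal{L}}[\hat{X}]\rho +\frac{dW}{2\sqrt{\tau }}{\mathcal{H}}[\hat{X}]\rho\); the σz observable, parameters, and seed are illustrative:

```python
import numpy as np
rng = np.random.default_rng(2)

def lindblad(X, rho):
    """L[X̂]ρ = X̂ρX̂ - (X̂²ρ + ρX̂²)/2 (Hermitian X̂)."""
    return X @ rho @ X - 0.5 * (X @ X @ rho + rho @ X @ X)

def backaction(X, rho):
    """H[X̂]ρ = ρX̂ + X̂ρ - 2⟨X̂⟩ρ."""
    ex = np.trace(X @ rho).real
    return rho @ X + X @ rho - 2 * ex * rho

def trajectory(rho, X, tau, dt, steps):
    """Euler–Maruyama integration of one conditioned trajectory."""
    for _ in range(steps):
        dW = rng.normal(0.0, np.sqrt(dt))
        rho = rho + dt / (4 * tau) * lindblad(X, rho) \
                  + dW / (2 * np.sqrt(tau)) * backaction(X, rho)
        rho = rho / np.trace(rho).real   # guard against round-off drift
    return rho

Z = np.diag([1.0, -1.0])
rho = trajectory(np.full((2, 2), 0.5), Z, tau=0.1, dt=1e-3, steps=5000)
print(np.round(np.diag(rho).real, 2))   # after 50 collapse times: near |0⟩ or |1⟩
```

Averaging many such trajectories (or dropping the dW term) recovers the deterministic dephasing of Eq. (12), since dW has zero mean.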
A striking difference between the algorithm dynamics for SAT under continuous measurement and under projective measurement is the effect of measurement ordering. To see this, we first notice that the observables corresponding to different clauses do not necessarily commute. This happens for θ ∈ (0, π/2) when a Boolean variable common to two clauses appears in complementary (negated and non-negated) form. For example, the observables for C1 = x1 ∨ x2 and \({C}_{2}={x}_{1}\vee {\bar{x}}_{2}\) do not commute because both x2 and \({\bar{x}}_{2}\) appear in different clauses. Therefore, in the dynamics under projective measurement, the order of measurements will be important. However, for weak continuous measurement, non-commutativity plays no role in the time–continuous master equations Eq. (12) and Eq. (13), because non-commutative effects in the dynamics only appear to O(Δt2) and higher36,37,38,39,40,42,113. In other words, one can simultaneously measure all the clause observables under weak continuous measurement, and any effects due to clause ordering must vanish in the limit Δt/τ → 0. Importantly, the simultaneous clause checks enabled by weak continuous measurement immediately reduce the algorithmic running time by a factor of m compared to strong measurement, where clauses must be measured sequentially. The average dynamics under such m simultaneous clause measurements are described by the master equation

\(\dot{\bar{\rho }}=\mathop{\sum }\limits_{i = 1}^{m}\frac{1}{4\tau }{\mathcal{L}}[{\hat{X}}_{i}(\theta )]\bar{\rho },\qquad (14)\)
and the individual trajectory conditioned on \({\{d{W}_{i}\}}_{i = 1}^{m}\) is described by the SME

\(d\rho =\mathop{\sum }\limits_{i = 1}^{m}\left(\frac{dt}{4\tau }{\mathcal{L}}[{\hat{X}}_{i}(\theta )]\rho +\frac{d{W}_{i}}{2\sqrt{\tau }}{\mathcal{H}}[{\hat{X}}_{i}(\theta )]\rho \right),\qquad (15)\)
where each observable \({\hat{X}}_{i}(\theta )\) corresponds to a clause Ci. Each of the readouts on which these dynamics are conditioned is a sum of signal and noise contributions, namely,

\({r}_{i}\,dt=\frac{\langle {\hat{X}}_{i}(\theta )\rangle }{\sqrt{\tau }}\,dt+d{W}_{i},\qquad (16)\)
where the first term is the expected signal content of the measurement outcome, and the Wiener process dWi is pure noise.
Convergence in the Zeno limit
We now elaborate on the convergence properties of Zeno dragging necessary for an understanding of our algorithms. Let us consider the case where there is a unique solution to our k–SAT problem, i.e., we assume there exists a unique solution of the form of Eq. (8). We may then define a frame change by the rotation

\(\hat{Q}(\theta )={\bigotimes }_{j = 1}^{n}\hat{R}{({s}_{j}\theta )}_{(j)},\qquad (17)\)
where ± rotations are assigned to each qubit according to the unknown solution bitstring s, such that the ideal solution dynamics become static in the \(\hat{Q}\)–frame. In this frame all of the \({\hat{X}}_{i}(\theta )\) are partially diagonalized, in the sense that the row/column of each \({\hat{{\mathcal{X}}}}_{i}={\hat{Q}}^{\dagger }\,{\hat{X}}_{i}(\theta )\,\hat{Q}\) which corresponds to the solution state at θ = π/2 (or t = Tf, where Tf is the total time) in the computational basis will now be occupied only by its θ–independent diagonal element. The schedule θ(t) is here assumed to vary in a continuous and differentiable way.
Let \(\varrho ={\hat{Q}}^{\dagger }\,\rho \,\hat{Q}\), such that the Itô \(\hat{Q}\)–frame dynamics read

\(d\varrho =-i[{\hat{H}}_{Q},\varrho ]\,dt+\mathop{\sum }\limits_{i = 1}^{m}\left(\frac{dt}{4\tau }{\mathcal{L}}[{\hat{{\mathcal{X}}}}_{i}]\varrho +\frac{d{W}_{i}}{2\sqrt{\tau }}{\mathcal{H}}[{\hat{{\mathcal{X}}}}_{i}]\varrho \right),\qquad (18)\)
with

\({\hat{H}}_{Q}=\frac{\dot{\theta }}{2}\mathop{\sum }\limits_{j = 1}^{n}{s}_{j}\,{\hat{\sigma }}_{y}^{(j)}.\qquad (19)\)
The additional Hamiltonian term \({\hat{H}}_{Q}\) encodes diabatic motion due to movement of the \(\hat{Q}\)–frame as the observables are rotated, with \(s\in {\{\pm 1\}}^{n}\) representing the solution bitstring corresponding to Eq. (17). The solution state Eq. (8) is a simultaneous +1 eigenstate of the clause observables \({\hat{X}}_{i}\); we now notate it as \(\tilde{\rho }=\left\vert {\phi }_{soln}(\theta )\right\rangle \left\langle {\phi }_{soln}(\theta )\right\vert\), with \(\tilde{\varrho }={\hat{Q}}^{\dagger }\,\tilde{\rho }\,\hat{Q}\) its \(\hat{Q}\)–frame image. In the new frame, this eigenstate is θ–independent and therefore time–independent, i.e., we have \({\hat{{\mathcal{X}}}}_{i}\,\tilde{\varrho }\,{\hat{{\mathcal{X}}}}_{i}=\tilde{\varrho }\) for all i.
We can now consider the algorithm dynamics in the \(\hat{Q}\)–frame, in the Zeno limit Tf /τ → ∞ (which is an adiabatic limit). We initialize our system in \(\tilde{\varrho }\), corresponding to \(\rho ={(\left\vert +\right\rangle \left\langle +\right\vert )}^{\otimes n}\) at θ = 0. Notice that because \(\tilde{\varrho }\) is an eigenstate of all the \({\hat{{\mathcal{X}}}}_{i}\), we have \({\mathcal{L}}[{\hat{{\mathcal{X}}}}_{i}]\tilde{\varrho }=0\) and \({\mathcal{H}}[{\hat{{\mathcal{X}}}}_{i}]\tilde{\varrho }=0\) for all i. Then in the limit Tf /τ → ∞, we have \(\dot{\theta }\to 0\), and \(\tilde{\varrho }\) is therefore a fixed point of both the conditional and unconditional dynamics. This means that in the Zeno limit, our algorithm converges in probability to the desired solution dynamics via perfect Zeno pinning in the \(\hat{Q}\)–frame, thereby achieving deterministic (diffusion–free) evolution. In other words, when the \(\hat{Q}\)–frame can be constructed, namely, when a unique solution exists, one may think of the rotation of the clause observables as generating a diabatic perturbation about the solution Eq. (8) when Tf /τ ≫ 1. We point out that these limiting–case solution dynamics are both pure and separable in the case of a unique solution. Furthermore, the arguments above imply that the scaling of the Lindbladian algorithm with qubit number goes to \({1}^{n}\) in the Zeno limit Tf/τ → ∞, since any solution that exists is found deterministically, independent of the number of qubits. See Supplementary Information Section I and refs. 57,58,59,60,61,62,63,64,65,66,70 for further comments and context. We will revisit questions related to algorithmic scaling in later sections.
We now proceed by clarifying the connection between such continuous BZF algorithms and the measurement–driven approach to quantum control known as “Zeno Dragging”. In ref. 70 we defined “Zeno Dragging” in terms of a quantity

\({\mathsf{g}}(\rho ,\theta )=\mathop{\sum }\limits_{j}{{\mathsf{g}}}_{j}(\rho ,\theta ),\qquad {{\mathsf{g}}}_{j}=\langle {\hat{X}}_{j}{(\theta )}^{2}\rangle -{\langle {\hat{X}}_{j}(\theta )\rangle }^{2},\qquad (20)\)
claiming that “Zeno dragging is a viable approach to driving a quantum system from some initial \(\vert {\psi }_{i}\rangle\) to a final \(\vert {\psi }_{f}\rangle\), if and only if there exists a parameter(s) θ controlling the choice of measurement such that (i) a continuous sweep in θ is possible, and (ii) that this generates a continuous deformation of a local minimum of \({\mathsf{g}}(\rho ,\theta )\) which traces a path from \(\vert {\psi }_{i}\rangle\) to \(\vert {\psi }_{f}\rangle\)”. It is easy to verify that each \({{\mathsf{g}}}_{j}\) vanishes at an eigenstate of \({\hat{X}}_{j}(\theta )\), and hence that \({\mathsf{g}}(\rho ,\theta )\) vanishes at a common eigenstate of all the \({\hat{X}}_{j}(\theta )\), if such an eigenstate exists. One can readily see the connection between this definition and definitions employing the Liouvillian kernel86,88. We conclude that the time–continuum version of the finite–time generalized–measurement BZF algorithm is a Zeno Dragging operation by the definition of ref. 70, where Zeno dragging here means that we continuously and quasi-adiabatically deform the initial state \({\left\vert +\right\rangle }^{\otimes n}\) to the distinguishable solution state \(\left\vert {\phi }_{soln}(\theta =\frac{\pi }{2})\right\rangle\) by rotating θ from 0 to π/2.
Finally, we note that in the case with multiple solutions, each solution will still take the form of Eq. (8), and together they will span the solution subspace. The convergence and stability arguments given above will apply to the entire solution subspace in so far as they apply for each of the individual solutions Eq. (8) forming a basis for that space. Even in the multi–solution case, the solution space should here be defined as the root of \({\mathsf{g}}\), i.e., the common + 1 eigenspace of all the clause observables. From a control perspective, \({\mathsf{g}}\) is an objective function, and minimization of \({\mathsf{g}}\) implies staying as close to our target solution eigenspace as possible70. See Supplementary Information Section I for more extended remarks.
To summarize: the adiabatic theorem for Lindbladian dynamics, which is equivalent to Zeno dragging in the time continuum limit, ensures that if one starts in the kernel of \({\mathcal{L}}(\theta )\), and if the total evolution time Tf is long enough compared to the minimum Liouvillian gap of \({\mathcal{L}}(\theta )\), then the quantum system will stay near the instantaneous pure–state kernel of \({\mathcal{L}}(\theta )\)86,88. In our case, it is easy to check that the pure-state kernel of \({\mathcal{L}}(\theta )\) is spanned by the solution state(s) \(\left\vert {\phi }_{soln}(\theta )\right\rangle\)1, since a state being in the kernel is equivalent to its passing all clause checks with certainty. It is natural from a control perspective to imagine Zeno dragging in terms of a single measurement that isolates some target state or subspace (see ref. 70 and references therein). Here however, we have a sort of mirror image of that scenario: instead of a single measurement that positively identifies some target dynamics, the continuous BZF algorithm contains a collection of measurements that each rule out a possible solution, with the desired solution emerging as the remaining option from the collective dynamics. We might term this “Zeno exclusion control”. We have essentially shown that a collective “ruling out” of all states that fail clause checks leads to dynamics with the same autonomous/stabilizing properties on the remaining solution subspace as in controlled Zeno dragging. While having many measurements appears experimentally cumbersome, this strategy makes sense from an algorithmic perspective, because it implements Zeno dragging without our knowing the target dynamics a priori (and here knowing target dynamics \(\left\vert {\phi }_{soln}(\theta )\right\rangle\) would amount to already knowing a solution to the k-SAT problem).
Un-conditional BZF algorithm with finite measurement strength
The autonomous stability we have just described motivates us to further investigate a BZF-type algorithm based on Lindbladian dissipation \({\mathcal{L}}(\theta )=\mathop{\sum }\nolimits_{i = 1}^{m}\frac{1}{4\tau }{\mathcal{L}}[{\hat{X}}_{i}(\theta )]\) (from Eq. (14)) alone. Recall that Eq. (11) gives the average dynamics of our quantum system under a single clause check measurement with finite measurement strength. We use this to propose an algorithm based on the average dynamics for general Δt/τ (where Eq. (12) is recovered in the Δt → dt limit of Eq. (11)). Algorithmically, one can perform such clause check measurements for all m clauses in a predetermined order, which constitutes one clause check cycle. At the end of each clause check cycle, one then updates the control parameter θ, proceeding monotonically from θ = 0 to π/2. Finally, one reads out all the qubits in the computational basis. This is summarized in Algorithm 1 (with the readout procedure deferred to Algorithm 4).
Algorithm 1
k-SAT by average dynamics
1: input: τ, Tf, Δt, and a schedule function θ(t/Tf)
2: initialize t ← 0 and the state to \(\rho (0)=\left\vert +\right\rangle {\left\langle +\right\vert }^{\otimes n}\)
3: while t ≤ Tf do
4: t ← t + Δt
5: θ ← θ(t/Tf)
6: sequentially dissipate clause checks according to Eq. (11) for all clauses
7: end while
8: return ρ(Tf)
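To make the clause check cycle concrete, the loop of Algorithm 1 can be sketched numerically. The sketch below assumes that the average dynamics of Eq. (11) take the standard unconditional dephasing form for a Gaussian weak measurement of a Hermitian clause observable \({\hat{X}}_{i}\) satisfying \({\hat{X}}_{i}^{2}={\mathbb{1}}\); the function names and this explicit form are our own illustration rather than the paper's notation.

```python
import numpy as np

def average_clause_update(rho, X, dt, tau):
    # Unconditional update for one finite-strength clause check, assuming
    # the Gaussian-Kraus dephasing form rho -> (1+g)/2 rho + (1-g)/2 X rho X,
    # with g = exp(-dt / 2 tau); X is Hermitian with X^2 = identity.
    g = np.exp(-dt / (2.0 * tau))
    return 0.5 * (1.0 + g) * rho + 0.5 * (1.0 - g) * (X @ rho @ X)

def algorithm1(clauses, theta_schedule, Tf, dt, tau, n):
    # clauses(theta) returns the list of clause observables X_i(theta);
    # theta_schedule(s) maps s = t/Tf in [0, 1] to the control angle.
    dim = 2 ** n
    plus = np.ones(dim) / np.sqrt(dim)              # |+>^(tensor n)
    rho = np.outer(plus, plus).astype(complex)
    t = 0.0
    while t <= Tf:
        t += dt
        theta = theta_schedule(t / Tf)
        for X in clauses(theta):                    # one clause check cycle
            rho = average_clause_update(rho, X, dt, tau)
    return rho
```

Each pass through the inner loop implements one clause check cycle; the final readout of Algorithm 4 is not included here.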
One can immediately check that the solution state defined in Eq. (8) is a fixed point of Eq. (11) applied over all clause dissipators. However, the algorithm using average dynamics nevertheless allows the possibility of reaching the final solution state at θ = π/2 via some diabatic path, in which the measurement (if recorded) has failed and thus the state deviates from the solution state at some intermediate time. A Lindbladian BZF algorithm is a special case of Algorithm 1, operating in the time–continuum limit. It relies on the convergence in mean and probability in the Zeno limit, as described in section “Convergence in the Zeno limit”.
Heralding BZF algorithm with finite measurement strength via filtering
We now formally consider the advantages of having a detector granting us access to the pure state conditional dynamics Eq. (9), instead of relying only on the average dynamics Eq. (11) and autonomous aspects of Zeno stabilization. Although propagation of the conditional dynamics is always possible in principle, it may be computationally very expensive in contexts where solving a k-SAT problem is of interest. This is no barrier to the scheme presented here however, since it does not require computation of the conditional ρ(t), but only access to the clause readouts ri (followed by local z readouts rj after θ reaches π/2; see section “Readout scheme and time-to-solution” for details). Note that in the event that the conditional dynamics can actually be tracked through the entire evolution with high efficiency, the terminal local z readout may no longer be necessary.
The potential benefit of an algorithm employing heralded success of clause readout ri is primarily in the possibility of feedback. We might quickly terminate trajectories that have already collapsed into the subspace associated with failure of clauses, which means they are no longer able to adiabatically follow the instantaneous solution state. Restarting the algorithm upon detection of an error is the simplest possible form of feedback we might employ in this scenario, and in this work we do not go beyond that simplest case of error detection. However, more sophisticated forms of feedback might aim to actively correct errors in real time, instead of simply restarting the algorithm when a clause fails; see section “Two–qubit 2-SAT: from discrete to continuous measurement” for additional discussion of these possibilities that might be investigated in future work.
Unlike projective measurements, where collapse into a subspace of the measured observable can be diagnosed easily from the measurement result, weak measurements generate diffusive dynamics, which complicates fast diagnosis of errors directly from the noisy measurement record. In order to overcome this, a filter is needed. We first use the time continuous situation to develop some intuition for the filter we are going to use. Such filtering is used in continuous quantum error correction (CQEC) to detect and correct errors in real time71,72,73,74,75,76,77,78,79,80,81,82,83. In the language of error correction, an ideal Zeno dragging procedure conditioned on \({r}_{i}=+1/\sqrt{\tau }\,\forall \,i\) defines the “codespace” that we attempt to follow. A failed clause measurement will return readouts with a mean signal centered around \(-1/\sqrt{\tau }\) instead of \(+1/\sqrt{\tau }\), corresponding to an “error subspace” in error correction language. The main difficulty in realizing an effective CQEC implementation is in managing the tradeoff between rapid error detection and statistical confidence in the detection and characterization of the error. We consider filter functions on the readout of the form
where \({\mathcal{W}}(t,{t}^{{\prime} })\) is a window function to be chosen below, and \({{\mathcal{N}}}_{{\mathcal{W}}}=\mathop{\int}\nolimits_{0}^{t}{\mathcal{W}}(t,{t}^{{\prime} })\,d{t}^{{\prime} }\) is a normalization factor. For \({\mathcal{W}}/{{\mathcal{N}}}_{{\mathcal{W}}}=1\), and in the continuum limit where dt is infinitesimal, \({{\mathcal{B}}}_{i}\) can be interpreted as a time–continuum approximation of the log-likelihood ratio for a sequence of clause measurements heralding success \(\left\langle {r}_{i}\right\rangle =+1/\sqrt{\tau }\) versus failure \(\left\langle {r}_{i}\right\rangle =-1/\sqrt{\tau }\), over the entire measurement record. Thus, (20) should be similarly understood as being like a log-likelihood, where the role of the window function \({\mathcal{W}}(t,{t}^{{\prime} })\) is to weight that likelihood towards the “recent history” of the measurement record in an appropriate way. For the measurement signal ri(t) of each clause i, we obtain the filtered signal \({\bar{r}}_{i}(t)\) via an exponential filter with a finite integration window
where Nbe = 1 − e−1 is the normalization constant, and Tbe is the response time. This is essentially an exponential filter inside a single threshold boxcar filter78,79.
Recall the general form of the clause readouts Eq. (16) in the time–continuum limit. If the signal has reached a steady value s0 before t0, i.e., \(\langle {\bar{r}}_{i}({t}_{0})\rangle =\langle {r}_{i}({t}_{0})\rangle ={s}_{0}\), and the “signal part” of the raw signal changes from s0 to 〈ri(t > t0)〉 = s1 at t0, then the expectation value of the filtered signal will approach the new steady value s1 exponentially as
for t0 ≤ t ≤ t0 + Tbe, with 〈 ⋅ 〉 understood as an ensemble average. One can check that \(\langle {\bar{r}}_{i}({t}_{0}+{T}_{be})\rangle ={s}_{1}\), which persists for t > t0 + Tbe. In the steady state, the variance of the filtered signal is
where e ≈ 2.7183. This is consistent with the intuition that in the filtered signal, a larger Tbe value results in smaller fluctuations but a longer response time, while a smaller Tbe value results in a quicker response but bigger fluctuations.
When implementing the error detection for a discrete-time algorithm, one then discretizes Eq. (21) to get an update equation for the filtered signal \({\bar{r}}_{i}(t)\) as
The error-detection strategy using the filtered signal depends on a threshold value rth. During the evolution of the system under Eq. (9) for all clauses, if the filtered signal of any clause i falls below the threshold at time t, i.e., \({\bar{r}}_{i}(t) < {r}_{th}\), then the algorithm is terminated at time t and diagnosed as “FAILED”. This subroutine is summarized in Algorithm 2.
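A minimal numerical sketch of this filter-and-threshold subroutine is as follows, implementing the exponential filter inside a boxcar window as a direct windowed sum with \({N}_{be}=1-{e}^{-1}\) (one possible discretization of Eq. (21); the precise update of Eq. (24) may differ in detail):

```python
import numpy as np

def filtered_signal(r, dt, T_be):
    # Exponential filter inside a boxcar window of width T_be, applied to a
    # raw readout array r sampled at spacing dt. This is a direct windowed
    # sum; a recursive update would be cheaper but less transparent.
    N_be = 1.0 - np.exp(-1.0)
    W = int(round(T_be / dt))                    # samples inside the boxcar
    rbar = np.zeros(len(r))
    for k in range(len(r)):
        lo = max(0, k - W + 1)
        lags = (k - np.arange(lo, k + 1)) * dt   # t - t' for window samples
        weights = np.exp(-lags / T_be) * dt / T_be
        rbar[k] = weights @ r[lo:k + 1] / N_be
    return rbar

def first_failure(rbar, r_th):
    # Earliest sample index at which the filtered signal crosses below the
    # threshold r_th (the "FAILED" diagnosis), or None if it never does.
    hits = np.flatnonzero(rbar < r_th)
    return int(hits[0]) if hits.size else None
```

For a steady raw signal, the filter output settles to the signal value up to O(Δt/Tbe) discretization error, consistent with the step response described above.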
Algorithm 2
k-SAT by single trial heralded dynamics
1: input: τ, Tf, Δt, Tbe, rth, and a schedule θ(t/Tf)
2: initialize t ← 0 and the state to \(\rho (0)=\left\vert +\right\rangle {\left\langle +\right\vert }^{\otimes n}\)
3: while t ≤ Tf do
4: t ← t + Δt
5: θ ← θ(t/Tf)
6: sequentially measure all clauses as defined by Eq. (9) and get measurement records \({\{{r}_{i}(t)\}}_{i = 1}^{m}\).
7: update the filtered signals \({\{{\bar{r}}_{i}(t+\Delta t)\}}_{i = 1}^{m}\) via Eq. (24).
8: if any \({\bar{r}}_{i}(t) < {r}_{th}\) then
9: terminate and return “FAILED”, t
10: end if
11: end while
12: return ρ(Tf)
Notice that in the projective limit where Δt/τ ≫ 1, one can choose Tbe = Δt, so that the projection into the failed subspace can be detected immediately. In the continuum limit where Δt/τ ≪ 1, one can typically choose Tbe ~ τ, so that the collapse caused by the measurement is sufficiently far along to be confidently detectable through the readout noise.
As discussed above, the heralded dynamics benefit from earlier detection of failure. This advantage allows us to define a heralded algorithm that restarts when a failure is detected early, and thus is more likely to follow the correct trajectory given a fixed amount of temporal computational resource (total algorithm running time), even without feedback beyond restarting. This heralded algorithm is summarized in Algorithm 3.
Algorithm 3
k-SAT by heralded dynamics
1: input: τ, Tf, Δt, Tbe, Tmin, rth, and a schedule θ(t/Tf)
2: initialize trest ← Tf and the state to \(\rho (0)=\left\vert +\right\rangle {\left\langle +\right\vert }^{\otimes n}\)
3: while trest ≥ Tmin do
4: run Algorithm 2 with input τ, trest, Δt, Tbe, rth, θ(t)
5: if “FAILED” and t are returned then
6: trest ← trest − t
7: else
8: return ρ(trest)
9: end if
10: end while
11: run Algorithm 2 without failure detection and with schedule θ(t/trest) from t = 0 until time t = trest
12: return ρ(trest)
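The bookkeeping of Algorithm 3 can be sketched as follows, with the single-trial heralded evolution of Algorithm 2 stubbed out as a callable; `run_trial` and its signature are a stand-in of our own, not part of the algorithm as specified:

```python
def algorithm3(run_trial, Tf, T_min):
    # run_trial(T, detect) stands in for Algorithm 2: it should evolve one
    # heralded trajectory for at most time T and return a tuple
    # (failed, t_used, state); with detect=False it runs the conditional
    # dynamics without the filter-based failure check (line 11 above).
    t_rest = Tf
    while t_rest >= T_min:
        failed, t_used, state = run_trial(t_rest, True)
        if failed:
            t_rest -= t_used            # charge the failed attempt to the budget
        else:
            return state                # no failure heralded within t_rest
    # budget too small for reliable filtering: one last unheralded attempt
    _, _, state = run_trial(t_rest, False)
    return state
```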
Notice that in Algorithm 3, we do not simply terminate the dynamics when the remaining time trest is smaller than a minimum value Tmin. This is because when the total dragging time is too small (comparable to Tbe and τ), the quantum Zeno effect is not strong and the filter-based failure detection is no longer reliable. However, in this situation we can still use conditional dynamics, i.e., Eq. (9) rather than the averaged dynamics of Eq. (11).
We reiterate that although the clause measurement outcomes are recorded in the implementation of Algorithm 3, we do not assume that these measurement records are used directly to estimate the solution bitstring or conditional state, even if the solution state might be successfully prepared at the end of the algorithm. We only use these measurement signals to herald the success of trajectories. Therefore, for both Algorithm 1 and Algorithm 3, we would need to perform local Pauli-z measurements to read out the solution bitstring at the final time Tf. We shall discuss the final readout measurements in the next section.
We would like to emphasize that although Algorithms 1–3 are defined with finite Δt, one can also obtain continuous–time versions of each algorithm by taking the limit Δt → 0. In this case, the dynamics are described by Eqs. (14) and (15), and the filter is given by Eq. (21).
Readout scheme and time-to-solution
Before we move on to illustrate the algorithms, we shall first introduce some performance metrics that we use to benchmark the algorithms in sections “Two–qubit 2-SAT: from discrete to continuous measurement” and “Scaling the problem up”.
For n-qubit systems, if we make the weak measurements in the computational basis, the Kraus operator for obtaining a readout signal tuple \({\bf{r}}=({r}_{1},\cdots \,,{r}_{n})\in {{\mathbb{R}}}^{n}\) is
where Δtm is the duration of the readout. Here b = (b1, ⋯ , bn) ∈ {−1, +1}n represents a possible solution bitstring, and
is the corresponding projector in the computational basis (z basis). Suppose that upon performing local readouts of the state ρ = ρ(Tf) returned by one of Algorithms 1–3 to obtain r, we then generate our candidate solution bitstring via \(\tilde{{\bf{s}}}={\rm{sign}}({\bf{r}})\). Let s = (s1, ⋯ , sn) ∈ {−1, +1}n be the actual solution bitstring of the k-SAT problem. Then the probability of getting \(\tilde{{\bf{s}}}={\bf{s}}\), i.e., the probability that our algorithm and readout return a correct solution, is
where we have abbreviated
Simplifying this yields
where hs,b is the Hamming distance between s and each possible bitstring b. The above derivation gives a procedure for reading out a bitstring via generalized measurement from the final state of either quantum Algorithm 1 or Algorithm 3. The overall general algorithm using either of these two generalized measurement algorithms together with the final state readout is summarized in Algorithm 4 below.
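As a sketch, the simplified success probability can be evaluated numerically in the form \({2}^{-n}{\sum }_{{\bf{b}}}{\rm{Tr}}(\rho {\hat{P}}_{{\bf{b}}}){(1+\epsilon )}^{n-{h}_{{\bf{s}},{\bf{b}}}}{(1-\epsilon )}^{{h}_{{\bf{s}},{\bf{b}}}}\) with \(\epsilon ={\rm{erf}}(\sqrt{\Delta {t}_{m}/2\tau })\), which is our reading of the simplification described above; the computational-basis ordering convention (b = +1 for bit value 0) is an assumption of this sketch:

```python
import numpy as np
from math import erf, sqrt
from itertools import product

def success_probability(rho, s, dt_m, tau):
    # P(s~ = s) under local Gaussian readout of duration dt_m, assuming the
    # form 2^{-n} sum_b Tr(rho P_b) (1+eps)^{n-h} (1-eps)^{h}, where
    # h is the Hamming distance between s and b. The identification of the
    # bitstring b = (+1, ..., +1) with the state |0...0> is a convention
    # assumed here, not fixed by the text.
    n = len(s)
    eps = erf(sqrt(dt_m / (2.0 * tau)))
    P = 0.0
    for idx, b in enumerate(product([+1, -1], repeat=n)):
        h = sum(si != bi for si, bi in zip(s, b))
        pop = rho[idx, idx].real            # Tr(rho P_b), computational basis
        P += pop * (1 + eps) ** (n - h) * (1 - eps) ** h
    return P / 2 ** n
```

In the strong-readout limit Δtm ≫ τ with ρ concentrated on the solution, the 2−n prefactor is canceled and the probability tends to one, as noted below.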
Algorithm 4
k-SAT with generalized qubit readout
1: input: a CNF Boolean formula F, τ, Tf, Δt, Δtm, a schedule θ(t/Tf); if Algorithm 3 is used, values of Tbe, Tmin, rth are also input.
2: run either Algorithm 1 or Algorithm 3, and obtain a final state ρ(Tf)
3: read out local \({\hat{\sigma }}_{z}\) of ρ(Tf) with Kraus operators given in Eq. (25), obtaining readouts \({\bf{r}}={\{{r}_{i}\}}_{i = 1}^{n}\)
4: extract the candidate bitstring b = sign(r)
5: if b is classically verified to be a solution bitstring then
6: return verified solution b
7: else
8: repeat steps 2–4
9: end if
Recall from the arguments of section “Convergence in the Zeno limit” that in the limit of long dragging times Tf, ρ is expected to concentrate in the solution subspace, i.e., \({\rm{Tr}}(\rho \,{\hat{P}}_{{\bf{b}}})\) will be appreciable only for bitstrings b that are actually solutions (assuming one exists). Moreover, as the readout time also becomes long (Δtm ≫ τ, such that the local readout is effectively projective), \({\rm{erf}}(\sqrt{\Delta {t}_{m}/2\tau })\) also tends to one, so that the leading 2−n is canceled out, and we expect to deterministically recover a correct solution bitstring.
We analyze the tradeoffs between the success probability \({{\mathbb{P}}}_{{\bf{s}}}\) in an individual run of Algorithm 4 and algorithm runtime by first defining a time-to-solution (TTS) as
where an algorithm realized within a dragging time Tf is followed by qubit readout of duration Δtm and is repeated N times. The superscript (m) denotes that the final measurement time Δtm is included. Suppose we allow ourselves to perform these N shots under the assumption that classically verifying whether each output bitstring b is actually a solution to our k-SAT problem is easy; this is true since k-SAT is in NP. How much time, and how many shots N, should be required to guarantee that the correct solution appears at least once with probability greater than some confidence cutoff \({{\mathbb{P}}}_{\star }\)? The probability that M shots out of N are successful is governed by binomial statistics, i.e.,
Then the probability of at least one successful run is
where \({{\mathbb{P}}}_{{\bf{s}}}^{(0/N)}\) is the probability that all N runs fail. Consequently our condition of interest is
such that the expected number of runs needed to achieve a solution with probability at least \({{\mathbb{P}}}_{\star }\) is
Here, \({{\mathbb{P}}}_{\star }\) should be understood as a desired level of solution confidence specified by the experimenter.
Given the above, we may now define a modified TTS that replaces N by N⋆, i.e.,
This is now consistent with the confidence bound \({{\mathbb{P}}}_{\star }\), i.e., it measures the time to solution for achieving a solution with probability at least \({{\mathbb{P}}}_{\star }\). The readout time Δtm is often neglected114 below, in order to emphasize the scaling of the TTS with the pure dragging time Tf alone, which is consistent with common analyses of asymptotic scaling of quantum algorithms.
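These counting formulas are simple enough to state in code. The following sketch implements \({N}_{\star }=\lceil \ln (1-{{\mathbb{P}}}_{\star })/\ln (1-{{\mathbb{P}}}_{{\bf{s}}})\rceil\) and the corresponding TTS, our reading of Eqs. (30) and (31); the readout time may be set to zero to match the scaling analyses below:

```python
from math import ceil, log

def n_star(P_s, P_star=0.99):
    # Expected number of shots so that at least one succeeds with probability
    # >= P_star, assuming independent shots with per-shot success P_s:
    # 1 - (1 - P_s)^N >= P_star  =>  N >= ln(1 - P_star) / ln(1 - P_s).
    return ceil(log(1.0 - P_star) / log(1.0 - P_s))

def tts(P_s, Tf, dt_m=0.0, P_star=0.99):
    # Time-to-solution at confidence P_star; dt_m = 0 neglects readout time,
    # emphasizing the scaling with the pure dragging time Tf alone.
    return n_star(P_s, P_star) * (Tf + dt_m)
```

For instance, a per-run success probability of one half requires seven shots at 99% confidence, so the TTS is seven times the per-run duration.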
The expression for N⋆, Eq. (30), may also be understood as a means of assessing whether a given set of parameters Tf/τ and Δtm/τ is sufficient to achieve any useful advantage of this measurement–driven algorithm over some other option. For example, a choice of parameters requiring N⋆ ≥ 2n to achieve the desired confidence level is clearly useless, because it would be faster to simply guess solution bitstrings at random and check whether they satisfy all m clauses. More generally, we can impose a condition that we want \({N}_{\star }\le {\lambda }_{c}^{n}\), where λc is some critical rate parameter that we want to beat. Some algebra reveals that
such that we can understand the \({{\mathbb{P}}}_{{\bf{s}}}\) required in an individual run of the algorithm to achieve a given scaling rate λc. We will soon see that in practice, the most difficult aspect of these expressions to evaluate is the dependence of \({{\mathbb{P}}}_{{\bf{s}}}\) on Tf.
Two–qubit 2-SAT: from discrete to continuous measurement
In order to gain some intuition about the dynamics under our measurement-driven algorithm, it is insightful to analyze the minimal version of the SAT problem. 2-SAT belongs to complexity class P and thus might be viewed as less interesting than the intractable SAT problems with k ≥ 3. The particular situation in which there are only two Boolean variables involved in the 2-SAT CNF formula is nevertheless of interest here, because the application of our generalized measurement BZF algorithms 1–3 illustrates the continuous form of BZF dynamics, while remaining relatively simple and accessible to detailed analysis. We will refer to this illustrative toy problem as the 2-qubit 2-SAT problem.
The simplest 2-SAT problem
2-qubit 2-SAT is the minimal version of the SAT problem that captures most of the significant features of SAT while involving a minimum number of qubits and clause measurements. In this section, we will analyze the algorithmic dynamics of the two-qubit system under generalized measurements. In particular, we will show that for a given total algorithm running time, weak continuous measurements lead to the highest success probability for finite Tf/τ. This motivates a detailed study of those continuous dynamics. We first examine the unconditioned average dynamics in Algorithm 1 with the LME of Eq. (14), and then consider the heralded dynamics of Algorithm 3 with the SME of Eq. (15).
More specifically, we consider a 2-qubit 2-SAT problem with a single satisfying solution. Without loss of generality such a problem can be defined by the following CNF formula
where
It can easily be checked that the only solution to this simple 2-qubit 2-SAT problem is (b1, b2) = (0, 1). Classically, one can imagine solving this by the following procedure. In the space {0, 1}2, each clause excludes an assignment that would violate it. For example, C1 = b1 ∨ b2 will exclude (b1, b2) = (1, 1) as a solution assignment. After the last clause check, the legal assignments that survive this procedure correspond to the solution. The BZF algorithm is similar: each clause measurement checks whether the quantum state has been projected into the subspace that the clause excludes (the failure subspace), and the quantum state surviving all the clause checks lies in the solution subspace. The three observables corresponding to the three clause checks in Eq. (33b) are given by
with
where \({\hat{\sigma }}_{x}^{(j)}\) and \({\hat{\sigma }}_{z}^{(j)}\) are the Pauli-x and Pauli-z operators that act only on the jth qubit. \({\hat{\sigma }}_{x}^{(j)}(\theta )\) is then the Pauli-x operator rotated clockwise in the x-z plane by θ, as indicated in Fig. 2.
The quantum system is then driven by the generalized measurements of these three observables, and we adopt a simple linear schedule for the control parameter θ: for the cth cycle of clause measurements, \({\theta }_{c}=\frac{c}{N}\cdot \frac{\pi }{2}\) where N is the total number of cycles of clause measurements. The instantaneous solution state \(\left\vert {\phi }_{soln}(\theta )\right\rangle\), which is the common “+1”-eigenstate of \({\hat{X}}_{1}(\theta ),{\hat{X}}_{2}(\theta ),{\hat{X}}_{3}(\theta )\) is given by
The probability of successfully implementing the algorithm and thus also the running time then depend primarily on the ability to follow this instantaneous solution state in an adiabatic (Zeno) sense.
Zeno dragging for discrete and continuous generalized measurements
Burgarth et al.89 showed that the weak continuous limit is favorable for generating Zeno dynamics in the case of a single measurement channel. We now show that a similar result can be obtained for the dynamics generated by multiple measurement channels. In particular, we will demonstrate that dynamics driven by weak continuous measurements approach the target dynamics \(\left\vert {\phi }_{soln}(\theta )\right\rangle\) (Eq. (35)) better than stronger discrete measurements executed with the same total measurement strength and duration.
It will be adequate to consider the average dynamics given by Eq. (11) to make this point. Let the final state following the average dynamics driven by the generalized measurements be ρf. We then calculate the fidelity f between ρf and \(\left\vert {\phi }_{soln}(\theta =\frac{\pi }{2})\right\rangle =\left\vert 10\right\rangle\), i.e., f = 〈10∣ρf∣10〉. More specifically, we calculate the fidelity f(Δt/τ, Tf/τ) as a function of both Δt/τ, which controls the effective individual measurement strength, with τ the measurement time, and Tf/τ, where Tf = ℓ ⋅ Δt is the total duration of Zeno dragging. Figure 2 illustrates the relevant simulation of the average dynamics Eq. (11) for the 2-SAT problem on 2 qubits. This computation is consistent with the intuition that for a given measurement strength, increasing the total duration Tf (thus dragging more slowly) brings us towards the adiabatic/Zeno regime. However, for a given fixed and finite value of Tf, the fidelity attains its maximum in the limit Δt/τ → 0. This means that rather than multiple pulsed and discrete strong measurements, Zeno dragging in our two–qubit 2-SAT problem is most effective in the continuum limit, for any given value of Tf/τ.
The three projectors corresponding to the three clauses in the 2-qubit 2-SAT defined by Eq. (33b) are shown as tensor products of \(\left\vert {\theta }^{\perp }\right\rangle =\hat{R}(\pi +\theta )\left\vert +\right\rangle\) (indicated as a red dot) and \(\vert {\bar{\theta }}^{\perp }\rangle =\hat{R}(\pi -\theta )\vert + \rangle\) (indicated as a blue dot). Upon measuring the observable corresponding to each \({\hat{P}}_{j}(\theta )\), an outcome +1 heralds success (and corresponds to the ±θ states), while an outcome −1 heralds failure (and corresponds to the ±θ⊥ states). Thus, each measurement serves to rule out a possible final solution, shown to the right of the drawing of each projector. The unique solution that remains is the solution to this 2-SAT problem.
Illustration of adiabatic convergence via Lindblad dynamics
We demonstrate here the convergence of the LME Eq. (14) to the adiabatic limit at long times, using the 2-qubit 2-SAT problem. First, it is instructive to show the exact diagonal form of the solution subspace in the Zeno frame defined by Eq. (17). For the 2-qubit 2-SAT example in Fig. 3, we have \(\hat{Q}={\hat{R}}_{Y}(\theta -\pi /2)\otimes {\hat{R}}_{Y}(\pi /2-\theta )\), and the observables of Eq. (34a) are then equal to
in the \(\hat{Q}\)–frame, represented in the \({\{\left\vert 11\right\rangle ,\left\vert 10\right\rangle ,\left\vert 01\right\rangle ,\left\vert 00\right\rangle \}}^{\top }\) basis. The solution state, which is now θ–independent, is marked in boldface in Eq. (36a). Clearly, the solution subspace is isolated in block-diagonal form and becomes θ-independent, as discussed in previous sections.
The two panels differ only in the scale and range of their axes. These results are obtained from simulating the average dynamics driven by measuring the three clause observables \({\hat{X}}_{1}(\theta )\), \({\hat{X}}_{2}(\theta )\) and \({\hat{X}}_{3}(\theta )\) in the 2-qubit 2-SAT problem Eq. (33b) according to Eq. (11). For a given dragging time Tf = ℓ ⋅ Δt, where Tf is subdivided into ℓ measurements each of duration Δt, the optimal value of Δt/τ is found as Δt/τ → 0, which implies that ℓ → ∞. The upper panel also shows small values of Tf ~ Δt: in this region we set Δt to the nearest ℓ value that evenly subdivides Tf. The small fidelity oscillations seen in this region are due to the discrete nature of this operation. For Δt > Tf (red hatched region, which is forbidden), we perform only the clause measurement at θ = π/2, yielding the lowest possible value of fidelity \({\mathscr{F}}=\frac{1}{4}\) for a two-qubit problem. This forbidden region is a general feature of such an analysis, and informs the behavior of the surrounding contours. The maximization of the fidelity for a given Tf/τ by taking Δt → 0 illustrates the advantages of operating in the limit of continuous weak measurements, located towards the bottom edge of each plot.
We may now consider the dynamics of the algorithm on average, i.e., as modeled by Eq. (12). These dynamics represent the average performance, but can also be interpreted as the dynamics arising in the event that we dissipate information to the environment without actually detecting it28,29,30,31. This reflects the perspective that the average/Lindbladian dynamics are equivalently the conditional dynamics in the limit of vanishing measurement efficiency. In looking at the Lindblad dynamics, we are then looking at an “un-heralded” version of the continuous–time BZF algorithm, which is the time-continuous limit Δt/τ → 0 of Algorithm 1. Note that from the discussion of convergence in section “Convergence in the Zeno limit”, we can expect to deterministically achieve perfect solution dynamics in the limit of long dragging time, even in the case without detection.
Some key features of these Lindblad algorithm dynamics are illustrated in Fig. 4. Here we integrate the Lindblad dynamics for a variety of Γ Tf values (where Γ = 1/4τ is the measurement rate), scanning from the quasi-adiabatic/Zeno regime Γ Tf ≫ 1 to the diabatic regime Γ Tf ~ 1. We see that in the adiabatic limit Γ Tf → ∞, the dynamics converge to the pure and separable solution Eq. (35), as expected. For smaller values of Γ Tf, the fidelity is clearly reduced. We show that the reduced density matrices become less and less pure, with reduced contrast along the local z–axes along which we wish to read out a final 2-SAT solution. This loss of purity in the reduced density matrices is due both to overall purity loss of the two–qubit state from dissipation, and to the formation of spurious measurement–induced entanglement between the qubits when the Lindblad algorithm is constrained to finish within a finite time.
The same measurement strengths and linear schedule θ = π t/2 Tf are used through all four panels. The time axis is displayed here in units of the total evolution time Tf. a Illustrates the reduced density matrix evolution for qubit 1 (teal) and qubit 2 (purple), both shown in the xz Bloch plane; results are shown for a variety of measurement strengths spanning from Γ Tf = 1 (pale) to Γ Tf = 1 × 104 (dark). b Shows the corresponding time traces of the local z coordinates for the two qubits. c Shows the behavior of the separability (\(1-{\mathcal{C}}\), where \({\mathcal{C}}\) is the concurrence127) throughout the Lindblad evolution. d Plots the purity of the two-qubit state as it undergoes Lindblad evolution. These results show that in the adiabatic regime Γ Tf ≫ 1, we converge towards a pure-state and completely separable evolution (see Eq. (35)) that deterministically generates the solution of this very simple 2-SAT problem, i.e., z1 = 1 and z2 = −1, corresponding to (b1, b2) = (0, 1) as expected from Eq. (33b). We showed in section “Convergence in the Zeno limit” that this is in fact a general feature of k-SAT problems with a unique solution in this measurement–driven algorithm. Even far from the adiabatic regime (Γ Tf ~ 1), we still see that some information about this solution manifests itself on average, since the reduced density matrix traces (local z coordinates in (b)) still drift discernibly apart and move towards their respective solution states.
In Fig. 4a, b (and also in Fig. 5b below) one may immediately observe that as the dragging time Γ Tf increases, the system enters a time regime where lengthening the algorithm evolution time leads to rapid improvement in the contrast between the output states (around Γ Tf ≲ 200). However, the benefit of increasing Γ Tf is comparatively small beyond this point (Γ Tf ≳ 200). Therefore, to obtain the solution state with a desired high probability, we expect that there may generically exist an optimal runtime value \({T}_{f}^{opt}\), such that repeating this Lindbladian dragging process some number of times leads to extraction of the solution at the desired confidence level. We devote the next section to developing conceptual and formal tools needed to probe this idea in our two–qubit problem, and will return to it later at scale.
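The qualitative behavior described above is easy to reproduce in a stripped-down setting. The sketch below Zeno-drags a single qubit, not the three-clause 2-SAT problem, by Euler-integrating \(\dot{\rho }=\Gamma (\hat{X}\rho \hat{X}-\rho )\) for a rotated observable \(\hat{X}(\theta )=\cos \theta \,{\hat{\sigma }}_{x}+\sin \theta \,{\hat{\sigma }}_{z}\) with a linear schedule; the toy model and its parameters are our own illustration:

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def zeno_drag(Gamma_Tf, steps=20000):
    # Single-qubit Zeno-dragging toy: dissipate X(theta) while theta is
    # swept linearly from 0 to pi/2, integrating
    #     drho/dt = Gamma (X rho X - rho)
    # by explicit Euler, with time measured in units of Tf. Note that
    # X(theta)^2 = identity, so this is the Lindblad dephasing of Eq. (14)
    # restricted to one channel.
    rho = 0.5 * np.array([[1, 1], [1, 1]], dtype=complex)   # |+><+|
    dt = 1.0 / steps
    for k in range(steps):
        theta = 0.5 * np.pi * (k + 1) / steps
        X = np.cos(theta) * sx + np.sin(theta) * sz
        rho = rho + Gamma_Tf * dt * (X @ rho @ X - rho)
    return rho
```

In the adiabatic regime Γ Tf ≫ 1 the final state approaches the +1 eigenstate of \({\hat{\sigma }}_{z}\), while for Γ Tf ~ 1 the state remains close to its initial condition, mirroring the fidelity loss seen in Fig. 4.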
The times Tf, Δtm, and TTUS99% (Eq. (40)) are all given in units of the characteristic measurement time τ, which is assumed to characterize the strength of both the clause measurements and subsequent local readout measurements. We illustrate the timeline of each experimental run of the protocol in (a). b Shows the time evolution of the local z coordinate (see Fig. 4c), and illustrates the average z resolution we can expect to obtain on any one of the two qubits, and illustrates how this depends on the Lindblad dragging times Tf expended on the analog measurement-driven computation. c Analyzes the readout time Δtm required to achieve a solution in N shots, given the local bias ∣z∣ on the qubits. We plot Eq. (39) for n = 2 and required confidence level \({{\mathbb{P}}}_{\star }=0.99\), which we denote N99%. Since we consider a two-qubit problem (n = 2), it is effectively meaningless to consider performing N > 2n = 4 shots; the contour separating the useful (lower) region from this region is highlighted in red. d Aggregates the results of (b, c). Here we plot Eq. (40) (a special case of Eq. (31)) for \({{\mathbb{P}}}_{\star }=0.99\), notated as \({{\rm{TTS}}}_{99 \% }^{(m)}\). Some selected contours of N99% Eq. (39) are overlaid, highlighting the discrete nature of the ceiling function introduced to enforce the confidence threshold. For this very small toy problem, we observe that there is not an optimal TTS visible. Because 2n is small in this instance, the stipulation that N be at most 2n is quite restrictive (it is, after all, very easy to solve this toy problem just by checking all candidate solutions). We will see in later sections that the importance of these small-size effects falls away as we consider larger problems and TTS scaling.
Illustration of readout scheme and TTS
In the Lindbladian setting the observer gains no information during the dragging time Tf, due to performing dissipation without detection. All solution information must then be obtained from local measurements to read out the qubit states at the end of the algorithm, as described in Algorithm 4 and section “Readout scheme and time-to-solution”.
Let us consider the statistics of the measurement outcomes when such a measurement is applied to the reduced density matrix ρj for the jth qubit in our register. This means that we apply Eq. (25) under the simplifying assumption that there is negligible correlation between any of the qubits (Fig. 4 indicates that for relatively long dragging times, this is a reasonable assumption to make). We have the probability density
for each qubit, where ℘(r∣0) and ℘(r∣1) are Gaussians with positive and negative mean signals respectively, and variances 1/Δtm (such models are commonly used for dispersive or longitudinal readout of individual qubits109,110). As in section “Readout scheme and time-to-solution”, suppose that upon reading out rz (a vector of all the rj), we obtain the solution bitstring from that run of the experiment via \(\tilde{{\bf{s}}}={\rm{sign}}({{\bf{r}}}_{z})\). The probability that each \({\tilde{s}}_{j}\) correctly reflects the underlying biases zj, such that \(\tilde{{\bf{s}}}\) matches the correct solution s, is given by
where in the last line the biases ∣zj∣ = ∣zT∣ are assumed the same up to a sign. We note that this expression is a special case of Eq. (27), under the simplifying assumptions that the state of the different qubits is approximately separable, and that the local biases are uniform and correctly reflect the solution state. The assumptions of separability and correct solution bias are generically valid for sufficiently long Tf, and when applied to problems with a unique solution. Notice that in the limit of deterministic dragging Tf/τ → ∞ and strong readout Δtm/τ → ∞, this expression scales as 1^n. In the opposing limit of Tf → 0 (so that ∣zT∣ → 0), the scaling instead goes as 2^{-n}. This corresponds to the readout essentially choosing one of the 2^n candidate solution bitstrings at random, in analogy with the worst, i.e., brute-force, classical approach to k-SAT.
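These two limits can be illustrated with a short Monte Carlo sketch. We emphasize that the model below is an assumption for illustration only: each qubit is taken to be in the +1 (solution) eigenstate with probability (1 + z)/2, and its readout record is drawn as a Gaussian with mean equal to the eigenvalue and variance 1/Δtm; the exact normalization in Eq. (38) may differ.

```python
import numpy as np

rng = np.random.default_rng(0)

def p_readout_solution(z, dt_m, n, shots=200_000):
    """Monte Carlo estimate of the probability that sign-based decoding of
    the local readout records recovers the full n-qubit solution bitstring.
    Assumed (illustrative) model: qubit j is in the +1 eigenstate with
    probability (1 + z)/2, and the record r_j is Gaussian with mean equal
    to the eigenvalue and variance 1/dt_m."""
    p_wrong = (1.0 - z) / 2.0
    # eigenvalue +1 with probability (1 + z)/2, else -1, per qubit and shot
    eig = np.where(rng.random((shots, n)) < p_wrong, -1.0, 1.0)
    r = eig + rng.normal(0.0, 1.0 / np.sqrt(dt_m), size=(shots, n))
    # the solution is taken (WLOG) to be the all-+1 string
    return float(np.mean(np.all(np.sign(r) > 0, axis=1)))
```

For strong dragging and readout (z → 1, Δtm ≫ τ) the estimate tends to 1^n = 1, while for z → 0 it tends to 2^{-n}, i.e., a random guess among the 2^n bitstrings, consistent with the limits discussed above.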
Let us next consider the number of runs N⋆ required to achieve a solution probability of at least \({{\mathbb{P}}}_{\star }\), Eq. (30), for this special case of 2-qubit 2-SAT. For k-SAT problems with a unique solution and uniform local bias, we may write
using Eq. (38). A visualization of this expression for n = 2 2-SAT appears in Fig. 5c. We remark that if a bound on the bias ∣zT∣ could be systematically derived in the case of a unique solution, then the behavior of this dissipation–driven algorithm could in turn be systematically bounded using the expressions above. Given the above, we may now define the time to a unique solution,
This is a special case of Eq. (31), using the further assumptions about the local qubit bias implicit in Eq. (39). These assumptions are compatible with dynamics like those of Fig. 4, and this expression is consequently used in the analysis of Fig. 5 for that same example. Figure 5c, d are instructive with regards to the process of estimating a time to solution, illustrating how one might derive regions of Tf and Δtm required to solve a k-SAT problem with a continuous measurement–driven approach, within a certain number of shots and/or with a specified confidence threshold.
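The shot counting above can be made concrete with a short helper; the only assumption is that successive runs of the algorithm are independent and identically distributed.

```python
from math import ceil, log

def n_runs(p_single, p_star=0.99):
    """Smallest integer N satisfying 1 - (1 - p_single)**N >= p_star,
    i.e., the ceiling expression underlying N99% (cf. Eqs. (30) and (39)),
    for independent, identically prepared runs."""
    return ceil(log(1.0 - p_star) / log(1.0 - p_single))
```

For example, a per-run solution probability of 0.5 requires N = 7 runs to reach the 99% confidence level, while a per-run probability of 0.1 requires N = 44.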
One may infer from Fig. 5 that if an optimal TTS exists in this system, it falls in the regime of very short dragging times Tf and readouts Δtm, requiring far more than the N = 4 runs in which the solution to this toy problem can be obtained via a guess-and-check (i.e., brute-force) approach. This formally tells us that this toy problem is too simple for our algorithm to be worthwhile, and that we are nearing the end of this small problem’s utility as an instructional tool. We will see in section “Scaling with qubit number for 3-SAT” that a transition occurs at which a genuine optimal TTS appears, indicating that we can tell when a problem becomes large enough for the algorithm to be computationally worthwhile.
Remarks: multi-solution and unsatisfiable problems
In the analysis above, we have emphasized the sample two–qubit 2-SAT problem of Fig. 3, which has a unique solution. However, variants on this problem containing either more than one solution, or no solutions at all, are also illuminating to consider. We describe such alternative two–qubit 2-SAT problems in detail in Supplementary Information Section II, restricting ourselves here to a brief summary.
When multiple solutions satisfy our two–qubit 2-SAT problem, or when no solution can satisfy it, we are no longer guaranteed any bias in the local z coordinates on average at the end of a Lindbladian dragging interval, even in the Zeno limit. However, the outputs of the two cases, i.e., when multiple solutions exist or when no solution exists, still differ in meaningful ways. In the case of multiple solutions, we will generate relatively high-purity entangled states within the solution subspace, i.e., superpositions of the possible classical solutions, one of which is highly likely to be correctly resolved by reading out the qubits in the desirable regime Tf ≫ τ and Δtm ≫ τ. This means that although the individual readout measurements of \({\hat{\sigma }}_{z}^{(j)}\) will appear random run-to-run, information about the solution space will still appear in the structure of the correlations between those zj outcomes on a run-by-run basis. On the other hand, when the SAT problem is unsatisfiable, we obtain the maximally–mixed state on average, so that the local qubit readouts at the end will be random and will not exhibit any mutual qubit correlations.
Illustration of heralded algorithm
We conclude our discussion of two–qubit 2-SAT with a demonstration of the heralded algorithm (Algorithm 3) using the 2-qubit 2-SAT defined by Eq. (33b). We have set the threshold to be \({r}_{th}=-2.5/\sqrt{{T}_{be}}\) so that some level of fluctuation due to noise is allowed by the filtered signal. This corresponds to correctly identifying the failure with a confidence probability of 99.65% in the steady state. Figure 6 shows the evolution of the qubit dynamics zi(t) and the filtered signals \({\tilde{r}}_{i}(t)\) under the heralded algorithmic dynamics. A successful run of the algorithm will drag the qubits to their corresponding solution states, analogously to the fixed point algorithm driven by the Lindbladian. The difference is that with the heralded algorithm, we can now detect any failure shortly after it occurred instead of at the end of the algorithm. This can be seen in Fig. 6, where the failed trajectory is detected within an interval ~Tbe of a failure event.
The SAT problem here is the same as the one defined in Eq. (33b). We set the measurement collapse time τ = 1 and the total dragging time Tf = 100. The filter response time is set to be \({T}_{be}=\max \{2\tau ,0.1{T}_{f}\}\), and the threshold is \({r}_{th}=-2.5/\sqrt{{T}_{be}}\). a Shows the evolution of a successful run, where the filtered signals \(\bar{r}\) Eq. (21) never reach the threshold value and the reduced conditional qubit dynamics (inset) diffuse relatively cleanly towards the correct solution state. b Shows the evolution of a failed run. In this example, qubit 0 is collapsed into an incorrect subspace near t/τ ≈ 50 (see inset) and the filtered signal 1 reaches the threshold near t/τ ≈ 60, heralding violation of a particular clause, and therefore a failure of the algorithm. The failure is detected when the threshold is crossed: the algorithm is terminated at this point. We also show the evolution for some time past this point, up to t = Tf, for illustrative purposes.
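The filtering step of the heralded algorithm can be sketched numerically. The signal model below is an assumption for illustration: a clause record with mean 0 while the clause is satisfied and mean −1 after a failure event, carrying white noise of strength \(\sqrt{\tau /dt}\) with τ = 1; the filter is a boxcar average over a window Tbe (a simple stand-in for the filtered signal \(\bar{r}\) of Eq. (21)), compared against the threshold \({r}_{th}=-2.5/\sqrt{{T}_{be}}\) used in the text.

```python
import numpy as np

rng = np.random.default_rng(1)

def herald_failure(signal, dt, t_be, r_th):
    """Boxcar-filter a noisy clause record and return the first time at
    which the filtered signal crosses the failure threshold, or None if
    it never does (a sketch of the heralding step of Algorithm 3)."""
    n_be = max(1, int(round(t_be / dt)))
    kernel = np.ones(n_be) / n_be
    filtered = np.convolve(signal, kernel, mode="valid")  # average over T_be
    crossed = np.nonzero(filtered <= r_th)[0]
    return None if crossed.size == 0 else (crossed[0] + n_be - 1) * dt

# Toy record: clause satisfied (mean 0) until t = 30, violated (mean -1)
# afterwards; dt here is the simulation step, not the clause duration.
dt, tau, t_f, t_be = 0.01, 1.0, 100.0, 10.0
t = np.arange(0.0, t_f, dt)
mean = np.where(t < 30.0, 0.0, -1.0)
record = mean + np.sqrt(tau / dt) * rng.standard_normal(t.size)
t_detect = herald_failure(record, dt, t_be, r_th=-2.5 / np.sqrt(t_be))
```

In such a simulation the threshold crossing occurs within roughly a filter response time Tbe of the failure event, mirroring the behavior seen in Fig. 6b.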
Scaling the problem up
We have gained some intuition about the workings of the continuous BZF algorithm for k-SAT by investigating the simplest version of it, namely for k = 2 with 2 qubits. We now extend our analysis towards cases of greater computational interest, to study the algorithm’s performance on k-SAT problems with both k = 2 and k = 3, on 4–10 qubits. In this section, we will benchmark the Zeno dragging algorithms at various values of clause density α, as well as for different parameters Δt and Tf. In order to study the dependence only on the dragging time Tf, we will then assume projective measurement at the final readout phase, and thus we will make calls to Algorithm 1 and Algorithm 3 instead of to Algorithm 4.
Quantum computational phase transition
It has been rigorously proven that in the large n limit, the probability of satisfying a random SAT problem exhibits a phase transition from satisfiable to unsatisfiable (SAT-UNSAT) when the number of clauses per qubit number, i.e., α = m/n, exceeds a critical value αc115. For 2-SAT, the critical clause density is analytically determined to be αc = 1116. For 3-SAT, the critical value is empirically evaluated to be αc ≈ 4.26117,118. The computational cost for solving these random SAT problems correspondingly exhibits an easy-hard-easy pattern, with the computational cost transition occurring at the critical value of αc118,119. Quantum algorithms that optimize solutions for k-SAT, such as the quantum approximate optimization algorithm (QAOA), show a similar computational phase transition near αc, even for systems as small as 6 qubits120,121.
Here, we show evidence of an analogous quantum computational phase transition for both random 2-SAT and 3-SAT under our measurement-driven quantum algorithms. Specifically, we numerically estimate the probability of successfully determining the satisfiability of a random instance as a function of the clause density α, under different values of dragging time Tf. The empirical estimation of this success probability Psucc(α, n, Tf) is given by
where Nprob(α, n) is the number of random instances generated for clause density α and number of variables n, and Nsucc(α, n, Tf) is the corresponding number of instances whose satisfiability are correctly determined by the measurement-driven quantum algorithm with total dragging time Tf.
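As a sketch of how the black curves (the satisfiability probability) of Fig. 7 can be generated, the following draws random k-SAT instances at clause density α and brute-forces their satisfiability. The instance ensemble (k distinct variables per clause, independent random negations) is a standard one and is an assumption about the generator used here; the sample sizes are illustrative.

```python
import itertools
import random

def random_ksat(n, alpha, k=3, rng=None):
    """Random k-SAT instance with m = round(alpha * n) clauses; each clause
    draws k distinct variables and independent random negations."""
    rng = rng or random.Random()
    m = round(alpha * n)
    return [[(v, rng.random() < 0.5)            # (variable index, negated?)
             for v in rng.sample(range(n), k)]
            for _ in range(m)]

def satisfiable(clauses, n):
    """Brute-force satisfiability check over all 2**n assignments
    (fine for the small n considered here)."""
    return any(
        all(any(bits[v] != neg for v, neg in cl) for cl in clauses)
        for bits in itertools.product([False, True], repeat=n)
    )

def sat_probability(n, alpha, n_prob=200, seed=0):
    """Empirical SAT probability at clause density alpha, analogous to the
    black curves of Fig. 7."""
    rng = random.Random(seed)
    return sum(satisfiable(random_ksat(n, alpha, rng=rng), n)
               for _ in range(n_prob)) / n_prob
```

Well below αc ≈ 4.26 nearly all instances are satisfiable, while well above it nearly all are unsatisfiable, reproducing the SAT-UNSAT crossover described in the text.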
Figure 7a, b show results for calculations with n = 5 qubits using the Lindblad Algorithm 1. We can see that even with such a relatively small system, for both 2-SAT and 3-SAT problems the SAT probability undergoes a clear SAT-UNSAT transition crossing at a distinct value αc, while Psucc(α, n, Tf) shows an easy-hard-easy transition, with the hardest part located near αc. We also see that when more quantum computational resources are provided, specifically, for larger Tf, the algorithm can obtain higher values of Psucc(α, n, Tf), indicating better performance.
a, b are for 2-SAT and 3-SAT with the Lindblad Algorithm 1 (Δt → dt) respectively, while (c, d) are with the heralded Algorithm 3. The same measurement strength Γ = 1/(4τ) is used for all panels, together with a linear schedule θ(t) = πt/2Tf for 0 ≤ t ≤ Tf. We have set the measurement time τ = 1 throughout all the simulations. In the heralded algorithm, we have additionally set \({T}_{be}=\max \{2\tau ,0.1{T}_{f}\}\), Tmin = 5τ, and \({r}_{th}=-2.5/\sqrt{{T}_{be}}\). The black dotted vertical lines are the locations of critical clause densities in the limit of large n, i.e., αc = 1 for 2-SAT and αc ≈ 4.26 for 3-SAT. The black curves are the probabilities of satisfiability as a function of α for the randomly generated SAT instances. The colored curves are Psucc(α, n, Tf) under various values of Tf. Each datum on the curve is obtained using a sample size Nprob(α, n) = 500. The non-zero width of the critical region and the deviation of the location of the lowest value of Psucc from αc result from the finite system size n.
As discussed in section “Heralding BZF algorithm with finite measurement strength via filtering”, the heralded dynamics benefit from real-time detection of any failures. This advantage allows us to define the heralded algorithm (Algorithm 3) that restarts when a failure is detected early, and is thus more likely to follow the correct trajectory given a fixed amount of computation time Tf, even without feedback. Notice that in Algorithm 3, we do not terminate the dynamics when the time left trest is smaller than a minimum value Tmin. This is because when the total dragging time is too small (comparable to Tbe and τ), the quantum Zeno effect is not strong and failure detection based on the filter is no longer reliable. However, we can still use the conditional dynamics, i.e., the SME Eq. (15) rather than the averaged dynamics of the LME Eq. (14), in this situation.
We perform the same type of calculation for the success probability Eq. (41) as a function of the clause density α under the heralded Algorithm 3. The results are shown in Fig. 7c, d. We can see that the heralded algorithm shows the same type of computational phase transition as the Lindblad algorithm in panels (a) and (b). However, the performance of the heralded algorithm is better than that of the Lindblad algorithm, especially near the critical clause density αc, and also when Tf is large. We can understand the latter as a consequence of the combination of earlier detection of failure and the possibility of running multiple trials.
This analysis has shown that for the continuous measurement–driven quantum algorithms, it would be most difficult to successfully solve SAT problems having the critical clause density αc, similar to known results for QAOA and for classical algorithms. Therefore, in order to establish the scaling of the algorithm with respect to the system size n, we shall focus on the hardest SAT instances for the quantum algorithm, i.e., instances with α = αc from now on.
Scaling with qubit number for 3-SAT
In this section, we study the scaling of the 3-SAT TTS with the qubit number n, which is central to quantifying the algorithm performance in terms of computational complexity. We do not expect the TTS to scale polynomially with the qubit number n, since that would imply NP ⊆ BQP, for which there are no strong computational complexity arguments. We will instead assume that the TTS scales exponentially with the qubit number n throughout the rest of this work, i.e., TTS ~ λ^n. We shall focus on studying the base number λ for the algorithms with different parameter settings.
In section “Illustration of Heralded algorithm” we demonstrated the behavior of the TTS as a function of the sum of the dragging time Tf and the final readout time Δtm. However, in the context of algorithmic scaling, the TTS is generally characterized without taking the final readout time into account, which corresponds to using the dragging time Tf alone. To enable comparison with the literature, we therefore define the algorithmic time to solution as the following114
where \({{\mathbb{P}}}_{{\bf{s}}}\) is the solution state probability at the end of one run of the algorithm.
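This definition can be expressed directly; in line with the annealing-benchmark literature, the number of runs is treated here as a continuous quantity rather than an integer.

```python
from math import log

def tts99(t_f, p_s):
    """Algorithmic time to solution at 99% confidence, cf. Eq. (42):
    the dragging time per run multiplied by the number of independent
    runs needed to reach 99% cumulative success probability."""
    return t_f * log(1.0 - 0.99) / log(1.0 - p_s)
```

For instance, a per-run solution probability of 0.5 gives TTS99% ≈ 6.64 Tf, while a per-run probability of 0.99 gives TTS99% ≈ Tf, since a single run then (almost) suffices.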
We have shown that for infinitesimal dt, there exists a limit of deterministic and pure–state dynamics following the solution subspace; this implies that there exists a sufficiently long Tf for any satisfiable problem such that N99% becomes 1. Our challenge now however is to navigate the tradeoff between Tf and N99% that determines the TTS99%. In order to do this, we consider 3-SAT problem instances that are chosen to have critical clause density αc ≈ 4.26 and a unique satisfying assignment. Those problems are among the hardest to solve and thus serve as good testing problems for benchmarking our generalized measurement algorithms.
Scaling of TTS99% as a function of Tf and Δt
In this subsection, we now study how the base number λ of the scaling with n depends on the total pure dragging time Tf and on the individual clause measurement duration Δt for Algorithm 1 with average dynamics, as well as for Algorithm 3 with heralded dynamics.
We simulated both algorithms using the same sets of parameters as in section “Quantum computational phase transition”, for up to n = 9 qubits. The results are shown in Figs. 8 and 9, from which it is evident that as the number of qubits increases, the TTS99% increases exponentially. Furthermore, for different values of total pure dragging time Tf, the TTS99% exhibits different scaling base numbers λ. Typically, one can reach a better (smaller) value of λ by implementing the algorithms with a larger value of Tf: this is observed over a range of Δt values, for both algorithms. This makes sense because for a fixed Tf, the TTS scaling parameter λ derives only from the number of runs \({N}_{99 \% }=\log (1-0.99)/\log (1-{{\mathbb{P}}}_{{\bf{s}}})\) required to achieve the 99% confidence threshold as per Eq. (42), and one would therefore expect better performance (smaller N) with more computational resources (larger Tf).
The 3-SAT time to solution TTS99% defined by Eq. (42) is plotted as a function of the number of qubits, for different values of the total dragging time Tf and for different time durations Δt of a single clause measurement, under both Algorithm 1 with average dynamics (upper (a), (b), (c)) and Algorithm 3 with heralded dynamics (lower (d), (e), (f)). The 3-SAT instances are randomly generated at α ≈ αc with a unique solution. We have set τ = 1 and used a linear schedule θ(t) = πt/2Tf for all simulations. We use Δt = 0.01 (a, d), Δt = 0.1 (b, e), and Δt = 1 (c, f), all in units of τ. For the heralded algorithm, we have additionally set \({T}_{be}=\max \{2\tau ,0.1{T}_{f}\}\), Tmin = 5τ, and \({r}_{th}=-2.5/\sqrt{{T}_{be}}\). The data for the heralded algorithm are averaged over more than 10,000 trajectories, while the data for the averaged algorithm are averaged over more than 150 trajectories. We see that TTS99% generally increases exponentially as a function of the number of qubits. Notice the appearance of line-crossings, a signature of non-monotonic dependence of TTS99% on Tf that indicates the possible existence of optimal values of TTS99%. This figure continues with larger values of Δt in Fig. 9.
We again have Algorithm 1 with average dynamics (a, b, c), and Algorithm 3 with heralded dynamics (d, e, f). Here Δt = 1 (a, d), Δt = 10 (b, e), and Δt = 100 (c, f), in units of τ. All other considerations and comments match those of Fig. 8.
Another interesting behavior is the dependence of λ on Δt. We recall that Δt/τ ≪ 1 corresponds to weak measurement and Δt/τ ≫ 1 corresponds to strong measurement. Figure 10 shows the scaling base λ as a function of Δt for different values of dragging time Tf. We observe that for fixed Tf, both average and heralded algorithms display better scaling of λ (smaller values) in the weak continuum limit (Δt → 0) than in the strong measurement limit (Δt → ∞). This indicates the advantage of weak continuous measurement for the TTS99% scaling, in addition to the advantage demonstrated in Fig. 2 in terms of final state fidelity with respect to the solution state.
The fitted values of the base scaling parameter λ (not λopt) shown in Figs. 8 and 9 are aggregated and plotted here as a function of the duration Δt of clause measurement, for different dragging times Tf under both Algorithm 1 with average dynamics (red and orange points) and Algorithm 3 with heralded dynamics (blue and green points).
It is worth noting here that the TTS99% scaling λ studied in this subsection should not be interpreted as a performance metric of the algorithm in the asymptotic sense. For the limited system sizes that we are able to simulate, the absolute value of the scaling base number λ can be easily manipulated. To see this, consider either Algorithm 1 or Algorithm 3 for some set of finite system sizes nlist = {n1, …, nd}. One can choose Tf finite but large enough such that the dynamics for all ni in nlist are in the quasi-adiabatic limit, where the final solution state probability is close to 1. In this case, one only needs one attempt of the algorithm in order to find the solution bitstring, and thus the benchmarked scaling parameter λ is screened by the limited system size and appears to be 1, which is not representative of the asymptotic behavior. Nevertheless, λ can serve as a good metric for demonstrating performance improvement when using different parameters in the algorithm or when comparing different variants of the algorithm, e.g., the heralded and unheralded (average) generalized measurement algorithms (Algorithm 3 and Algorithm 1, respectively).
Optimal TTS99% scaling
As shown in refs. 114,122 for quantum annealing, to benchmark the scaling of such algorithms in a manner independent of the value of the single-shot running time, one needs to identify an optimal TTS99% for each system size n and then obtain the scaling parameter λopt, such that \({{\rm{TTS}}}_{opt} \sim {\lambda }_{opt}^{n}\). The existence of an optimal TTS99% value is intuitive from the following arguments. When Tf is small, one typically needs a diverging (or ~2^n) number of repeats of the algorithm in order to find the solution bitstring, and therefore in this case the TTS99% is also diverging, or as large as ~2^n. In contrast, when Tf is large and divergent, while one only needs a single attempt of the algorithm, the TTS99% is also divergent. Therefore, one would expect a sweet spot where Tf is finite and the TTS99% attains an optimal value by balancing the trade-off of using a large Tf with a single attempt versus using a small Tf with many attempts.
We demonstrate here that similar to quantum annealing, both the unheralded and heralded Zeno-dragging algorithms also exhibit an optimal TTS99% that depends on the system size n and on Δt. To understand how the performance of the algorithms scales with the system size in a well-defined sense, we study here the scaling of this optimal TTS99% with n.
In Figs. 8 and 9 it is evident that when the number of qubits is larger than some value, there can be crossings of the TTS99% lines, e.g., for both algorithms with Δt = 10. This implies that when increasing Tf, the TTS99% exhibits nonlinear behavior and does not necessarily increase or decrease monotonically with Tf, which is an indication of the possible existence of an optimal TTS99% value. We investigate this nonlinearity by plotting the TTS99% as a function of Tf for system sizes up to n = 10 in Fig. 11. This shows the TTS99% versus Tf for both unheralded and heralded algorithms with Δt = 10, for system sizes n = 8 and n = 9. For these sizes the TTS99% clearly exhibits an optimal value. By fitting the optimal values obtained from different system sizes n ∈ [5, 10] with an exponential function (Fig. 11c, f), we determined the optimal TTS99% to scale as \(\sim {({\lambda }_{opt})}^{n}\), with λopt ≈ 1.4346 for Algorithm 1 (average dynamics) and λopt ≈ 1.3492 for Algorithm 3 (heralded dynamics).
We plot TTS99% as a function of the total dragging time Tf under both Algorithm 3 with heralded dynamics (a–c), and Algorithm 1 with average dynamics (d–f). Simulation parameters are the same as Fig. 8 with Δt = 10 and a linear schedule is also used. We observe clear optima for TTS99% in (a, b, d, e). We locate these optimal TTS99% values by fitting the data with a polynomial function. In (c, f), we plot the obtained optimal TTS99% as a function of qubit number n, and fit the data with an exponential function to evaluate the scaling base number λopt. These figures with Δt = 10 serve as a demonstration of how λopt is extracted. Similar calculations were made for Δt ∈ {0.01, 0.1, 1, 10, 100} to allow construction of Fig. 12 below.
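The two-stage fitting procedure described above, i.e., a polynomial fit over Tf to locate the optimal TTS99%, followed by an exponential fit over n to extract λopt, can be sketched as follows. The polynomial degree and evaluation grid are illustrative choices, not the exact fit settings used for Fig. 11.

```python
import numpy as np

def optimal_tts(tf_values, tts_values, deg=2):
    """Locate the optimum of TTS99% over Tf by a polynomial fit,
    as in Fig. 11a, b, d, e (fit degree is an illustrative choice)."""
    coeffs = np.polyfit(tf_values, tts_values, deg)
    grid = np.linspace(min(tf_values), max(tf_values), 2001)
    return float(np.polyval(coeffs, grid).min())

def scaling_base(ns, opt_tts):
    """Exponential fit TTS_opt ~ lambda_opt**n: the slope of log(TTS_opt)
    versus n gives log(lambda_opt), as in Fig. 11c, f."""
    slope, _ = np.polyfit(ns, np.log(np.asarray(opt_tts, dtype=float)), 1)
    return float(np.exp(slope))
```

Applied to synthetic data with a known minimum and a known exponential scaling, both helpers recover the inputs, which is a useful sanity check before applying them to simulated TTS curves.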
We then repeat the procedure for a range of measurement times Δt ∈ {0.01, 0.1, 1, 10, 100} for both algorithms, and obtain the values of λopt for each value of Δt. The results are shown in Fig. 12, where we see that for n = 9, 10 qubits, the TTS99% from the heralded algorithm is smaller (better) than for the average algorithm (left panel (a)), and more importantly, the scaling base parameter λopt is significantly smaller for the heralded algorithm, over a wide range of Δt values (right panel (b)). We notice here that the scaling λopt achieved by the heralded algorithm is systematically smaller than that of Grover’s algorithm4, which has a scaling base number \(\lambda =\sqrt{2}\approx 1.414\). This is a result of the proper usage of structure in our algorithm (recall that Grover’s algorithm performs unstructured search). Overall, it is evident that the heralded algorithm using signal filtering shows better TTS99% behavior than the unconditional algorithm for all values of Δt. This indicates the advantage of implementing the earlier error detection of the heralded algorithm in the quasi-adiabatic regime, rather than relying solely on the autonomous features on which the unconditional algorithm is based, deep in the adiabatic regime.
a Displays the optimal TTS99% as a function of Δt for different system sizes. We see that the optimal TTS99% increases as Δt increases, and the heralded algorithm has a consistently smaller optimal TTS99% than the corresponding average algorithm. b Shows λopt as a function of Δt for both algorithms. The scaling λopt is seen to be generically better in the strong measurement region than in the weak measurement region. It is also evident that λopt for the heralded algorithm is uniformly better than that for the average algorithm, consistent with the behavior in (a).
Discussion and outlook
In this paper we have extended a quantum algorithm first proposed by Benjamin, Zhao, and Fitzsimons1 (BZF), that uses projective measurements for solving Boolean satisfiability problems, to use instead generalized measurements for measurements of the Boolean clauses and for readout. First, we outlined a dissipation-only (unheralded) variant of the generalized algorithm, which is equivalent to discarding all the measurement results along the dynamics, or equivalently, using average dynamics. Second, we developed a filtering protocol for detecting clause failure based on the noisy signals of the generalized measurements that enables a heralded variant of the generalized algorithm employing dynamics conditioned on no clause failures. We discussed the statistics of solution readout following both versions of the generalized algorithm, again using generalized measurements, and showed how this readout time can be incorporated into the estimation of the expected time to solution (TTS). We then established the convergence of the algorithm in the limit of long dragging time and time–continuous measurement, noting that the time–continuous algorithm is an instance of “Zeno dragging”70, where autonomous and deterministic convergence of the dynamics to a solution subspace is guaranteed in the Zeno limit. Finally, we illustrated the algorithmic dynamics, including the convergence in the Zeno limit and the filtering protocol. We have done this in detail for an illustrative and relatively simple 2-qubit 2-SAT problem, and then performed extensive numerical simulations on larger 2-SAT and 3-SAT problems involving 4–10 qubits.
With these larger simulations we numerically benchmarked the performance of both the heralded and unheralded algorithms as a function of the total pure dragging time Tf, the clause density α, as well as the duration of the clause measurements, Δt. We found that both algorithms exhibit a computational phase transition, as many other classical and quantum algorithms do, where the k-SAT problem is only hard for the algorithms to solve in the vicinity of a critical value of α. We found strong evidence that working in the weak continuous limit is advantageous over operating in the strong measurement limit, at least for problems on moderate numbers of qubits. This result can be extracted from the observations that: (i) for fixed Tf, the scaling base number λ decreases as Δt decreases, and (ii) the optimal TTS decreases as Δt decreases. Finally, we also demonstrated that the algorithm performance can be systematically improved by using clause detection and filtering in order to herald errors and restart runs. This is shown in the improvement of the success probability for identifying a satisfying solution near the critical α that is enabled by heralded dynamics, as well as the smaller value of the optimal TTS scaling base parameter λopt in the heralded dynamics relative to the average dynamics.
This generalized measurement–driven approach shows some unique advantages and potential. For example, even though there are still many strategies by which one might optimize the algorithm, the optimal TTS scalings that we evaluate here are better than those achieved by Grover’s algorithm4,123,124, characterized by λ ≈ 1.414, and are comparable to Schöning’s celebrated result25, characterized by λ ≈ 1.33. BZF1 have shown that their projective algorithm admits a solution state of the form of Eq. (8) for every solution bitstring to a k-SAT problem. Adapting this to the case of time–continuous monitoring, we have shown in section “Convergence in the Zeno limit” and Supplementary Information Section I that in the limit of slow Zeno dragging, we can expect to deterministically converge to such a solution, assuming we apply the dynamics to a satisfiable k-SAT problem, i.e., when a solution does exist. This implies that for large Tf/τ, we can push λ arbitrarily close to 1 (albeit with a diverging pre-factor in the TTS). This is a direct consequence of the fact that operations that use the Zeno effect for stabilization generally do so in the mean, i.e., they stabilize autonomously56,57,58,59,60,61,62,63,64,65,66,70,87,89,92. The use of feedback in addition to this autonomous stabilization offers the possibility of not only detecting errors in real time, as we have discussed in sections “Illustration of heralded algorithm” and “Scaling the problem up”, but also of correcting them immediately. Such feedback would be a natural subject for theoretical work following up on the present investigation, and would be closely related to continuous quantum error correction57,58,59,60,61,62,63,64,65,66,70,71,72,73,74,75,76,77,78,79,80,81,82,83,102,103.
There are still many modifications that could be made to optimize the performance of the algorithm beyond what we have developed above. For instance, based on the equivalence of Zeno–dragging control and Zeno–exclusion control, one can exploit the newly developed CDJ–P method70, an open-loop control technique, to optimize the schedule function θ(t). Another direction for future work could be developing better filters for error detection, such as a Bayesian filter or a machine learning filter83, for use in the feedback strategies mentioned in the preceding paragraph.
Implementation of this algorithm on real hardware will require the engineering of several complicated measurements (or dissipators), which do not commute and will have to be implemented simultaneously. We suggest that progress on this front could realistically follow a roadmap similar to the one we have followed in this theoretical paper, namely, to start with implementation of a two–qubit 2-SAT problem. Successfully realizing such a 2-SAT problem would be a strong step towards understanding and solving some of the engineering challenges involved and establishing proof of principle, at which point one could reasonably consider scaling the relevant experimental methods towards problems of a computationally more interesting size. Several experimental results that suggest ways forward here already exist. In particular, measuring or dissipating dynamic observables has been realized in a few contexts68,125,126, as has measurement of non-commuting observables36,82. Reference 82, focused on continuous quantum error correction, is especially relevant to our current context since the real-time error correction is implemented in that work on three qubits by simultaneously monitoring the parity of overlapping pairs of qubits, thereby connecting with several conceptual aspects of the generalized BZF algorithm presented in this work. In addition, the Zeno effect has been used for control, demonstrating that it is possible to engineer measurements that divide a larger system into specific subspaces95,96, much like the clause measurements used here.
Taken together, we believe that our results indicate that the generalized BZF k-SAT algorithm shows promise for realization of a measurement-driven k-SAT solver, suggesting new avenues along which measurement–driven quantum computation might be further developed.
Data availability
The data that support the findings of this study are available upon reasonable request.
References
Benjamin, S. C., Zhao, L. & Fitzsimons, J. F. Measurement-driven quantum computing: performance of a 3-SAT solver. https://doi.org/10.48550/arXiv.1711.02687 (2017).
Feynman, R. Simulating physics with computers. Int. J. Theor. Phys. 21, 467–488 (1982).
Shor, P. W. Algorithms for quantum computation: discrete logs and factoring. In: Shafi, G. (ed.) Proc. 35th Annual Symposium on Foundations of Computer Science, pp. 124–134 (IEEE Computer Society Press: Los Alamitos, CA, USA, 1994).
Grover, L. K. A fast quantum mechanical algorithm for database search. In Proc. Twenty-Eighth Annual ACM Symposium on Theory of Computing, STOC ’96, 212–219 https://doi.org/10.1145/237814.237866 (Association for Computing Machinery, 1996).
Benioff, P. The computer as a physical system: a microscopic quantum mechanical Hamiltonian model of computers as represented by Turing machines. J. Stat. Phys. 22, 563–591 (1980).
Nielsen, M. A. & Chuang, I. L. Quantum Computation and Quantum Information (Cambridge University Press, 2000).
Farhi, E., Goldstone, J., Gutmann, S. & Sipser, M. Quantum computation by adiabatic evolution. https://doi.org/10.48550/arXiv.quant-ph/0001106 (2000).
Das, A. & Chakrabarti, B. K. Colloquium: quantum annealing and analog quantum computation. Rev. Mod. Phys. 80, 1061–1081 (2008).
Morita, S. & Nishimori, H. Mathematical foundation of quantum annealing. J. Math. Phys. 49 (2008).
Hauke, P., Katzgraber, H. G., Lechner, W., Nishimori, H. & Oliver, W. D. Perspectives of quantum annealing: methods and implementations. Rep. Prog. Phys. 83, 054401 (2020).
Briegel, H. J., Browne, D. E., Dür, W., Raussendorf, R. & den Nest, M. V. Measurement-based quantum computation. Nat. Phys. 5, 19–26 (2009).
Childs, A. M. et al. Quantum search by measurement. Phys. Rev. A 66, 032314 (2002).
Verstraete, F., Wolf, M. & Ignacio Cirac, J. Quantum computation and quantum-state engineering driven by dissipation. Nat. Phys. 5, 633–636 (2009).
Zhao, L., Pérez-Delgado, C. A., Benjamin, S. C. & Fitzsimons, J. F. Measurement-driven analog of adiabatic quantum computation for frustration-free hamiltonians. Phys. Rev. A 100, 032331 (2019).
Berwald, J., Chancellor, N. & Dridi, R. Grover speedup from many forms of the Zeno effect. Quantum 8, 1532 (2024).
Berwald, J., Chancellor, N. & Dridi, R. Zeno-effect computation: opportunities and challenges. Phys. Rev. A 111, 042623 (2025).
Ding, Z., Chen, C.-F. & Lin, L. Single-ancilla ground state preparation via Lindbladians. Phys. Rev. Res. 6, 033147 (2024).
Hu, Z., Xia, R. & Kais, S. A quantum algorithm for evolving open quantum dynamics on quantum computing devices. Sci. Rep. 10, 3301 (2020).
Schlimgen, A. W., Head-Marsden, K., Sager-Smith, L. M., Narang, P. & Mazziotti, D. A. Quantum state preparation and nonunitary evolution with diagonal operators. Phys. Rev. A 106, 022414 (2022).
Chen, C.-F., Kastoryano, M. J., Brandão, F. G. S. L. & Gilyén, A. Quantum thermal state preparation. https://doi.org/10.48550/arXiv.2303.18224 (2023).
Ding, Z., Li, X. & Lin, L. Simulating open quantum systems using hamiltonian simulations. PRX Quantum 5, 020332 (2024).
Cook, S. A. The complexity of theorem-proving procedures. In Proc. Third Annual ACM Symposium on Theory of Computing, STOC ’71, 151–158 https://doi.org/10.1145/800157.805047 (Association for Computing Machinery, 1971).
Karp, R. M. Reducibility among Combinatorial Problems 85–103 (Springer US, 1972).
Levin, L. A. Universal sequential search problems. Probl. Inform. Transm. 9, 265 (1973).
Schöning, T. A probabilistic algorithm for k-SAT and constraint satisfaction problems. In 40th Annual Symposium on Foundations of Computer Science (Cat. No. 99CB37039), 410–414 (IEEE, 1999).
Kraus, K., Böhm, A., Dollard, J. D. & Wootters, W. H. States, Effects, and Operations: Fundamental Notions of Quantum Theory. Lecture Notes in Physics, 190 (Springer, 1983).
Carmichael, H. J. An Open Systems Approach to Quantum Optics (Springer, 1993).
Wiseman, H. M. & Milburn, G. J. Quantum Measurement and Control (Cambridge University Press, 2009).
Barchielli, A. & Gregoratti, M. Quantum Trajectories and Measurements in Continuous Time. (Springer-Verlag, 2009).
Jacobs, K. Quantum Measurement Theory and its Applications (Cambridge University Press, 2014).
Jordan, A. N. & Siddiqi, I. A. Quantum Measurement: Theory and Practice (Cambridge University Press, 2024).
Gambetta, J. et al. Quantum trajectory approach to circuit QED: quantum jumps and the Zeno effect. Phys. Rev. A 77, 012112 (2008).
Murch, K. W., Weber, S. J., Macklin, C. & Siddiqi, I. Observing single quantum trajectories of a superconducting quantum bit. Nature 502, 211 (2013).
Hacohen-Gourgy, S. & Martin, L. S. Continuous measurements for control of superconducting quantum circuits. Adv. Phys.: X 5, 1813626 (2020).
Blais, A., Grimsmo, A. L., Girvin, S. M. & Wallraff, A. Circuit quantum electrodynamics. Rev. Mod. Phys. 93, 025005 (2021).
Hacohen-Gourgy, S. et al. Dynamics of simultaneously measured non-commuting observables. Nature 538, 491 (2016).
Chantasri, A. et al. Simultaneous continuous measurement of noncommuting observables: Quantum state correlations. Phys. Rev. A 97, 012118 (2018).
Lewalle, P., Chantasri, A. & Jordan, A. N. Prediction and characterization of multiple extremal paths in continuously monitored qubits. Phys. Rev. A 95, 042126 (2017).
Ficheux, Q., Jezouin, S., Leghtas, Z. & Huard, B. Dynamics of a qubit while simultaneously monitoring its relaxation and dephasing. Nat. Comm. 9, 1926 (2018).
Lewalle, P., Steinmetz, J. & Jordan, A. N. Chaos in continuously monitored quantum systems: an optimal-path approach. Phys. Rev. A 98, 012141 (2018).
Atalaya, J., Hacohen-Gourgy, S., Martin, L. S., Siddiqi, I. & Korotkov, A. N. Multitime correlators in continuous measurement of qubit observables. Phys. Rev. A 97, 020104 (2018).
Atalaya, J., Hacohen-Gourgy, S., Martin, L. S., Siddiqi, I. & Korotkov, A. N. Correlators in simultaneous measurement of non-commuting qubit observables. npj Quantum Inf. 4, 41 (2018).
Atalaya, J., Hacohen-Gourgy, S., Siddiqi, I. & Korotkov, A. N. Correlators exceeding one in continuous measurements of superconducting qubits. Phys. Rev. Lett. 122, 223603 (2019).
Jacobs, K. & Shabani, A. Quantum feedback control: how to use verification theorems and viscosity solutions to find optimal protocols. Contemp. Phys. 49, 435–448 (2008).
Jacobs, K. Feedback control using only quantum back-action. N. J. Phys. 12, 043005 (2010).
Tanaka, S. & Yamamoto, N. Robust adaptive measurement scheme for qubit-state preparation. Phys. Rev. A 86, 062331 (2012).
Gough, J. E. Principles and applications of quantum control engineering. Philos. Trans. R. Soc. A Math. Phys. Eng. Sci. 370, 5241–5258 (2012).
Zhang, J., Liu, Y., Wu, R.-B., Jacobs, K. & Nori, F. Quantum feedback: theory, experiments, and applications. Phys. Rep. 679, 1 (2017).
Minev, Z. K. et al. To catch and reverse a quantum jump mid-flight. Nature 570, 200 (2019).
Martin, L., Motzoi, F., Li, H., Sarovar, M. & Whaley, K. B. Deterministic generation of remote entanglement with active quantum feedback. Phys. Rev. A 92, 062321 (2015).
Martin, L., Sayrafi, M. & Whaley, K. B. What is the optimal way to prepare a Bell state using measurement and feedback? Quantum Sci. Technol. 2, 044006 (2017).
Zhang, S., Martin, L. S. & Whaley, K. B. Locally optimal measurement-based quantum feedback with application to multiqubit entanglement generation. Phys. Rev. A 102, 062418 (2020).
Lewalle, P. et al. Entanglement of a pair of quantum emitters via continuous fluorescence measurements: a tutorial. Adv. Opt. Photon. 13, 517–583 (2021).
Martin, L. S. & Whaley, K. B. Single-shot deterministic entanglement between non-interacting systems with linear optics. https://doi.org/10.48550/arXiv.1912.00067 (2019).
Lewalle, P., Elouard, C. & Jordan, A. N. Entanglement-preserving limit cycles from sequential quantum measurements and feedback. Phys. Rev. A 102, 062219 (2020).
Facchi, P. & Pascazio, S. Quantum zeno subspaces. Phys. Rev. Lett. 89, 080401 (2002).
Mirrahimi, M. & van Handel, R. Stabilizing feedback controls for quantum systems. SIAM J. Control Optim. 46, 445–467 (2007).
Ticozzi, F. & Viola, L. Quantum Markovian subsystems: invariance, attractivity, and control. IEEE Trans. Autom. Control 53, 2048–2063 (2008).
Amini, H., Mirrahimi, M. & Rouchon, P. On stability of continuous-time quantum-filters. In 2011 50th IEEE Conference on Decision and Control and European Control Conference 6242–6247 (IEEE, 2011).
Ticozzi, F., Nishio, K. & Altafini, C. Stabilization of stochastic quantum dynamics via open- and closed-loop control. IEEE Trans. Autom. Control 58, 74–85 (2013).
Benoist, T., Pellegrini, C. & Ticozzi, F. Exponential stability of subspaces for quantum stochastic master equations. Ann. Henri Poincaré 18, 2045–2074 (2017).
Cardona, G., Sarlette, A. & Rouchon, P. Exponential stochastic stabilization of a two-level quantum system via strict Lyapunov control. In 2018 IEEE Conference on Decision and Control (CDC) 6591–6596 (IEEE, 2018).
Liang, W., Amini, N. H. & Mason, P. On exponential stabilization of spin-1/2 systems. In 2018 IEEE Conference on Decision and Control (CDC) 6602–6607 (IEEE, 2018).
Cardona, G., Sarlette, A. & Rouchon, P. Exponential stabilization of quantum systems under continuous non-demolition measurements. Automatica 112, 108719 (2020).
Amini, N. H., Bompais, M. & Pellegrini, C. Exponential selection and feedback stabilization of invariant subspaces of quantum trajectories. SIAM J. Control Optim. 62, 2834–2857 (2024).
Liang, W., Ohki, K. & Ticozzi, F. Exploring the robustness of stabilizing controls for stochastic quantum evolutions. SIAM J. Control Optim 63, S148–S174 (2025).
Aharonov, Y. & Vardi, M. Meaning of an individual “Feynman path”. Phys. Rev. D. 21, 2235–2240 (1980).
Hacohen-Gourgy, S., García-Pintos, L. P., Martin, L. S., Dressel, J. & Siddiqi, I. Incoherent qubit control using the quantum zeno effect. Phys. Rev. Lett. 120, 020505 (2018).
Guillaud, J. & Mirrahimi, M. Repetition cat qubits for fault-tolerant quantum computation. Phys. Rev. X 9, 041053 (2019).
Lewalle, P., Zhang, Y. & Whaley, K. B. Optimal zeno dragging for quantum control: a shortcut to zeno with action-based scheduling optimization. PRX Quantum 5, 020366 (2024).
Ahn, C., Doherty, A. C. & Landahl, A. J. Continuous quantum error correction via quantum feedback control. Phys. Rev. A 65, 042301 (2002).
Ahn, C., Wiseman, H. M. & Milburn, G. J. Quantum error correction for continuously detected errors. Phys. Rev. A 67, 052310 (2003).
Ahn, C., Wiseman, H. & Jacobs, K. Quantum error correction for continuously detected errors with any number of error channels per qubit. Phys. Rev. A 70, 024302 (2004).
Sarovar, M., Ahn, C., Jacobs, K. & Milburn, G. J. Practical scheme for error control using feedback. Phys. Rev. A 69, 052324 (2004).
van Handel, R. & Mabuchi, H. Optimal error tracking via quantum coding and continuous syndrome measurement. https://doi.org/10.48550/arXiv.quant-ph/0511221 (2005).
Oreshkov, O. & Brun, T. A. Continuous quantum error correction for non-Markovian decoherence. Phys. Rev. A 76, 022318 (2007).
Mascarenhas, E., Marques, B., Cunha, M. T. & Santos, M. F. Continuous quantum error correction through local operations. Phys. Rev. A 82, 032327 (2010).
Atalaya, J., Korotkov, A. N. & Whaley, K. B. Error-correcting Bacon-Shor code with continuous measurement of noncommuting operators. Phys. Rev. A 102, 022415 (2020).
Mohseninia, R., Yang, J., Siddiqi, I., Jordan, A. N. & Dressel, J. Always-on quantum error tracking with continuous parity measurements. Quantum 4, 358 (2020).
Atalaya, J. et al. Continuous quantum error correction for evolution under time-dependent Hamiltonians. Phys. Rev. A 103, 042406 (2021).
Convy, I. & Whaley, K. B. A logarithmic Bayesian approach to quantum error detection. Quantum 6, 680 (2022).
Livingston, W. P. et al. Experimental demonstration of continuous quantum error correction. Nat. Commun. 13, 2307 (2022).
Convy, I. et al. Machine learning for continuous quantum error correction on superconducting qubits. N. J. Phys. 24, 063019 (2022).
Misra, B. & Sudarshan, E. C. G. The Zeno’s paradox in quantum theory. J. Math. Phys. 18, 756–763 (1977).
Presilla, C., Onofrio, R. & Tambini, U. Measurement quantum mechanics and experiments on quantum zeno effect. Ann. Phys. 248, 95–121 (1996).
Sarandy, M. S. & Lidar, D. A. Adiabatic approximation in open quantum systems. Phys. Rev. A 71, 012331 (2005).
Facchi, P. & Pascazio, S. Quantum Zeno dynamics: mathematical and physical aspects. J. Phys. A: Math. Theor. 41, 493001 (2008).
Venuti, L. C., Albash, T., Lidar, D. A. & Zanardi, P. Adiabaticity in open quantum systems. Phys. Rev. A 93, 032118 (2016).
Burgarth, D., Facchi, P., Nakazato, H., Pascazio, S. & Yuasa, K. Quantum Zeno dynamics from general quantum operations. Quantum 4, 289 (2020).
Kumar, P., Romito, A. & Snizhko, K. Quantum Zeno effect with partial measurement and noisy dynamics. Phys. Rev. Res. 2, 043420 (2020).
Snizhko, K., Kumar, P. & Romito, A. Quantum Zeno effect appears in stages. Phys. Rev. Res. 2, 033512 (2020).
Burgarth, D., Facchi, P., Gramegna, G. & Yuasa, K. One bound to rule them all: from Adiabatic to Zeno. Quantum 6, 737 (2022).
Albert, V. V., Bradlyn, B., Fraas, M. & Jiang, L. Geometry and response of Lindbladians. Phys. Rev. X 6, 041031 (2016).
Harrington, P., Mueller, E. & Murch, K. Engineered dissipation for quantum information science. Nat. Rev. Phys. 4, 660–671 (2022).
Blumenthal, E. et al. Demonstration of universal control between non-interacting qubits using the quantum Zeno effect. npj Quantum Inform. 8, 22 (2022).
Lewalle, P. et al. A multi-qubit quantum gate using the Zeno effect. Quantum 7, 1100 (2023).
Gautier, R., Mirrahimi, M. & Sarlette, A. Designing high-fidelity Zeno gates for dissipative cat qubits. PRX Quantum 4, 040316 (2023).
Paz-Silva, G. A., Rezakhani, A. T., Dominy, J. M. & Lidar, D. A. Zeno effect for quantum computation and control. Phys. Rev. Lett. 108, 080501 (2012).
Dominy, J. M., Paz-Silva, G. A., Rezakhani, A. T. & Lidar, D. A. Analysis of the quantum zeno effect for quantum control and computation. J. Phys. A Math. Theor. 46, 075306 (2013).
Cohen, J. & Mirrahimi, M. Dissipation-induced continuous quantum error correction for superconducting circuits. Phys. Rev. A 90, 062344 (2014).
Mirrahimi, M. et al. Dynamically protected cat-qubits: a new paradigm for universal quantum computation. New J. Phys. 16, 045014 (2014).
Lihm, J.-M., Noh, K. & Fischer, U. R. Implementation-independent sufficient condition of the Knill-Laflamme type for the autonomous protection of logical qudits by strong engineered dissipation. Phys. Rev. A 98, 012317 (2018).
Lebreuilly, J., Noh, K., Wang, C.-H., Girvin, S. M. & Jiang, L. Autonomous quantum error correction and quantum computation. https://doi.org/10.48550/arXiv.2103.05007 (2021).
Gertler, J. M. et al. Protecting a bosonic qubit with autonomous quantum error correction. Nature 590, 243–248 (2021).
Shtanko, O., Liu, Y.-J., Lieu, S., Gorshkov, A. V. & Albert, V. V. Bounds on autonomous quantum error correction. Quantum 9, 1804 (2025).
Guryanova, Y., Friis, N. & Huber, M. Ideal projective measurements have infinite resource costs. Quantum 4, 222 (2020).
Krom, M. R. The decision problem for a class of first-order formulas in which all disjunctions are binary. Math. Log. Q. 13, 15–20 (1967).
Gent, I. P. & Walsh, T. Easy problems are sometimes hard. Artif. Intell. 70, 335–345 (1994).
Korotkov, A. N. Continuous quantum measurement of a double dot. Phys. Rev. B 60, 5737–5742 (1999).
Steinmetz, J., Das, D., Siddiqi, I. & Jordan, A. N. Continuous measurement of a qudit using dispersively coupled radiation. Phys. Rev. A 105, 052229 (2022).
Lindblad, G. On the generators of quantum dynamical semigroups. Commun. Math. Phys. 48, 119–130 (1976).
Gorini, V., Kossakowski, A. & Sudarshan, E. C. G. Completely positive dynamical semigroups of N-level systems. J. Math. Phys. 17, 821–825 (1976).
Ruskov, R., Combes, J., Mølmer, K. & Wiseman, H. M. Qubit purification speed-up for three complementary continuous measurements. Philos. Trans. R. Soc. A Math. Phys. Eng. Sci. 370, 5291–5307 (2012).
Rønnow, T. F. et al. Defining and detecting quantum speedup. Science 345, 420–424 (2014).
Mézard, M., Parisi, G. & Zecchina, R. Analytic and algorithmic solution of random satisfiability problems. Science 297, 812–815 (2002).
Goerdt, A. A threshold for unsatisfiability. J. Comput. Syst. Sci. 53, 469–486 (1996).
Leyton-Brown, K., Hoos, H. H., Hutter, F. & Xu, L. Understanding the empirical hardness of NP-complete problems. Commun. ACM 57, 98–107 (2014).
Crawford, J. M. & Auton, L. D. Experimental results on the crossover point in random 3-SAT. Artif. Intell. 81, 31–57 (1996).
Mitchell, D., Selman, B. & Levesque, H. Hard and easy distributions of SAT problems. In Proc. Tenth National Conference on Artificial Intelligence, AAAI’92, 459–465 (AAAI Press, 1992).
Akshay, V., Philathong, H., Morales, M. E. S. & Biamonte, J. D. Reachability deficits in quantum approximate optimization. Phys. Rev. Lett. 124, 090504 (2020).
Zhang, B., Sone, A. & Zhuang, Q. Quantum computational phase transition in combinatorial problems. npj Quantum Inf. 8, 87 (2022).
Albash, T. & Lidar, D. A. Demonstration of a scaling advantage for a quantum annealer over simulated annealing. Phys. Rev. X 8, 031016 (2018).
Dantsin, E., Kreinovich, V. & Wolpert, A. On quantum versions of record-breaking algorithms for SAT. ACM SIGACT News 36, 103–108 (2005).
Stoudenmire, E. M. & Waintal, X. Opening the black box inside Grover’s algorithm. Phys. Rev. X 14, 041029 (2024).
Touzard, S. et al. Coherent oscillations inside a quantum manifold stabilized by dissipation. Phys. Rev. X 8, 021005 (2018).
Martin, L. S., Livingston, W. P., Hacohen-Gourgy, S., Wiseman, H. M. & Siddiqi, I. Implementation of a canonical phase measurement with quantum feedback. Nat. Phys. 16, 1046–1049 (2020).
Wootters, W. K. Entanglement of formation of an arbitrary state of two qubits. Phys. Rev. Lett. 80, 2245–2248 (1998).
Acknowledgements
This material is based upon work supported by the U.S. Department of Energy, Office of Science, National Quantum Information Science Research Centers, Quantum Systems Accelerator. P.L. is grateful to the UMass Lowell department of Physics & Applied Physics for their hospitality during part of this manuscript’s preparation. This document was written without the use of AI. Simulations and calculations were performed with the help of Python, C++, and Mathematica.
Author information
Contributions
K.B.W. conceived the initial idea. Y.Z. and P.L. developed the protocols. P.L. provided the convergence analysis. Y.Z. performed numerical simulations, analyzed data and generated figures on 3 qubits or more. All authors contributed to discussions and the writing and editing of the manuscript.
Ethics declarations
Competing interests
The authors declare no competing interests.
Additional information
Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
Cite this article
Zhang, Y., Lewalle, P. & Whaley, K.B. Solving k–SAT problems with generalized quantum measurement. npj Quantum Inf 11, 170 (2025). https://doi.org/10.1038/s41534-025-01069-y