Abstract
Gaussian Boson Sampling (GBS) is a promising candidate for demonstrating quantum computational advantage and can be applied to solving graph-related problems. In this work, we propose Markov chain Monte Carlo-based algorithms to sample from GBS distributions on undirected, unweighted graphs. Our main contribution is a double-loop variant of Glauber dynamics, whose stationary distribution matches the GBS distribution. We further prove that it mixes in polynomial time for dense graphs using a refined canonical path argument. Numerically, we conduct experiments on unweighted graphs with 256 vertices, larger than the scales in previous GBS experiments as well as classical simulations. In particular, we show that both the single-loop and double-loop Glauber dynamics improve the performance of the original random search and simulated annealing algorithms for the max-Hafnian and densest k-subgraph problems by up to 10×. Overall, our approach offers both theoretical guarantees and practical advantages for efficient classical sampling from GBS distributions on unweighted graphs.
Introduction
Recent years have witnessed increasing efforts to demonstrate quantum computational advantage over classical computers using real quantum devices1,2. In particular, Gaussian Boson Sampling (GBS), implemented with quantum photonics, has shown promise with experimental demonstrations achieving advantages over classical methods2,3,4,5,6. GBS has been applied to graph-related problems, such as the densest k-subgraph problem7 and max-Hafnian calculations8 through numerical simulations. These applications are based on the encoding of a graph’s adjacency matrix into a Gaussian state, where the probability of measuring a specific photon-number pattern is proportional to the squared Hafnian of the corresponding submatrix9,10,11. More recently, Deng et al.12 demonstrated GBS on a noisy quantum device, enhancing classical algorithms for solving graph problems.
On the other hand, classical simulation algorithms for GBS have been actively explored. Quesada and Arrazola13 introduced an exact classical algorithm for simulating GBS by sequentially sampling the number of photons in each mode conditioned on the previously sampled modes, which runs in polynomial space and exponential time. Oh et al.14 developed a classical algorithm for Boson sampling based on dynamic programming, which exploits the limited connectivity of the linear-optical circuit to improve efficiency. Separately, Oh et al.15 proposed a tensor-network-based classical algorithm to simulate large-scale GBS experiments with photon loss, requiring relatively modest computational resources. In the context of graph-theoretical applications of GBS, an important property is that the adjacency matrix encoded in the Gaussian Boson sampler is non-negative, which is believed to make the problem more tractable than in the general case. The aforementioned result13 reduced the simulation of GBS with non-negative matrices to the problem of estimating Hafnians of non-negative matrices, which can be done efficiently for the adjacency matrix of certain strongly expanding graphs16. Oh et al.17 also took advantage of this non-negativity property to design a quantum-inspired classical algorithm for finding dense subgraphs and their numerical results suggest that the advantage offered by a Gaussian Boson sampler is not significant. However, an open question remains whether a classical algorithm for GBS on general graphs with provable performance guarantees can achieve a computational cost comparable to that of a Gaussian Boson sampler.
In this work, we adopt Markov chain Monte Carlo (MCMC) algorithms to sample from GBS distributions on unweighted graphs. MCMC is a standard class of sampling algorithms with well-established theoretical guarantees18,19. Among MCMC methods, Glauber dynamics20 is particularly widespread due to its simplicity and rigorous analytical foundations. Glauber dynamics generates samples from the matchings of an undirected and unweighted graph by iteratively adding or removing edges with biased transition probabilities. The resulting stationary distribution is proportional to the power of the number of edges in the matchings. For each matching sampled from Glauber dynamics, we consider its support vertex set as a sampled subset of vertices. The probability of sampling a given vertex set is further weighted by the number of perfect matchings within that set, which is also the Hafnian of the adjacency matrix of the induced subgraph.
We propose a double-loop Glauber dynamics with a rigorous theoretical guarantee that its stationary distribution is identical to the sampling distribution of GBS on unweighted graphs. Specifically, unlike the standard single-loop Glauber dynamics, which yields a stationary distribution proportional to the Hafnian of subgraphs, the double-loop approach ensures a stationary distribution proportional to the square of the Hafnian, which coincides with the sampling distribution of GBS. Concretely, in the double-loop Glauber dynamics, when deciding whether to remove an edge, we run a secondary Markov chain to uniformly sample a perfect matching from the current subgraph. The removal decision is then based on whether this edge reappears in the newly sampled matching. Furthermore, for dense graphs, we prove that the double-loop Glauber dynamics has a polynomial mixing time, demonstrating its computational feasibility in these cases. The key to demonstrating rapid mixing lies in the canonical path technique introduced in ref. 18, which routes flows between every pair of matchings without creating particularly congested “pipes” and bounds the mixing time by estimating the maximum congestion over possible transitions. More specifically, for complete graphs, we establish an enhanced canonical path framework that permits multiple paths between any two matchings. By constructing specially designed paths with favorable symmetry properties to distribute flows efficiently, we ultimately estimate the maximum congestion by calculating the total congestion of symmetric transitions. For dense graphs, the maximum congestion is bounded by estimating the ratio between the congestion of dense graphs and that of complete graphs, thereby yielding a polynomial mixing time. This result is particularly significant because dense graphs represent the regime where classical methods struggle: sparse graphs often permit classical shortcuts that dense graphs lack.
For instance, the Maximum Clique problem, a canonical task that GBS has been proposed to solve21, is NP-hard in general. However, its classical complexity is greatly reduced on sparse graphs, where algorithms can exploit structural properties such as low degeneracy to find maximal cliques efficiently22.
Our numerical simulations confirm that both single-loop and double-loop Glauber dynamics improve the performance of the original random search and simulated annealing algorithms for the max-Hafnian and densest k-subgraph problems, providing empirical validation of our theoretical findings. Our experiments are conducted on unweighted graphs with 256 vertices, larger than the scales in previous GBS experiments12 as well as classical simulations15. In verification, the variants enhanced by Glauber dynamics are up to 3× better than the original classical algorithms. On random graphs, the enhanced variants achieve a score advantage of up to 4×. On bipartite graphs, the enhanced variants are up to 10× better than the original classical algorithms.
The rest of the paper is organized as follows. In Section IIA, we review the definition of GBS. We introduce the standard Glauber dynamics for sampling matchings in Section IIB, and then propose our double-loop Glauber dynamics for sampling from GBS distributions on unweighted graphs with provable guarantee in Section IIC. We present all experimental results in Section IID.
Results
Gaussian Boson sampling for graph problems
Boson sampling is a quantum computing model where N identical photons pass through an M-mode linear interferometer and are detected in output modes23. In the standard Boson sampling paradigm, the probability of a given output configuration \(\bar{n}\) is related to the permanent of a submatrix of the interferometer’s M × M unitary matrix T, which we call TS: \(\Pr [\bar{n}]\propto | {{{\rm{Per}}}}({T}_{S}){| }^{2}\), with \({{{\rm{Per}}}}({T}_{S})={\sum }_{\sigma \in {{{{\mathcal{S}}}}}_{N}}{\prod }_{i=1}^{N}{({T}_{S})}_{i,\sigma (i)},\)
where \({{{{\mathcal{S}}}}}_{N}\) is the set of all permutations on [N] ≔ {1, 2, …, N}, and TS is a matrix composed of the intersecting elements of the columns and the rows of T determined by the input positions and output \(\bar{n}\), respectively.
Gaussian Boson Sampling (GBS) is a variant that uses Gaussian states with squeezing parameters \({\{{r}_{i}\}}_{i=1}^{M}\) as inputs instead of single photons. In GBS, the output photon-number distribution is determined by a matrix function called the Hafnian. The Hafnian of a 2n × 2n symmetric matrix A is defined as \({{{\rm{Haf}}}}(A)={\sum }_{\mu \in {{{\rm{PM}}}}([2n])}{\prod }_{\{i,j\}\in \mu }{A}_{i,j},\) where PM([2n]) denotes the set of perfect matchings of the complete graph on 2n vertices.
Specifically, the Hafnian of the adjacency matrix of an unweighted graph equals the number of its perfect matchings11. The probability of measuring a specific photon number pattern \(\bar{n}=({n}_{1},{n}_{2},\ldots,{n}_{M})\) in an M-mode GBS experiment can be expressed in closed-form as9,10
where σ is the 2M × 2M covariance matrix of the Gaussian state and AS is the submatrix by selecting the intersection of columns and rows only according to output \(\bar{n}\) from the sampling matrix A = B ⊕ B* with
Given an arbitrary undirected graph with potentially complex-valued symmetric adjacency matrix Δ, we aim to engineer B = cΔ with an appropriate rescaling parameter c. It is possible to find such a T by the Takagi-Autonne decomposition (see ref. 24 and Section 4.4 of ref. 25) when \(0 < c < 1/({\max }_{j}| {\lambda }_{j}| )\), where {λj} is the eigenvalue set of Δ. Subsequently, the sampling matrix becomes A = cΔ ⊕ cΔ*.
When \(N=O(\sqrt{M})\), with dominating probability all the click numbers satisfy ni ≤ 1 (ref. 23). Then, all the factorials in (3) disappear since 0! = 1! = 1. On the other hand, the covariance matrix σ and the sampling matrix A are related by10
Thus, the probability of outputting the subgraph with vertex set S is given by
In all, the task of sampling from GBS distributions on a graph with real-valued adjacency matrix is equivalent to developing algorithms that sample a subgraph with vertex set S with probability proportional to \({c}^{2| S| }{{{{\rm{Haf}}}}}^{2}(S).\)
We remark that as a special case of the Hafnian, the permanent of a matrix with non-negative entries admits probabilistic polynomial-time approximation, in contrast to the #P-hardness for general complex matrices23,26,27. This suggests that the applications of GBS for max-Hafnian and densest k-subgraph of nonnegative-weight graphs may be simpler than the complex-valued version, and efficient classical sampling algorithms from GBS distributions on such instances are worth investigating.
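As a quick sanity check of the graph-Hafnian correspondence above (the Hafnian of an unweighted adjacency matrix counts perfect matchings), a brute-force recursion suffices on tiny graphs. The `hafnian` function below is our own minimal sketch, not the optimized implementation in “thewalrus”:

```python
def hafnian(A):
    """Brute-force Hafnian of a symmetric matrix with an even number of
    rows: pair index 0 with every partner j and recurse on the rest."""
    n = len(A)
    if n == 0:
        return 1
    total = 0
    rest = list(range(1, n))
    for j in rest:
        sub = [k for k in rest if k != j]
        minor = [[A[r][c] for c in sub] for r in sub]
        total += A[0][j] * hafnian(minor)
    return total

# 4-cycle 0-1-2-3-0: exactly two perfect matchings, {01, 23} and {12, 30}
C4 = [[0, 1, 0, 1],
      [1, 0, 1, 0],
      [0, 1, 0, 1],
      [1, 0, 1, 0]]
print(hafnian(C4))  # 2

# Complete graph K4: three perfect matchings
K4 = [[0, 1, 1, 1],
      [1, 0, 1, 1],
      [1, 1, 0, 1],
      [1, 1, 1, 0]]
print(hafnian(K4))  # 3
```

This recursion runs in exponential time, matching the general hardness discussed above; it is only intended for cross-checking small instances.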
Glauber dynamics for matching
Our algorithm is built upon the Glauber dynamics, a well-established Markov chain Monte Carlo method for sampling. In particular, the Glauber dynamics that samples across the space of all matchings of a graph is known as the monomer-dimer model. Given a graph G = (V, E) and a fugacity parameter λ > 0, we denote \({{{\mathcal{M}}}}\) to be the collection of all the matchings of G. We define the Gibbs distribution μ for the monomer-dimer model as μ(X) = w(X)/Z for every \(X\in {{{\mathcal{M}}}}\), where the weight \(w(X)={\lambda }^{| X| }\) and Z is a normalizing factor known as the partition function. (This partition function is known as the matching polynomial, which has been studied extensively in the MCMC literature28. The matching polynomial is closely related to the loop hafnian, a variant of the Hafnian that can be used to count all matchings in a graph29,30.)
In a step t when the Glauber dynamics is at a matching \({X}_{t}\in {{{\mathcal{M}}}}\), it chooses an edge e uniformly at random from E and considers the candidate \({X}^{{\prime} }={X}_{t}\oplus \{e\}\). If \({X}^{{\prime} }\) is a matching (that is, e either extends Xt or is removed from it), then we let \({X}_{t+1}={X}^{{\prime} }\) with probability \(w({X}^{{\prime} })/(w({X}^{{\prime} })+w({X}_{t}))\) and otherwise let Xt+1 = Xt. If \({X}^{{\prime} }\) is not a matching, we simply set Xt+1 = Xt. This Glauber dynamics for matchings is formally presented in Algorithm 1.
Algorithm 1
Glauber dynamics for matchings
Input: A graph G = (V, E), number of steps T.
Output: A sample of matching X of G such that \(\Pr [X]\propto {\lambda }^{| X| }\).
1. Initialize X0 as an arbitrary matching in G;
2. Initialize t ← 0;
3. while t < T
4. Choose a uniformly random edge e from E;
5. if e and Xt form a new matching
6. Set Xt+1 = Xt ∪ {e} with probability \(\frac{\lambda }{1+\lambda }\) and otherwise set Xt+1 = Xt;
7. else if e is in Xt
8. Set Xt+1 = Xt\{e} with probability \(\frac{1}{1+\lambda }\) and otherwise set Xt+1 = Xt;
9. else
10. Set Xt+1 = Xt;
11. t ← t + 1;
12. Output the matching XT;
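A direct Python transcription of Algorithm 1 for small instances (the function name, graph representation as an edge list, and seeding are our own choices for this sketch):

```python
import random

def glauber_matchings(edges, T, lam, seed=0):
    """Single-loop Glauber dynamics: after T steps, X approximates a
    sample with Pr[X] proportional to lam**|X| over matchings."""
    rng = random.Random(seed)
    X = set()        # current matching, initialized as the empty matching
    covered = set()  # vertices covered by X
    for _ in range(T):
        e = rng.choice(edges)
        u, v = e
        if e in X:
            # e is in X_t: remove it with probability 1/(1+lam)
            if rng.random() < 1.0 / (1.0 + lam):
                X.remove(e)
                covered -= {u, v}
        elif u not in covered and v not in covered:
            # e extends X_t to a larger matching: add with prob lam/(1+lam)
            if rng.random() < lam / (1.0 + lam):
                X.add(e)
                covered |= {u, v}
        # otherwise e conflicts with X_t and the state is unchanged
    return X

# On a triangle, every reachable state is a matching with at most one edge
sample = glauber_matchings([(0, 1), (1, 2), (0, 2)], 500, 1.0, seed=1)
print(len(sample) <= 1)  # True
```

Every intermediate state is a valid matching by construction, which is what the post-selection in the later experiments relies on.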
It is straightforward to verify that the Markov chain is irreducible and aperiodic, and hence converges to a unique stationary distribution19. We can verify that the Glauber dynamics converges to the Gibbs distribution by checking that the detailed balance condition holds: for two matchings X and X ∪ {e}, we have \(\Pr [X\to X\cup \{e\}]=\frac{1}{| E| }\cdot \frac{\lambda }{1+\lambda }\) and \(\Pr [X\cup \{e\}\to X]=\frac{1}{| E| }\cdot \frac{1}{1+\lambda }\),
and thus \(\mu (X)\Pr [X\to X\cup \{e\}]=\mu (X\cup \{e\})\Pr [X\cup \{e\}\to X]\), since \(\mu (X\cup \{e\})/\mu (X)=\lambda\).
Furthermore, the convergence speed of an MCMC to its stationary distribution is characterized by its mixing time, defined as the time required by the Markov chain to have sufficiently small distance to the stationary distribution. Formally, let Pt(X0, ⋅ ) denote the distribution of matchings after t steps starting from X0. The total variation distance between Pt(X0, ⋅ ) and the stationary distribution μ is defined as \({d}_{{{{\rm{TV}}}}}({P}^{t}({X}_{0},\cdot ),\mu )=\frac{1}{2}{\sum }_{X\in {{{\mathcal{M}}}}}| {P}^{t}({X}_{0},X)-\mu (X)| .\)
Thus, we can define the mixing time of the Markov chain: \({\tau }_{{{{\rm{mix}}}}}(\epsilon )=\min \{t:{\max }_{{X}_{0}}{d}_{{{{\rm{TV}}}}}({P}^{t}({X}_{0},\cdot ),\mu )\le \epsilon \}.\)
It is known that the Glauber dynamics for matchings has a polynomial mixing time:
Theorem 1
(18) For a general graph G with n vertices and m edges, the mixing time of the Glauber dynamics for the monomer-dimer model on G with fugacity λ > 0 is \(O({n}^{2}m\log n)\).
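When the state space is small enough to enumerate, the total variation distance above can be evaluated directly. The helper below (our own naming) compares an illustrative empirical estimate against the Gibbs distribution over the three matchings of the path 0–1–2, which is uniform at λ = 1:

```python
def tv_distance(p, q):
    """Total variation distance 0.5 * sum |p - q| between two
    distributions represented as dicts mapping state -> probability."""
    states = set(p) | set(q)
    return 0.5 * sum(abs(p.get(s, 0.0) - q.get(s, 0.0)) for s in states)

# Matchings of the path 0-1-2: {}, {(0,1)}, {(1,2)}; at lambda = 1 the
# Gibbs distribution is uniform over the three states.
mu = {(): 1 / 3, ((0, 1),): 1 / 3, ((1, 2),): 1 / 3}
empirical = {(): 0.30, ((0, 1),): 0.36, ((1, 2),): 0.34}
print(round(tv_distance(mu, empirical), 4))  # 0.0333
```

The empirical values here are hypothetical; in practice one would estimate them by running the chain repeatedly past its mixing time.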
Now, we examine the Gibbs distribution from an alternative perspective. For a sampled matching X, we take its vertex set S = V(X) as the final output, and let ν denote the stationary distribution of vertex sets. Since the number of perfect matchings on S is given by the Hafnian of the subgraph GS induced by S, which we briefly write as Haf(S), the stationary probability of S satisfies the following if we take λ = c2: \(\nu (S)\propto {\lambda }^{| S| /2}{{{\rm{Haf}}}}(S)={c}^{| S| }{{{\rm{Haf}}}}(S).\)
Similarly, we can denote \({P}_{t}^{{\prime} }({X}_{0},\cdot )\) as the distribution of vertex sets after t steps starting from X0, and define the total variation distance between \({P}_{t}^{{\prime} }({X}_{0},\cdot )\) and the stationary distribution ν as \({d}_{{{{\rm{TV}}}}}({P}_{t}^{{\prime} }({X}_{0},\cdot ),\nu )=\frac{1}{2}{\sum }_{S}| {P}_{t}^{{\prime} }({X}_{0},S)-\nu (S)| .\)
The mixing time for vertex set sampling is defined analogously. Since taking the vertex set of a matching is a deterministic map, we have \({d}_{{{{\rm{TV}}}}}({P}_{t}^{{\prime} }({X}_{0},\cdot ),\nu )\le {d}_{{{{\rm{TV}}}}}({P}_{t}({X}_{0},\cdot ),\mu ),\)
so the mixing time of the Glauber dynamics for vertex sets is at most the mixing time of the Glauber dynamics for matchings.
We note that the distribution in Eq. 17 resembles the GBS distribution: The Gibbs distribution involves the Hafnian to the first power, whereas the GBS distribution weights each vertex set by the square of its Hafnian. This quadratic dependence amplifies the probability mass on larger vertex sets with numerous perfect matchings, resulting in a more concentrated distribution on such vertex sets that potentially gives favorable solutions to problems such as densest k-subgraph7 and max-Hafnian8.
Double-loop Glauber dynamics
Inspired by the standard Glauber dynamics, we develop several enhanced algorithms that achieve classical sampling from GBS distributions on unweighted graphs, i.e., sampling of a vertex set S with distribution \(\Pr [S]\propto {c}^{2| S| }{{{{\rm{Haf}}}}}^{2}(S)\).
A simple idea is rejection sampling, which is a basic technique applied to generate samples from a target distribution by sampling from a proposal distribution and accepting or rejecting the samples based on a certain criterion. For sampling from GBS distributions, our rejection sampling algorithm works as follows:
-
Run two instances of the Glauber dynamics for matchings independently to sample two vertex sets S1, S2, each with probability \(\Pr [S]\propto {c}^{| S| }{{{\rm{Haf}}}}(S)\).
-
Accept S1 if S1 = S2, otherwise, reject and repeat the process.
In this rejection sampling algorithm, the probability of accepting a vertex set S is \(\Pr {[S]}^{2}\propto {c}^{2| S| }{{{{\rm{Haf}}}}}^{2}(S).\)
However, this method is inefficient as it may incur a large number of rejections. For general graphs, since the number of possible vertex subsets grows exponentially, if no vertex subset dominates the weight distribution, the acceptance probability for each rejection sampling attempt can be exponentially small.
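The shrinking acceptance rate is easy to see on a toy instance by enumerating the target distribution exactly. The brute-force helpers below (our own, feasible only for tiny graphs) compute the per-attempt probability that two independent draws coincide:

```python
from itertools import combinations

def pm_count(adj, S):
    """Number of perfect matchings of the subgraph induced by the vertex
    sequence S (brute-force recursion)."""
    S = list(S)
    if not S:
        return 1
    u, rest = S[0], S[1:]
    return sum(adj[u][v] * pm_count(adj, rest[:i] + rest[i + 1:])
               for i, v in enumerate(rest))

def acceptance_probability(adj, c):
    """Per-attempt acceptance rate of the rejection sampler: the
    probability that two i.i.d. draws from Pr[S] ~ c^|S| Haf(S) agree."""
    n = len(adj)
    weights = []
    for k in range(0, n + 1, 2):                 # only even-sized sets
        for S in combinations(range(n), k):
            w = (c ** k) * pm_count(adj, S)
            if w > 0:
                weights.append(w)
    Z = sum(weights)
    return sum((w / Z) ** 2 for w in weights)

K4 = [[0, 1, 1, 1], [1, 0, 1, 1], [1, 1, 0, 1], [1, 1, 1, 0]]
print(acceptance_probability(K4, 0.5))  # already below 0.2 on 4 vertices
```

Even on K4 the acceptance rate is well below 1, and it decays as the number of non-negligible vertex subsets grows.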
In this work, we propose a novel double-loop Glauber dynamics that directly samples from the distribution \(\Pr [S]\propto {c}^{2| S| }{{{{\rm{Haf}}}}}^{2}(S)\), where S is a vertex set of a graph G and c is a constant. Note that it is equivalent to realizing the sampling of matchings according to the distribution \(\Pr [X]\propto {({c}^{2})}^{2| X| }{{{\rm{Haf}}}}({G}_{X})\), where GX denotes the subgraph induced by the vertex set of the matching X.
In contrast to the standard Glauber dynamics for matchings (Algorithm 1), our approach introduces modified transition probabilities for edge removal, carefully calibrated to ensure the convergence to the desired stationary distribution. These probabilities are dynamically determined through an auxiliary inner Markov chain that operates in each step of the Glauber dynamics and samples a perfect matching from the subgraph induced by the current matching. Specifically, different from Line 8 in Algorithm 1, when the random edge e is in the current matching Xt, we first uniformly sample a perfect matching Et in the subgraph \({G}_{{X}_{t}}\) induced by Xt. If e is not in Et, we keep the current matching Xt. Otherwise, we remove e from Xt with probability 1/(1 + λ2) and otherwise keep Xt. Our algorithm is formally presented in Algorithm 2 with an illustration in Fig. 1.
The outer loop Glauber dynamics uniformly samples an edge e at each step. When e and the current matching X form a new matching, e is added to X with a certain probability. When e is in the current matching X, we use an inner loop Glauber dynamics in the subgraph induced by the current matching X. This inner loop uniformly samples a perfect matching X', and when e is in X', e is removed from X with a certain probability.
Algorithm 2
Double-loop Glauber dynamics
Input: A graph G = (V, E).
Output: A sample of vertex set S of G such that \(\Pr [S]\propto {\lambda }^{| S| }{{{{\rm{Haf}}}}}^{2}(S)\).
1 Initialize X0 as an arbitrary matching in G;
2 Initialize t ← 0, set \(T=\tilde{O}({n}^{6})\);
3 while t < T do
4 Choose a uniformly random edge e from E;
5 if e and Xt form a new matching then
6 Set Xt+1 = Xt ∪ {e} with probability \(\frac{{\lambda }^{2}}{1+{\lambda }^{2}}\) and otherwise set Xt+1 = Xt;
7 else if e is in Xt then
8 Uniformly sample a perfect matching Et in the subgraph induced by Xt (by running another MCMC on \({G}_{{X}_{t}}\)). If e ∉ Et, set Xt+1 = Xt. If e ∈ Et, set Xt+1 = Xt\{e} with probability \(\frac{1}{1+{\lambda }^{2}}\) and otherwise set Xt+1 = Xt;
9 else
10 Set Xt+1 = Xt;
11 t ← t + 1;
12 Output the vertex set in matching XT;
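A runnable sketch of Algorithm 2 for small instances follows. For simplicity, the inner uniformly random perfect matching is drawn by explicit enumeration instead of the inner Markov chain of Line 8, and all function names are our own:

```python
import random

def perfect_matchings(adj, S):
    """Enumerate all perfect matchings of the subgraph induced by the
    vertex list S, each as a frozenset of frozenset edges."""
    if not S:
        return [frozenset()]
    u, rest = S[0], S[1:]
    result = []
    for i, v in enumerate(rest):
        if adj[u][v]:
            for pm in perfect_matchings(adj, rest[:i] + rest[i + 1:]):
                result.append(pm | {frozenset((u, v))})
    return result

def double_loop_glauber(adj, edges, T, lam, seed=0):
    """Double-loop Glauber dynamics: outer chain over matchings, inner
    uniform perfect-matching draw deciding edge removals."""
    rng = random.Random(seed)
    X, covered = set(), set()
    for _ in range(T):
        u, v = rng.choice(edges)
        e = frozenset((u, v))
        if e in X:
            # inner loop: uniform perfect matching of the induced subgraph
            Et = rng.choice(perfect_matchings(adj, sorted(covered)))
            if e in Et and rng.random() < 1.0 / (1.0 + lam ** 2):
                X.remove(e)
                covered -= {u, v}
        elif u not in covered and v not in covered:
            if rng.random() < lam ** 2 / (1.0 + lam ** 2):
                X.add(e)
                covered |= {u, v}
    return sorted(covered)  # output the vertex set of the final matching
```

The enumeration never fails: the current matching X itself perfectly matches its vertex set, so at least one perfect matching always exists. At scale, the enumeration would be replaced by the approximate uniform samplers discussed below.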
We first verify the convergence to the desired distribution. For the edge sets of any two matchings X and X ∪ {e}, on the one hand, from Line 6 in Algorithm 2, \(\Pr [X\to X\cup \{e\}]=\frac{1}{| E| }\cdot \frac{{\lambda }^{2}}{1+{\lambda }^{2}}.\)
On the other hand, if Xt = X ∪ {e}, the probability Pr[e ∈ Et] in Line 8 in Algorithm 2 is equal to the ratio of the number of perfect matchings of GX∪{e} that contain e to the total number of perfect matchings of GX∪{e}. The numerator equals the number of perfect matchings of GX. Thus we have \(\Pr [X\cup \{e\}\to X]=\frac{1}{| E| }\cdot \frac{{{{\rm{Haf}}}}({G}_{X})}{{{{\rm{Haf}}}}({G}_{X\cup \{e\}})}\cdot \frac{1}{1+{\lambda }^{2}},\)
where the second term comes from the probability of edge e being in Et. Recall that the stationary distribution of matchings should satisfy \(\frac{\pi (X\cup \{e\})}{\pi (X)}=\frac{\Pr [X\to X\cup \{e\}]}{\Pr [X\cup \{e\}\to X]}={\lambda }^{2}\cdot \frac{{{{\rm{Haf}}}}({G}_{X\cup \{e\}})}{{{{\rm{Haf}}}}({G}_{X})},\)
thus if we take λ = c2, \(\pi (X)\propto {\lambda }^{2| X| }{{{\rm{Haf}}}}({G}_{X})={({c}^{2})}^{2| X| }{{{\rm{Haf}}}}({G}_{X}).\)
Notice that the number of perfect matchings of GX is Haf(GX). Therefore, when we finally take the vertex set of the sampled matching as the output, the probability of the sampled set of vertices S is \(\Pr [S]\propto {\lambda }^{| S| }{{{{\rm{Haf}}}}}^{2}(S)={c}^{2| S| }{{{{\rm{Haf}}}}}^{2}(S),\)
which simulates the output distribution by GBS. We remark that our Algorithm 2 works for any undirected and unweighted graph G.
Next, we rigorously establish that the mixing time of the double-loop Glauber dynamics on dense graphs is at most a polynomial. Specifically, for dense bipartite graphs, we have:
Theorem 2
Given a bipartite graph G = (V1, V2, E) with ∣V1∣ = m, ∣V2∣ = n, and m ≥ n. If the minimum degree of vertices in V1 satisfies δ(V1) ≥ n − ξ and the minimum degree of vertices in V2 satisfies δ(V2) ≥ m − ξ for some constant ξ, then for \(\lambda > \frac{1}{4}\), the mixing time of the double-loop Glauber dynamics is polynomially bounded in m and n, specifically \(\tilde{O}({m}^{2}{n}^{2\xi+4})\).
For dense non-bipartite graphs, we have:
Theorem 3
Given a non-bipartite graph G = (V, E) with ∣V∣ = 2n, if the minimum degree of G satisfies δ(V) ≥ 2n − ξ for some constant ξ, then for \(\lambda > \frac{1}{4}\), the mixing time of the double-loop Glauber dynamics is polynomially bounded in n, specifically \(\tilde{O}({n}^{2\xi+6})\).
Our analysis primarily employs a proof technique in MCMC literature known as the canonical path method18, which establishes mixing time bounds by constructing a proper multicommodity flow problem, and then selecting suitable transition paths between states. For a pair of initial state I and final state F, we can conceptualize the problem as routing π(I)π(F) units of distinguishable flow from state I to state F, utilizing the Markov chain’s transitions as “pipes”. We can define an arbitrary canonical path from I to F for each pair I, F ∈ Ω, and the corresponding “congestion” as follows: \(\rho ={\max }_{(M,{M}^{{\prime} })}\frac{1}{\pi (M)P(M,{M}^{{\prime} })}{\sum }_{(I,F):(M,{M}^{{\prime} })\in {\gamma }_{IF}}\pi (I)\pi (F)| {\gamma }_{IF}| ,\)
where γIF is the path from I to F, and ∣γIF∣ denotes the length of γIF. Ref. 18 proved that the mixing time of the Markov chain is bounded by \({\tau }_{{{{\rm{mix}}}}}(\epsilon )\le O(\rho (\ln {\pi }_{\min }^{-1}+\ln {\epsilon }^{-1})).\)
The proof of Theorem 1 in ref. 18 applied the canonical path method, in which the path from matching I to matching F is defined by decomposing I ⊕ F into a collection of paths and even-length cycles, and then processing these components in some specific order. The approximation of Eq. (32) is achieved by constructing an injective mapping from (I, F) to another matching for each transition. However, the canonical path method is not directly applicable to the double-loop Glauber dynamics, as the inner Markov chain (Line 8) introduces additional Hafnian terms into the transition process, leading to a multiplicative error term that grows exponentially in the worst case when using the original injection constructions. To address this challenge, we develop an alternative proof technique that leverages symmetries between different transitions.
In the original canonical path method, if the graph has sufficient symmetry, we can directly compute the congestion of each transition instead of constructing an injective mapping to bound the congestion. We begin our analysis with the complete graph case, focusing on direct computation of the congestion for each individual transition. We can design special canonical paths to ensure that for any transition originating from a matching of size k, only paths transitioning between smaller matchings (∣M∣ < k) and larger matchings (∣M∣ > k) will utilize this transition. This insight enables us to collectively compute the total congestion across all transitions originating from size-k matchings. Furthermore, we aim to design symmetric canonical paths such that the congestion values for symmetric transitions are the same in complete graphs. This symmetry enables us to compute the congestion of individual transitions by summing the contributions from all paths that pass through them. However, the original canonical path construction, which determines the order of components of I ⊕ F based on fixed vertex orderings, fails to preserve the necessary symmetry properties if the graph is not complete. To relax this limitation, we introduce an enhanced canonical path framework that permits multiple distinct paths between each state pair (I, F). Specifically, our solution involves considering all possible permutations of the connected components, systematically constructing transformation sequences for each ordering (for more details, see Supplementary Note 1).
For complete graphs and complete bipartite graphs, this symmetric construction leverages the matching enumeration properties specific to each graph, enables precise congestion calculations and ultimately yields a polynomial mixing time bound through careful analysis of the path distribution and transition probabilities. The complete technical proof is provided in Supplementary Note 3A (for bipartite graphs) and Supplementary Note 3C (for non-bipartite graphs) in Supplementary Information. For sufficiently dense graphs where the Hafnian of each subgraph differs from the complete graph case by at most a polynomial factor, our analysis naturally extends to establishing polynomial mixing time bounds, as formalized in Theorem 2 and Theorem 3. The complete technical proof is provided in Supplementary Note 3B (for bipartite graphs) and Supplementary Note 3D (for non-bipartite graphs).
Another crucial aspect of our framework is the implementation of the inner Markov chain for uniform sampling of perfect matchings in subgraphs. Ref. 27 introduced an efficient algorithm that approximates uniform sampling of perfect matchings of an arbitrary balanced bipartite graph:
Lemma 1
(Ref. 27) Given a balanced bipartite graph G = (V1, V2, E) with ∣V1∣ = ∣V2∣ = n, there exists an algorithm that achieves an approximately uniform sampling of perfect matchings of G in time \(O({n}^{11}{(\log n)}^{2}(\log n+\log {\eta }^{-1}))\), with failure probability η.
For non-bipartite graphs, Ref. 31 provided a polynomial-time algorithm that approximates uniform sampling of perfect matchings in a dense graph:
Lemma 2
(Ref. 31) Given a non-bipartite graph G = (V, E) with ∣V∣ = 2n, if the minimum degree of vertices in V satisfies δ(V) ≥ n, then there exists an algorithm that achieves an approximately uniform sampling of perfect matchings of G in time \(\tilde{O}({n}^{14}{(\ln {\eta }^{-1})}^{2})\), with failure probability η.
We briefly discuss these approximate uniform sampling methods for perfect matchings in Supplementary Note 2.
By integrating the outer Markov chain framework and the inner uniform sampling mechanism along with comprehensive error analysis, we establish our main theoretical results. On dense bipartite graphs, we have:
Theorem 4
Given a bipartite graph G = (V1, V2, E) with ∣V1∣ = m, ∣V2∣ = n and m ≥ n. If the minimum degree of vertices in V1 satisfies δ1(G) ≥ n − ξ, and the minimum degree of vertices in V2 satisfies δ2(G) ≥ m − ξ for some constant ξ, then for \(\lambda > \frac{1}{4}\), given error ϵ, we can achieve a sampling in time \(\tilde{O}({m}^{2}{n}^{(15+2\xi )}{(\log {\epsilon }^{-1})}^{2})\) such that the total variation distance between the sampling distribution and the ideal stationary distribution is at most ϵ.
On dense non-bipartite graphs, we have:
Theorem 5
Given a non-bipartite graph G = (V, E) with ∣V∣ = 2n, if the minimum degree of vertices in V satisfies δ(V) ≥ 2n − 1 − ξ for some constant ξ, then for \(\lambda > \frac{1}{4}\), given error ϵ, we can achieve a sampling in time \(\tilde{O}({n}^{2\xi+20}{(\log {\epsilon }^{-1})}^{3})\) such that the total variation distance between the sampling distribution and the ideal stationary distribution is at most ϵ.
We note that our theoretical performance bound in Theorem 4 is stronger for bipartite graphs. This advantage stems from the fact that the subroutine of uniformly sampling perfect matchings is known to be classically more efficient for bipartite graphs27. For context, the BipartiteGBS technique also leverages the unique properties of bipartite graphs, though for the different goal of encoding arbitrary matrices for hardness proofs32.
Numerical experiments
We conduct experiments to compare our algorithms with prior approaches. All results and plots are obtained by numerical simulations on a 10-core Apple M2 Pro chip with 16 GB memory and an NVIDIA L4 chip with 24 GB GPU memory via Python 3.11.0. We use the “networkx” library33 to store and manipulate graphs and the “thewalrus” library34 to calculate Hafnians from adjacency matrices. Specifically, the graph problems we aim to solve are defined as follows:
-
Max-Hafnian: Given an undirected graph with non-negative adjacency matrix and target subgraph size k, find the subgraph of size k with the maximum Hafnian value defined as Eq. (2).
-
Densest k-subgraph: Given an undirected graph with non-negative adjacency matrix and target subgraph size k, find the subgraph of size k with the maximum density value. Density denotes the number of edges divided by the number of vertices.
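For reference, both objectives can be checked by exhaustive search on tiny instances. The brute-force densest k-subgraph solver below (function and toy graph are our own) illustrates the density definition above:

```python
from itertools import combinations

def densest_k_subgraph(adj, k):
    """Exhaustive densest k-subgraph: maximize (#edges / #vertices)
    over all size-k vertex sets. Only feasible for small graphs."""
    n = len(adj)
    best_S, best_density = None, -1.0
    for S in combinations(range(n), k):
        num_edges = sum(adj[u][v] for u, v in combinations(S, 2))
        density = num_edges / k
        if density > best_density:
            best_S, best_density = S, density
    return best_S, best_density

# A planted triangle {0, 1, 2} inside a 5-vertex graph
adj = [[0, 1, 1, 0, 0],
       [1, 0, 1, 0, 0],
       [1, 1, 0, 1, 0],
       [0, 0, 1, 0, 1],
       [0, 0, 0, 1, 0]]
print(densest_k_subgraph(adj, 3))  # ((0, 1, 2), 1.0)
```

The exhaustive scan over \(\binom{n}{k}\) subsets is exactly what the sampling-based heuristics in this section avoid at n = 256.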
We design various unweighted graphs and set their total number of vertices n = 256, which is larger than the system size of the 144-mode quantum photonic device Jiuzhang12 used in the GBS experiment solving graph problems, as well as the scales of classical simulations. Specifically, the graphs we use are presented as follows:
-
G1 for max-Hafnian: The edge set contains a 16-vertex complete graph, and every remaining vertex pair has an edge with probability 0.2. As a result, finding the 16-vertex subgraph with maximal Hafnian essentially finds the 16-vertex complete graph.
-
G2 for densest k-subgraph: Vertex i has edges to vertices 0, 1, …, n − 1 − i, which means the degree of the vertices is decreasing. As a result, finding the densest subgraph with k = 80 essentially finds the induced subgraph of the first 80 vertices.
-
G3 for score advantage: An Erdős–Rényi graph, one of the most representative random graph models: each vertex pair has an edge with fixed probability 0.4.
-
G4 for double-loop Glauber dynamics: A random bipartite graph with 128 vertices in each part, where each vertex pair between the two parts has an edge with fixed probability 0.2.
-
G5 for sparse graph experiments: A random bipartite graph with 128 vertices in each part, with 10 ⋅ 256 edges chosen uniformly at random among all the 1282 possible pairs.
The classical algorithms and their variants enhanced by Glauber dynamics are presented in detail in Supplementary Note 5. We empirically choose a sufficient and appropriate mixing time and post-select the edge sets with the right size from the iterative process of the Glauber dynamics.
In addition to Algorithm 1 and Algorithm 2 introduced in the previous sections, we also implement another typical MCMC algorithm by Jerrum18 (Algorithm 3) and a quantum-inspired classical algorithm by Oh et al.17 for comparison.
Algorithm 3
Jerrum’s approach: Glauber dynamics for matchings18
Input: A graph G = (V, E), number of steps T.
Output: A sample of matching X of G such that \(\Pr [X]\propto {\lambda }^{| X| }\).
1 Initialize X0 as an arbitrary matching in G;
2 Initialize t ← 0;
3 while t < T
4 Choose a uniformly random edge e from E;
5 if e and Xt form a new matching then
6 Set M = Xt ∪ {e};
7 else if e is in Xt then
8 Set M = Xt\{e};
9 else if e has a common vertex v with \({e}^{{\prime} }\) in Xt and the remaining vertex u is not covered by Xt then
10 Set \(M={X}_{t}\cup \{e\}\backslash \{{e}^{{\prime} }\};\)
11 else
12 Set M = Xt;
13 Set Xt+1 = M with prob. \(\min \{1,{\lambda }^{| M| -| {X}_{t}| }\}\) and otherwise set Xt+1 = Xt;
14 t ← t + 1;
15 Output the matching XT;
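A Python sketch of Algorithm 3, including the edge-slide move that distinguishes it from Algorithm 1 (names and the edge-list representation are our own):

```python
import random

def jerrum_matchings(edges, T, lam, seed=0):
    """Jerrum's Metropolis chain: propose add / remove / slide, then
    accept with probability min(1, lam**(|M| - |X_t|))."""
    rng = random.Random(seed)
    X = set()
    for _ in range(T):
        u, v = rng.choice(edges)
        e = (u, v)
        covered = {w for f in X for w in f}
        if e in X:
            M = X - {e}                      # remove move
        elif u not in covered and v not in covered:
            M = X | {e}                      # add move
        else:
            # slide move: e touches exactly one matched edge e', and the
            # other endpoint of e is uncovered
            touching = [f for f in X if u in f or v in f]
            if len(touching) == 1 and (u not in covered or v not in covered):
                M = (X - {touching[0]}) | {e}
            else:
                M = X
        if rng.random() < min(1.0, lam ** (len(M) - len(X))):
            X = M
    return X

sample = jerrum_matchings([(0, 1), (1, 2), (0, 2)], 300, 1.0, seed=1)
print(len(sample) <= 1)  # True: a triangle admits matchings of size <= 1
```

The slide move keeps the chain moving when an added edge is blocked by exactly one matched edge, which is the main practical difference from Algorithm 1.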
Glauber dynamics verification and comparison
We first verify that the Glauber dynamics can enhance random search and simulated annealing by outperforming their original algorithms as corresponding baselines. We plot our experimental results in Fig. 2. The instance G1 serves as a test on the classic planted clique problem35, while G2 acts as a sanity check with a predictable density structure. The objective is to confirm that our algorithms can correctly identify optimal solutions.
Iterations = 1000, mixing time ≤10000, fugacity λ = c2, annealing parameter γ = 0.95. a Hafnian Random Search on G1 with k = 16, c = 0.1. The three enhanced variants are on average 240−300% higher than the original classical algorithm, and the upper confidence intervals are about 350% better. b Hafnian Simulated Annealing on G1 with k = 16, c = 0.1. The three enhanced variants are on average 20−50% higher than the original classical algorithm, and the upper confidence intervals are about 55% better. c Density Random Search on G2 with k = 80, c = 0.4. The two enhanced variants are on average 30−35% higher than the original classical algorithm, and the upper confidence intervals are about 32% better. d Density Simulated Annealing on G2 with k = 80, c = 0.4. The two enhanced variants are on average 10−15% higher than the original classical algorithm, and the upper confidence intervals are about 12% better.
We note that the quantum-inspired classical algorithm in Oh et al.17 requires collision-free outcomes for a given target click number, which appear with very low probability when k is large. Therefore, it is not applied to the densest k-subgraph problem in our experiments. This, to some extent, demonstrates the advantage of the Glauber dynamics in settings with an arbitrarily high click number k, since adding or removing edges imposes no collision constraint.
It can be seen from Fig. 2 that, compared to the original random search and simulated annealing algorithms, substituting the uniform update in each iteration with Algorithm 1, Algorithm 3, or the quantum-inspired classical algorithm significantly improves their performance in terms of Hafnian and density values, by up to 3×. These results confirm the correctness of our methods and their effectiveness in locating ground-truth solutions.
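For intuition on the objective being maximized: on an unweighted graph, the Hafnian of the adjacency matrix of a k-vertex subgraph counts its perfect matchings. The paper's experiments rely on The Walrus library for Hafnian evaluation; the pure-Python expansion below is only a hypothetical reference implementation for small k, since it runs in exponential time.

```python
def hafnian(A):
    """Hafnian via the perfect-matching expansion:
    Haf(A) = sum over perfect matchings M of prod_{(i,j) in M} A[i][j].
    For a 0/1 adjacency matrix this counts perfect matchings.
    Exponential time: a reference for small matrices only."""
    n = len(A)
    if n == 0:
        return 1          # empty product: the empty matching
    if n % 2:
        return 0          # odd-order graphs have no perfect matching
    total = 0
    rest = list(range(1, n))
    for j in rest:        # match vertex 0 with each possible partner j
        if A[0][j]:
            idx = [x for x in rest if x != j]
            sub = [[A[r][c] for c in idx] for r in idx]
            total += A[0][j] * hafnian(sub)
    return total
```

For example, the complete graph K4 has Hafnian 3, corresponding to its three perfect matchings.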
Score advantage comparison with GBS
To evaluate performance in a more challenging average-case scenario, we use the Erdös-Rényi random graph G3. The score advantage is defined as the ratio of the maximum Hafnian/density value acquired by random search enhanced by the double-loop Glauber dynamics to that acquired by the original random search at the end of a stage of 100 iterations. We remark that running too many iterations may cause the score advantage to collapse to 1, since even the original approach can eventually acquire the same optimal solution as the Glauber dynamics variants. We plot our experimental results enhanced by Algorithm 2 in Fig. 3; its probability distribution on unweighted graphs is comparable to the GBS distribution in ref. 12.
RS enhanced by Algorithm 2/original RS. Iterations = 100, mixing time ≤1000, fugacity λ = c². a Hafnian score advantage, up to 4×, on G3 with c = 0.6. b Density score advantage, up to 120%, on G3 with c = 0.6.
Ref. 12 used a 144-vertex complete graph with random complex weights, whereas our numerical experiment is conducted on the aforementioned 256-vertex Erdös-Rényi graph G3 with p = 0.4. Since all edge weights in the graph are real, we do not need the squared modulus. These differences may explain the gap between our score advantage and Fig. 3 of ref. 12. As shown in the left panel of our Fig. 3, our Hafnian score advantage of up to 4× is lower than that of the GBS experiments12, whereas in the right panel of Fig. 3 our density score advantage is comparable to theirs. The key message is that our method is effective and robust for finding dense, high-Hafnian subgraphs within generic, average-case random graphs.
Double-loop Glauber dynamics on bipartite graphs
In this section, we further conduct experiments on a randomly connected bipartite graph G4 to verify our theory on the double-loop Glauber dynamics in Section II C. This experiment is important because bipartite graphs are common in applications and are structures on which our uniform perfect-matching sampling subroutine is known to be efficient27. For a reason similar to that stated in Section II D 1, we do not apply the quantum-inspired classical algorithm of ref. 17 to the densest k-subgraph problem. We plot our experimental results in Fig. 4.
Iterations = 1000, mixing time ≤1000, fugacity λ = c², annealing parameter γ = 0.95. a Hafnian Random Search on G4 with k = 16, c = 0.4. The four enhanced variants are on average 12×−15× higher than the original classical algorithm, and the upper confidence intervals are about 10× better. b Hafnian Simulated Annealing on G4 with k = 16, c = 0.4. The four enhanced variants are on average 50−100% higher than the original classical algorithm, and the upper confidence intervals are about 70% better. c Density Random Search on G4 with k = 80, c = 0.8. The three enhanced variants are on average 8% higher than the original classical algorithm, and the upper confidence intervals are about 9% better. d Density Simulated Annealing on G4 with k = 80, c = 0.8. The three enhanced variants are on average 3% higher than the original classical algorithm, and the upper confidence intervals are about 2% better.
It can be seen from Fig. 4 that, compared to the original random search and simulated annealing algorithms, substituting the uniform update in each iteration with Algorithm 1, Algorithm 2, Algorithm 3, or the quantum-inspired classical algorithm comparably improves their performance in terms of Hafnian and density values, by up to 10×. This substantial improvement, especially from the Algorithm 2 variant, validates our theoretical findings in Section II C.
Sparse graph
In this work, we only provide a theoretical guarantee of Algorithm 2, in terms of polynomial mixing time, for dense graphs. As a complement from the practical perspective, in this subsection we conduct experiments on a sparse random bipartite graph G5 whose number of edges is m = O(n), whereas for G1, G2, G3, and G4 the parameter m scales with n².
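The two edge-density regimes can be reproduced with Erdös-Rényi sampling: a constant edge probability p gives m = Θ(n²), while p = c/n gives m = O(n). Below is a stdlib-only sketch with illustrative parameters of our own choosing (note that the paper's G5 is additionally bipartite, which this sketch omits).

```python
import random

def erdos_renyi(n, p, seed=None):
    """Sample G(n, p): each of the C(n, 2) possible edges is included
    independently with probability p."""
    rng = random.Random(seed)
    return [(i, j) for i in range(n) for j in range(i + 1, n)
            if rng.random() < p]

n = 256
dense_edges = erdos_renyi(n, 0.4, seed=0)       # m ~ p*n^2/2, i.e. m = Theta(n^2)
sparse_edges = erdos_renyi(n, 4.0 / n, seed=0)  # expected degree ~ 4, i.e. m = O(n)
```

With these parameters the dense instance has on the order of 13,000 edges while the sparse one has a few hundred, matching the two regimes compared in this subsection.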
It can be seen from Fig. 5 that the original random search and simulated annealing algorithms almost completely fail to find any subgraph with a perfect matching, and the acquired Hafnian value is nearly zero. In contrast, when the uniform update in each iteration is replaced with Algorithm 1, Algorithm 2, or Algorithm 3, the classical sampling algorithms remain capable of acquiring considerable Hafnian values.
Iterations = 1000, mixing time ≤1000, fugacity λ = c², annealing parameter γ = 0.95. a Hafnian Random Search on G5 with k = 16, c = 0.6. b Hafnian Simulated Annealing on G5 with k = 16, c = 0.6. In both cases, the four enhanced variants are capable of finding subgraphs with considerable Hafnian values, while the original random search and simulated annealing almost completely fail to find any perfect matching. c Density Random Search on G5 with k = 80, c = 0.8. d Density Simulated Annealing on G5 with k = 80, c = 0.8. Regarding the density values, the three enhanced variants are still 10% higher than the original algorithms with the same stopping time as in dense graphs, demonstrating the convergence of our MCMC-based algorithms on this sparse graph.
Discussion
In this work, we have proposed the double-loop Glauber dynamics to sample from GBS distributions on unweighted graphs and theoretically proved that it mixes in polynomial time on dense graphs. Numerically, we conduct experiments on graphs with 256 vertices, larger than the scales of previous GBS experiments as well as classical simulations. In particular, we show that both the single-loop and double-loop Glauber dynamics outperform the original classical algorithms for the max-Hafnian and densest k-subgraph problems, with up to 10× improvement.
Our theoretical results on the rapid mixing of the double-loop algorithm leave several natural directions for future research. We have only proved polynomial mixing-time results for dense graphs. A natural next step is to establish polynomial mixing-time bounds for more general graphs (e.g., planar graphs, Erdös-Rényi graphs) to better support our empirical observations. In particular, we identify the following key challenges for further investigation:
- For non-dense subgraphs, the ratio of the Hafnian to the corresponding complete subgraph's Hafnian may become exponentially large, preventing direct application of the complete-graph analysis, as discussed in Supplementary Note 3E of the Supplementary Information. Can we apply our enhanced canonical path method to derive mixing-time upper bounds for more general graphs, such as graphs with high symmetry?
- For the analysis of mixing time on non-dense non-bipartite graphs, another technical barrier lies in the uniform sampling of perfect matchings on such graphs, whose mixing time is an open problem and directly impacts the convergence guarantees of our inner-layer MCMC procedure.
- Our double-loop MCMC algorithm can also be extended to non-negative weighted graphs by slightly adjusting the transition probabilities of the Markov chain (see Algorithm S2 in Supplementary Note 5). However, the analysis of its mixing time becomes significantly more complicated and remains an open problem.
Data availability
The data and source code are available on our GitHub page https://github.com/Qubit-Fernand/GBS-MCMC. In addition, they are also available at the Zenodo repository36.
References
Arute, F. et al. Quantum supremacy using a programmable superconducting processor. Nature 574, 505–510 (2019).
Zhong, H.-S. et al. Quantum computational advantage using photons. Science 370, 1460–1463 (2020).
Zhong, H.-S. et al. Phase-programmable Gaussian boson sampling using stimulated squeezed light. Phys. Rev. Lett. 127, 180502 (2021).
Deng, Y.-H. et al. Gaussian boson sampling with pseudo-photon-number-resolving detectors and quantum computational advantage. Phys. Rev. Lett. 131, 150601 (2023).
Madsen, L. et al. Quantum computational advantage with a programmable photonic processor. Nature 606, 75 (2022).
Deshpande, A. et al. Quantum computational advantage via high-dimensional Gaussian boson sampling. Sci. Adv. 8, eabi7894 (2022).
Arrazola, J. M. & Bromley, T. R. Using Gaussian boson sampling to find dense subgraphs. Phys. Rev. Lett. 121, 030503 (2018).
Arrazola, J. M., Bromley, T. R. & Rebentrost, P. Quantum approximate optimization with Gaussian boson sampling. Phys. Rev. A 98, 012322 (2018).
Hamilton, C. S. et al. Gaussian boson sampling. Phys. Rev. Lett. 119, 170501 (2017).
Kruse, R. et al. Detailed study of Gaussian boson sampling. Phys. Rev. A 100, 032326 (2019).
Brádler, K., Dallaire-Demers, P.-L., Rebentrost, P., Su, D. & Weedbrook, C. Gaussian boson sampling for perfect matchings of arbitrary graphs. Phys. Rev. A 98, 032310 (2018).
Deng, Y.-H. et al. Solving graph problems using Gaussian boson sampling. Phys. Rev. Lett. 130, 190601 (2023).
Quesada, N. & Arrazola, J. M. Exact simulation of Gaussian boson sampling in polynomial space and exponential time. Phys. Rev. Res. 2, 023005 (2020).
Oh, C., Lim, Y., Fefferman, B. & Jiang, L. Classical simulation of boson sampling based on graph structure. Phys. Rev. Lett. 128, 190501 (2022).
Oh, C., Liu, M., Alexeev, Y., Fefferman, B. & Jiang, L. Classical algorithm for simulating experimental Gaussian boson sampling. Nat. Phys. 20, 1461–1468 (2024).
Rudelson, M., Samorodnitsky, A. & Zeitouni, O. Hafnians, perfect matchings and Gaussian matrices. Ann. Probab. 44, 2858 (2016).
Oh, C., Fefferman, B., Jiang, L. & Quesada, N. Quantum-inspired classical algorithm for graph problems by Gaussian boson sampling. PRX Quantum 5, 020341 (2024).
Jerrum, M. Counting, Sampling and Integrating: Algorithms and Complexity (Springer Science & Business Media, 2003).
Levin, D. A., & Peres, Y. Markov Chains and Mixing Times, Vol. 107 (American Mathematical Society, 2017).
Glauber, R. J. Time-dependent statistics of the Ising model. J. Math. Phys. 4, 294 (1963).
Banchi, L., Fingerhuth, M., Babej, T., Ing, C. & Arrazola, J. M. Molecular docking with Gaussian boson sampling. Sci. Adv. 6, eaax1950 (2020).
Eppstein, D., Löffler, M. & Strash, D. Listing all maximal cliques in large sparse real-world graphs. J. Exp. Algorithmics 18, 3 (2013).
Aaronson, S. & Arkhipov, A. The computational complexity of linear optics. In Proc. Forty-Third Annual ACM Symposium on Theory of Computing, STOC ’11 (Association for Computing Machinery, 2011).
Houde, M., McCutcheon, W. & Quesada, N. Matrix decompositions in quantum optics: Takagi/Autonne, Bloch-Messiah/Euler, Iwasawa, and Williamson. Can. J. Phys. 102, 497 (2024).
Horn R. A. & Johnson, C. R. Matrix Analysis (Cambridge University Press, 2012).
Valiant, L. G. The complexity of computing the permanent. Theor. Comput. Sci. 8, 189 (1979).
Jerrum, M., Sinclair, A. & Vigoda, E. A polynomial-time approximation algorithm for the permanent of a matrix with nonnegative entries. J. ACM (JACM) 51, 671 (2004).
Barvinok, A. Combinatorics and Complexity of Partition Functions, Vol. 30 (Springer, 2016).
Björklund, A., Gupt, B. & Quesada, N. A faster Hafnian formula for complex matrices and its benchmarking on a supercomputer. ACM J. Exp. Algorithmics 24, 1.11. https://doi.org/10.1145/3325111 (2019).
Qi, H. et al. Efficient sampling from shallow Gaussian quantum-optical circuits with local interactions. Phys. Rev. A 105, 052412 (2022).
Jerrum, M. & Sinclair, A. Approximating the permanent. SIAM J. Comput. 18, 1149 (1989).
Grier, D., Brod, D. J., Arrazola, J. M., Alonso, M. Bd. A. & Quesada, N. The complexity of bipartite Gaussian boson sampling. Quantum 6, 863 (2022).
Hagberg, A., Swart, P. J. & Schult, D. A. Exploring Network Structure, Dynamics, and Function Using NetworkX, Tech. Rep. (Los Alamos National Laboratory (LANL), 2008).
Gupt, B., Izaac, J. & Quesada, N. The Walrus: a library for the calculation of hafnians, Hermite polynomials and Gaussian boson sampling. J. Open Source Softw. 4, 1705 (2019).
Alon, N., Krivelevich, M. & Sudakov, B. Finding a large hidden clique in a random graph. Random Struct. Algorithms 13, 457–466 (1998).
Zhang Y. et al. GBS-MCMC https://doi.org/10.25080/TCWV9851 (2025).
Acknowledgements
We thank Weiming Feng and Yu-Hao Deng for helpful suggestions. We acknowledge the Bohrium platform developed by DP Technology for supporting our numerical experiments. This work was supported by the National Natural Science Foundation of China (Grant Numbers 92365117 and 62372006).
Author information
Authors and Affiliations
Contributions
T.L. conceived the project. Y.Z. provided the theoretical proof of the mixing time of MCMC-based algorithms with assistance from Z.Y. S.Z. conducted the numerical experiments with assistance from Z.W., R.Y. and Y.X. X.W. proposed the framework of the double-loop Glauber dynamics. All authors contributed to writing the paper. Y.Z., S.Z., X.W. and T.L. contributed to the review process.
Corresponding author
Ethics declarations
Competing interests
The authors declare no competing interests.
Peer review
Peer review information
Nature Communications thanks Changhun Oh and the other, anonymous, reviewers for their contribution to the peer review of this work. A peer review file is available.
Additional information
Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Supplementary information
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License, which permits any non-commercial use, sharing, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if you modified the licensed material. You do not have permission under this licence to share adapted material derived from this article or parts of it. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by-nc-nd/4.0/.
About this article
Cite this article
Zhang, Y., Zhou, S., Wang, X. et al. Efficient classical sampling from Gaussian boson sampling distributions on unweighted graphs. Nat Commun 16, 9335 (2025). https://doi.org/10.1038/s41467-025-64442-7