Abstract
The study of flocking in biological systems has identified conditions for self-organized collective behavior, inspiring the development of decentralized strategies to coordinate the dynamics of swarms of drones and other autonomous vehicles. Previous research has focused primarily on the role of the time-varying interaction network among agents while assuming that the agents themselves are identical or nearly identical. Here, we depart from this conventional assumption to investigate how inter-individual differences between agents affect stability and convergence in flocking dynamics. We show that flocks of agents with optimally assigned heterogeneous parameters significantly outperform their homogeneous counterparts, achieving 20−40% faster convergence to desired formations across various control tasks. These tasks include target tracking, flock formation, and obstacle maneuvering. In systems with communication delays, heterogeneity can enable convergence even when flocking is unstable for identical agents. Our results challenge existing paradigms in multi-agent control and establish system disorder as an adaptive, distributed mechanism to promote collective behavior in flocking dynamics.
Introduction
In 1987, Reynolds introduced three basic rules to emulate the flocking behavior of animals1: (1) agents must avoid collisions with nearby flock mates (separation), (2) agents must match their velocity with nearby agents (alignment), and (3) agents must move towards the center of mass of the local group of flock mates (cohesion). Models based on Reynolds' rules, known as boids, remain a standard solution in computer graphics for animating group behavior2. Beyond computer graphics, these rules have also found interdisciplinary applications in the modeling of sociobiological systems, stimulating research on the conditions required for the emergence of self-organization3,4,5,6,7,8,9,10,11,12. The underlying distributed decision-making strategies observed in animal flocks, which are governed mostly by local interactions, have inspired the design of multi-agent engineering systems13, such as swarms of unmanned aerial vehicles (UAVs). Swarms of small vehicles offer a cost-effective alternative to large vehicles in a wide variety of applications, ranging from surveillance and reconnaissance14 to target tracking15, operation management16, and transportation17. However, the deployment of these technologies faces fundamental challenges associated with controlling a large number of agents18,19,20. Overcoming these challenges requires the discovery of scalable decision-making mechanisms that can adapt to dynamic environments, operate under data communication constraints, and coordinate hundreds of agents.
The analysis of flocking dynamics is often formulated in the context of multi-agent consensus problems. Agents are said to achieve consensus if they all eventually agree on a common state or behavior (e.g., a specified formation) despite operating only with local information on the state of the flock. Lack of consensus can lead to group fragmentation in the presence of stochastic disturbances, physical obstacles, and loss of communication21. Previous studies have focused mainly on the role of the interaction (communication) network22,23,24,25, including the impact of the time-varying topology26,27,28,29 and data communication constraints30,31,32,33,34. In this context, Lyapunov stability has been a major tool for deriving the conditions for flock formation in numerous control tasks19,21,35.
A common implicit assumption in multi-agent studies is that consensus is facilitated when agents are identical or nearly identical. Still, empirical research on animal behavior, such as fish schooling36,37 and ant synchronization38, has identified scenarios in which inter-individual differences can facilitate coordination39. In the study of network synchronization, disorder in the parameters of the oscillators has been shown to improve synchronization in various systems40,41, including power grids42, electronic circuits43,44, coupled lasers45,46, neuronal oscillators47, and chemical oscillators48. Experimental studies have explored similar effects in self-organization and pattern formation49,50,51,52. Despite these advances and the connections between synchronization and consensus53,54, heterogeneity among agents has yet to be explored as a potential framework to promote flocking.
In this paper, we investigate the impact of optimizing inter-individual differences in real-time as an adaptive mechanism to enhance flocking behavior. We show that the stability and convergence rate of the collective dynamics substantially improve for suitable heterogeneous parameters when compared to their homogeneous counterparts. Despite the (possibly nonlinear) time-varying dynamics, Lyapunov stability analysis shows that this optimization is tightly bounded by the minimization of the largest Lyapunov exponent of the system. Our formulation highlights the dependence of the flocking dynamics on the interplay between the parameters of the agents, the flock formation, and the underlying communication network. The results are established for several control tasks and flocking models, with increasing degrees of complexity. We first consider a system designed for target tracking and formation keeping. We then generalize the results to a time-delay consensus model and a gradient-based flocking model. The latter accounts for sparse communication networks, emergent formations, and obstacle avoidance. In all scenarios, we show evidence that heterogeneous parameter optimization can improve the flock convergence rate by 20−40% relative to homogeneous parameter optimization under the same constraints. We further show that consensus can be achieved over a larger range of communication delays for heterogeneous systems than for homogeneous ones.
Results
Flocking model with pre-assigned formation
To describe the dynamics of a flock of N agents in an m-dimensional Euclidean space, we represent each agent i = 1, …, N as a state variable xi = [qi, pi], where \({{\boldsymbol{q}}}_{i}\in {{\mathbb{R}}}^{m}\) and \({{\boldsymbol{p}}}_{i}\in {{\mathbb{R}}}^{m}\) are respectively the position and momentum of agent i. The model assumes unit mass, with all quantities expressed in dimensionless form: time, position, and velocity are scaled relative to a chosen reference. The full-system state is denoted by the column vector \({\boldsymbol{x}}=[{{\boldsymbol{x}}}_{1},\ldots,{{\boldsymbol{x}}}_{N}]\in {{\mathbb{R}}}^{n}\), where n = 2Nm. Here, the flock is tasked to maintain a pre-specified formation and follow a (physical or virtual) target moving in space, which is represented by the state vector xt(t) = [qt(t), pt(t)]. The target may represent a moving vehicle/animal or a pre-programmed trajectory. To perform this task, we consider the following multi-agent model22,55:
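An explicit form of the model, reconstructed here as a sketch consistent with the tracking-error dynamics of Eq. (2) (where J1 = B + L and J2 = γ(C + L)); the feedforward term \({\dot{{\boldsymbol{p}}}}_{t}\) and the exact notation are our assumptions:

$$\dot{{\boldsymbol{q}}}_{i}={{\boldsymbol{p}}}_{i},\qquad \dot{{\boldsymbol{p}}}_{i}={\dot{{\boldsymbol{p}}}}_{t}-{b}_{i}\left[{{\boldsymbol{q}}}_{i}-({{\boldsymbol{q}}}_{t}+{{\boldsymbol{r}}}_{i})\right]-\gamma {c}_{i}\left({{\boldsymbol{p}}}_{i}-{{\boldsymbol{p}}}_{t}\right)-\mathop{\sum }\limits_{j=1}^{N}{A}_{ij}\left[({{\boldsymbol{q}}}_{i}-{{\boldsymbol{r}}}_{i})-({{\boldsymbol{q}}}_{j}-{{\boldsymbol{r}}}_{j})+\gamma ({{\boldsymbol{p}}}_{i}-{{\boldsymbol{p}}}_{j})\right]\qquad (1)$$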
where \({{\boldsymbol{r}}}_{i}\in {{\mathbb{R}}}^{m}\) indicates the intended position of agent i within the desired formation relative to the target position qt. The parameters b = [b1, …, bN] and c = [c1, …, cN] are the controller gains associated with position and velocity feedback, respectively. This feedback control law ensures that the agents achieve target tracking: qi(t) → qt(t) + ri and pi(t) → pt(t) as t → ∞, ∀ i. The adjacency matrix \(A\in {{\mathbb{R}}}^{N\times N}\) encodes the pairwise coupling between agents, guaranteeing the formation keeping and velocity matching (per Reynolds’ rules #1 and #2, respectively). The damping parameter γ parameterizes the velocity feedback gain relative to the position gain. Figure 1a (inset) illustrates the model for a flock of agents sustaining a circular formation centered at the virtual target.
a Tracking error as a function of time for an optimal flock of N = 30 agents in the 2D space. The blue and orange lines represent flocks of heterogeneous and homogeneous agents, respectively, in which feedback gains are optimized in real time at every time step w = T = 0.1. For reference, the black line represents the non-optimal case of randomly assigned time-independent feedback gains. The solid lines represent the median over 100 realizations with different initial conditions (and parameters in the non-optimal case), while the shaded areas indicate the first and third quartiles. The insets show a snapshot of the agents' positions at t = 4 for the homogeneous flock (top inset) and heterogeneous flock (bottom inset). Agents are color-coded by their velocity feedback gain ci. In this simulation, the agents start at random stationary positions around the origin (\({{\boldsymbol{q}}}_{i}(0) \sim {\mathcal{U}}{[-2,2]}^{2}\)) and are tasked to track a virtual target (red dot) that starts far away from the agents (qt(0) = [100, 100]) and moves with constant velocity (pt(t) = [100, 0], ∀t ≥ 0). b Settling time ts as a function of the tolerance ϵ, where the relationship ts vs. ϵ is illustrated by the dashed lines in panel a. c Histogram of the steady-state error \(\left\Vert {\boldsymbol{e}}(t)\right\Vert\) of the heterogeneous (blue) and homogeneous (orange) flock across all realizations in panel a for 20 ≤ t ≤ 30. d, e Settling time ts as a function of the interaction range β (d) and the number of agents N (e) for heterogeneous (blue) and homogeneous (orange) flocks. The dots and error bars represent the average and one standard deviation over 100 realizations, respectively. All simulations implement Eq. (1) with additive Gaussian noise to probe the robustness to small perturbations. See Methods for details on the simulation parameters and Supplementary Movie 1 for an animation of the dynamics.
In flocking dynamics, both biological and artificial agents typically interact more strongly with nearby agents, either due to sensing and information-processing constraints9,56 or as a mechanism to ensure alignment and avoid collisions21,26. Accordingly, we define the adjacency matrix as a time-dependent matrix26 \({\tilde{A}}_{ij}(t)=K{({\rho }^{2}+{\Vert {{\boldsymbol{q}}}_{i}(t)-{{\boldsymbol{q}}}_{j}(t)\Vert }^{2})}^{-\beta }\), where ρ = 0.1, K > 0 is the coupling strength, and β ≥ 0 represents the interaction range (larger β corresponds to weaker interaction at long inter-agent distances, as illustrated in Fig. S6). To account for communication constraints in swarms of UAVs, we assume that positional and velocity data are exchanged periodically among agents56. Thus, the entries of the adjacency matrix are modeled as piecewise-constant functions57, that is, \({A}_{ij}(t)={\tilde{A}}_{ij}({t}_{k})\), for t ∈ [tk, tk + T], where T is the time interval between communication events and tk = kT, for \(k\in {\mathbb{N}}\), is the update time instant. Note that, even though agents operate with information about the network structure updated at discrete times, the dynamics of system (1) are still continuous. Denoting the corresponding graph of the adjacency matrix A(tk) as \({\mathcal{G}}(A({t}_{k}))\), it follows from ref. 56 (Theorem 2.31) that, since \({\bigcup }_{k}{\mathcal{G}}(A({t}_{k}))\) is undirected and connected, consensus is guaranteed to be achieved asymptotically: (qi(t) − ri) − (qj(t) − rj) → 0 and pj(t) − pi(t) → 0 as t → ∞ for all pairs (i, j).
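For concreteness, the piecewise-constant coupling evaluated at a communication instant, and the Laplacian used below, can be sketched as follows (a minimal NumPy illustration; the function names are ours, not from the paper):

```python
import numpy as np

def adjacency(q, K=1.0, rho=0.1, beta=1.0):
    """Coupling A_ij = K (rho^2 + ||q_i - q_j||^2)^(-beta), evaluated at a
    communication instant t_k and then held piecewise constant for a period T."""
    diff = q[:, None, :] - q[None, :, :]      # (N, N, m) pairwise differences
    d2 = np.sum(diff**2, axis=-1)             # squared inter-agent distances
    A = K * (rho**2 + d2) ** (-beta)
    np.fill_diagonal(A, 0.0)                  # no self-coupling
    return A

def laplacian(A):
    """Graph Laplacian L = D - A, with D = diag(row sums of A)."""
    return np.diag(A.sum(axis=1)) - A
```

Larger β suppresses the off-diagonal entries at long inter-agent distances, recovering the weak-coupling regime discussed later in connection with Fig. 1d.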
Optimal flock formation for target tracking
Given that consensus is guaranteed theoretically, our goal is to optimize the controller parameters bi and ci to maximize the convergence rate towards the intended formation (centered at the target). To this end, we define the tracking error of each agent i as ei = [eq,i, ep,i] = [qi − (qt + ri), pi − pt], in which eq = [eq,1, …, eq,N] and ep = [ep,1, …, ep,N]. From Eq. (1), the tracking error dynamics are given by
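In stacked form, the error dynamics read (a reconstruction from the definitions that follow):

$$\dot{{\boldsymbol{e}}}(t)=J(t)\,{\boldsymbol{e}}(t),\qquad J(t)=\left[\begin{array}{cc}{0}_{Nm} & {I}_{Nm}\\ -{J}_{1}(t)\otimes {I}_{m} & -{J}_{2}(t)\otimes {I}_{m}\end{array}\right],\qquad (2)$$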
where 0Nm is an Nm × Nm zero matrix, INm is an identity matrix of size Nm, and ⊗ denotes the Kronecker product. It follows that J1(t) = B + L(t) and J2(t) = γ(C + L(t)), where B = diag(b) is the position feedback matrix, C = diag(c) is the velocity feedback matrix, and diag( ⋅ ) denotes a diagonal matrix with the respective input vector elements along its diagonal. The Laplacian matrix is given by L(t) = D(t) − A(t), where D(t) = diag(∑jA1j(t), …, ∑jANj(t)).
Eq. (2) is a linear time-varying (LTV) system whose solution is given by e(t) = Φ(t, 0)e(0), where e(0) is the initial condition and Φ(t, 0) is the state-transition matrix. Since A(t) and hence J(t) are piecewise-constant matrices, it follows that \({\mathbf{\Phi }}(t,0)=\mathop{\prod }\nolimits_{k=0}^{t/T}{\mathbf{\Phi }}({t}_{k}+T,{t}_{k})=\mathop{\prod }\nolimits_{k=0}^{t/T}{e}^{J({t}_{k})T}\) (we assume for simplicity that \(t/T\in {\mathbb{N}}\)). Therefore, we have
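Taking norms, and assuming diagonalizable \(J({t}_{k})\) (nontrivial Jordan blocks would contribute additional polynomial prefactors), this yields the sketch:

$$\Vert {\boldsymbol{e}}(t)\Vert \le \eta \mathop{\prod }\limits_{k=0}^{t/T}{e}^{{\Lambda }_{\max }(J({t}_{k}))\,T}\,\Vert {\boldsymbol{e}}(0)\Vert,\qquad (3)$$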
where \(\eta={\prod }_{k}\Vert {U}_{k}\Vert \Vert {U}_{k}^{-1}\Vert\) and Uk is the transformation matrix in the Jordan decomposition \(J({t}_{k})={U}_{k}^{-1}{\tilde{J}}_{k}{U}_{k}\) (\({\tilde{J}}_{k}\) is the corresponding Jordan matrix). In this case, the convergence rate of e(t) is characterized by the spectral properties of J(tk), ∀k, and is upper bounded by the largest Lyapunov exponent \({\Lambda }_{\max }(J({t}_{k}))=\mathop{\max }\limits_{i}{\rm{Re}}\{{\lambda }_{i}(J({t}_{k}))\}\), where λi(J(tk)) is the ith eigenvalue of J(tk). To maximize the convergence rate of the tracking error e(t), we formulate the optimization problem as
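A reconstruction of the optimization problem consistent with the constraints described next:

$$\mathop{\min }\limits_{{\boldsymbol{b}},\,{\boldsymbol{c}}}\,{\Lambda }_{\max }\left(J({t}_{k})\right)\quad {\rm{subject\ to}}\quad {\bf{0}} < {\boldsymbol{b}}\le {b}_{\max }{\bf{1}},\qquad {\bf{0}} < {\boldsymbol{c}}\le {c}_{\max }{\bf{1}},\qquad (4)$$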
for each time step tk, where the inequality applies element-wise. Since \({\mathcal{G}}(A(t))\) is strongly connected for all t, L(t) has only one null eigenvalue and hence, by Gershgorin’s disc theorem58, the eigenvalues of −J1(t) and −J2(t) have strictly negative real parts if the feedback gains satisfy the lower bound bi, ci > 0, ∀i. The upper bounds \({b}_{\max }\) and \({c}_{\max }\) represent physical limitations in the controller actuation.
At each time instant tk, b(k) and c(k) denote the optimal feedback gains given by the solution of Eq. (4). These optimal gains, which depend on the agents’ positions through J(tk), are set constant within each interval [tk, tk + w], where w is the optimization window size. They are then recurrently reoptimized for subsequent time windows. The optimization window size w is assumed to be synchronous with the interval T between communication events such that w = κT, where \(\kappa \in {\mathbb{N}}\). Except when noted otherwise, we set both windows to have the same size (i.e., κ = 1). In what follows, we implement the real-time optimization procedure for two scenarios:
1. optimal flocks of homogeneous agents, where parameters are optimized subject to the constraint that all agents have identical gains, i.e., b(k) = [b(k), …, b(k)] and c(k) = [c(k), …, c(k)];
2. optimal flocks of heterogeneous agents, where gains are optimized independently for each agent, i.e., \({{\boldsymbol{b}}}^{(k)}=[{b}_{1}^{(k)},\ldots,{b}_{N}^{(k)}]\) and \({{\boldsymbol{c}}}^{(k)}=[{c}_{1}^{(k)},\ldots,{c}_{N}^{(k)}]\).
Thus, the feedback matrices are also piecewise-constant functions: B(t) = diag(b(k)) and C(t) = diag(c(k)), ∀t ∈ [tk, tk + w]. The procedure to solve the optimization problem (4) is discussed later in this section.
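As a minimal numerical illustration of the two scenarios, the objective \({\Lambda }_{\max }(J({t}_{k}))\) and a crude bounded search over the gains can be sketched as follows (NumPy; the function names and the random-search strategy are our own stand-in, not the solvers actually used, which are discussed later in this section):

```python
import numpy as np

def lyap_max(b, c, L, gamma=1.0):
    """Largest real part of the eigenvalues of
    J = [[0, I], [-(B + L), -gamma (C + L)]], with B = diag(b), C = diag(c).
    (The Kronecker factor I_m only replicates eigenvalues, so it is dropped.)"""
    N = L.shape[0]
    J = np.block([
        [np.zeros((N, N)), np.eye(N)],
        [-(np.diag(b) + L), -gamma * (np.diag(c) + L)],
    ])
    return np.max(np.linalg.eigvals(J).real)

def optimize_gains(L, gmax=10.0, gamma=1.0, heterogeneous=True,
                   n_trials=3000, seed=0):
    """Random-search stand-in for problem (4): minimize lyap_max subject to
    0 < b_i, c_i <= gmax. In the homogeneous case, a single (b, c) pair is
    shared by all agents."""
    rng = np.random.default_rng(seed)
    N = L.shape[0]
    best = (None, None, np.inf)
    for _ in range(n_trials):
        if heterogeneous:
            b = rng.uniform(1e-3, gmax, N)
            c = rng.uniform(1e-3, gmax, N)
        else:
            b = np.full(N, rng.uniform(1e-3, gmax))
            c = np.full(N, rng.uniform(1e-3, gmax))
        val = lyap_max(b, c, L, gamma)
        if val < best[2]:
            best = (b, c, val)
    return best
```

For a connected graph and positive gains, `lyap_max` is strictly negative, consistent with the stability argument above; in practice the random search would be replaced by the higher-order solvers described later.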
Heterogeneous versus homogeneous flocks
Figure 1 compares the performance in the target tracking task for optimized flocks of heterogeneous and homogeneous agents by first considering a target that moves with constant speed. Figure 1a shows that the convergence of a flock to the desired formation centered at the virtual target is substantially faster for the optimal flocks when compared to a flock of agents with randomly assigned parameters (black line). However, the heterogeneous flock exhibits a noticeably faster convergence than the homogeneous one. As illustrated for the specified tolerance ϵ = 10−2, the heterogeneous flock converges on average within time ts = 7.16 while the homogeneous flock takes ts = 11.62, where the settling time ts is defined such that \(\Vert {\boldsymbol{e}}(t)\Vert < \epsilon\) for t ≥ ts. The optimized homogeneous flocks shown in Fig. 1a exhibit an underdamped response, as evidenced in the oscillations of the tracking error \(\Vert {\boldsymbol{e}}(t)\Vert\); in contrast, the heterogeneous flocks display a strongly damped response, with minimal oscillations around the target. Figure 1b shows the dependence of the settling time on the specified tolerance. The slopes \(\alpha=\Delta {t}_{{\rm{s}}}/\Delta {\log }_{10}\epsilon\) for the heterogeneous and homogeneous flocks are respectively αhet ≈ −1.32 and αhom ≈ −2.06, indicating that the optimal heterogeneous flocks converge on average 36% faster than their homogeneous counterparts for any threshold within the depicted range.
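The settling time used throughout, the smallest \({t}_{{\rm{s}}}\) such that \(\Vert {\boldsymbol{e}}(t)\Vert < \epsilon\) for all \(t\ge {t}_{{\rm{s}}}\), can be computed from a sampled error trajectory as follows (a small helper of our own naming):

```python
import numpy as np

def settling_time(t, err_norm, eps):
    """Smallest sampled time t_s such that err_norm[k] < eps for every
    sample with t[k] >= t_s; returns np.inf if the error never settles."""
    above = np.flatnonzero(np.asarray(err_norm) >= eps)
    if above.size == 0:
        return t[0]                  # already within tolerance at the start
    if above[-1] == len(t) - 1:
        return np.inf                # still above tolerance at the end
    return t[above[-1] + 1]
```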
Figure 1c shows that, along with the improvement in the convergence rate, optimizing agent heterogeneity also enhances the robustness of flock formation against noise. (To comprehensively account for the effect of noise, additive Gaussian noise is incorporated into all simulations in the paper; see Methods for details.) Indeed, the tracking error at steady state is on average 13% smaller for heterogeneous flocks. Such an improvement is expected given that the optimization problem (4) is also related to the stability of the flocking model (1) against small disturbances (in the linear regime); note that, for t → ∞, the time-varying matrix J(t) coincides with the Jacobian matrix of Eq. (1) evaluated at the equilibrium point e = 0. This analysis demonstrates that heterogeneity can also confer improved robustness in noisy environments.
The performance improvement promoted by heterogeneity depends on the choice of the network parameters, including the interaction range β and the number of agents N. Figure 1d shows that for large β the settling times ts of homogeneous and heterogeneous flocks increase and approach each other. This occurs because, at long inter-agent distances \(\Vert {{\boldsymbol{q}}}_{i}(t)-{{\boldsymbol{q}}}_{j}(t)\Vert > 1\), a higher β reduces the coupling Aij(t), leading to L(t) ≈ 0 and, consequently, J1(t) ≈ B(t) and J2(t) ≈ γC(t). Thus, in this case, assigning larger gains bi and ci for all i directly minimizes \({\Lambda }_{\max }(J(t))\). An analogous analysis can be conducted for sufficiently small β such that Aij(t) ≈ K for all pairs (i, j) and hence the Laplacian matrix L has all nonzero eigenvalues equal to KN. In summary, if J1 is dominated by L or B (and J2 is dominated by L or C), the optimal parameters b and c that minimize \({\Lambda }_{\max }(J)\) are given by a homogeneous solution. Therefore, it is the interplay between the network structure, encoded by L(t), and the nodal dynamics, encoded by the feedback gains B(t) and C(t), that enables heterogeneity to promote optimal stability and optimal convergence rate in the flocking model. Importantly, this mechanism is scalable in the sense that heterogeneous flocks exhibit a higher convergence rate even when the number of agents increases (Fig. 1e).
Performance analyses for different types of spatial formations, target trajectories (deterministic and stochastic), and optimization constraints are reported in the Supplementary Information (SI), Section S1. The results confirm that heterogeneous flocks also attain faster convergence in different scenarios.
Real-time parameter optimization
We now investigate the reliance of the results presented thus far on the real-time optimization procedure, specifically the choice of the time intervals T and w. Figure 2a shows that, on average, the settling time increases with larger T and larger w. The decline in performance results from an increased lag between the continuous changes in the agents’ positions and the discrete updates of the network structure (determined by T) and the agents’ parameters (determined by w). In particular, the settling time of heterogeneous flocks is more sensitive to the choice of T and w, which leads to a decrease in its relative improvement with respect to the homogeneous flock from 31% (for T = w = 0.1) to 21% (for T = w = 30; left panel) and 23% (for w = 10, T = 0.1; right panel). As shown in Eq. (3), improving the flock’s convergence time is tied to the minimization of the Lyapunov exponent \({\Lambda }_{\max }(J({t}_{k}))\). Figure 2b shows that \({\Lambda }_{\max }(J({t}_{k}))\) is generally smaller for heterogeneous flocks, even though the relative improvement may be smaller (or negative during short transients) depending on the choice of T. Nonetheless, the heterogeneous flock retains superior performance even when the optimization procedure is computed every κ = 100 rounds of communication events (e.g., w = 10 and T = 0.1), demonstrating the robustness of the approach with respect to the optimization window.
a Settling times ts as functions of the time interval T between communication events (left for w = T, ϵ = 0.01) and the optimization window size w (right for T = 0.1, ϵ = 0.01). The data points represent an average over 100 independent realizations of the initial conditions, and the error bars indicate one standard deviation. b Lyapunov exponent \({\Lambda }_{\max }\) of the optimal flock as a function of time for different parameter choices: T = 0.1 (left), T = 1 (middle), and T = 10 (right); w = T in all cases. The color scheme is the same as in panel a, with the blue and orange lines representing optimal flocks of heterogeneous and homogeneous agents, respectively. The solid lines represent the median over 100 realizations, and the shaded areas indicate the first and third quartiles. c Optimal position (top) and velocity (bottom) feedback gains as functions of time in a representative realization in a heterogeneous flock (a colored line for each agent) and a homogeneous flock (a common black line for all agents). The different columns correspond to the choices of T and w in panel b. In all cases, the number of agents is N = 30 and the simulation time is 30 (panels b, c are zoomed in to facilitate visualization). See “Methods” for details on the system parameters.
The higher sensitivity of heterogeneous flocks to parameter choices can be observed in Fig. 2c: the optimal gains b(k) and c(k) change non-trivially across time windows, whereas the optimal homogeneous gains remain roughly the same as the agents approach the desired formation. This suggests that changes in the agents’ relative positions impact the optimal assignment of gains more strongly in heterogeneous systems than in homogeneous ones. In the SI, Section S2, we also demonstrate that the optimization procedure is robust for networks with varying levels of connectivity, and that there is no simple relation between the agent’s optimal gains and its structural properties within the flock (e.g., distance to target, node in-degree, and network symmetries). As shown next, the optimal gains are instead strongly determined by the spectral properties of the Jacobian matrix J.
Lyapunov exponent minimization
To explicitly characterize the landscape of the optimization problem (4), let us consider a simplified setup in which the position gains and velocity gains are equal for each agent (i.e., bi = ci, ∀i). For a fixed, time-independent Laplacian matrix L, this leads to the following Jacobian matrix
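Under these assumptions, the Jacobian reduces to (a reconstruction consistent with Eq. (2) and bi = ci):

$${J}^{{\prime} }=\left[\begin{array}{cc}{0}_{N} & {I}_{N}\\ -(B+L) & -\gamma (B+L)\end{array}\right],\qquad (5)$$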
where B = diag(b). In the case of homogeneous gain (B = bIN), ref. 56 (Theorem 5.8) establishes that all eigenvalues of \({J}^{{\prime} }\) have negative real part if \(\gamma > \bar{\gamma }\), where
and νi is the ith eigenvalue of − (bIN + L). Now, let us denote the optimal homogeneous gain as the N-dimensional vector \({{\boldsymbol{b}}}_{{\rm{hom }}}^{*}=[{b}_{{\rm{hom }}}^{*},\ldots,{b}_{{\rm{hom }}}^{*}]=\arg \min_{\boldsymbol{b}}{\Lambda }_{\max }({J}^{{\prime} }({\boldsymbol{b}}))\) subject to bi = b, ∀i. Based on Eq. (6), we can show that the optimal homogeneous gain is given by
where ℓ1 < … < ℓN are the eigenvalues of L (see SI, Section S3, for a derivation). Eq. (7) draws a direct link between the network structure and the agents’ parameters—a relation analogous to results previously established for the synchronization of coupled oscillators59,60 and power grids61,62.
Figure 3 illustrates the stability landscape, characterized by \({\Lambda }_{\max }({J}^{{\prime} })\), for flocks of different sizes. At the homogeneous optimum \({{\boldsymbol{b}}}_{{\rm{hom }}}^{*}\), \({\Lambda }_{\max }({J}^{{\prime} }({{\boldsymbol{b}}}_{{\rm{hom }}}^{*}))\) is non-differentiable and has positive directional derivative \(\frac{{\rm{d}}{\Lambda }_{\max }({J}^{{\prime} })}{{\rm{d}}{\boldsymbol{b}}}{| }_{{{\boldsymbol{b}}}_{{\rm{hom }}}^{*}} > 0\) along any vector \({\boldsymbol{b}}\in {{\mathbb{R}}}^{N}\) (SI, Section S3). Yet, the homogeneous gain \({{\boldsymbol{b}}}_{{\rm{hom }}}^{*}\) is not the best solution for \({\min }_{{\boldsymbol{b}}}{\Lambda }_{\max }({J}^{{\prime} }({\boldsymbol{b}}))\) when b is unconstrained. Although it may seem impossible to further minimize \({\Lambda }_{\max }({J}^{{\prime} }({{\boldsymbol{b}}}_{{\rm{hom }}}^{*}))\) locally, ref. 42 has shown the existence of curved paths out of \({{\boldsymbol{b}}}_{{\rm{hom }}}^{*}\) along which \({\Lambda }_{\max }\) locally decreases in the particular case of power-grid networks. Crucially, following these paths, the largest Lyapunov exponent reaches a minimum at some point \({{\boldsymbol{b}}}_{{\rm{het}}}^{*}\) corresponding to a heterogeneous choice of parameters. These results can also be extended to the multi-agent consensus model considered here (SI, Section S7). As illustrated in Fig. 3, such curved paths follow the surfaces of codimension one in the stability landscape where the Jacobian matrix \({J}^{{\prime} }\) is degenerate in the sense of having at least two eigenvalues with identical real parts. The fact that the paths connecting \({{\boldsymbol{b}}}_{{\rm{hom }}}^{*}\) and \({{\boldsymbol{b}}}_{{\rm{het}}}^{*}\) are locally curved hinders the effectiveness of first-order methods (e.g., gradient descent) in solving the optimization problem (4). 
To circumvent this issue, we employ solvers that incorporate higher-order approximations of the objective function \({\Lambda }_{\max }\) (e.g., by estimating the Hessian matrix), such as the interior-point method or quasi-Newton methods63. See Methods for computational details on the optimization solver used in the simulations.
a Lyapunov exponent \({\Lambda }_{\max }({J}^{{\prime} })\) as a function of the feedback gains (b1, b2, b3) for N = 3 agents. The color-coded section shows the stability landscape on a plane containing both the optimal homogeneous gain \({{\boldsymbol{b}}}_{{\rm{hom }}}^{*}\) (orange dot) and the optimal heterogeneous gain \({{\boldsymbol{b}}}_{{\rm{het}}}^{*}\) (green dot). The flock formation is stable (unstable) for \({\Lambda }_{\max } < 0\) (\({\Lambda }_{\max } > 0\)). b Lyapunov exponent \({\Lambda }_{\max }({J}^{{\prime} })\) for N = 10 on a plane (ξ1, ξ2) containing \({{\boldsymbol{b}}}_{{\rm{hom }}}^{*}\) and \({{\boldsymbol{b}}}_{{\rm{het}}}^{*}\). In both panels, the white curves indicate the cross-sections of a hypersurface (codimension 1) corresponding to a single degeneracy of the real parts of the eigenvalues of the Jacobian \({J}^{{\prime} }\).
Distributed optimization
The multi-agent system (1) is decentralized as agents rely primarily on local information from nearby peers, especially when β is large or A is sparse (see Fig. S7 for performance analysis on sparse networks). Yet, the optimization problem (4) has thus far been formulated in a centralized form that requires global knowledge of the adjacency matrix A(t) and hence the full vector of agent positions q(t). When w is large, the optimization operates on slow timescales, allowing enough time for decentralized agent-to-agent communication to gather state information and distribute optimized parameters. In contrast, for small w, the flocking dynamics may outpace the computational time required for data collection, optimization, and distribution. To address this challenge, we propose a distributed optimization variant of the approach that enables each agent to solve the optimization based on local information.
Figure 4a illustrates the distributed approach. Each agent i is assumed to only access state information of the agents within a spatial neighborhood \({{\mathcal{N}}}_{i}({\boldsymbol{q}})=\{j\,:\,\Vert {{\boldsymbol{q}}}_{i}-{{\boldsymbol{q}}}_{j}\Vert \le R\}\), where R is the sensing range. This partial information defines a subgraph \({{\mathcal{G}}}_{i}\subseteq {\mathcal{G}}\) of the communication network, corresponding to the network of agents within \({{\mathcal{N}}}_{i}\). At each optimization time tk, every agent i independently performs the following steps: (i) retrieves the positions of neighboring agents within \({{\mathcal{N}}}_{i}\), (ii) constructs the corresponding subgraph \({{\mathcal{G}}}_{i}\), and (iii) solves a local, lower-dimensional optimization problem associated with subgraph \({{\mathcal{G}}}_{i}\) to determine its optimal gains \({b}_{i}^{(k)}\) and \({c}_{i}^{(k)}\) over the interval [tk, tk + w] (see “Methods” for details). Thus, each agent adapts its parameters based on both local information and local computation, reducing computational burden and enabling parallelization.
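Steps (i) and (ii) of the local procedure can be sketched as follows; each agent would then solve the low-dimensional analogue of problem (4) on its local Laplacian and keep only its own entries of the resulting gain vectors (a NumPy sketch with function names of our choosing):

```python
import numpy as np

def neighborhood(q, i, R):
    """N_i(q): indices of agents within sensing range R of agent i
    (including i itself, which belongs to its own subgraph G_i)."""
    return np.flatnonzero(np.linalg.norm(q - q[i], axis=1) <= R)

def local_laplacian(q, i, R, K=1.0, rho=0.1, beta=1.0):
    """Laplacian of the subgraph G_i induced by N_i(q), built with the same
    distance-dependent coupling as the global model."""
    idx = neighborhood(q, i, R)
    qs = q[idx]
    d2 = np.sum((qs[:, None, :] - qs[None, :, :]) ** 2, axis=-1)
    A = K * (rho**2 + d2) ** (-beta)
    np.fill_diagonal(A, 0.0)
    return idx, np.diag(A.sum(axis=1)) - A
```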
a Schematic diagram of the distributed optimization approach, where each agent has access only to local information within a specified range R and the associated subnetwork \({{\mathcal{G}}}_{i}\) (green edges). b Tracking error over time for a flock of N = 30 agents using heterogeneous distributed optimization with R = 2 (green), heterogeneous centralized optimization (blue), and homogeneous centralized optimization (orange). c, d Average neighborhood size \(| {{\mathcal{N}}}_{i}|\) (c) and optimal Lyapunov exponent \({\Lambda }_{\max }\) (d) as functions of R for the distributed method (green curve). In panel d, the blue and orange lines show the optimal values obtained by the centralized heterogeneous and homogeneous approaches, respectively.
We compare the flock convergence under three optimization scenarios: (i) the distributed formulation, (ii) the centralized formulation, and (iii) the centralized formulation in which the agent gains are constrained to be homogeneous. Figure 4b shows that flocks optimized with the distributed method perform comparably to those optimized by the centralized (heterogeneous) method for a sensing range R = 2, while also converging 34% faster than their homogeneous counterparts (under the same simulation conditions as Fig. 1). In this context, \({{\mathcal{N}}}_{i}\) contains on average only 4.1 agents for R = 2 (Fig. 4c), substantially reducing the dimension of the local optimization problem and its computational burden. Figure 4d also shows that, as R increases, the optimal Lyapunov exponent obtained by the distributed approach converges to the global optimum obtained by the centralized approach, as expected. Notably, for R > 1.5 (\(| {{\mathcal{N}}}_{i}|=3.7\) on average), the distributed (heterogeneous) approach already outperforms the centralized homogeneous approach.
Extension to time-delay systems
Having established that agent heterogeneity can improve convergence, we now show that it can improve stability and lead to stable consensus even when a homogeneous flock is necessarily unstable. Consider the second-order consensus model with time delay33:
where \(L\in {{\mathbb{R}}}^{N\times N}\) is a (time-invariant) Laplacian matrix, τ is the time delay modeling the communication lag between agents, and ki is the coupling gain of agent i. Consensus is achieved if ∥xi(t) − xj(t)∥ → 0 as t → ∞ for all pairs (i, j) (illustrated in Fig. 5b, top). The system of delay differential equations (DDE) (8) reaches consensus, or is said to be asymptotically stable, if and only if all roots of its characteristic equation (the “eigenvalues” of the DDE) have negative real part (see SI, Section S4, for details). Lack of consensus leads to irregular fragmentation, a common pitfall where the flock breaks into subgroups of agents that diverge from each other in space.
a Lyapunov exponent \({\Lambda }_{\max }\) as a function of the time delay τ for an optimal flock of heterogeneous (blue) and homogeneous (orange) agents for N = 4. Consensus is stable (unstable) for all initial conditions x(0) if \({\Lambda }_{\max } < 0\) (\({\Lambda }_{\max } > 0\)). A flock is said to be optimal if the choice of parameters ki minimizes \({\Lambda }_{\max }\). b Dynamical evolution of the agents' positions qi(t) for an optimal flock of heterogeneous (top) and homogeneous (bottom) agents. The time delay is set as τ = 0.6. See SI, Section S4, for details on the optimization problem.
It follows from ref. 33 (Theorem 2) that, for a fixed homogeneous choice of coupling gain \({k}_{i}=\bar{k} > 0\), ∀i, consensus can be achieved if and only if τ < τ0 (where τ0 depends explicitly on \(\bar{k}\) and the eigenvalues of L; see Eq. (S11) in the SI). Based on this analytical relation, we can show that there exists a maximum delay \({\tau }_{0}^{*}=\mathop{\max }_{\bar{k}}{\tau }_{0}\), subject to the constraint \(\bar{k}\le {k}_{\max }\), for which consensus can be achieved using some homogeneous parameter assignment (see Fig. S9 depicting τ0 as a function of \(\bar{k}\)). Indeed, Fig. 5a shows that for N = 4 agents constrained by \({k}_{\max }=1\), there exists a flock of homogeneous agents that can achieve consensus if and only if \(\tau < {\tau }_{0}^{*}=0.306\) (corresponding to \(\bar{k}^{*}=0.724\)). In principle, this sets an upper bound on the largest communication delay for which consensus is possible. However, this limitation can be circumvented by optimizing the agents’ coupling gain in a heterogeneous manner (also subject to the constraint \({k}_{i}\le {k}_{\max }\)). Figure 5a shows that consensus can be achieved for flocks with much larger communication delay (up to τ = 0.98). For τ = 0.6, Fig. 5b confirms that the optimal heterogeneous flock reaches consensus, whereas the optimal homogeneous flock irregularly fragments into three separate groups.
Extension to free-flocking systems
Thus far, we have analyzed the convergence and stability of flocking models encompassing two of Reynolds’ rules. The multi-agent model (1) adheres to rules #1 and #2, but excludes rule #3 as each agent is assigned a fixed position within the flock formation. The consensus model (8) implements rules #2 and #3 instead, but lacks a mechanism to avoid collisions (note that qi(t) → qj(t) as t → ∞ if the system reaches consensus). We now show that heterogeneity can also promote optimal flocking in models that account for all three of Reynolds’ rules. This section extends our methodology to more complex dynamics.
We consider the following flocking model proposed by Olfati-Saber21:
for i = 1, …, N, where each term \(\{{{\boldsymbol{u}}}_{i}^{\alpha },{{\boldsymbol{u}}}_{i}^{\gamma },{{\boldsymbol{u}}}_{i}^{\beta }\}\) describes a different control objective between agent i and its environment. The agent-agent interaction is given by
for some gains \({k}_{1}^{\alpha },{k}_{2}^{\alpha } > 0\). In Eq. (10), the second term enforces the velocity consensus among agents (Reynolds’ rule #2), which is governed by a time-dependent adjacency matrix whose entries are inversely proportional to the agents’ relative distance: \({A}_{ij}({\boldsymbol{q}})\propto {\Vert {{\boldsymbol{q}}}_{i}-{{\boldsymbol{q}}}_{j}\Vert }^{-1}\). Furthermore, each agent i only interacts with agents located within the spatial neighborhood \({{\mathcal{N}}}_{i}({\boldsymbol{q}})=\{j:\Vert {{\boldsymbol{q}}}_{i}-{{\boldsymbol{q}}}_{j}\Vert < R,\,j\ne i\}\); hence, Aij = 0 if \(j\notin {{\mathcal{N}}}_{i}\) and the underlying communication network \({\mathcal{G}}(A({\boldsymbol{q}}))\) is sparse and possibly disconnected in this model. The gradient term in Eq. (10) involves a smooth collective potential V(q) whose local minima q* form lattices, as shown in ref. 21 (Lemma 3). That is, q* corresponds to a configuration of agent positions satisfying \(\Vert {{\boldsymbol{q}}}_{i}-{{\boldsymbol{q}}}_{j}\Vert=d\) for all \(j\in {{\mathcal{N}}}_{i}({\boldsymbol{q}})\) and some pre-specified distance 0 < d < R (ensuring no collisions according to Reynolds' rule #1). The precise definition of A(q) and V(q) follows ref. 21 and is reported in Methods. Since no formation is pre-assigned and any lattice is an admissible solution, Eq. (9) defines a free-flocking model. Figure 6a,b illustrates the agents converging to a lattice configuration.
a Initial positions of a group of 30 agents in the 2D Euclidean space. b, c Snapshot of the agents' positions at t = 2.7 for an optimal flock of heterogeneous (b) and homogeneous (c) agents. The agents are color coded by the feedback gain ci, and the edges indicate the underlying communication network. The target (red dot) is stationary at qt = [300, 0] (note that the target is distant from the neighborhood of the flock plotted in panel c). d Performance metrics of the flock convergence as functions of time for a group of heterogeneous (blue) and homogeneous (orange) agents; the performance of a non-optimal flock (black) with constant gains \({b}_{i}={b}_{\max }\) and \({c}_{i}={c}_{\max }\), ∀i, is also shown as a reference. The shaded areas indicate time intervals in which the network is fully connected (\({\mathcal{K}}=1\)) for the respective cases. e Lyapunov exponent \({\Lambda }_{\max }(J({t}_{k}))\), optimal position gain \({b}_{i}^{(k)}\), and optimal velocity gain \({c}_{i}^{(k)}\) solved at each time interval t ∈ [tk, tk + w] for w = 1. Top: Heterogeneous (blue) and homogeneous (orange) flocks. Middle and bottom: Heterogeneous flock (a colored line for each agent) and homogeneous flock (a common black line for all agents). See “Methods” for details on the system parameters and Supplementary Movie 2 for an animation of the dynamics.
The control terms \({{\boldsymbol{u}}}_{i}^{\gamma }\) and \({{\boldsymbol{u}}}_{i}^{\beta }\) encode the agent-target and agent-obstacle interactions, respectively. The mathematical modeling of \({{\boldsymbol{u}}}_{i}^{\beta }\) follows a structure similar to Eq. (10), in which the boundaries of the obstacles are represented by additional virtual agents (see Methods). For simplicity, we first present our results for scenarios with no obstacles (\({{\boldsymbol{u}}}_{i}^{\beta }=0\), ∀i), also known as flocking in free space. We define the feedback term modeling the agent-target interaction:
where xt(t) = [qt(t), pt(t)] represents the state of a target moving across space. As before, bi, ci > 0 are feedback gains to be optimized in a homogeneous or heterogeneous manner. Note that, unlike model (1), Eq. (11) does not specify the desired agents’ position relative to the target. Instead, each agent seeks to minimize its own distance to the target. As a consequence, agents navigate towards the flock’s center of mass in accordance with Reynolds’ rule #3. Following the convergence to a final formation, the position \({{\boldsymbol{q}}}_{{\rm{c}}}=\frac{1}{N}{\sum }_{i}{{\boldsymbol{q}}}_{i}\) and momentum \({{\boldsymbol{p}}}_{{\rm{c}}}=\frac{1}{N}{\sum }_{i}{{\boldsymbol{p}}}_{i}\) of the flock’s center of mass match the target’s position and momentum.
To optimize the flock convergence, we measure the centering deviation [eq,i, ep,i] = [qi − qc, pi − pc] of each agent i with respect to the flock’s center of mass. Accordingly, we define eq = [eq,1, …, eq,N], ep = [ep,1, …, ep,N], and e = [eq, ep]. In the reference frame of the center of mass, the flock formation at steady-state is given by \({{\boldsymbol{e}}}^{*}=[{{\boldsymbol{e}}}_{q}^{*},0]\), where the agents form a lattice (satisfying \({\boldsymbol{\nabla }}V({{\boldsymbol{q}}}^{*})={\boldsymbol{\nabla }}V({{\boldsymbol{e}}}_{q}^{*})=0\)) and match the velocity of the target (pi = pt, ∀i). Using Lyapunov stability analysis, we prove that the centering deviation to a desired formation is upper bounded as
for each interval t ∈ [tk, tk + T] and constants η, ηk, α2, T > 0, where tk = kT, \(k\in {\mathbb{N}}\). The matrix J(tk) has a Jacobian-like structure and is defined as
where B(tk) = diag(b(k)), C(tk) = diag(c(k)), and L(q) is the Laplacian matrix associated with A(q). In contrast to model (1), A(q) is not a piecewise-constant function, but rather a continuous function. Nonetheless, due to the timescale separation between changes in the network structure and the motion of agents in space, we can approximate A(q) by a piecewise-constant function within each interval t ∈ [tk, tk + T], leading to Eq. (13). This approximation was used to derive the upper bound (12) (SI, Section S5).
Once again, by solving Eq. (4) at each interval [tk, tk + w] (where J is now given by Eq. (13)), we can optimize the convergence rate of the flock to a lattice formation. This procedure determines the optimal gains b(k) and c(k) for each instant tk.
Free-flocking performance
Figure 6 compares the free-flocking performance between optimal flocks of heterogeneous and homogeneous agents. Starting from the same initial condition \({{\boldsymbol{q}}}_{i}(0) \sim {\mathcal{U}}{[-60,60]}^{2}\) (Fig. 6a), the heterogeneous flock forms a connected, lattice-like formation (Fig. 6b), whereas the homogeneous flock remains largely disconnected within the same convergence time (Fig. 6c). To measure this heterogeneity-promoted improvement, we evaluate the flock convergence using the following three metrics reported in Fig. 6d:
1. the relative connectivity of the agents’ communication network, \({\mathcal{K}}(t)=\frac{1}{N-1}{\rm{rank}}(L({\boldsymbol{q}}(t)))\), where 0 corresponds to a fully disconnected network and 1 to a network with a single connected component;
2. the tracking error of the center of mass with respect to the target position, \({\mathcal{T}}(t)=\Vert {{\boldsymbol{q}}}_{{\rm{t}}}(t)-{{\boldsymbol{q}}}_{{\rm{c}}}(t)\Vert\), where \({\mathcal{T}}=0\) corresponds to full convergence;
3. the formation deviation from a perfect lattice configuration, \({\mathcal{E}}(t)=\frac{1}{{N}_{{\rm{e}}}+1}\mathop{\sum }\nolimits_{i=1}^{N}{\sum }_{j\in {{\mathcal{N}}}_{i}}\psi (\Vert {{\boldsymbol{q}}}_{j}(t)-{{\boldsymbol{q}}}_{i}(t)\Vert -d)\), where Ne is the number of edges in \({\mathcal{G}}(A(t))\) and the function ψ( ⋅ ) is defined in Methods.
Note that \({\mathcal{E}}\) only measures the lattice deviation within connected components, and thus this quantity is most useful when the network comprises a single connected component (\({\mathcal{K}}=1\)).
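The first two metrics depend only on the communication network and the agent states. A minimal numpy sketch (the lattice deviation \({\mathcal{E}}\) additionally requires the function ψ defined in Methods, so it is omitted here):

```python
import numpy as np

def relative_connectivity(A):
    """K(t) = rank(L) / (N - 1): equals 1 iff the communication network
    has a single connected component, 0 iff it is fully disconnected."""
    L = np.diag(A.sum(axis=1)) - A                  # graph Laplacian of A
    return np.linalg.matrix_rank(L) / (A.shape[0] - 1)

def tracking_error(q_target, q):
    """T(t) = ||q_t - q_c||, the distance between the target and the
    flock's center of mass (rows of q are agent positions)."""
    return np.linalg.norm(np.asarray(q_target) - np.asarray(q).mean(axis=0))
```

For instance, a connected path graph gives \({\mathcal{K}}=1\), while a network split into two pairs gives \({\mathcal{K}}=2/3\) for N = 4.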
The simulations show that allowing parameter heterogeneity in the optimization procedure can enhance the convergence rate of flocks by 36%. For the tolerance ϵ = 1, we observe a settling time of ts = 4.01 in the tracking task for heterogeneous flocks, which contrasts with ts = 6.27 for homogeneous flocks (here, ts is defined such that \({\mathcal{T}}(t)\le \epsilon\), ∀t ≥ ts). This result aligns with our expectations given that, following Eq. (12), \({\mathcal{T}}(t)\) is directly related to the optimized cost function \({\Lambda }_{\max }(J({t}_{k}))\). The improvement in \({\Lambda }_{\max }\) is illustrated in Fig. 6e; for instance, at t = 0 (initial formation) and t = 8 (final formation), \({\Lambda }_{\max }\) is respectively 97% and 19% smaller for the heterogeneous flock. Again, the gains change nontrivially over time, despite the tendency of b(k) to increase as the flock formation converges.
It is instructive to further examine the performance metrics in Fig. 6d. As a result of the faster decay in \({\mathcal{T}}(t)\), the heterogeneous agents form a single connected component 38% faster than their homogeneous counterparts. However, there is a trade-off between the flock’s convergence to a connected formation and its tracking error. For example, trivially setting \(({b}_{i},{c}_{i})=({b}_{\max },{c}_{\max })\), ∀i, leads to faster convergence to a fully connected formation than either of the optimized flocks (Fig. 6d, black line). Yet, this improvement comes at the expense of large underdamped oscillations of the flock’s center of mass around the target, yielding an overall slower decay of \({\mathcal{T}}(t)\).
The flock convergence can also be quantified using the lattice deviation \({\mathcal{E}}\). A constant value of \({\mathcal{E}}(t)\) over time indicates that the flock converged to a steady-state formation in which the relative motion between agents is negligible. As observed in the other performance measures, \({\mathcal{E}}(t)\) stabilizes more rapidly for heterogeneous flocks compared to homogeneous ones. Specifically, \({\mathcal{E}}(t)\) converges to a fixed value (within 10% deviation) at t = 5.25 and t = 7.81 for the optimal heterogeneous and homogeneous flocks, respectively. The spikes in \({\mathcal{E}}(t)\) are due to discontinuities in network connectivity. For all cases shown in Fig. 6d, the steady-state values of \({\mathcal{E}}(t)\) are small and comparable. This suggests that the resulting formations are consistently well-structured.
Obstacle maneuvering
Figure 7 compares the flocking of optimized heterogeneous and homogeneous agents navigating through an obstacle course. In this application, agents must track a virtual target moving with constant velocity along the x-axis while maneuvering around 15 static obstacles randomly placed along the course. The optimization procedure is given by Eq. (4), where J(tk) is a slightly modified version of Eq. (13) to handle the presence of obstacles (SI, Section S5). The time evolution is subdivided into three segments in which agents first navigate in free space (t < 6), then maneuver around obstacles (6 < t < 28), and finally leave the obstacle course (t > 28). This setup is such that the agents form a cohesive flock before encountering the first obstacle. See Supplementary Movie 3 for an animation of the flocking dynamics.
a Snapshot of the agents' positions at different time instants for optimal flocks of heterogeneous (top) and homogeneous (bottom) agents. The agents are color coded by the feedback gain bi, and the edges indicate the underlying communication network. The obstacles are represented by gray circles. The virtual target (red dot) moves with constant speed along the x-axis (pt(t) = [20, 0], ∀t ≥ 0, and qt(0) = [0, 0]). b Performance metrics of the flock convergence as functions of time for optimal heterogeneous (blue) and homogeneous (orange) flocks. If the heterogeneous (homogeneous) flock has a superior performance, then the area between the curves is colored in blue (orange). The obstacle course contains 15 circular objects, each with radius \({r}_{k} \sim {\mathcal{U}}[1,4]\), randomly distributed in the 2D-space [150, −20] × [600, 20]. The segments where flocks navigate in free space or in an environment with obstacles are marked in panel b. The agents start from the same random initial condition \({{\boldsymbol{q}}}_{i}(0) \sim {\mathcal{U}}{[-60,60]}^{2}\). See “Methods” for details on the system parameters.
The superior performance of heterogeneous flocks is characterized by an overall smaller lattice deviation \({\mathcal{E}}(t)\) and tracking error \({\mathcal{T}}(t)\). While flocking in free space, the heterogeneous agents converge faster to the lattice formation, which is also more symmetric, as seen at t = 5. The homogeneous flock formation, on the other hand, exhibits gaps in the lattice structure. These differences in the lattice structures are captured by the deviation measure \({\mathcal{E}}\). Overall, the performance metrics in the interval t ∈ [0, 6] are consistent with the behavior observed in Fig. 6, where the agents also flock in free space.
Along the segment with obstacles, the tracking error \({\mathcal{T}}\) is slightly smaller for the heterogeneous flock, with an average improvement of 14%. Crucially, \({\mathcal{E}}\) is consistently smaller for the heterogeneous flock throughout most of this segment, yielding an improvement of 30% on average. This improvement is illustrated in the snapshots at t = 11.8 and t = 18.6, showing that the homogeneous flock has agents undesirably close to each other (at a distance \(\Vert {{\boldsymbol{q}}}_{j}-{{\boldsymbol{q}}}_{i}\Vert\) much smaller than the specified lattice spacing d = 7), as indicated by a larger \({\mathcal{E}}\). The difference between \({\mathcal{E}}(t)\) for heterogeneous and homogeneous flocks is largest at t = 18.6, where the homogeneous flock is most susceptible to collisions among agents. The heterogeneous agents, on the other hand, sustain a more stable formation with less variability in \({\mathcal{E}}\). These results suggest that the proposed methodology can be applied to optimize consensus protocols for drone swarms operating in unmapped environments, since in our simulations agents only detect obstacles in real time.
Discussion
Our results show that heterogeneity, when appropriately designed, can improve the stability of flocking dynamics beyond what can be achieved in homogeneous systems. We have focused our analysis on three different consensus models as they account for a variety of control tasks and levels of complexity. The control tasks include trajectory tracking of static and dynamic targets, setpoint tracking and spatial formation, rendezvous operation, emergent flocking in free space, and obstacle maneuvering in unmapped terrain. The model complexity accounts for the sparsity and time-dependence of the communication networks, the presence of communication delays, the structure of the flock formation, and Reynolds' rules incorporated by each model. These features are summarized in Table 1.
This is not the first study to model consensus in heterogeneous multi-agent systems64,65,66. However, the literature has focused primarily on stability conditions that address the detrimental, rather than the beneficial, effects of heterogeneity considered here. Indeed, previous work generally assumes that heterogeneity inhibits consensus and thus seeks stability conditions characterizing the maximum level of heterogeneity under which consensus is still possible. The conclusion that heterogeneity is detrimental is reached because (1) the stability conditions are usually sufficient rather than necessary, and (2) the heterogeneity tends to be modeled as small and/or random deviations from a homogeneous baseline rather than as a design parameter that can be strategically optimized. That is, the main goal has been to determine when consensus can be achieved despite heterogeneity, whereas here we identify scenarios in which consensus can be achieved and enhanced because of heterogeneity.
This work leads to fundamental questions worth pursuing in future research. It is well known in control theory that the dynamical characteristics of a system, such as its settling time and overshoot, depend not only on the largest eigenvalue but also on the placement of all eigenvalues in the complex plane. Thus, while here we focused on the largest Lyapunov exponent, our formulation can be recast as an optimal control problem in the context of model predictive control67,68, potentially leading to a tighter bound on the tracking error (see SI, Section S6, for a discussion of the challenges involved in this approach). To further enhance flock convergence, we could jointly optimize additional parameters of the model—such as the coupling strength K and exponent β—alongside the gains b and c. Another promising direction for future work is to extend our approach to multi-agent systems with leader-follower roles10 and/or predator behavior69. These scenarios introduce additional features, such as asymmetries in the interaction network, that could be exploited through our adaptive mechanisms for flocking control.
It is instructive to note that not all systems are expected to benefit from agent heterogeneity. Consider, for example, the first-order consensus model \(\dot{{\boldsymbol{x}}}=-KL{\boldsymbol{x}}\), where K = diag(k1, …, kN) represents the individual gains of each agent, satisfying \({k}_{i}\le {k}_{\max }\), ∀i. The convergence rate of this model, characterized by its largest (transversal) Lyapunov exponent, is globally optimized by homogeneously setting \({k}_{i}={k}_{\max }\), ∀i (a result that follows from Gershgorin’s disc theorem). Thus, homogeneity is better than heterogeneity for achieving consensus in this case, even though we have shown that heterogeneity can enhance consensus in the more complex models considered in this paper. Other models used to describe spin alignment, such as the XY model and the Vicsek model70, reduce to the first-order consensus model when linearized around the equilibrium, and thus are also consistent with the conclusion that homogeneity is preferable. The key factor enabling heterogeneity to promote consensus in our study is the second-order nature of flocking dynamics, which incorporates agent inertia as dictated by Newton’s second law.
Given the generality of our results, we suggest that the approach will find applications in the optimization of a broad range of other consensus problems in networks, including mobile sensor networks71,72, distributed state estimation73,74,75, opinion dynamics in social networks76,77,78,79, and energy management for the coordinated charging of electrical vehicles and other Internet-of-Things devices80,81. For applications in swarms of UAVs, the approach can also be extended to account for practical challenges, such as asynchronous communication82, data packet loss83, actuator saturation84, and cyber-attacks85.
Methods
Parameters of the pre-assigned formation model
We report the parameters used in simulations of the multi-agent system (1). Unless specified otherwise, all parameters are set as follows. The agents are constrained to move in the m = 2 dimensional space and the damping coefficient is set as γ = 1 (in Figs. 1, 2 and S1) or γ = 3 (in Fig. S2). The weights of the adjacency matrix \(\tilde{A}\) are computed for ρ = 0.1, β = 0.8, and K = 2 (in Figs. 1, 2 and S1) or K = 5 (in Fig. S2). The noise is added to the acceleration equation of each agent as \({\dot{{\boldsymbol{p}}}}_{i}={\boldsymbol{f}}({\boldsymbol{q}},{\boldsymbol{p}})+{{\boldsymbol{\nu }}}_{i}\), where f(q, p) represents the corresponding right-hand side in Eq. (1) and \({{\boldsymbol{\nu }}}_{i}(t) \sim {\mathcal{N}}{(\mu,{\sigma }^{2})}^{m}\) is a random variable drawn from an m-dimensional Gaussian distribution with mean μ and standard deviation σ. We report simulations for (μ, σ) = (0, 0.1), but we note that the results in Fig. 1c remain qualitatively consistent for the range σ ∈ [10−4, 100].
The initial conditions in the independent realizations are set as pi(0) = 0 and \({{\boldsymbol{q}}}_{i}(0) \sim {\mathcal{U}}{\left[-\sqrt{N/7.5},\sqrt{N/7.5}\right]}^{2}\) (in Figs. 1, 2 and S1) or \({{\boldsymbol{q}}}_{i}(0) \sim {\mathcal{U}}{[0,1000]}^{2}\) (in Fig. S2), where \({\mathcal{U}}{[a,b]}^{m}\) denotes an m-dimensional uniform distribution within the interval [a, b]. Since our goal in Figs. 1, 2, and S1 is to statistically evaluate the performance of flocks operating under different conditions, we also set the desired formation to be random according to \({{\boldsymbol{r}}}_{i} \sim {\mathcal{U}}{\left[-\sqrt{N/1.2},\sqrt{N/1.2}\right]}^{2}\); the only exception is the inset of Fig. 1a (and corresponding Supplementary Movie 1), where we adopted an ordered circular pattern for illustration purposes. Note that the size of the intervals containing the initial conditions qi(0) and relative positions ri are scaled to equalize the flock density for any number of agents N (Fig. 1e).
Free-flocking model
We define each of the terms in the free-flocking model (9). Starting with the agent-agent interaction in Eq. (10), the adjacency matrix is defined as
where the σ-norm is defined as \({\Vert z\Vert }_{\sigma }=\frac{1}{\varepsilon }\left(\sqrt{1+\varepsilon {\Vert z\Vert }^{2}}-1\right)\) and the bump function ρh is a scalar function given by
where 0 < h < 1. We set ε = 0.1 and h = 0.2 in the simulations. The interaction range is set to R = 1.2d, where d = 2 (in Fig. 6) or 7 (in Fig. 7) is the constrained distance between agents in the lattice structure.
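The σ-norm formula above translates directly to code; since the explicit expression for the bump function is not reproduced in the text, the sketch below assumes the standard form from ref. 21 (equal to 1 on [0, h), decaying smoothly to 0 at 1):

```python
import numpy as np

def sigma_norm(z, eps=0.1):
    """sigma-norm ||z||_sigma = (sqrt(1 + eps ||z||^2) - 1) / eps; unlike
    the Euclidean norm, it is differentiable at z = 0. Accepts scalars too."""
    return (np.sqrt(1.0 + eps * np.dot(z, z)) - 1.0) / eps

def bump(z, h=0.2):
    """Scalar bump function rho_h: 1 on [0, h), a smooth cosine decay on
    [h, 1], and 0 elsewhere (standard form from ref. 21, assumed here)."""
    if 0.0 <= z < h:
        return 1.0
    if h <= z <= 1.0:
        return 0.5 * (1.0 + np.cos(np.pi * (z - h) / (1.0 - h)))
    return 0.0

def adjacency_weight(qi, qj, R=1.2 * 2):
    """A_ij = rho_h(||q_j - q_i||_sigma / ||R||_sigma): decays smoothly to
    zero as the inter-agent distance approaches the sensing range R."""
    return bump(sigma_norm(np.asarray(qj) - np.asarray(qi)) / sigma_norm(R))
```

With this construction, Aij is a smooth function of the agent positions, vanishing identically outside the sensing range, which is what makes the gradient term in Eq. (10) well defined.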
The collective potential is \(V({\boldsymbol{q}})=\frac{1}{2}{\sum }_{i}{\sum }_{j\ne i}{\psi }_{\alpha }({\Vert {{\boldsymbol{q}}}_{j}-{{\boldsymbol{q}}}_{i}\Vert }_{\sigma })\). Let \({\psi }_{\alpha }(z)=\mathop{\int}\nolimits_{{\Vert d\Vert }_{\sigma }}^{z}{\phi }_{\alpha }(s){\rm{d}}s\), where \({\phi }_{\alpha }(z)={\rho }_{h}(z/{\Vert R\Vert }_{\sigma })\phi (z-{\Vert d\Vert }_{\sigma })\) and \(\phi (z)=\frac{1}{2}[(a+b)(z+c)/\sqrt{1+{(z+c)}^{2}}+(a-b)]\). It follows that
We set a = b = 5, \(c=| a-b| /\sqrt{4ab}\), \({k}_{1}^{\alpha }=30\), and \({k}_{2}^{\alpha }=2\sqrt{{k}_{1}^{\alpha }}\) in the simulations.
We now define the agent-obstacle interaction term:
where the gradient of the collective potential between the agents and obstacles is given explicitly by
For each obstacle k = 1, …, Nobs and agent i = 1, …, N, there is a virtual agent with position \({\hat{{\boldsymbol{q}}}}_{i,k}\) and momentum \({\hat{{\boldsymbol{p}}}}_{i,k}\), where Nobs is the number of obstacles. Note that an agent i can only perceive obstacles within its spatial neighborhood \({{\mathcal{N}}}_{i}^{\beta }=\{1,\ldots,{N}_{{\rm{obs}}}:\Vert {\hat{{\boldsymbol{q}}}}_{i,k}-{{\boldsymbol{q}}}_{i}\Vert < {R}^{{\prime} }\}\), where \({R}^{{\prime} }\) is an interaction range. Following ref. 21 (Lemma 4), spherical obstacles of radius rk and centered at \({{\boldsymbol{y}}}_{k}\in {{\mathbb{R}}}^{m}\) are represented by \({\hat{{\boldsymbol{q}}}}_{i,k}={\mu }_{i,k}{{\boldsymbol{q}}}_{i}+(1-{\mu }_{i,k}){{\boldsymbol{y}}}_{k}\) and \({\hat{{\boldsymbol{p}}}}_{i,k}={\mu }_{i,k}{P}_{i,k}{{\boldsymbol{p}}}_{i}\), where \({\mu }_{i,k}={r}_{k}/\Vert {{\boldsymbol{q}}}_{i}-{{\boldsymbol{y}}}_{k}\Vert\), \({{\boldsymbol{\eta }}}_{i,k}=({{\boldsymbol{q}}}_{i}-{{\boldsymbol{y}}}_{k})/\Vert {{\boldsymbol{q}}}_{i}-{{\boldsymbol{y}}}_{k}\Vert\), and \({P}_{i,k}={I}_{m}-{{\boldsymbol{\eta }}}_{i,k}{{\boldsymbol{\eta }}}_{i,k}^{{\mathsf{T}}}\). The gradient potential and adjacency matrix are respectively given by \({\phi }_{\beta }(z)={\rho }_{{h}^{{\prime} }}(z/{\Vert {d}^{{\prime} }\Vert }_{\sigma })((z-{\Vert {d}^{{\prime} }\Vert }_{\sigma })/\sqrt{1+{(z-{\Vert {d}^{{\prime} }\Vert }_{\sigma })}^{2}}-1)\) and \({A}_{ij}^{\beta }({\boldsymbol{q}})={\rho }_{{h}^{{\prime} }}(\parallel {\hat{{\boldsymbol{q}}}}_{i,k}-{{\boldsymbol{q}}}_{i}{\parallel }_{\sigma }/{\Vert {d}^{{\prime} }\Vert }_{\sigma })\). We set \({h}^{{\prime} }=0.9\), \({d}^{{\prime} }=0.6d\), \({R}^{{\prime} }=1.2{d}^{{\prime} }\), \({k}_{1}^{\beta }=300\), and \({k}_{2}^{\beta }=2\sqrt{{k}_{1}^{\beta }}\). We assign \({k}_{1}^{\beta }\gg {k}_{1}^{\alpha }\) so that agents prioritize collision avoidance with obstacles over retaining formation.
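The projection onto a spherical obstacle described above can be written compactly; a minimal sketch of the virtual (β-) agent construction, following the formulas just given:

```python
import numpy as np

def virtual_obstacle_agent(q, p, y, r):
    """Virtual agent induced on a spherical obstacle of radius r centered
    at y, as seen by an agent with position q and momentum p:
        mu = r / ||q - y||,  q_hat = mu q + (1 - mu) y,  p_hat = mu P p,
    where P = I - eta eta^T projects onto the obstacle's tangent plane."""
    q, p, y = map(np.asarray, (q, p, y))
    dist = np.linalg.norm(q - y)
    mu = r / dist
    eta = (q - y) / dist                      # outward unit normal
    P = np.eye(q.size) - np.outer(eta, eta)   # tangent-plane projector
    return mu * q + (1.0 - mu) * y, mu * (P @ p)
```

By construction, the virtual agent sits on the obstacle surface at the point closest to agent i (so \(\Vert {\hat{{\boldsymbol{q}}}}_{i,k}-{{\boldsymbol{y}}}_{k}\Vert={r}_{k}\)), and its momentum is the tangential component of the agent's momentum, scaled by μ.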
A MATLAB implementation of the free-flocking model (9) is provided in GitHub (see Data availability).
Solving the optimization problem
To solve the constrained optimization problem (4), we employ the interior-point method, as implemented by the function fmincon in MATLAB. At each time interval [tk, tk + w], we solve the optimization problem (4) for 10 random initial conditions \({b}_{i} \sim {\mathcal{N}}({b}_{\max }/2,0.01)\) and \({c}_{i} \sim {\mathcal{N}}({c}_{\max }/2,0.01)\), and then select the best solution. The upper bounds on the feedback gains are set as \({b}_{\max }={c}_{\max }=30\) (in Figs. 1, 2 and S1), 10 (in Fig. S2), or 5 (in Figs. 6 and 7). For the optimization of the flocking models (1) and (9), the Jacobian matrix J is defined in Eqs. (2) and (13), respectively. The optimization problem for the time-delay system (8) is described in SI, Section S4.
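The multi-start procedure can be sketched in a few lines. Since Eqs. (2) and (4) are not reproduced here, the Jacobian structure below, J = [[0, I], [−L − diag(b), −γI − diag(c)]], is a hypothetical stand-in for illustration, and scipy's Nelder-Mead replaces MATLAB's interior-point method:

```python
import numpy as np
from scipy.optimize import minimize

N, b_max, c_max, gamma = 3, 30.0, 30.0, 1.0
L = np.array([[1., -1., 0.], [-1., 2., -1.], [0., -1., 1.]])  # path graph

def lyap_max(g):
    """Spectral abscissa of a hypothetical second-order Jacobian
    J = [[0, I], [-L - diag(b), -gamma I - diag(c)]]; gains are clipped
    to their bounds inside the objective rather than via solver constraints."""
    b = np.clip(g[:N], 0.0, b_max)
    c = np.clip(g[N:], 0.0, c_max)
    J = np.block([[np.zeros((N, N)), np.eye(N)],
                  [-L - np.diag(b), -gamma * np.eye(N) - np.diag(c)]])
    return np.linalg.eigvals(J).real.max()

# Multi-start local optimization: 10 random initial gains near the midpoint
# of the feasible box, keeping the best local solution (mirroring the
# fmincon procedure described above).
rng = np.random.default_rng(0)
starts = rng.normal([b_max / 2] * N + [c_max / 2] * N, 0.1, size=(10, 2 * N))
best = min((minimize(lyap_max, s, method="Nelder-Mead") for s in starts),
           key=lambda r: r.fun)
```

Because the spectral abscissa is a nonsmooth, nonconvex function of the gains, restarting from several random initial conditions and keeping the best local minimum is a pragmatic safeguard against poor local solutions.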
We note that the dimension m of the physical space of the agents does not impact the optimization time of the parameters b and c. Because of the Kronecker product structure of J, the set of eigenvalues of the Jacobian matrix J, denoted by the operator spec( ⋅ ), is given by
That is, for a specific dimension m, the spectrum of J consists of m repeated sets of eigenvalues of the Jacobian matrix J(1). Thus, to determine the largest Lyapunov exponent \({\Lambda }_{\max }(J)\), it suffices to calculate the Lyapunov exponents of the 2N × 2N matrix J(1).
Distributed optimization formulation
In this approach, each agent i computes its optimal gains bi and ci using only local information determined by its neighborhood \({{\mathcal{N}}}_{i}\). Thus, each agent has access to a subgraph \({{\mathcal{G}}}_{i}\subseteq {\mathcal{G}}\), where a node j belongs to \({{\mathcal{G}}}_{i}\) if \(j\in {{\mathcal{N}}}_{i}\) and an edge (j, k) exists in \({{\mathcal{G}}}_{i}\) if both \(j,k\in {{\mathcal{N}}}_{i}\). Let \({A}_{i}=A[{{\mathcal{N}}}_{i}]\) denote the submatrix of A formed by selecting rows and columns indexed by \(j\in {{\mathcal{N}}}_{i}\). Since A is the N × N adjacency matrix of \({\mathcal{G}}\), it follows that Ai is the \(| {{\mathcal{N}}}_{i}| \times | {{\mathcal{N}}}_{i}|\) adjacency matrix of \({{\mathcal{G}}}_{i}\). The neighborhoods \({{\mathcal{N}}}_{i}\) generally change over time, and the agent positions qj(tk), for \(j\in {{\mathcal{N}}}_{i}\), are sufficient to construct Ai(tk) at any time tk.
For each agent i, we define the local Jacobian matrix:
where \({B}_{i}=B[{{\mathcal{N}}}_{i}]\), \({C}_{i}=C[{{\mathcal{N}}}_{i}]\), and Li is the Laplacian matrix associated with Ai. At time step tk, each agent i solves—independently and in parallel—the following low-dimensional optimization problem:
where b(i, k) and c(i, k) represent the local \(| {{\mathcal{N}}}_{i}|\)-dimensional optimization variables at time tk. Accordingly, the local feedback matrices take the form Bi(tk) = diag(b(i, k)) and Ci(tk) = diag(c(i, k)). After solving the problem, each agent i extracts the entries of b(i, k) and c(i, k) corresponding to itself (i.e., the index \(j\in {{\mathcal{N}}}_{i}\) such that j = i) and assigns them as the optimal gains of agent i for the time interval [tk, tk + w].
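A minimal sketch of the local data each agent needs, assuming a binary adjacency matrix A (agent i is included in its own neighborhood index set so that its own gain appears in the local subproblem, as in the extraction step above):

```python
import numpy as np

def local_subproblem_data(A, i):
    """Rows/columns of A indexed by agent i's neighborhood give the local
    adjacency A_i = A[N_i]; its Laplacian L_i enters the local Jacobian J_i."""
    Ni = np.sort(np.append(np.flatnonzero(A[i] > 0), i))  # neighbors plus i
    Ai = A[np.ix_(Ni, Ni)]                                # local adjacency
    Li = np.diag(Ai.sum(axis=1)) - Ai                     # local Laplacian
    own = int(np.flatnonzero(Ni == i)[0])  # i's index inside the subproblem
    return Ni, Ai, Li, own
```

After solving the local optimization, agent i keeps only the entries of b(i, k) and c(i, k) at index `own` and discards the rest, which were computed solely to account for the influence of its neighbors.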
Reporting summary
Further information on research design is available in the Nature Portfolio Reporting Summary linked to this article.
Data availability
Codes and data are available through our Zenodo repository (https://doi.org/10.5281/zenodo.16920713)86.
Code availability
The Zenodo repository (https://doi.org/10.5281/zenodo.16920713) contains the codes used to simulate and optimize the flocking dynamics across all models considered in this study, including both centralized and distributed formulations86.
References
Reynolds, C. W. Flocks, herds and schools: a distributed behavioral model. In Proc. 14th Annual Conference on Computer Graphics and Interactive Techniques 25–34 (Association for Computing Machinery, 1987).
Silva, A. R. D., Lages, W. S. & Chaimowicz, L. Boids that see: using self-occlusion for simulating large groups on GPUs. Comput. Entertain. 7, 1–20 (2010).
Vicsek, T., Czirók, A., Ben-Jacob, E., Cohen, I. & Shochet, O. Novel type of phase transition in a system of self-driven particles. Phys. Rev. Lett. 75, 1226 (1995).
Helbing, D., Farkas, I. & Vicsek, T. Simulating dynamical features of escape panic. Nature 407, 487–490 (2000).
Gazi, V. & Passino, K. M. Stability analysis of social foraging swarms. IEEE Trans. Syst. Man Cybern. Part B 34, 539–557 (2004).
Couzin, I. D. Collective cognition in animal groups. Trends Cogn. Sci. 13, 36–43 (2009).
Katz, Y., Tunstrøm, K., Ioannou, C. C., Huepe, C. & Couzin, I. D. Inferring the structure and dynamics of interactions in schooling fish. Proc. Natl. Acad. Sci. USA 108, 18720–18725 (2011).
Marras, S. & Porfiri, M. Fish and robots swimming together: attraction towards the robot demands biomimetic locomotion. J. R. Soc. Interface 9, 1856–1868 (2012).
Pearce, D. J., Miller, A. M., Rowlands, G. & Turner, M. S. Role of projection in the control of bird flocks. Proc. Natl. Acad. Sci. USA 111, 10422–10426 (2014).
Gómez-Nava, L., Bon, R. & Peruani, F. Intermittent collective motion in sheep results from alternating the role of leader and follower. Nat. Phys. 18, 1494–1501 (2022).
Sinha, S., Krishnan, V. & Mahadevan, L. Optimal control of interacting active particles on complex landscapes. Preprint at arXiv https://doi.org/10.48550/arXiv.2311.17039 (2023).
Sar, G. K. & Ghosh, D. Flocking and swarming in a multi-agent dynamical system. Chaos 33, 123126 (2023).
Xiao, Y. et al. Perception of motion salience shapes the emergence of collective motions. Nat. Commun. 15, 4779 (2024).
Wang, P., Song, C. & Liu, L. Coverage control for mobile sensor networks with double-integrator dynamics and unknown disturbances. IEEE Trans. Autom. Control 68, 6299–6306 (2022).
Bertuccelli, L., Choi, H.-L., Cho, P. & How, J. Real-time multi-UAV task assignment in dynamic and uncertain environments. In Proc. AIAA Guidance, Navigation, and Control Conference, 5776 https://doi.org/10.2514/6.2009-5776 (AIAA (American Institute of Aeronautics and Astronautics), 2009).
Balázs, B., Vicsek, T., Somorjai, G., Nepusz, T. & Vásárhelyi, G. Decentralized traffic management of autonomous drones. Swarm Intell. 19, 29–53 (2024).
Nguyen, T.-H. & Jung, J. J. Swarm intelligence-based green optimization framework for sustainable transportation. Sustain. Cities Soc. 71, 102947 (2021).
Chen, F. & Ren, W. On the control of multi-agent systems: a survey. Found. Trends Syst. Control 6, 339–499 (2019).
Beaver, L. E. & Malikopoulos, A. A. An overview on optimal flocking. Annu. Rev. Control 51, 88–99 (2021).
Leonard, N. E., Bizyaeva, A. & Franci, A. Fast and flexible multiagent decision-making. Annu. Rev. Control Robot. Autonomous Syst. 7, 19–45 (2024).
Olfati-Saber, R. Flocking for multi-agent dynamic systems: algorithms and theory. IEEE Trans. Autom. Control 51, 401–420 (2006).
Ren, W. Formation keeping and attitude alignment for multiple spacecraft through local interactions. J. Guid Control Dyn. 30, 633–638 (2007).
Nagy, M., Ákos, Z., Biro, D. & Vicsek, T. Hierarchical group dynamics in pigeon flocks. Nature 464, 890–893 (2010).
Baronchelli, A. & Diaz-Guilera, A. Consensus in networks of mobile communicating agents. Phys. Rev. E 85, 016113 (2012).
Griparic, K., Polic, M., Krizmancic, M. & Bogdan, S. Consensus-based distributed connectivity control in multi-agent systems. IEEE Trans. Netw. Sci. Eng. 9, 1264–1281 (2022).
Cucker, F. & Smale, S. Emergent behavior in flocks. IEEE Trans. Autom. Control 52, 852–862 (2007).
Valcher, M. E. & Zorzan, I. On the consensus of homogeneous multi-agent systems with arbitrarily switching topology. Automatica 84, 79–85 (2017).
Mikaberidze, G., Chowdhury, S. N., Hastings, A. & D’Souza, R. M. Consensus formation among mobile agents in networks of heterogeneous interaction venues. Chaos Solitons Fractals 178, 114298 (2024).
Amichay, G., Li, L., Nagy, M. & Couzin, I. D. Revealing the mechanism and function underlying pairwise temporal coupling in collective motion. Nat. Commun. 15, 4356 (2024).
Olfati-Saber, R. & Murray, R. M. Consensus problems in networks of agents with switching topology and time-delays. IEEE Trans. Autom. Control 49, 1520–1533 (2004).
Blondel, V. D., Hendrickx, J. M., Olshevsky, A. & Tsitsiklis, J. N. Convergence in multiagent coordination, consensus, and flocking. In Proc. IEEE Conference on Decision and Control, 2996–3000 (IEEE, 2005).
Ren, W. On consensus algorithms for double-integrator dynamics. IEEE Trans. Autom. Control 53, 1503–1509 (2008).
Yu, W., Chen, G. & Cao, M. Some necessary and sufficient conditions for second-order consensus in multi-agent dynamical systems. Automatica 46, 1089–1095 (2010).
Zhang, J., Lyu, M., Shen, T., Liu, L. & Bo, Y. Sliding mode control for a class of nonlinear multi-agent system with time delay and uncertainties. IEEE Trans. Ind. Electron. 65, 865–875 (2017).
Ogren, P., Fiorelli, E. & Leonard, N. E. Cooperative control of mobile sensor networks: Adaptive gradient climbing in a distributed environment. IEEE Trans. Autom. Control 49, 1292–1302 (2004).
Jolles, J. W., Boogert, N. J., Sridhar, V. H., Couzin, I. D. & Manica, A. Consistent individual differences drive collective behavior and group functioning of schooling fish. Curr. Biol. 27, 2862–2868 (2017).
Niizato, T., Sakamoto, K., Mototake, Y.-i, Murakami, H. & Tomaru, T. Information structure of heterogeneous criticality in a fish school. Sci. Rep. 14, 29758 (2024).
Doering, G. N. et al. Noise resistant synchronization and collective rhythm switching in a model of animal group locomotion. R. Soc. Open Sci. 9, 211908 (2022).
Jolles, J. W., King, A. J. & Killen, S. S. The role of individual heterogeneity in collective animal behaviour. Trends Ecol. Evol. 35, 278–291 (2020).
Nishikawa, T. & Motter, A. E. Symmetric states requiring system asymmetry. Phys. Rev. Lett. 117, 114101 (2016).
Molnar, F., Nishikawa, T. & Motter, A. E. Network experiment demonstrates converse symmetry breaking. Nat. Phys. 16, 351–356 (2020).
Molnar, F., Nishikawa, T. & Motter, A. E. Asymmetry underlies stability in power grids. Nat. Commun. 12, 1457 (2021).
Mallada, E., Freeman, R. A. & Tang, A. K. Distributed synchronization of heterogeneous oscillators on networks with arbitrary topology. IEEE Trans. Control Netw. Syst. 3, 12–23 (2015).
Sugitani, Y., Zhang, Y. & Motter, A. E. Synchronizing chaos with imperfections. Phys. Rev. Lett. 126, 164101 (2021).
Nair, N., Hu, K., Berrill, M., Wiesenfeld, K. & Braiman, Y. Using disorder to overcome disorder: a mechanism for frequency and phase synchronization of diode laser arrays. Phys. Rev. Lett. 127, 173901 (2021).
Cao, H. & Eliezer, Y. Harnessing disorder for photonic device applications. Appl. Phys. Rev. 9, 011309 (2022).
Gast, R., Solla, S. A. & Kennedy, A. Neural heterogeneity controls computations in spiking neural networks. Proc. Natl. Acad. Sci. USA 121, e2311885121 (2024).
Zhang, Y., Ocampo-Espindola, J. L., Kiss, I. Z. & Motter, A. E. Random heterogeneity outperforms design in network synchronization. Proc. Natl. Acad. Sci. USA 118, e2024299118 (2021).
Teng, R. et al. Heterogeneity-driven collective-motion patterns of active gels. Cell Rep. Phys. Sci. 3, 100933 (2022).
Yang, J. F. et al. Emergent microrobotic oscillators via asymmetry-induced order. Nat. Commun. 13, 5734 (2022).
Nicolaou, Z. G., Case, D. J., van der Wee, E. B., Driscoll, M. M. & Motter, A. E. Heterogeneity-stabilized homogeneous states in driven media. Nat. Commun. 12, 4486 (2021).
Ceron, S., Gardi, G., Petersen, K. & Sitti, M. Programmable self-organization of heterogeneous microrobot collectives. Proc. Natl. Acad. Sci. USA 120, e2221913120 (2023).
O’Keeffe, K. P., Hong, H. & Strogatz, S. H. Oscillators that sync and swarm. Nat. Commun. 8, 1504 (2017).
Ghosh, D. et al. The synchronized dynamics of time-varying networks. Phys. Rep. 949, 1–63 (2022).
Ren, W. Consensus strategies for cooperative control of vehicle formations. IET Control Theory Appl. 1, 505–512 (2007).
Ren, W. & Beard, R. W. Distributed Consensus in Multi-vehicle Cooperative Control: Theory and Applications, 27 (Springer, 2008).
Su, Y. & Huang, J. Stability of a class of linear switching systems with applications to two consensus problems. IEEE Trans. Autom. Control 57, 1420–1430 (2011).
Horn, R. A. & Johnson, C. R. Matrix Analysis (Cambridge University Press, 2012).
Pecora, L. M. & Carroll, T. L. Master stability functions for synchronized coupled systems. Phys. Rev. Lett. 80, 2109–2112 (1998).
Nishikawa, T. & Motter, A. E. Synchronization is optimal in nondiagonalizable networks. Phys. Rev. E 73, 065106 (2006).
Motter, A. E., Myers, S. A., Anghel, M. & Nishikawa, T. Spontaneous synchrony in power-grid networks. Nat. Phys. 9, 191–197 (2013).
Dörfler, F., Chertkov, M. & Bullo, F. Synchronization in complex oscillator networks and smart grids. Proc. Natl. Acad. Sci. USA 110, 2005–2010 (2013).
Nocedal, J. & Wright, S. J. Numerical Optimization (Springer, 1999).
Chen, F., Sewlia, M. & Dimarogonas, D. V. Cooperative control of heterogeneous multi-agent systems under spatiotemporal constraints. Annu. Rev. Control 57, 100946 (2024).
Lee, J. G. & Shim, H. A tool for analysis and synthesis of heterogeneous multi-agent systems under rank-deficient coupling. Automatica 117, 108952 (2020).
Zheng, Y., Zhu, Y. & Wang, L. Consensus of heterogeneous multi-agent systems. IET Control Theory Appl. 5, 1881–1888 (2011).
Zhan, J. & Li, X. Flocking of multi-agent systems via model predictive control based on position-only measurements. IEEE Trans. Ind. Inform. 9, 377–385 (2012).
Nascimento, I. B., Rego, B. S., Pimenta, L. C. & Raffo, G. V. NMPC strategy for safe robot navigation in unknown environments using polynomial zonotopes. In Proc. IEEE Conference on Decision and Control, 7100–7105 (IEEE, 2023).
Sar, G. K. et al. Dynamics of swarmalators in the presence of a contrarian. Phys. Rev. E 111, 014209 (2025).
Ginelli, F. The physics of the Vicsek model. Eur. Phys. J. Spec. Top. 225, 2099–2117 (2016).
Leonard, N. E. et al. Collective motion, sensor networks, and ocean sampling. Proc. IEEE 95, 48–74 (2007).
Shi, F., Tuo, X., Ran, L., Ren, Z. & Yang, S. X. Fast convergence time synchronization in wireless sensor networks based on average consensus. IEEE Trans. Ind. Inform. 16, 1120–1129 (2019).
Battistelli, G. & Chisci, L. Stability of consensus extended Kalman filter for distributed state estimation. Automatica 68, 169–178 (2016).
Soatti, G., Nicoli, M., Savazzi, S. & Spagnolini, U. Consensus-based algorithms for distributed network-state estimation and localization. IEEE Trans. Signal Inf. Process. Netw. 3, 430–444 (2016).
Montanari, A. N., Duan, C., Aguirre, L. A. & Motter, A. E. Functional observability and target state estimation in large-scale networks. Proc. Natl. Acad. Sci. USA 119, e2113750119 (2022).
Meng, X. F., Van Gorder, R. A. & Porter, M. A. Opinion formation and distribution in a bounded-confidence model on various networks. Phys. Rev. E 97, 022312 (2018).
Redner, S. Reality-inspired voter models: a mini-review. Comptes Rendus Phys. 20, 275–292 (2019).
Bernardo, C. et al. Achieving consensus in multilateral international negotiations: the case study of the 2015 Paris Agreement on climate change. Sci. Adv. 7, eabg8068 (2021).
Crabtree, S. A., Wren, C. D., Dixit, A. & Levin, S. A. Influential individuals can promote prosocial practices in heterogeneous societies: a mathematical and agent-based model. PNAS Nexus 3, pgae224 (2024).
Wang, L. & Chen, B. Distributed control for large-scale plug-in electric vehicle charging with a consensus algorithm. Int. J. Electr. Power Energy Syst. 109, 369–383 (2019).
Yi, L. & Wei, E. Optimal EV charging decisions considering charging rate characteristics and congestion effects. IEEE Trans. Netw. Sci. Eng. 11, 5045–5057 (2024).
Cao, M., Morse, A. S. & Anderson, B. D. Agreeing asynchronously. IEEE Trans. Autom. Control 53, 1826–1838 (2008).
Zhang, W., Tang, Y., Huang, T. & Kurths, J. Sampled-data consensus of linear multi-agent systems with packet losses. IEEE Trans. Neural Netw. Learn. Syst. 28, 2516–2527 (2016).
Wang, B., Wang, J., Zhang, B. & Li, X. Global cooperative control framework for multiagent systems subject to actuator saturation with industrial applications. IEEE Trans. Syst. Man Cybern. Syst. 47, 1270–1283 (2017).
Pasqualetti, F., Bicchi, A. & Bullo, F. Consensus computation in unreliable networks: a system theoretic approach. IEEE Trans. Autom. Control 57, 90–104 (2011).
Montanari, A. N., Barioni, A. E. D., Duan, C. & Motter, A. E. Optimal flock formation induced by agent heterogeneity (this paper). Zenodo repository, https://doi.org/10.5281/zenodo.16920713 (2025).
Acknowledgements
The authors thank Pietro Zanin for insightful discussions on the eigenstructure of the Jacobian matrix. This work is supported by the U.S. Army Research Office (Grant No. W911NF-23-1-0102), National Science Foundation (Grant No. DMS-2308341), and Office of Naval Research (Grant No. N00014-22-1-2200). The authors also acknowledge support from the NSF-Simons National Institute for Theory and Mathematics in Biology (NSF Grant No. DMS-2235451 and Simons Foundation Grant No. MP-TMPS-00005320) and the use of Quest High-Performance Computing Cluster at Northwestern University.
Author information
Authors and Affiliations
Contributions
A.N.M., C.D., and A.E.M. designed the research; A.N.M. and A.E.D.B. developed the theory; A.N.M. performed the numerical simulations; A.N.M. and A.E.D.B. analyzed the data; A.N.M. led the writing of the manuscript; all authors contributed to the interpretation of the results and editing of the final version of the paper.
Corresponding author
Ethics declarations
Competing interests
The authors declare no competing interests.
Peer review
Peer review information
Nature Communications thanks the anonymous reviewers for their contribution to the peer review of this work. A peer review file is available.
Additional information
Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License, which permits any non-commercial use, sharing, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if you modified the licensed material. You do not have permission under this licence to share adapted material derived from this article or parts of it. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by-nc-nd/4.0/.
About this article
Cite this article
Montanari, A.N., Barioni, A.E.D., Duan, C. et al. Optimal flock formation induced by agent heterogeneity. Nat Commun 16, 9626 (2025). https://doi.org/10.1038/s41467-025-64233-0