Introduction

The relationship between brain structure and dynamics remains an open question in neuroscience. While it is hypothesized that structural connectivity shapes neural dynamics1,2,3,4, the precise nature of this interaction is still unclear. Unraveling how dynamic brain activity emerges from a relatively fixed structure is crucial for understanding brain function in health and disease. Hence, analyzing simulated brain dynamics on artificial networks is an important tool for unraveling this complex relationship.

Brain connectivity, or the connectome4, refers to the physical links between brain regions and is most accurately represented by the Structural Connectivity (SC) matrix, derived from imaging techniques5. The SC matrix indicates whether two regions are connected by axonal fibers4. On the other hand, brain activity measurements using techniques like EEG6 and fMRI7 reveal dynamic activity within each region. Pairwise correlations of these signals uncover sub-networks of stable, synchronized activity, forming what is known as Functional Connectivity (FC).

Interestingly, relationships between sub-networks observed in SC and in FC have been found8,9,10. However, while SC is a static representation, FC varies over time. This variation is interpreted as transitions between multiple states, defining a dynamical regime known as Functional Connectivity Dynamics (FCD)11,12,13. Hence, clarifying how the static brain structure supports the dynamical activity exhibited by the FCD is key for understanding brain function in health and disease.

Previous research on the causal relationship between structure and function shows that, at the local scale (cortical areas), the observed global dynamics may emerge from the noisy activity of neurons and synapses, as well as from their chaotic nature14,15,16,17,18. At the mesoscale (coupling between areas), global coupling15,19,20 and internode delays19,21 could be the mechanisms that drive the system toward different dynamical states. Nevertheless, while some studies have established a link between a given SC and its associated FCD10,21,22, a causal relationship between the topological properties of SC, namely integration and segregation, and the FCD’s dynamical characteristics, such as multistability or metastability, has not been systematically studied at the global scale (whole brain).

In this study, we investigated how SC’s integration and segregation correlate with FCD’s multistability and metastability. Using a Wilson-Cowan neural mass model over different network topologies, including a binarized version of the human connectome, we examined how the broad spectrum of structural segregation and integration, characterized by the Small-world index \(\omega\)23, shapes network dynamics. By means of mutual information, we then quantified the relationship between network structure and dynamics.

Results

Set of structural connectivities

We built a set of adjacency matrices that spans the integration-segregation axis. This set comprises five types of networks: Modular networks, Hierarchical networks, Watts-Strogatz small-world networks, Barabási-Albert scale-free networks, and a binarized Human connectome. Each of the four network-generating algorithms can be tuned to obtain a different degree of integration or segregation. The binarized Human connectome, following the Schaefer-200 parcellation, was also perturbed to obtain integrated or segregated versions (Fig. 1; see also Methods).

Small-world index \(\omega\) describes the network’s integration/segregation

To characterize a network’s degree of integration or segregation, several metrics were calculated. Briefly, a segregated network contains a large number of distinct communities or a lattice-like structure, whereas an integrated network can be characterized as a single large community in which all paths are short, as in a random network.

Segregation (clustering coefficient, modularity, and path length) and integration (global efficiency) metrics were calculated24 using the Brain Connectivity Toolbox for Python, bctpy (https://pypi.org/project/bctpy/).
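A minimal sketch of how these metrics can be computed with bctpy is shown below, assuming A is a binary, undirected adjacency matrix stored as a NumPy array; variable names are illustrative.

```python
# Sketch: structural metrics with bctpy for a binary, undirected adjacency matrix A.
import bct
import numpy as np

cc = bct.clustering_coef_bu(A).mean()      # mean clustering coefficient (segregation)
_, q = bct.modularity_und(A)               # modularity Q (segregation)
eff = bct.efficiency_bin(A)                # global efficiency (integration)
dist = bct.distance_bin(A)                 # shortest-path matrix
lam = bct.charpath(dist)[0]                # characteristic path length

print(f"CC={cc:.3f}  Q={q:.3f}  Eglob={eff:.3f}  L={lam:.2f}")
```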

In addition, Telesford’s Small-World metric25, \(\omega\), was calculated; the networks span a wide range of this metric (Fig. 2 A). We sought to evaluate whether \(\omega\) could serve to describe the degree of a network’s integration/segregation.

For this, we plotted each metric against ω (Fig. 2 B–E). We observed that the clustering coefficient decreases as ω increases, indicating a transition from locally clustered to more random structures (Fig. 2 B). Modularity, a measure of global segregation, also decreases with increasing ω (Fig. 2 C). Path length, which is inversely related to network integration, likewise decreases with ω (Fig. 2 D). Finally, global efficiency, which measures network integration, increases with ω (Fig. 2 E).

In summary, ω correlates inversely with the clustering coefficient, modularity, and path length25,26, while global efficiency correlates directly with ω27, showing a direct link between ω and the integration-segregation properties of the networks.

Figure 1

Network Types and Structural Properties: The structural connectivity matrix, graph layout, and degree distribution are shown for networks ranging from highly segregated to highly integrated. Watts-Strogatz networks (A–C) transition from highly ordered (segregated) to more randomized (integrated) structures as the rewiring probability p increases. Modular networks (D–F) show increasing inter-module connections, enhancing integration. Hierarchical networks (G–I) are constructed by iteratively adding inter-module connections in an ordered fashion to create a hierarchical structure. Power-law networks (J–L), generated with a modified Barabási-Albert algorithm, display variations in clustering by forming triangles. Finally, Human connectome networks (M–O) are modified to increase either segregation or integration using a custom algorithm. Axis labels for each plot are shown in panel O.

Figure 2

Integration and Segregation Metrics for different network topologies. (A) Network classification based on the Small-world index (\(\omega\)) into lattice, soft lattice, small world, soft random, and random categories. (B–D) Relationship of \(\omega\) with clustering coefficient, modularity, and path length, respectively. (E) Relationship of \(\omega\) with global efficiency. Lattice: \(\omega \le -0.75\); soft lattice: \(-0.75<\omega \le -0.25\); small world: \(-0.25<\omega \le 0.25\); soft random: \(0.25<\omega \le 0.75\); random: \(0.75<\omega\).

FCD is a proxy for network’s dynamical richness

To study how network topology determines its dynamics, we implemented a Wilson-Cowan neural mass model28 on each node. The model considers a homeostatic plasticity mechanism that allows a better exploration of the parameter space, as the nodes maintain their oscillatory behavior within a wide range of external inputs. The dynamics of all networks were characterized at different values of the coupling strength g, distributed logarithmically between 0.01 and 2.5. After filtering and down-sampling, a Hilbert Transform was used to obtain the phase and envelope of simulated signals in the 5–15 Hz band (Fig. 3 A). Time-resolved Functional Connectivity (FC) matrices were calculated as the pair-wise correlation between envelopes29, using a sliding window approach (window size = 2000 points, equivalent to 4 s; overlap = 75%; Fig. 3 A, bottom). Finally, the FCD matrix was calculated by computing the Euclidean distance between vectorized FCs (Fig. 3 B).
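A minimal sketch of the sliding-window FC and FCD computation is given below, assuming env is an (n_nodes, n_samples) array of band-limited envelopes sampled at 500 Hz, so that a 2000-sample window corresponds to 4 s; function and variable names are illustrative.

```python
# Sketch of the sliding-window FC and FCD computation described in the text.
import numpy as np
from scipy.spatial.distance import pdist, squareform

def sliding_fc(env, win=2000, overlap=0.75):
    step = int(win * (1 - overlap))
    iu = np.triu_indices(env.shape[0], k=1)
    fcs = [np.corrcoef(env[:, s:s + win])[iu]          # vectorized upper triangle of each FC
           for s in range(0, env.shape[1] - win + 1, step)]
    return np.array(fcs)                               # (n_windows, n_pairs)

def fcd_matrix(fcs):
    return squareform(pdist(fcs, metric="euclidean"))  # Euclidean distance between FCs
```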

Figure 3

Simulation Workflow. (A) Trace signals (gray) and signal envelopes are shown. In red, FC networks for different times are shown. (B) To represent network dynamics, the Functional Connectivity Dynamics (FCD) matrix was built by calculating the Euclidean distance between vectorized FCs.

The FCD matrix shown in Fig. 3 B represents how different (or similar) the FCs are through which the network transits. Therefore, the FCD is a representation of the network dynamics11. When the coupling between nodes is \(g\le 0.04\), the FCD appears almost uniformly green (Fig. 4, top), meaning that the difference between FCs is greater than zero and mostly constant: the network is always in different (un)synchronized states that are never revisited. On the other hand, when the coupling is \(g\ge 1.0\), the FCD matrix appears blue, meaning that the difference between FCs is zero and the synchronization pattern remains in a single static configuration throughout the simulation. Intermediate values of g cause the appearance of yellow and red patches in the FCD, indicating richer dynamics in which FCs with both larger and smaller differences are observed (Fig. 4, top).

To quantify the observed synchronization patterns, we used the variance of the off-diagonal values of the FCD matrix, Var(FCD). Var(FCD) is minimal at small or large g (indicating constant or fixed correlation patterns) and maximal when the FCD matrix has a patchy structure (Fig. 4, bottom).
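A corresponding sketch for Var(FCD), using the off-diagonal entries of the FCD matrix produced by the snippet above:

```python
# Var(FCD): variance of the off-diagonal entries of the FCD matrix.
import numpy as np

def var_fcd(fcd):
    off_diagonal = fcd[np.triu_indices_from(fcd, k=1)]   # exclude the zero diagonal
    return off_diagonal.var()
```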

Figure 4

Network Topology drives Network Dynamics. The FCD patterns (top) of segregated (left), intermediate (middle), and integrated (right) networks are shown. The coupling value is indicated at the top. Overall, intermediate networks are more dynamic, as their FC distances are higher (red patches). On the contrary, integrated networks exhibit less dynamics (green to blue patches). The variance of the FCD is shown in the bottom plots: the x-axis represents coupling strength and the y-axis the mean variance of the FCD over 10 network realizations. Notice that Watts-Strogatz networks yield the highest Var(FCD) values at intermediate or high levels of integration.

Network topology drives network dynamics

To assess the dynamical repertoire exhibited by each type of network along the integration/segregation continuum, we calculated Var(FCD) for each global coupling value and averaged the results over 10 different random seeds that governed the heterogeneity of each network. Figure 4 depicts the FCD matrices for three characteristic networks of each type at different values of g. At the bottom, the line plots summarize the average Var(FCD) over the explored coupling range. All networks showed maximum dynamical richness at intermediate values of global coupling, as previously established in simulation studies16,20. Also, networks classified as intermediate between segregation and integration show rich dynamics over a wider range of coupling values. Moreover, modular and hierarchical networks tend to show greater FC dynamics.

Network topology imposes dynamical richness

To further explore the dynamical variations due to SC, we analyzed two other parameters that characterize network dynamics: Synchrony and Metastability (\(\chi\)). Synchrony measures the phase synchrony of all signals in the network, ranging from no synchronization (0) to full synchronization (1). Metastability (\(\chi\)), on the other hand, measures the variability in time of the global synchrony30. Both measures are plotted along with Var(FCD) for all networks studied and across the whole range of global coupling in Fig. 5. In all cases, there is a strong correspondence between shallow (sometimes staggered) synchronization curves and higher Metastability and Var(FCD). On the contrary, steep synchronization curves (typical of random networks) correlate with very small values of the measures associated with dynamical richness. Metastability and Var(FCD) mostly coincide for all networks, with some differences, such as in Connectome-based networks, where the two measures peak at clearly different values of coupling.

Figure 5

Network topology modulates Dynamical parameters. Network behavior for three dynamical metrics (rows) and the different types of networks is shown. Colors represent different integration-segregation values, ranging from latticed to random (\(p=1.00\)). All values correspond to means over ten realizations.

Linking structural features to network dynamics

To establish a relationship between structure and dynamics, we reduced the data shown in Fig. 5 by calculating the slope of the phase synchrony curve, or the area under the curve for metastability and Var(FCD). This approach summarizes the dynamics of any network, across the studied range of global coupling (g), into a single value. Figure 6 illustrates the relationship between structural parameters and dynamical measures.

Figure 6

Relationship Between Structural Features and Network Dynamics. Phase synchrony slope and area under the curve for Metastability (\(\chi\)) and Var(FCD), plotted against clustering coefficient, global efficiency, \(\omega\), and modularity for all networks tested. Symbol shapes and colors denote the different types of networks, as shown in the legend. Vertical lines in the \(\omega\) plot denote the range considered as small-world (\(-0.25\); \(0.25\)). Networks with small-world characteristics (\(\omega\) close to zero) exhibit the highest dynamical richness, with Var(FCD) peaking at intermediate global coupling values. The plots highlight that structural features such as clustering coefficient, global efficiency, and modularity are key predictors of the dynamic behavior of networks, with small-world networks showing the optimal balance for dynamic flexibility.

For all networks, the slope of the fitted sigmoid for phase synchrony (see Methods) increased as the network transitioned from segregation to integration. Notice that, as the slope increases, i.e., as the transition to synchrony becomes sharper, metastability is reduced.

The segregation measures tested, Clustering Coefficient (CC) (Fig. 6, middle left) and Modularity (Q) (Fig. 6, middle right), both follow a monotonic, exponential-like relationship with \(\chi\). However, the Clustering Coefficient (CC) showed distinct behavior in Barabási-Albert (scale-free) networks compared to other network types. At higher CC values, modular and hierarchical networks displayed trends that differed from those of Watts-Strogatz and connectome-based networks. Similarly, the integration metric (Global Efficiency, \(\eta\)) and the small-worldness index (\(\omega\)) (Fig. 6, middle) showed differing patterns for modular and hierarchical networks relative to Watts-Strogatz and connectome networks. This similarity between modular and hierarchical networks is unsurprising, as hierarchical networks are inherently modular in their structure.

On the other hand, Var(FCD) exhibited a broad inverted “U”-shaped relationship with structural features (Fig. 6, bottom), peaking when the small-worldness index (\(\omega\)) was between 0 and 0.5. This indicates an optimal balance between integration and segregation. Var(FCD) declined sharply when integration increased (Global Efficiency) or segregation decreased (Modularity or Clustering Coefficient), emphasizing the need for segregated sub-networks to sustain dynamic synchronization patterns. Modularity (Q), a global segregation measure, better captured the increasing trend in Var(FCD) compared to the local Clustering Coefficient (CC), which showed deviations, particularly for Barabási-Albert networks. Notably, Watts-Strogatz and connectome networks experienced a significant drop in Var(FCD) at high segregation levels, unlike modular and hierarchical networks, which maintained high values and did not exhibit the inverted “U” trend.

Modularity is a predictor for network dynamics

Finally, to link structural parameters with dynamical properties, we calculated the Mutual Information (MI) between each structural parameter and \(\chi\) or Var(FCD) (Table 1 and Fig. 7). MI quantifies the relationship between two variables, X and Y, by measuring the amount of information about Y that can be gained by observing X31. This method provided a model-agnostic framework to identify the structural parameter that best predicts the associated dynamical properties.

Metastability (\(\chi\)) consistently shows higher MI values than Var(FCD), indicating that it is more strongly and predictably influenced by structural features. Var(FCD), on the other hand, is most effectively predicted by Efficiency and Modularity, highlighting their critical role in dynamical richness.

Among the structural metrics, Modularity (Q) exhibits the highest MI values for both \(\chi\) (t-test p<0.001, \(|D|>0.8\) for all pairs) and Var(FCD) (t-test p<0.001, \(|D|>0.8\) vs. CC and \(\omega\); \(D=0.74\) vs. \(\eta\)). This underscores modularity’s role in mediating transitions across dynamical states by balancing localized specialization and global integration. Additionally, the relatively lower MI of \(\omega\) with Var(FCD) compared to \(\chi\) indicates that, while small-worldness is important for metastable states, it plays a less prominent role in dynamic functional connectivity variability, suggesting that modularity provides a more direct representation of the large-scale structural organization required to sustain rich and complex network dynamics.

Overall, global measures like Modularity (Q) and Efficiency (\(\eta\)) emerge as the most informative metrics for understanding network dynamics, particularly for the more complex patterns captured by Var(FCD).

Figure 7

Mutual Information links network structure and dynamics. Bars show the mutual information that each structural metric shares with the dynamical metrics, for all types of networks. Modularity (Q) is the structural parameter that shares the most information with both dynamical parameters. Error bars represent the SEM.

Table 1 Mutual information values [nats] for structural and dynamical metrics. Values shown are \(\bar{X}\pm SEM\).

Discussion

In this study, we explored how network structure influences brain dynamics, focusing on metrics that quantify the integration-segregation balance, particularly modularity and small-worldness. We found that networks with an \(\omega\) index close to zero or slightly positive –indicating small-world characteristics– exhibited the most dynamic behavior, as quantified by the variance of the FCD matrix. The networks that deviate from this tendency were found to have a high global efficiency, i.e. to be highly integrated. Additionally, the ability to show multiple dynamic states was characterized by metastability \(\chi\)30. Our experiments showed that highly segregated networks are more prone to transit to different states than integrated ones.

Previous research has extensively investigated the relationship between network structure and dynamics across different scales. At the local scale, random neural fluctuations and chaotic activity have been linked to phenomena such as dynamical wandering, where the brain transitions between various stable states over time14,15,16,18,32. At the mesoscale, mechanisms such as coupling between cortical areas, sub-networks, and delays between nodes have been proposed to drive state transitions by modifying the brain’s intrinsic, likely chaotic, oscillatory regime19,20,21. These studies collectively highlight the influence of structural connectivity on neural dynamics.

Building on these findings, our study focuses on the global scale, demonstrating that the presence of modules within a network, along with their interconnectivity, plays a crucial role in modulating global dynamics. By using mutual information to quantify the relationship between structural and dynamical parameters, we identified modularity as a key structural feature for understanding, in a model-agnostic manner, how network structure shapes dynamical richness.

The relationship between structure and dynamics becomes evident when considering the level of modularity and interconnectivity in a network. At one extreme, networks composed of entirely disconnected modules exhibit flat dynamics, as limited communication between components leads to each module operating independently, with average distances between components too large to enable meaningful interaction. Conversely, enhancing communication between modules, for instance by reducing the mean path length, increases dynamical richness, enabling more complex interactions across the network. However, when interconnectivity becomes excessive, or modularity is absent as in random networks, the system transitions into a fully integrated state where all components synchronize, once again resulting in flat, homogeneous dynamics.

In addition to influencing dynamic richness, the presence of modules strongly correlates with high metastability. Networks with high modularity, such as modular or hierarchical networks, exhibit increased metastability, as the presence of distinct yet interacting modules creates conditions conducive to metastable regimes. This observation aligns with prior research demonstrating that modular organization promotes metastable dynamics33. Together, these findings underscore the critical role of modular structure in shaping both the richness and stability of network dynamics.

Our study also highlights the critical role of small-worldness in supporting the brain’s dynamical richness, i.e., its capacity to exhibit a wide range of dynamic states. The human connectome combines diverse topological characteristics, including small-world, modular, hierarchical, and power-law properties34,35,36,37. Among these, small-worldness stands out as particularly crucial because it balances integration (efficient global communication) and segregation (specialized local processing)38. Small-world networks, with their hallmark features of short path lengths and high clustering, uniquely facilitate both integration and segregation. Short path lengths enable rapid communication between distant brain regions, ensuring efficient global coordination, while high clustering supports local specialization within modules34,39. This dual capability surpasses the limitations of purely modular networks, which excel in segregation but lack efficient integration40, and power-law networks, which promote integration through highly connected hubs but lack strong local clustering. By combining these strengths, small-world networks provide the structural foundation for the brain’s dynamic flexibility and resilience. Reinforcing this idea, Watts-Strogatz networks are the ones that most closely follow the behavior of connectome networks, especially in the drop of Var(FCD) and metastability when the networks are too segregated.

Our analysis using Mutual Information (MI) shows that modularity is a stronger predictor of network dynamics compared to other structural metrics, such as small-worldness (\(\omega\)). While \(\omega\) is effective in representing the integration-segregation balance at a local level, modularity provides a clearer representation of the overall network architecture. Modularity describes the organization of distinct yet interconnected modules, which are closely tied to large-scale structural features that influence dynamic interactions. This characteristic makes modularity better suited to explain complex dynamic behaviors, including Metastability (\(\chi\)) and dynamic functional connectivity variability (Var(FCD)). In contrast, the small-worldness index (\(\omega\)), derived from the clustering coefficient (CC), emphasizes immediate node-level connections. Consequently, any limitations in MI observed for CC also affect \(\omega\), given their inherent connection. These findings suggest that modularity plays a key role in determining the dynamic properties of brain networks, offering insights that go beyond the capabilities of simpler metrics like clustering coefficient or global efficiency. The analysis illustrates how modular organization contributes to the complexity and adaptability of brain dynamics.

However, while these findings are promising, they should be interpreted with caution. One key limitation of our study is the exclusion of weighted networks, which restricts the generalization of our conclusions. Prior research has shown that variations in coupling strength between brain regions significantly influence overall dynamics, particularly in individuals with Alzheimer’s disease41, major depressive disorder (MDD)42, cognitive decline43, or as part of the aging process44. Future studies should investigate the role of coupling strength in shaping neural dynamics, particularly at the macroscale level.

Additionally, while the variance of functional connectivity dynamics (Var(FCD)) provides valuable insights into how brain region connectivity evolves over time, it may not fully capture the complexity of brain dynamics45. To gain a more comprehensive understanding of dynamical richness, it is crucial to investigate higher-order interactions (HOIs), which capture simultaneous relationships among multiple brain regions, moving beyond traditional pairwise correlations. This approach provides a more accurate depiction of the system’s dynamics and offers a broader perspective on its complexity46. Although the exact methodologies for studying HOIs are still under development47,48, several emerging approaches have shown promise in characterizing phenomena such as aging45, psychiatric disorders48, and topological properties of brain networks47. Using artificially generated networks, as in the present work, future studies may unveil which topological properties are more likely to sustain HOIs, in a similar manner as observed in empirical brain recordings.

Lastly, given the study’s focus on binarized networks and pairwise correlations, future research could extend these findings by investigating weighted networks, where variations in connection strength may provide a deeper understanding of how modularity influences dynamic behavior. Furthermore, HOI analysis could offer deeper insights into the complex mechanisms underlying brain dynamics, capturing dependencies beyond pairwise relationships and enriching our understanding of network function.

In conclusion, our findings suggest that highly modular networks are particularly adept at transitioning between distinct dynamical states, underscoring their crucial role in system segregation. On the other hand, the degree of interconnectivity between modules significantly shapes how networks navigate shifts between low and high activity levels. Also, small-world networks exhibited the most dynamic behavior, as evidenced by the variance of the Functional Connectivity Dynamics (FCD) matrix. Finally, a network’s modularity index may be used as a proxy for estimating its dynamics.

Methods

Networks

Networks of 240 nodes (200 for the human connectome) were created such that each network has a density of 0.075. This corresponds to an average degree of 18 connections per node in the 240-node networks and 15 connections in the 200-node networks. All networks are binary and undirected, i.e., the adjacency matrices are symmetric.

Watts-Strogatz networks

These networks were generated using the Watts-Strogatz algorithm26. The algorithm starts with 240 nodes arranged in a circular lattice, each node connected to its 18 nearest neighbors. Then, with probability \(p_r\), each connection is rewired to a randomly selected node within the network. The rewiring probability was varied from \(p_r=0\) (highly segregated lattice network) to \(p_r=0.5\) (highly integrated random network).
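A minimal sketch of this construction using NetworkX; the intermediate rewiring probabilities and the seed are illustrative choices, not the exact values used in the study.

```python
# Sketch: the Watts-Strogatz family via NetworkX.
import networkx as nx

n, k = 240, 18                                    # nodes, nearest neighbors
adjacency = {
    p_r: nx.to_numpy_array(nx.watts_strogatz_graph(n, k, p=p_r, seed=1), dtype=int)
    for p_r in (0.0, 0.05, 0.1, 0.25, 0.5)        # lattice -> progressively random
}
```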

Modular networks

This type of network consists of a set of topologically independent subnetworks (modules) connected to each other through a reduced number of inter-module links. 240 nodes were arranged in 8 modules of 30 nodes. Nodes within the modules were connected randomly with a density of 0.6. Then, with probability \(p_{inter}\), intra-module connections were replaced by inter-module connections, such that the degree of the initial modular network remains constant. The inter-module connection probability was varied between \(p_{inter}=5\times 10^{-4}\) (highly segregated) and \(p_{inter}=0.07\) (highly integrated).
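A sketch of this construction is shown below. It preserves the total number of edges when swapping intra- for inter-module links, and is intended as an illustration of the procedure rather than the exact algorithm used.

```python
# Sketch: modular network with dense random modules and probabilistic
# intra -> inter module edge swaps (illustrative parameters).
import numpy as np

def modular_network(n_modules=8, module_size=30, density=0.6, p_inter=0.01, seed=0):
    rng = np.random.default_rng(seed)
    n = n_modules * module_size
    A = np.zeros((n, n), dtype=int)
    for m in range(n_modules):                      # dense random modules
        idx = np.arange(m * module_size, (m + 1) * module_size)
        for i in idx:
            for j in idx:
                if i < j and rng.random() < density:
                    A[i, j] = A[j, i] = 1
    # swap a fraction of intra-module edges for inter-module ones
    for i, j in np.argwhere(np.triu(A, 1)):
        if rng.random() < p_inter:
            A[i, j] = A[j, i] = 0
            while True:
                a, b = rng.integers(0, n, size=2)
                if a // module_size != b // module_size and not A[a, b]:
                    A[a, b] = A[b, a] = 1
                    break
    return A
```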

Hierarchical networks

These networks were initially set up in a similar manner to the Modular Networks. 240 nodes were divided into 12 modules, and the nodes within each module were randomly connected with density 0.9; in this case, the modules were of variable size, between 16 and 24 nodes. The hierarchical structure was then implemented by iteratively connecting module pairs, with a probability that decreased as the hierarchy level increased. Finally, a number of random connections were added (replacing existing ones to preserve the degree, with probability ranging from 0 to 0.5) to gradually increase network integration.

Scale-free networks

These networks were generated using the Holme and Kim algorithm49, as implemented in the NetworkX Python package50. This algorithm generates graphs with a power-law degree distribution and an approximately prescribed average clustering coefficient. The algorithm begins with a small number of nodes (typically three) and grows the network by adding one node at a time. Each new node is connected to existing nodes with a preference for those with higher degrees (rich-gets-richer). Additionally, after choosing a node based on its degree, the algorithm may create a triangle (triad closure) to increase clustering. In this way, by progressively increasing or decreasing the average clustering coefficient, we obtained more segregated or integrated networks, respectively.
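A minimal sketch using NetworkX’s implementation of the Holme-Kim algorithm (powerlaw_cluster_graph); m = 9 edges per new node gives an average degree close to 18, and the triad-closure probabilities below are illustrative.

```python
# Sketch: Holme-Kim scale-free graphs with tunable clustering via NetworkX.
import networkx as nx

n, m = 240, 9
adjacency = {
    p_t: nx.to_numpy_array(nx.powerlaw_cluster_graph(n, m, p=p_t, seed=1), dtype=int)
    for p_t in (0.0, 0.25, 0.5, 0.75, 0.95)       # low -> high clustering
}
```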

Human connectome networks

The Enigma toolbox51 was used to obtain a Human Connectome Project (HCP) connectome parcellated according to the Schaefer 200 parcellation. The weighted connectome was binarized using a threshold that left 7.5% of connections.

To integrate the human connectome network, we used the randmio_und_connected function of the Brain Connectivity Toolbox (BCT) package implemented in Python (https://pypi.org/project/bctpy/). After determining an initial number of rewiring iterations, we applied the function iteratively, increasing the number of rewirings each time to move the network toward a randomized state with increased integration. The process stopped once a desired integration level or a set number of iterations was reached, ensuring that, while the network becomes more integrated, its degree distribution remains consistent.
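A sketch of this stepwise randomization with bctpy; the iteration schedule is illustrative, and A is assumed to be the binarized connectome as a NumPy array.

```python
# Sketch: progressively integrating the binarized connectome with bctpy's
# degree-preserving, connectedness-preserving rewiring.
import bct
import numpy as np

def integrate_connectome(A, schedule=(1, 2, 5, 10, 20)):
    versions, R = [], A.copy()
    for itr in schedule:
        R, _ = bct.randmio_und_connected(R, itr)   # rewire, preserving degrees
        versions.append(R.copy())                  # increasingly integrated copies
    return versions
```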

To segregate the human connectome network, we developed an algorithm based on node modularity. It starts by computing an agreement matrix, derives a consensus partition for the network, and then calculates the nodal participation coefficient and z-scored modularity for each node. The core operation iterates over nodes, prioritizing those with high modularity scores. For these nodes, it identifies and deletes certain intra-module connections (those within the same community) and establishes new inter-module connections (those outside its community). This rewiring aims to change up to three connections per iteration, preserving the graph’s structure while emphasizing nodes with the highest modularity values.

Structural metrics

Integration and segregation in the generated networks were quantified by several metrics. Global efficiency is an estimator of network integration, defined as the average of the inverse shortest path lengths52. The clustering coefficient is a local measure of segregation, effectively counting how often two neighbors of a node are also connected to each other, forming triangles53. Modularity measures segregation more globally, relying on the initial detection of modules and then measuring the balance between intra-module and inter-module links54. Finally, the small-world index \(\omega\) measures the integration/segregation balance by contrasting the Global Efficiency and Clustering Coefficient of each network with those of its corresponding latticed or random version26.
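For reference, Telesford’s formulation25 of the index is

$$\begin{aligned} \omega = \frac{L_{rand}}{L}-\frac{C}{C_{latt}}, \end{aligned}$$

where \(L\) and \(C\) are the characteristic path length (inversely related to global efficiency) and clustering coefficient of the network, and \(L_{rand}\) and \(C_{latt}\) are those of matched random and lattice surrogates (obtainable, for instance, with bctpy’s randmio_und and latmio_und routines).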

Dynamical model

Neural Mass Model: Wilson-Cowan with Plasticity

The neural dynamics of each constituent node in the obtained SC networks were simulated using an oscillatory neural mass model, the Wilson-Cowan model55, with the incorporation of an inhibitory synaptic plasticity (ISP) mechanism28. The activity in neural populations is governed by the equations28

$$\begin{aligned} \tau _e\frac{dE_k(t)}{dt}=&-E_k(t)+(1-r_{e}E_k(t))\,S\big (c_{ee}E_k(t)-c_{ie}^k(t)I_k(t)+P \\&\quad +G\sum _{j=1}^nW_{jk}E_j(t)+D \big ) \end{aligned}$$
(1)
$$\begin{aligned} \tau _i\frac{dI_k(t)}{dt}=-I_k(t)+(1-r_{i}I_k(t))\,S(c_{ei}E_k(t)), \end{aligned}$$
(2)

where \(E_k\) and \(I_k\) correspond to the average firing rates of the excitatory and inhibitory populations in the \(k^{th}\) brain region (node in the network), respectively. \(\tau _e\) and \(\tau _i\) are the excitatory and inhibitory time constants, \(c_{ab}\) is the local connection strength from population a to population b, and P is the excitatory input constant. \(r_e\) and \(r_i\) are parameters accounting for the refractory period of firing neurons. Long-range connections \(W_{jk} \in \lbrace 0,1 \rbrace\) from region j to region k are multiplied by a global coupling constant G. D corresponds to additive noise, drawn from a normal (Gaussian) distribution with zero mean and standard deviation \(D=0.002\). Indices k and j run across the total number of nodes n. The nonlinear response function S is a sigmoid given by:

$$\begin{aligned} S(x)=\frac{1}{1+e^{-\frac{x-\mu }{\sigma }}} \end{aligned}$$
(3)

where \(\mu\) and \(\sigma\) are the location and slope parameters of the sigmoid, respectively.

The ISP mechanism is modeled as a change of the local inhibitory synaptic weight \(c_{ie}^k\), which depends on the activity of both the local excitatory and inhibitory populations, according to

$$\begin{aligned} \tau _{isp}\frac{dc_{ie}^k(t)}{dt}=I_k(t)(E_k(t)-\rho ) \end{aligned}$$
(4)

where \(\tau _{isp}\) is the learning rate and \(\rho\) is the target excitatory activity level; the initial value is \(c_{ie}^k(0)=3.75\).

Parameters used are: \(c_{ee}=3.5\), \(c_{ei}=2.5\), \(P=0.4\), \(\tau _e=0.01\), \(\tau _i=0.02\), \(\mu =1\), \(\sigma =0.25\), \(\rho =0.125\), \(\tau _{isp}=2\), \(r_e = r_i = 0.5\).

Simulation

Based on the obtained sets of networks, the neural dynamics of each network were simulated using the Wilson-Cowan oscillatory neural mass model with plasticity at each node. The global coupling strength G was varied over a logarithmically spaced range of values \(\lbrace 0,\ldots ,2.512\rbrace\).

The Euler-Maruyama method for stochastic differential equations was used, with an integration time step of \(dt = 0.0001\) s. Simulations were run for \(t = 102\) s, preceded by a noiseless transient of \(t_{trans} = 50\) s to ensure stabilization of the ISP.

To introduce heterogeneity in the obtained oscillatory signals among network nodes, the value of the excitatory input constant P was assigned randomly to each node, within a range of \(0.3-0.5\). The simulation parameters used were adjusted to ensure that the obtained signals exhibited oscillatory behavior. This simulation protocol was repeated 10 times (10 random seeds).
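A compact sketch of the integration scheme described above, with parameter names following Eqs. (1)–(4). The 50 s noiseless transient and other implementation details are omitted, and the default coupling value is arbitrary, so this is illustrative rather than the exact simulation code.

```python
# Sketch: Euler-Maruyama integration of the Wilson-Cowan model with ISP for one
# network. W is the binary adjacency matrix; parameters follow the Methods section.
import numpy as np

def simulate(W, G=0.5, T=102.0, dt=1e-4, seed=0,
             tau_e=0.01, tau_i=0.02, tau_isp=2.0, rho=0.125,
             c_ee=3.5, c_ei=2.5, r_e=0.5, r_i=0.5,
             mu=1.0, sigma=0.25, D=0.002):
    rng = np.random.default_rng(seed)
    n = W.shape[0]
    P = rng.uniform(0.3, 0.5, size=n)            # heterogeneous excitatory input
    E, I = rng.uniform(0, 0.1, n), rng.uniform(0, 0.1, n)
    c_ie = np.full(n, 3.75)                      # plastic inhibitory weight, Eq. (4)
    S = lambda x: 1.0 / (1.0 + np.exp(-(x - mu) / sigma))   # sigmoid, Eq. (3)
    steps, keep = int(T / dt), 20                # store every 20th sample (0.5 kHz)
    out = np.empty((steps // keep, n))
    # NOTE: the 50 s noiseless transient used to stabilize the ISP is omitted here.
    for t in range(steps):
        noise = D * rng.standard_normal(n)       # additive Gaussian noise, SD = D
        drive = c_ee * E - c_ie * I + P + G * (W @ E) + noise
        dE = (-E + (1 - r_e * E) * S(drive)) / tau_e        # Eq. (1)
        dI = (-I + (1 - r_i * I) * S(c_ei * E)) / tau_i     # Eq. (2)
        dc = I * (E - rho) / tau_isp                        # Eq. (4)
        E, I, c_ie = E + dt * dE, I + dt * dI, c_ie + dt * dc
        if t % keep == 0:
            out[t // keep] = E
    return out.T                                 # (n_nodes, n_samples) at 0.5 kHz
```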

Analysis

Signal preprocessing

Every simulated time series was subsampled to 0.5 kHz and filtered with a band-pass filter (4th-order Bessel, \(f_{low}=5\) Hz, \(f_{high}=15\) Hz). The Hilbert transform was then applied to obtain the instantaneous phase and envelope of the slow oscillation. The first and last second of each simulation were discarded to remove artifacts that the Hilbert transform may generate at the edges. These signals were used for the dynamical analysis.
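A minimal sketch of this preprocessing chain with SciPy; it is an illustrative helper, not the authors’ code, and x is assumed to be an (n_nodes, n_samples) array already subsampled to 0.5 kHz (e.g., as stored by the simulation sketch above).

```python
# Sketch: 5-15 Hz band-pass (Bessel) and Hilbert transform for phase and envelope.
import numpy as np
from scipy.signal import bessel, sosfiltfilt, hilbert

def phase_and_envelope(x, fs=500, band=(5.0, 15.0)):
    sos = bessel(4, band, btype="bandpass", fs=fs, output="sos")
    xf = sosfiltfilt(sos, x, axis=-1)             # zero-phase band-pass filtering
    analytic = hilbert(xf, axis=-1)
    trim = fs                                     # drop first and last second
    return (np.angle(analytic)[:, trim:-trim],    # instantaneous phase
            np.abs(analytic)[:, trim:-trim])      # envelope
```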

Dynamical metrics

Functional connectivity (FC)

The statistical dependency of neural signals was measured through a Functional Connectivity (FC) matrix56. Every element of this matrix corresponds to the pairwise envelope correlation between nodes j and k over a time window of duration W (\(W_{time} = 4\) s, overlap = 75%). The window length of 2000 samples (4 s) was chosen as a compromise between resolving changes on the scale of seconds and properly capturing the slow components (around 0.6 Hz) present in the envelope signal.

Functional connectivity dynamics (FCD)

The Functional Connectivity Dynamics (FCD) matrix, which displays the dynamical repertoire of the system, was calculated as the Euclidean distance between the vectorized lower triangles of \(FC_t\) and \(FC_{t+\tau }\).

Synchrony

To assess the overall synchrony of the network, the Kuramoto order parameter R(t) was calculated:

$$\begin{aligned} R(t) = \left| \frac{1}{N}\sum _{k=1}^N e^{i\phi _k(t)} \right| \end{aligned}$$
(6)

where i is the imaginary unit and \(\phi _k(t)\) is the instantaneous phase of the k-th node.

Metastability

Metastability30 was assessed by calculating the variance of R(t):

$$\begin{aligned} \chi = \frac{1}{\tau }\sum _{t=1}^{\tau }\left( R(t) - \langle R \rangle _T \right) ^2 \end{aligned}$$
(7)

where \(\tau\) is the number of time points of R(t), and \(\langle R \rangle _T\) is the time average of the global synchrony.
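A brief sketch of both quantities, assuming phase is an (n_nodes, n_samples) array of instantaneous phases (e.g., from the preprocessing step above):

```python
# Sketch: Kuramoto order parameter R(t) and metastability (variance of R over time).
import numpy as np

def kuramoto_R(phase):
    return np.abs(np.exp(1j * phase).mean(axis=0))   # R(t), one value per sample

def metastability(phase):
    return kuramoto_R(phase).var()                   # chi = Var_t[R(t)]
```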

Multistability

Multistability refers to the coexistence of numerous stable states in a system30. As such, we used the variance of the FCD, Var(FCD), as a proxy for quantifying multistability16.

Metrics summarization

Phase synchrony

To summarize the phase synchronization curve into a single value, a sigmoid function was fitted to each network’s phase response (averaged over 10 seeds):

$$\begin{aligned} f(x) = \frac{1}{1+e^{-k(x-x_0)}} \end{aligned}$$

where k is the slope of the function and \(x_0\) is its midpoint. The slope of each fitted function was then plotted against each structural metric.
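A sketch of this fit with SciPy’s curve_fit; g and sync are assumed to be 1-D arrays holding the coupling values and the seed-averaged phase synchrony, and the starting guess is illustrative.

```python
# Sketch: fitting the sigmoid and extracting its slope k.
import numpy as np
from scipy.optimize import curve_fit

def sigmoid(x, k, x0):
    return 1.0 / (1.0 + np.exp(-k * (x - x0)))

def synchrony_slope(g, sync):
    (k, x0), _ = curve_fit(sigmoid, g, sync, p0=(1.0, np.median(g)), maxfev=10_000)
    return k
```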

Multi and metastability

To describe these metrics with a single value, the area under the curve (AUC) over the coupling range was calculated using Simpson’s rule, via SciPy’s simpson function. Again, the average over 10 seeds was used.

$$\begin{aligned} AUC = \int _{g_{value}} f(g)dg \end{aligned}$$

where f(g) corresponds to the multistability or metastability curve.
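A sketch of this summary using SciPy’s simpson; y is assumed to hold the metastability or Var(FCD) values sampled at the coupling values g.

```python
# Sketch: area under the curve across the explored coupling range.
import numpy as np
from scipy.integrate import simpson

def auc_over_coupling(g, y):
    order = np.argsort(g)                 # simpson expects ordered sample points
    return simpson(y[order], x=g[order])
```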

Mutual information

The relationship between structural and dynamical parameters was evaluated using Mutual Information.

$$\begin{aligned} MI(X, Y) = \sum _{x \in X} \sum _{y \in Y} p(x, y) \log \left( \frac{p(x, y)}{p(x) p(y)} \right) \end{aligned}$$

where \(p(x, y)\) is the joint probability distribution, and \(p(x)\) and \(p(y)\) are the marginal distributions of \(X\) and \(Y\), respectively.

For each structural parameter across the entire network set, Mutual Information with respect to multi- or metastability was computed using the scikit-learn function mutual_info_regression (https://scikit-learn.org/).

To ensure reliable estimates, Mutual Information values were resampled31 using bootstrapping (N=43, B=2000), and the standard error of the mean (SEM) was obtained from the 95% confidence interval. The normality of the bootstrap distributions was then assessed via the Kolmogorov-Smirnov test against a Gaussian distribution. Given significant variance differences (F-test, df1=df2=1999, p<0.001), mean differences were evaluated with Welch’s t-test for unpaired samples. Effect sizes were computed using Cohen’s d57.
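A sketch of this procedure with scikit-learn and a simple bootstrap; x and y are assumed to be 1-D NumPy arrays holding one structural and one dynamical summary value per network, and the SEM recovery from the 95% confidence interval assumes an approximately Gaussian bootstrap distribution.

```python
# Sketch: bootstrapped mutual information between a structural and a dynamical metric.
import numpy as np
from sklearn.feature_selection import mutual_info_regression

def mi_bootstrap(x, y, B=2000, seed=0):
    rng = np.random.default_rng(seed)
    n = len(x)
    mi = np.empty(B)
    for b in range(B):
        idx = rng.integers(0, n, size=n)                   # resample with replacement
        mi[b] = mutual_info_regression(x[idx].reshape(-1, 1), y[idx],
                                       random_state=0)[0]
    lo, hi = np.percentile(mi, [2.5, 97.5])                # 95% confidence interval
    sem = (hi - lo) / (2 * 1.96)                           # SEM assuming normality
    return mi.mean(), sem
```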