Introduction

Understanding how brain functions arise from brain architecture and neuronal activity is a central question in neuroscience. Over the years, many theories have attempted to shed light on this problem. One of these is the Critical Brain Hypothesis (CBH), which proposes that brain networks operate close to critical points or, more generally, phase transitions. The idea is that, for the network behavior emerging from the interacting cells to be functional, the ‘brain state’ must lie at or near critical regions of the parameter space1. Critically operating systems typically display properties such as optimal computational performance for complex tasks2, efficient memory usage3, wide dynamic range4,5,6, and healthy behavior7,8. Approaching the brain from the CBH standpoint allows one to draw on the extensive knowledge of emergent collective phenomena in other systems and on the well-known tools and methods of statistical physics1,9.

Originally, the CBH was related to the concept of Self-Organized Criticality (SOC)10. In SOC models, dynamical systems with extended spatial and temporal degrees of freedom naturally evolve to a critical state that is marginally stable and displays activity avalanches with power-law distributions of size and duration. The concept of SOC was readily applied to model earthquakes, and it was later shown that the model could equivalently be applied to networks of leaky integrate-and-fire (LIF) neurons11. The theory gained support with empirical evidence of power-law avalanches in neuronal tissue12.

However, it later became clear that SOC is not strictly applicable to brain networks because they are not conservative13. The extension of the theory to non-conservative systems is called self-organized quasicriticality (SOqC)14,15. Furthermore, SOC requires a complete separation of time scales between the stimulus that triggers an avalanche and the spreading of the avalanche itself13. If the separation of time scales is also relaxed, the problem can be tackled within the framework of the recently proposed Homeostatic Criticality (HC)16,17,18.

How can a neuronal network reach and stay near a critical region regardless of perturbations? A biologically plausible solution is a homeostatic mechanism based on negative feedback from the level of activity of the network: if the activity is high, the neurons’ coupling (or excitability) decreases; if the activity is low, the coupling (or excitability) increases. This can be achieved, among other possibilities, through dynamical synapses or dynamical neuronal gains. The Levina-Herrmann-Geisel (LHG) synaptic dynamics is a popular HC solution19. A dynamic neuronal gain mechanism, on the other hand, has the advantage of requiring fewer parameters20,21: for a fully connected network of N neurons, there are \({{\mathcal{O}}}({N}^{2})\) synapses, but only N neuronal gains.

The network topology plays a crucial role in determining the behavior of its dynamics. Experimental evidence suggests that the cerebral cortex is organized in a hierarchical modular (HM) fashion22,23,24,25. In the context of network theory, an HM network can be defined as a set of nodes segregated into distinct modules, with groups of modules, in turn, segregated into larger modules, and these larger modules being segregated into even larger modules, and so on. Each stage in this organization of modules within modules constitutes a hierarchical level, and at each hierarchical level, connections between nodes are denser within a module than between modules. Modeling studies have indicated some beneficial properties of this type of structural organization for the system’s functioning: it would prevent network activity overload26,27, provide greater stability to self-sustained activity28,29,30,31, and facilitate the propagation of information throughout the network32.

Most HC models have been studied using networks with simplified topologies, specifically homogeneous excitatory networks16,19,20,21, excitatory/inhibitory networks18,33,34 or sparse networks18. The few models that explored more complex topologies, such as HM networks, have used simple dynamical elements to represent neurons, such as cellular automata30. Here, we study critical behavior in HM neuronal networks of discrete-time stochastic excitatory LIF neurons with homeostatic adaptive mechanisms. We consider HM networks with three distinct types of connectivity between neurons within the same module (intramodular topology), all with undirected connections: (i) a sparse, randomly connected Erdős-Rényi (ER) network with pairwise connection probability ϵ; (ii) a sparse, regularly connected network with K neighbors per neuron (KN); and (iii) a fully connected (FC) network. The first two are constructed by initially creating a single module of neurons and then subdividing it into smaller modules at successive levels28,29,31,32, while the latter is built by first forming fully connected neuron modules and then linking them into larger modules30. All three neuronal networks have two homeostatic mechanisms that make the critical region an attractor of the dynamics: (i) dynamical gains and (ii) dynamical synapses. We characterize the size and duration of avalanches displayed by the models and examine the scaling relationship between these quantities for each topology, comparing it to theoretical predictions35.

In this study, we investigate how hierarchical modular network topology influences the emergence of critical behavior in networks of stochastic spiking neurons with homeostatic adaptation. We show that sparse modular architectures, particularly those based on random and regular connectivity, more effectively sustain critical dynamics across multiple hierarchical levels, while fully connected architectures tend to drift into supercritical regimes. Our analysis reveals that homeostatic mechanisms can stabilize near-critical activity even in the presence of structural heterogeneities, and that such heterogeneities give rise to quasicritical dynamics and Griffiths-like phases30. These findings suggest that hierarchical modular organization plays a key role in shaping and sustaining the collective dynamics of neuronal networks near criticality.

Results

Hierarchical network architecture

The ER and KN networks were constructed by first establishing a single module of N neurons. This module is assigned the initial hierarchical level H = 0, and additional modules are progressively created by subdividing it into smaller modules at increasing hierarchical levels. This process results in a nested structure, where intramodular connections are denser than intermodular connections28,29,31,32. The key difference between the two networks lies in their connectivity rules: in the ER network, each pair of neurons is connected with probability ϵ, making the degree of each neuron a random variable with average value 〈K〉 = ϵ(N − 1) ≈ ϵN36. In contrast, the KN network maintains a fixed degree of K = 40 for all neurons. The following algorithm generates ER networks of hierarchical levels H > 0 (see Fig. 1; a code sketch follows the list):

1. Randomly divide each module into two modules of equal size: for each module M in the network, split M into two new modules M1 and M2 of equal size.

2. Replace intermodular connections with probability R: for each connection (i, j) where i and j belong to different modules,

(a) randomly select a neuron k from the same module as i and replace the connection (i, j) with (i, k);

(b) randomly select a neuron \({k}^{{\prime} }\) from the same module as j and replace the connection (j, i) with \((j,{k}^{{\prime} })\).

3. Recursively apply steps 1 and 2 for hierarchical levels H = 1 to Hmax; after the iteration at level H, the network has \({2}^{H}\) modules.

4. Stop when H = Hmax.
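To make the construction concrete, the following Python sketch implements the top-down ER procedure under our reading of the algorithm above. It is an illustration, not the authors’ code; the function name and the edge-set/module-list layout are our assumptions.

```python
import random

def build_hm_er(N, eps, R, H_max, seed=0):
    """Hypothetical helper: top-down hierarchical modular ER network."""
    rng = random.Random(seed)
    # H = 0: a single Erdos-Renyi module with pairwise probability eps.
    edges = {(i, j) for i in range(N) for j in range(i + 1, N)
             if rng.random() < eps}
    modules = [list(range(N))]
    for _ in range(H_max):
        # Step 1: split every module into two halves at random.
        new_modules = []
        for m in modules:
            rng.shuffle(m)
            new_modules += [m[:len(m) // 2], m[len(m) // 2:]]
        modules = new_modules
        module_of = {v: i for i, m in enumerate(modules) for v in m}
        # Step 2: rewire each intermodular edge with probability R,
        # pulling each endpoint back into its own module (net +1 link).
        for (i, j) in list(edges):
            if module_of[i] != module_of[j] and rng.random() < R:
                edges.discard((i, j))
                k = rng.choice([v for v in modules[module_of[i]] if v != i])
                edges.add((min(i, k), max(i, k)))
                kp = rng.choice([v for v in modules[module_of[j]] if v != j])
                edges.add((min(j, kp), max(j, kp)))
    return edges, modules
```

With the parameter values used below (ϵ = 0.01, R = 0.9), a call such as build_hm_er(2**11, 0.01, 0.9, 4) yields a nested structure like the one sketched in Fig. 1.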

Fig. 1: Examples of Erdős-Rényi networks generated by the top-down method.

a Schematic representation of the network for H = 0, 2, 3 and 4. In the figure, the network has \(N={2}^{11}\) neurons. b Adjacency matrices for the corresponding networks in a. Each dot represents a connection between two neurons. Connections are denser between topologically close modules than between distant ones. The corresponding K neighbors per neuron and fully connected networks look essentially identical in structure and appearance.

The algorithm for KN networks is similar, with the following addition to item 2 above: choose the neurons m and \({m}^{{\prime} }\) to which k and \({k}^{{\prime} }\) are already connected and delete those connections. The neurons m and \({m}^{{\prime} }\) will be the next ones in their respective modules to receive redirected intermodular connections.

The values adopted for ϵ and R were 0.01 and 0.9, respectively. This algorithm increases the connection density per module as the hierarchical level increases. For the ER network, each rewiring replaces one intermodular link with two intramodular links, increasing the total number of connections (undirected links) by one unit per rewired connection, whereas for the KN network the additional deletion step preserves the total number of connections.

Unlike the ER and KN networks, the FC network is built by first forming fully connected neuron modules, which are then progressively combined into larger modules through intermodular connections, resulting in a layered and nested structure30. In this case, we start with a network at hierarchical level H = 0 by splitting the N neurons into fully connected modules of size \({2}^{H}{G}_{0}\). Then, follow the algorithm below to generate FC networks of higher hierarchical levels:

1. Cluster the existing modules into pairs of larger modules.

2. Establish connections between neurons in each pair of modules by checking all possible \({4}^{H}{G}_{0}^{2}\) undirected connections between them and creating a link with probability \(\alpha {p}^{H+1}\), without repetitions, where H is the current hierarchical level, 0 < p < 1, and α ≥ 1 is a constant. If, after trying all possible connections between two modules, no connection is created, repeat the process until at least one connection is established.

3. Recursively apply steps 1 and 2 to build networks of higher hierarchical levels (H = 1, 2, 3, …). At each hierarchical level H, the network has \(N/{2}^{H+1}\) modules.

Neurons in different modules are thus recursively clustered into higher-level blocks by intermodular links established with a level-dependent probability, as sketched below. We used α = 1 and p = 1/4 to ensure that the number of connections between modules remains constant across hierarchical levels37.
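A corresponding sketch for the bottom-up FC construction follows (again ours, with hypothetical names); note the retry loop that guarantees at least one link per module pair.

```python
import random

def build_hm_fc(N, G0, alpha, p, H_max, seed=0):
    """Hypothetical helper: bottom-up hierarchical modular FC network."""
    rng = random.Random(seed)
    modules = [list(range(g, g + G0)) for g in range(0, N, G0)]
    edges = set()
    for m in modules:  # H = 0: fully connected modules of size G0
        edges |= {(i, j) for i in m for j in m if i < j}
    for H in range(H_max):
        prob = alpha * p ** (H + 1)   # level-dependent link probability
        merged = []
        for a, b in zip(modules[0::2], modules[1::2]):  # pair the modules
            while True:  # retry until the pair is linked at least once
                new = {(min(i, j), max(i, j))
                       for i in a for j in b if rng.random() < prob}
                if new:
                    break
            edges |= new
            merged.append(a + b)
        modules = merged
    return edges, modules

# alpha = 1 and p = 1/4 keep the expected number of intermodular links per
# pair constant across levels: 4**H * G0**2 * (1/4)**(H + 1) = G0**2 / 4.
edges, modules = build_hm_fc(N=2**11, G0=4, alpha=1.0, p=0.25, H_max=4)
```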

Stochastic neuron model for HM networks

We consider a network of discrete-time stochastic LIF neurons16,20,21. The state of each neuron is characterized by its membrane potential Vi[t] at time step t (where [t] denotes discrete time). The evolution of the membrane potential is governed by:

$${V}_{i}[t+1]=\left\{\begin{array}{ll}\mu {V}_{i}[t]+{I}_{i}[t]+\frac{1}{N}\mathop{\sum }_{j=1}^{N}{W}_{ij}{X}_{j}[t],\quad &{{\rm{if}}}\ {X}_{i}[t]=0\\ {V}_{R},\quad &{{\rm{if}}}\ {X}_{i}[t]=1,\end{array}\right.$$
(1)

where Xi[t] is a binary variable indicating whether neuron i fires at time t: Xi[t] = 1 if neuron i spikes, and Xi[t] = 0 otherwise. When a neuron fires, its membrane potential is reset to the resting potential VR (here, VR = 0). Additionally, each neuron i that is postsynaptic to a firing neuron j receives a potential increment determined by the synaptic weight Wij.

If neuron i does not fire, its potential decays towards zero at each time step by a factor μ ∈ [0, 1], representing the effect of a leakage current. Moreover, neuron i may receive an external input Ii[t], which represents the influence of stimuli from outside the network.

The stochastic nature of firing is described by a firing function Φ(Vi), which provides the probability of neuron i firing at time t based on its current membrane potential Vi. This model is equivalent to the LIF neuron model with escape noise38,39,40, where the stochasticity of firing is inherent to the neuron’s dynamics and is governed by Φ(V). Such stochastic LIF models have been employed in various recent studies41,42,43,44,45,46.

The function Φ is any monotonically increasing function with the boundary conditions Φ(x) = 0 for x < 0 and Φ(x) → 1 as x → +∞. To avoid artificial deterministic cycles (see20), we choose the following rational form for Φ:

$$\Phi ({\Gamma }_{i}[t]{V}_{i}[t])=\frac{{\Gamma }_{i}[t]{V}_{i}[t]}{1+{\Gamma }_{i}[t]{V}_{i}[t]}\,\Theta ({V}_{i}),$$
(2)

where Γi[t] represents a time-dependent homeostatic gain (discussed below), and Θ(x) is the Heaviside step function.
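For concreteness, one discrete-time update of Eqs. (1) and (2) can be written as the following NumPy sketch. The function name and array layout are our assumptions, and the full synaptic matrix is used without exploiting the HM structure.

```python
import numpy as np

def step(V, W, Gamma, I, mu, rng):
    # Firing probabilities from Eq. (2); Theta(V) makes Phi = 0 for V <= 0.
    phi = np.where(V > 0, Gamma * V / (1.0 + Gamma * V), 0.0)
    X = (rng.random(V.size) < phi).astype(float)   # stochastic spikes
    # Membrane update from Eq. (1) for the non-firing branch...
    V_new = mu * V + I + (W @ X) / V.size
    V_new[X == 1] = 0.0                            # ...and reset to V_R = 0
    return V_new, X

rng = np.random.default_rng(0)
N = 1024
V = rng.random(N)
W = np.ones((N, N))  # homogeneous weights, for illustration only
V, X = step(V, W, np.ones(N), np.zeros(N), mu=0.0, rng=rng)
```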

Homeostatic criticality and LHG dynamics

Homeostatic criticality explains how neuronal networks can operate near criticality without requiring fine-tuned external parameters. Instead, adaptive mechanisms such as synaptic depression/recovery and neuronal gain adaptation dynamically regulate local parameters, stabilizing the system in a quasicritical state despite fluctuations17.

The Levina-Herrmann-Geisel (LHG) model exemplifies this concept by integrating short-term synaptic plasticity within a fully connected network, demonstrating that such mechanisms can induce quasicriticality autonomously19. In this framework, stochastic oscillations stabilize the system near criticality, balancing power-law-distributed neuronal avalanches with rare large-scale events known as “dragon kings”16.

A key feature of homeostatic criticality is its ability to sustain quasicritical behavior by separating timescales: slow adaptation of coupling strengths and firing thresholds counterbalance external inputs, allowing the network to maintain functional robustness and optimize information processing, storage, and transmission18,33.

Mean-field approximation for HM networks

The study of SOC in neuronal networks often relies on topology-based connection rules. In hierarchical modular (HM) networks, an increase in the number of modules leads to a higher density of intramodular connections. To analyze this, we propose a mean-field approximation that considers the number of modules while assuming uniform connectivity across the network.

In a typical mean-field approximation, the activity of the network is characterized by the density of firing neurons, defined as \(\rho [t]=(1/N){\sum }_{j = 1}^{N}{X}_{j}[t]\), which acts as the order parameter of the system. For HM networks, we instead define the order parameter as the activity per module, via \(\rho [t]/M=(1/N)\mathop{\sum }_{j = 1}^{N}{X}_{j}[t]\), where M denotes the number of modules in the network. For networks constructed using the top-down approach, this definition leads to:

$$\rho [t]=\frac{{2}^{H}}{N}\mathop{\sum }_{j=1}^{N}{X}_{j}[t].$$
(3)

For networks constructed using the bottom-up approach, the corresponding expression is

$${\rho }_{BU}[t]=\frac{1}{{2}^{H+1}}{\sum }_{j=1}^{N}{X}_{j}[t].$$

Throughout the remainder of this section, Eq. (3) will be used to represent ρ[t].

Our focus is on the second-order phase transition that occurs when there is no external input, that is, Ii[t] = 0. Since the universality class of the phase transition remains unchanged for different values of the leakage parameter μ, we set μ = 0 for simplicity47,48. We will call the model with Ii[t] = μ = 0 the ‘static model’.

In addition, in the mean-field limit, we consider only the average values of the synaptic weights and homeostatic gains, W = 〈Wij〉 and Γ = 〈Γi〉. Hence, Eq. (1) reads \(V[t+1]={2}^{-H}W\rho [t]\). The density ρ[t] can be calculated using the probability density P[t](V) of the potential V at time t:

$$\rho [t]=\int\Phi (V){P}_{[t]}(V)\,dV,$$
(4)

where P[t](V) dV is the fraction of neurons with potential in the range [V,  V + dV] at time t. Neurons that fire between t and t + 1 have their potential reset to zero. They contribute to P[t+1](V) a Dirac impulse at potential V = 0 with amplitude equal to the firing fraction \({2}^{-H}\rho [t]\) (cf. Eq. (3)).

The evolution of ρ[t] in the general case has been thoroughly explored elsewhere20,21. In the following, we derive the map for ρ[t] close to stationarity. After a transient in which all neurons spike at least once, Eq. (1) leads to a voltage distribution with two Dirac peaks, that is, \({P}_{[t]}(V)={2}^{-H}\rho [t]\delta (V)+\left[1-{2}^{-H}\rho [t]\right]\delta (V-{2}^{-H}W\rho [t])\). Inserting this together with Eq. (2) into Eq. (4) yields

$$\rho [t+1]=\frac{\Gamma W\rho [t]\left[1-{2}^{-H}\rho [t]\right]}{1+{2}^{-H}\Gamma W\rho [t]}.$$
(5)

The factor \(1-{2}^{-H}\rho [t]\) is the fraction of silent neurons in an HM network with \({2}^{H}\) modules at the previous time step. Equation (5) defines a static model because W and Γ are fixed control parameters. Notice that for H = 0, Eq. (5) gives ρ[t + 1] = ΓWρ[t](1 − ρ[t])/(1 + ΓWρ[t]), a result previously derived for nonmodular networks16.

Equation (5) has two stationary states: an absorbing state ρ0 = 0, which is stable for ΓW < 1 (unstable for ΓW > 1), and a firing state given by:

$${\rho }^{* }={2}^{H-1}\left(\frac{\Gamma W-1}{\Gamma W}\right),$$
(6)

which is stable for ΓW > 1. This means that the critical line ΓcWc = 1 is independent of the number of modules. It is important to mention that for H = 0, Eq. (6) yields ρ* = (ΓW − 1)/(2ΓW), which is the non-trivial firing state found for a complete graph network model16.
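As a quick numerical sanity check of Eqs. (5) and (6), one can iterate the map directly; the minimal sketch below is ours, with arbitrary parameter values.

```python
def mf_map(rho, Gamma, W, H):
    # One iteration of Eq. (5); f is the fraction of firing neurons.
    f = 2.0 ** (-H) * rho
    return Gamma * W * rho * (1.0 - f) / (1.0 + Gamma * W * f)

rho, Gamma, W, H = 1e-3, 1.0, 1.2, 2
for _ in range(10_000):
    rho = mf_map(rho, Gamma, W, H)
# Converges to the firing state of Eq. (6) when Gamma * W > 1:
print(rho, 2 ** (H - 1) * (Gamma * W - 1) / (Gamma * W))  # both ~ 0.3333
```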

The order parameter \(\rho \sim {(W-{W}_{c})}^{\beta }\) has the critical exponent β = 1, indicating that the absorbing phase transition belongs to the mean-field directed percolation universality class. The critical line remains unchanged regardless of the hierarchical level H. As the hierarchical level increases, stationary firing rates decrease because the mean-field approximation ignores correlations and internal structure, reducing the activity to an average over the number of modules.

When the parameters are set along the critical line ΓcWc = 1, the system should exhibit power laws for the distributions of avalanche size and duration, with mean-field exponents of  − 3/2 and  − 2, respectively20. It is crucial to note that these exponents apply only to homogeneous networks.

Homeostatic mechanisms

In this paper, we examine two HC mechanisms. The first is the LHG model for synaptic plasticity19, defined by the following equation:

$${W}_{ij}[t+1]={W}_{ij}[t]+\frac{1}{\tau }(A-{W}_{ij}[t])-u{W}_{ij}[t]{X}_{j}[t],$$
(7)

where τ represents the synaptic recovery time, A denotes the baseline or asymptotic value of the synaptic weight, and u is the fraction of the synaptic weight that is depressed after a firing event. This equation describes the decrease in synaptic strength after synaptic discharge due to depletion of neurotransmitter vesicles, as well as the subsequent slow recovery process.

The second mechanism is the dynamic neuronal gain model20,21, given by

$${\Gamma }_{i}[t+1]={\Gamma }_{i}[t]+\frac{1}{\tau }(A-{\Gamma }_{i}[t])-u{\Gamma }_{i}[t]{X}_{i}[t],$$
(8)

where τ is the recovery time for neuronal gain, A represents the asymptotic gain level, and u indicates the fraction of neuronal gain lost after a firing event. This model captures the dynamics of spike frequency adaptation, which is influenced by the inactivation of sodium channels at the axon initial segment during depolarization and their gradual recovery afterward49,50.
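Both mechanisms are elementwise updates. The following NumPy sketch of Eqs. (7) and (8) is ours; the array shapes are assumptions.

```python
import numpy as np

def update_homeostasis(W, Gamma, X, tau, A, u):
    # Eq. (7): synapses from neurons that just fired (columns j) are
    # depressed by a fraction u and recover towards A on time scale tau.
    W = W + (A - W) / tau - u * W * X[np.newaxis, :]
    # Eq. (8): the gain of each neuron that fired drops by a fraction u
    # and recovers towards A on the same time scale.
    Gamma = Gamma + (A - Gamma) / tau - u * Gamma * X
    return W, Gamma
```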

Stability analysis for networks with LHG dynamic gains

In the case of the LHG gain dynamics, using W = 1, the static model has Γc = 1/W = 1. Averaging over the HM network with \({2}^{H}\) modules (again, considering top-down models), the map for ρ[t] is now given by

$$\rho [t+1]=\frac{\Gamma [t]\rho [t]\left(1-{2}^{-H}\rho [t]\right)}{1+{2}^{-H}\Gamma [t]\rho [t]},$$
(9)
$$\Gamma [t+1]=\Gamma [t]+\frac{1}{\tau }\left(A-\Gamma [t]\right)-{2}^{-H}u\Gamma [t]\rho [t].$$
(10)

This system of equations has two fixed points: the trivial absorbing-state fixed point (ρ0,  Γ0) = (0,  A), which is stable for A < 1; and a non-trivial fixed point (ρ*,  Γ*). At the stationary state, the firing fraction is \({2}^{-H}{\rho }^{* }\), which implies that

$${\rho }^{* }=\frac{{2}^{H}\left(A-1\right)}{2A+\tau u},$$
(11)
$${\Gamma }^{* }=\frac{2A+\tau u}{2+\tau u}={\Gamma }_{c}+\frac{2(A-1)}{2+\tau u}.$$
(12)

If A = 1, this fixed point is exactly the critical point: \(\left({\rho }^{* },\,{\Gamma }^{* }\right)=\left(0,\,{\Gamma }_{c}\right)\). However, setting A = 1 by hand is not allowed in self-organizing systems. Thus, we must consider A > 1, so the condition for reaching the critical region is τu → ∞, which implies \(\left({\rho }^{* },\,{\Gamma }^{* }\right)\to \left(0,\,{\Gamma }_{c}\right)\). In other SOC models, this limit corresponds to an infinite separation of time scales15,51.

The modularity of the network influences the stationary state and significantly impacts the stability of the ρ[t] map, as we will explore further. For H = 0, we recover the order parameter, dependent on the parameters A, τ, and u, found by Kinouchi et al.16: \({\rho }_{H = 0}^{* }=(A-1)/(2A+\tau u)\). As H increases, the order parameter grows as a multiple of \({\rho }_{H = 0}^{* }\). Writing the initial value as \({\rho }_{H = 0}^{* }=({\Gamma }^{* }-{\Gamma }_{c})/(2{\Gamma }^{* })\), a recurrence formula shows the effect of each hierarchical level on the order parameter of the next: \({\rho }_{H}^{* }=2{\rho }_{H-1}^{* }\).
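Iterating the coupled map, Eqs. (9) and (10), confirms the fixed point of Eqs. (11) and (12) numerically; the sketch below is ours, with arbitrary parameter values.

```python
def lhg_map(rho, Gamma, H, tau, A, u):
    # One iteration of Eqs. (9)-(10) with W = 1.
    f = 2.0 ** (-H) * rho
    rho_next = Gamma * rho * (1.0 - f) / (1.0 + Gamma * f)
    Gamma_next = Gamma + (A - Gamma) / tau - u * Gamma * f
    return rho_next, Gamma_next

H, tau, A, u = 2, 1000.0, 1.1, 0.1
rho, Gamma = 1e-2, A
for _ in range(200_000):            # trajectory spirals into the focus
    rho, Gamma = lhg_map(rho, Gamma, H, tau, A, u)
print(rho, 2 ** H * (A - 1) / (2 * A + tau * u))   # vs. Eq. (11)
print(Gamma, (2 * A + tau * u) / (2 + tau * u))    # vs. Eq. (12)
```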

After conducting a linear stability analysis on the fixed point (see Supplementary Methods), we determined that it represents a stable focus. The magnitude of the complex eigenvalues is

$$| {\lambda }^{+}{\lambda }^{-}| = \left\{\frac{{2}^{-H}(2+\tau u)\left[2A+\tau u+{2}^{-H}(2+\tau u)\right]-{\left[A-1+{2}^{-H}(2+\tau u)\right]}^{2}}{{2}^{H}{(1+A+\tau u)}^{2}}\right\}\times \\ \times \left[1-\frac{1}{\tau }-\frac{u(A-1)}{2A+\tau u}\right]+\frac{{2}^{-H}u(A-1)(2+\tau u)}{(2A+\tau u)(1+A+\tau u)}.$$
(13)

For large τu, we get:

$$| {\lambda }^{+}{\lambda }^{-}| ={2}^{-2H}\left[1-\frac{2(A-1)-{2}^{H}u(A-1)+Au}{\tau u}\right]+O({(\tau u)}^{-2})\,.$$
(14)

The map can be close to a Neimark-Sacker critical point for large τu, but this proximity depends on the hierarchical level H. When H = 0, the map is very close to a Neimark-Sacker bifurcation for large τu, where a limit cycle would appear. The stable focus can then be perturbed by demographic noise, leading to stochastic oscillations in finite-size systems16. For H > 0 and large τu, however, the fixed point is no longer a focus at the border of indifference: its distance from marginal stability is set by the hierarchical level of the network, as seen in Eq. (14). In Fig. 2, we present the moduli of the eigenvalues λ± as a function of τ for several values of A and u and different hierarchical levels.

Fig. 2: Modulus of λ± as a function of the recovery time for neuronal gain, τ, for several values of the asymptotic level A and of the depression fraction u.

a From top to bottom, hierarchical levels H = 1, 2, 3 and 4, and A = 1. b From top to bottom H = 1, 2, 3 and 4, for u = 0.1. c For H = 1, from top to bottom A = 1.0, 1.5, 2.0 and 2.5. d From top to bottom u = 0.01, 0.1 and 1.0. For this set of parameters, the system moves away from the Neimark-Sacker critical line as H increases, and it becomes robust to stochastic oscillations, as seen in Eq. (14).

The rate at which asymptotic stability is achieved depends on the hierarchical level: as H increases, the fixed point becomes progressively more stable, moving away from marginal stability. An increase in the parameters A and u leads to an unstable focus characterized by stochastic oscillations, as shown in Fig. 2a, b. Notably, the stable focus exhibits enhanced stability compared to the case H = 0 (refer to Fig. 2c, d); specifically, the system resists stochastic oscillations near the Neimark-Sacker bifurcation point. The system lies on the critical Neimark-Sacker line when \(| {\lambda }^{\pm }| \equiv \sqrt{| {\lambda }^{+}{\lambda }^{-}| }=1\). From Eq. (14), and after some algebraic manipulation, we obtain

$${2}^{H}=\left(\frac{A-1}{2\tau }\right)\left[1+\sqrt{1+\frac{4\tau u(A+\tau )}{u{(A-1)}^{2}}-\frac{8\tau }{u(A-1)}}\right].$$
(15)

For this expression to have a physical meaning, the argument of the square root must be non-negative. We define the critical hierarchical level of the network as

$${H}_{c}={\log }_{2}\left\{\left(\frac{A-1}{2\tau }\right)\left[1+\sqrt{1+\frac{4\tau u(A+\tau )-8\tau (A-1)}{u{(A-1)}^{2}}}\right]\right\}.$$
(16)

It is important to note that no parameter set with (A,  u,  τ) > 0 makes Hc satisfy the condition \(| {\lambda }^{+}{\lambda }^{-}| =1\) at a physically meaningful hierarchical level. Therefore, no critical hierarchical level can bring the dynamical system onto the Neimark-Sacker bifurcation line.

Neuronal avalanches and scaling relations

We perform computational simulations at the critical parameters Wc = 1, Γc = 1, and μ = 0 for various values of H in order to identify neuronal avalanches. Following a transient period of \({10}^{4}\) time steps, we define the size of an avalanche as the number of firing events, s, that occur between two consecutive states of zero activity. After reaching a zero-activity state, a neuron is randomly selected to initiate further activity. An avalanche that begins at discrete time t = t0 and ends at t = tf has a duration d = tf − t0 and size \(s={2}^{-H}N\mathop{\sum }_{t = {t}_{0}}^{{t}_{f}}\rho [t]\). Denoting S as a random variable for avalanche size and s as a specific observed value, we find that the avalanche size distribution follows a power law, \({P}_{S}(s)\equiv P(S=s)\propto {s}^{-{\tau }_{s}}\), where τs is the critical exponent.
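Operationally, avalanches can be read off the spike-count time series as maximal runs of nonzero activity between zeros. The helper below is a sketch of ours, not the authors’ analysis code.

```python
import numpy as np

def avalanches(activity):
    # activity[t] = number of neurons firing at time step t.
    sizes, durations = [], []
    s = d = 0
    for n in activity:
        if n > 0:
            s += n          # accumulate firing events
            d += 1          # accumulate time steps
        elif d > 0:         # a zero-activity state ends the avalanche
            sizes.append(s)
            durations.append(d)
            s = d = 0
    return np.array(sizes), np.array(durations)
```

The exponent τs can then be estimated from a fit to the empirical size distribution.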

This implies that at the critical point, network activity must display peaks between two consecutive periods of zero activity (see Fig. 3). Finite-size fluctuations lead to increased network activity as the number of neurons decreases in a network of the same hierarchical level (Fig. 3a).

Fig. 3: Time series of the order parameter for hierarchical modular networks with three topologies at the critical point.

a Network activity decreases as the number of neurons increases within each hierarchical modular level (H = 1). b As the number of modules increases, the intramodular connectivity also increases, leading to a higher number of spiking neurons. c Network activity in a hierarchical modular Erdős-Rényi network with three distinct rewiring probabilities R. Network activity was measured after a transient period of \({10}^{4}\) time steps.

In contrast, large avalanches with extended duration occur when the number of modules increases, particularly in the FC-HM network, as shown in Fig. 3b. In this case, the high density of both intramodular and intermodular connections elevates the firing rate, causing smaller avalanches to merge into larger ones.

The probability of rewiring, R, influences the density of connections within modules. However, its impact on overall network activity is minimal, as demonstrated in Fig. 3c. Figure 4 gives the size and duration profiles of the neuronal avalanches. For all three topologies, the distribution curves follow a power-law pattern.

Fig. 4: Power-law distributions of neuronal avalanche sizes and durations.

a Distribution of avalanche sizes for a network with N = 8192 neurons across various hierarchical levels. The dashed reference line follows a power law \(\propto {s}^{-{\tau }_{s}}\), with the exponent for avalanche sizes, τs ≈ 1.5, indicating criticality. For both Erdős-Rényi (ER) and K neighbors per neuron (KN) topologies, the distributions for all hierarchical levels collapse onto a single curve. b Distribution of avalanche durations for different hierarchical levels H with N = 8192 neurons. The dashed reference line follows a power law \(\propto {d}^{-{\tau }_{d}}\), with the exponent for avalanche durations, τd ≈ 2.0, also suggesting criticality. The reference lines were obtained using least-squares regression. In all cases, the long tails in the distributions are due to large-scale, long-duration avalanches, known as “dragon kings”, which suggests the system is in a slightly supercritical state.

In the ER and KN networks, the distributions of neuronal avalanche sizes (and their durations) collapse onto the same curve, regardless of the network’s hierarchical level. In these sparsely connected topologies, activity propagation is primarily governed by the parameter ϵ.

Due to the low density of connections in both network types, activity tends to propagate only through specific regions. In the KN-HM network, when a neuron fires, it transmits signals to its 40 immediate neighbors. However, the spread of activity is limited by the fixed set of connections for each neuron, potentially resulting in fewer synapses compared to the ER-HM network, where each neuron can randomly connect to many others.

Because the average degree in the ER network is higher than in the KN network, larger avalanches are more likely to occur in the ER-HM network. This difference explains the distinct tails observed in the avalanche size distributions for the ER and KN networks, as shown in Fig. 4a.

The critical exponent for avalanche sizes in these networks aligns with that of mean-field directed percolation. In contrast, the critical exponent for avalanche durations deviates from the mean-field value, reflecting the influence of the hierarchical modular topology. As usual in SOC models, neuronal avalanches are computed in the absence of external inputs or with adjustments to the firing threshold that offset external inputs and render them negligible18,33.

However, this condition does not hold in the cerebral cortex. Conservative estimates suggest that synaptic currents occur at a minimum frequency of 5 Hz34, implying that a cubic millimeter of cortex, which contains approximately 10,000 neurons, receives around 50,000 inputs per second. In that context, the system is clearly not in a quiescent state52.

In a non-hierarchical modular topology displaying SOC behavior, neuronal activity is sufficiently strong to trigger avalanches but not so strong as to interfere with ongoing ones. This indicates that the timescales of neuronal activity and avalanches are separable, with neuronal firing periods being longer than avalanche durations.

In contrast, in HM networks, external inputs such as intermodular connections can merge two previously separate avalanches within a module. Frequent occurrences of this phenomenon can alter the distribution of avalanche sizes, leading to larger avalanches and extending the tails of the distributions. This also results in shallower power-law slopes, reflected in a decrease in the exponent values. It is important to note that this phenomenon affects both the sizes and durations of avalanches. Consequently, the critical exponents for ER and KN networks would be relatively higher in the absence of an HM network topology.

In the FC-HM network, the distributions of the avalanche sizes and durations are markedly different, as shown in Fig. 4 (right column). The power-law distributions no longer exhibit a collapse onto a single curve, and the slope gradually decreases with increasing hierarchical levels. The high intramodular connectivity in the FC network, combined with relatively low intermodular connectivity, leads to the emergence of much larger avalanches. When activity begins within a single module, it initially propagates within that module before spreading to neighboring modules. This intramodular propagation allows large avalanches to form before they extend intermodularly, resulting in amplified avalanche sizes.

Within the network, certain regions may display local parameters that differ significantly from the global average, allowing these regions to sustain activity even when the rest of the network is quiescent or exhibits reduced activity. In disordered systems, such “rare” regions can remain active despite the overall network being inactive on average. This behavior is characteristic of Griffiths phases, which represent an intermediate phase between active and inactive states30.

In the Griffiths phase regime, critical phenomena occur over a broad range of parameters rather than converging at a single critical point30. FC-HM networks inherently exhibit structural disorder, creating regions of increased connectivity and enhanced propagation rates. These conditions promote the development of rare regions capable of sustaining prolonged activity, thereby manifesting Griffiths phase behavior across a wide range of parameter values.

Because the avalanche distributions become noisy for large sizes s and durations d, we instead analyze the complementary cumulative distribution function (CCDF), \({C}_{X}(x)\equiv P(X\ge x)=\mathop{\sum }_{k = x}^{\infty }{P}_{X}(k),\,X\in \{s,d\}\), which gives the probability of observing an avalanche with size or duration at least s or d. Figure 5 displays the CCDFs for avalanche size and duration corresponding to those in Fig. 4. In both the ER and KN networks, the CCDFs across all hierarchical levels collapse onto a single power-law curve, \({C}_{S/D}(s/d)\propto {(s/d)}^{-{\tau }_{s/d}^{{\prime} }}\), over wide scaling ranges. Although the expected relation \({\tau }_{s/d}^{{\prime} }={\tau }_{s/d}-1\)53 is not exactly observed (likely due to finite-size effects), the overall scaling remains consistent.
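The empirical CCDF used here is straightforward to compute (a minimal sketch of ours):

```python
import numpy as np

def ccdf(samples):
    # C(x_k) = P(X >= x_k) = (number of samples >= x_k) / n
    x = np.sort(np.asarray(samples))
    return x, 1.0 - np.arange(x.size) / x.size

# e.g. x, c = ccdf(sizes); a straight line in a log-log plot of (x, c)
# indicates the power-law tail C_S(s) ~ s**(-tau_s_prime).
```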

Fig. 5: Complementary cumulative distribution functions (CCDFs) of neuronal avalanche sizes and durations.

a Avalanche sizes. The CCDF for avalanche sizes is shown for the three network topologies. The cyan dashed line indicates a power-law decay \({C}_{S}(s)\propto {s}^{-{\tau }_{s}^{{\prime} }}\). b Avalanche durations. The CCDF for avalanche durations is likewise displayed for the three topologies. The red dashed line represents a power-law scaling \({C}_{D}(d)\propto {d}^{-{\tau }_{d}^{{\prime} }}\).

In contrast, the CCDF curves for the FC network (Fig. 5, third column) do not converge, suggesting that the high density of intramodular connections in FC networks produces pronounced avalanches that alter the scaling behavior. This finding reinforces the conclusion that the ER and KN networks operate near criticality, while the deviations observed in the FC network, especially at higher hierarchical levels, arise from its distinctive connectivity pattern.

It is widely acknowledged that power-law distributions in system dynamics do not necessarily indicate criticality. Conversely, genuinely critical systems may not always exhibit power laws in their measurable quantities. A more stringent test for criticality is the verification of the crackling noise scaling relation35:

$$({\tau }_{d}-1)/({\tau }_{s}-1)=\gamma ,$$
(17)

where γ describes how the mean avalanche size scales with duration, 〈s〉(d) ~ dγ. The exponent γ is related to the fundamental scaling exponent 1/σνz in renormalization-group theory, which governs how avalanche sizes grow with time. The scaling relation (17) follows directly from universality and renormalization-group arguments. Thus, if a system is truly critical, its measured exponents (τd, τs and γ) must satisfy this relation.

Figure 6 illustrates the scaling relation for the three network topologies across various values of H, with N = 8192. The red line represents the best-fit scaling relation obtained using least-squares regression, where the slope corresponds to the value of γ. For the ER and KN networks, the system is critical with γ ≈ 1.53, consistent with the “crackling noise relation”. In contrast, in the FC network, as the hierarchical level H increases, the occurrence of large avalanches drives the system into a self-sustained state, moving it away from the critical point.
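In practice, γ can be estimated by regressing \(\log \langle s\rangle (d)\) on \(\log d\). The helper below is a sketch of ours; the lower duration cutoff d_min is an arbitrary choice.

```python
import numpy as np

def crackling_gamma(sizes, durations, d_min=4):
    # Mean avalanche size conditioned on duration, <s>(d) ~ d**gamma.
    d_vals = np.unique(durations)
    d_vals = d_vals[d_vals >= d_min]       # drop the smallest, noisy bins
    mean_s = np.array([sizes[durations == d].mean() for d in d_vals])
    gamma, _ = np.polyfit(np.log(d_vals), np.log(mean_s), 1)
    return gamma

# Criticality check (Eq. (17)): gamma should match (tau_d - 1)/(tau_s - 1);
# the mean-field values would give (2 - 1)/(1.5 - 1) = 2.
```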

Fig. 6: Scaling relationship between the average avalanche size and its duration.

a For different hierarchical levels (columns) with a fixed number of neurons, N = 8192, the scaling exponent γ does not change significantly across the three topologies (rows). b The plot of γ versus H shows that the scaling relationship remains largely constant as the hierarchical level H increases, remaining within the error margins.

The ER and KN networks exhibit critical behavior, but their critical exponents deviate from those of the mean-field directed percolation universality class. The universality class of brain dynamics remains an open question. To address this, Muñoz and colleagues introduced the concept of self-organized quasicriticality (SOqC), proposing that externally driven neuronal networks do not self-tune to criticality but instead display apparent criticality14,15. In this framework, true scale invariance is absent, and proper scaling requires fine-tuning. SOqC challenges the notion that neuronal networks inherently self-organize toward criticality, suggesting instead that deviations observed in experiments may arise from insufficient tuning of the external drive rather than from an intrinsic quasicritical state.

In contrast, quasicriticality, as formulated by Williams-García et al. (2014), describes a dynamical regime in which neuronal networks operate near, but not exactly at, criticality due to ongoing external drive54. Rather than requiring fine-tuning, quasicritical systems dynamically adjust to external input while still obeying approximate scaling laws. Although critical exponents may vary, they remain consistent with dynamical scaling relations, suggesting an adaptive mechanism that enables neuronal networks to retain critical-like properties under fluctuating conditions54,55.

While quasicriticality explains how neuronal systems can maintain near-critical behavior without fine-tuned homeostasis, the mechanisms that keep them in the quasicritical region remain unclear. We propose that hierarchical modular (HM) topology, which enhances neuronal activity within modules via intermodular connections, may regulate the likelihood of spontaneous neuronal activation. This structural feature may control the system’s responsiveness to external inputs, determining its proximity to criticality and shaping neuronal network dynamics55. Our results suggest that network structure plays a key role in maintaining quasicriticality as an emergent, adaptive property of neuronal systems.

Discussion

This study explores how network topology influences the critical behavior of stochastic neurons. We investigated a hierarchical modular topology, which reflects several key features of the cerebral cortex, such as hierarchical connectivity, modularity, plasticity, and self-organization, using three different intramodular structures: sparse and random (ER), sparse and regular (KN), and fully connected (FC) networks.

Traditionally, there are two main approaches to studying the intrinsic behavior of neuronal networks: (1) networks with complex architectures (heterogeneous networks), such as those with hierarchical modular structures, but with highly simplified neuronal dynamics (e.g., cellular automata and contact processes); and (2) networks with complex neuronal dynamics, including homeostatic mechanisms that regulate network activity to maintain a critical state. In this study, we propose an approach that combines both complex dynamics and complex network architecture to better understand the fundamental behavior of such systems.

In conventional SOC models, typically studied in homogeneous networks, the system is static and exhibits a phase transition from an absorbing state to an active state. In the mean-field approximation, such static systems are described by a one-dimensional map, where the density of active sites serves as the order parameter. This phase transition is characterized by a transcritical bifurcation, with the critical point acting as an unstable equilibrium (an indifferent node). This indifferent equilibrium supports scale-invariant fluctuations in the order parameter, such as scale-invariant avalanches, but does not produce stochastic oscillations.

Phase transitions in neuronal networks can exhibit both first- and second-order characteristics, each with distinct implications for criticality. First-order transitions involve abrupt shifts between activity states, often associated with bistability and pathological events such as epileptic seizures. These transitions typically arise from saddle-node bifurcations and are preceded by bursts and avalanches of activity56,57. In contrast, second-order transitions involve smooth changes in activity, characterized by scale-invariant neuronal avalanches consistent with the critical brain hypothesis (CBH). Such transitions are often linked to Hopf bifurcations, where oscillatory dynamics emerge or vanish depending on input strength20.

Modularity plays a crucial role in both cases: During first-order transitions, it helps confine abrupt shifts to specific modules, preventing global instability. During second-order transitions, modularity supports the propagation of cascades that adhere to critical scaling laws, ensuring functional diversity across the network56,57.

In adaptive self-organized criticality (aSOC), which is also typically studied in homogeneous networks, homeostatic processes make the critical point an attractor for the system. Here, the control parameter becomes a variable that depends on the network’s activity, resulting in a system better represented by a two-dimensional map.

We performed a fixed-point stability analysis of the stochastic neuron model with LHG dynamics, accounting for the network’s heterogeneity, including its hierarchical levels and modular structure. Analytical results were obtained to first order in the large-τ limit. Notably, even for large τ, the eigenvalue moduli were found to depend on the hierarchical level of the network. The parameter H, which represents the hierarchical level, increases the damping of the system, thereby moving it away from the point of marginal stability and making it more resistant to stochastic fluctuations that could drive the system to the absorbing state. Despite this behavior, the critical exponent of the order parameter remains consistent with the mean-field directed percolation universality class.

However, finite-size fluctuations can cause the network to approach the critical point where neuronal avalanches occur. The parameter ϵ, which is related to connection density and network connectivity, directly influences the size and duration of avalanches, thereby affecting the scaling relationship between them. In sparse topologies, the low density of connections modulates avalanche size by limiting their spread throughout the network. Our model shows how network modularity mediates the impact of intermodular input on the density of firing neurons within each module, influencing both the size and duration of avalanches. In a more realistic neuronal network model, quiescent periods for neurons are rare but still present. The scaling law indicates that the system reaches a critical state at low connection densities and low H, even as the external input to the network increases with higher hierarchical levels.

Our findings suggest that hierarchical modularity strengthens the robustness of quasicritical behavior and shapes how the network transitions through critical regimes. Specifically, modularity may serve as a structural regulator, balancing first- and second-order dynamics in neuronal systems. This stabilizing role aligns with the idea of Griffiths phases in neuronal networks30, where structural inhomogeneities extend the range of critical-like behavior. In this context, hierarchical modularity enhances the network’s stability near criticality, making it less prone to perturbations that would drive it away from this regime. However, when the density of connections becomes too large, modularity alone is no longer sufficient to maintain criticality, ultimately pushing the network into a supercritical state. A related perspective emerges from percolation models, where sparse intermodular connectivity acts as an external magnetic (or ghost) field, gradually shifting the system away from threshold behavior58. This interplay between modularity, connectivity density, and criticality suggests that hierarchical organization not only stabilizes quasicriticality but also defines the conditions under which self-organized phase coexistence can be sustained in neuronal systems. In conclusion, we propose that heterogeneous networks combined with adaptive self-organized criticality dynamics can exhibit critical behavior. Future work will explore how even more heterogeneous networks, such as hierarchical modular networks of stochastic excitatory-inhibitory neurons, may also display criticality.