Introduction

The brain’s cognitive capabilities, including perception, reasoning, learning, and decision-making, arise from the complex interplay among neuronal circuits. These circuits possess topologies and connectivity patterns specialized for dedicated computations. Neuronal network architecture is known to shape neuronal spiking activity1,2,3,4,5,6,7,8, yet the degree to which the spikes of individual neurons are diagnostic of this architecture remains an underexplored frontier. In this study, we venture into this frontier, making a pivotal discovery that opens the way for inferring functional statistical features of network topology from the spiking dynamics of its constituent neurons.

Faced with the limitations of brain sensing – such as the inability to continuously monitor all neurons in a brain region or identify their exact connectivity with each other – several critical questions arise:

  1. Can we establish mathematical metrics to discern the structure of underlying networks and subnetworks solely from neuronal spiking behavior? The network topology determines signal propagation, which, given the recurrent structure of brain circuits, gives rise to multiscale spiking dynamics. We therefore hypothesize that the higher-order statistics of observed neuronal spiking dynamics can shed light on the unobserved network architecture.

  2. Is it possible to develop mathematical metrics for inferring neuronal networks that are resilient to variations in circuit inputs? Subtle network changes, like modifying connectivity probabilities within the same architectural framework, affect firing rates. Yet, firing rates can also be influenced by the strength of inputs from external stimuli or other brain circuits. Thus, we need a robust approach that is sensitive to network structure while remaining as insensitive as possible to input strengths.

  3. Can these metrics distinguish networks with different structures designed for varied computational tasks? Networks executing diverse functions necessitate distinct architectures. Exploring how these functional shifts manifest in individual neuronal activity is crucial for a holistic understanding of the interplay between network structure and neuronal response.

To bridge these knowledge gaps, we introduce a computational framework that aims to infer functional statistical features of recurrent neuronal networks from the spiking dynamics of neurons (Fig. 1a–c). Given that neuronal spiking dynamics exhibit non-stationary, non-Markovian, and non-Gaussian characteristics, we employ a multifractal analysis approach to investigate higher-order statistics (detailed in the “Methods” section). Figure 1d–f illustrates this process, which analyzes interspike intervals across multiple timescales and characterizes their higher-order statistics using the q-order Hurst exponent and multifractal spectrum.

Fig. 1: Multifractal analysis of spiking dynamics as a tool to infer functional network topology.

Although it is well established that connectivity patterns (a) shape spiking dynamics of neurons (b) in brain circuits, inferring the connectivity from neuronal activity is fraught with challenges. We ask if specific features of spiking dynamics are diagnostic for the topological features that determine the circuit’s computations and function (c). d–f Multifractal analysis of spiking dynamics. Recurrent networks are characterized by propagation of signals in intricate loops with different lengths, which give rise to spiking dynamics with multiscale temporal characteristics and multifractal properties. We hypothesize that the higher-order statistics of spiking activity of individual neurons carry a signature of the network’s topological features, adequate for identifying key architectural differences across circuits. We show that the generalized Hurst exponents with different orders (q) efficiently capture the diagnostic higher-order spiking statistics. To calculate Hurst exponents for a neuron, we measure its successive interspike intervals (d) and use detrended fluctuation analysis to calculate the multifractal properties not captured by the first-order and second-order moments (e). Interspike intervals of real neurons exhibit nontrivial higher-order statistics over multiple timescales, reflected by a non-linear dependence of the q-order Hurst exponent on the moment order q (f). This non-linear dependence (blue) is markedly different from the trend expected for a Poisson process. The example for a single neuron in (d–f) is recorded from the macaque motor cortex (adapted from ref. 42).

We validate our approach by simulating a variety of biological spiking neural networks, in which the probability and strength of connections among excitatory and inhibitory neurons vary as a function of their distances, and recurrent spiking neural networks trained to perform a variety of cognitively relevant computations. Our multifractal analysis successfully distinguishes different network topologies, is robust under partial observation scenarios, is robust to changes in circuit inputs, and effectively differentiates networks performing different functions without prior assumptions about connectivity rules.

Results

A heterogeneous spiking network with a connectivity profile and spiking statistics similar to the sensory cortex

Excitatory and inhibitory neurons in the cerebral cortex form intricately interconnected networks. The probability of a synaptic connection in these networks decreases with the distance between pairs of neurons9. By exploiting this principle, we simulated a simplified, two-dimensional cortical sheet composed of excitatory cells (pyramidal cells) and inhibitory cells (fast-spiking interneurons)9, as illustrated in Fig. 2a. Figure 2b shows the Gaussian profile of the connectivity probability of two neurons as a function of their distance. Moreover, we set the weights of the connections to be inversely proportional to distance, as shown in Fig. 2c (see the “Methods” section for the parameters of edge probability and weight assignment).

Fig. 2: Simulated spiking neural network with a biologically inspired architecture.

a Network schematic. In the mammalian sensory cortex, excitatory and inhibitory neurons are laterally connected with higher connection probabilities for nearby neurons. We simulated a 2D sheet populated by interspersed excitatory and inhibitory spiking neurons in the same ratio found in the cortex (4:1). A specific sensory stimulus (e.g., an auditory tone) provides excitatory thalamic inputs to a fraction of cells (neurons selective to the tone’s frequency; shaded region at the center of the sheet). b Probability of synaptic connection between different types of cells as a function of the spatial distance between neurons. The x-axis represents the distance between nodes in normalized units (the distance between two horizontally or vertically adjacent cells is 1, and between two diagonally adjacent cells is \(\sqrt{2}\)). c Weight of synaptic connections between different cell types as a function of spatial distance. d Temporal profile of thalamic excitation following the stimulus onset (time 0) received by cells in different locations of the 2D sheet. e Raster plots of representative excitatory and inhibitory neurons shortly after the stimulus onset and at a later time when thalamic inputs have subsided to baseline. Neurons continue to generate spikes after the stimulus due to network reverberations. f Heatmap of spike counts of excitatory cells on the 2D sheet. The central square with more pronounced spiking corresponds to the sub-region directly excited by the thalamic inputs (shaded region in a). g Spiking activity of simulated neurons is stochastic. The average Fano factor of spike counts of excitatory cells is slightly above one before the stimulus onset, and briefly plummets after the stimulus onset, matching past experimental observations (e.g., ref. 15). Fano factors calculated with sliding windows of different lengths show similar trends.

To simulate external stimuli, we designed a transient input signal that followed the shape of a log-normal function over time (Eq. (1), “Methods” section), akin to the signals that the sensory cortex receives from the thalamus following the onset of a brief stimulus (e.g., an auditory tone). In Fig. 2a, the shaded area represents the region where neurons receive direct thalamic inputs. We employed the Izhikevich spiking neuron model, a computationally efficient model capable of replicating the spiking and bursting behavior observed in sensory cortical neurons10,11.

Our initial validation involved examining the network’s spiking statistics. We simulated a neuronal network consisting of 900 excitatory cells and 225 inhibitory cells. Following the conventions of ref. 9, our simulated network corresponds to a volume of ~220 μm × ~220 μm × 200 μm of auditory cortex compressed into a 2D sheet. The cells are arranged on a 2D grid with equal spacing in a 30 × 30 layout. Cells inside the region [6, 25] × [6, 25] in the grid received the thalamic signal in addition to Gaussian noise, while cells outside this region only received the Gaussian noise. Figure 2d illustrates the magnitude and time course of the thalamic input for different types of neurons. The corresponding raster plot of spiking events for a subset of neurons is shown in Fig. 2e. During the initial period (t = 0 to 500 ms) following stimulus onset, neurons inside the signal input region (colored dark blue for excitatory neurons and red for inhibitory neurons) were activated by the stimulus, resulting in an increasing firing rate. After the signal input decayed to nearly zero at t = 7000–8000 ms, the neurons were driven solely by noise and reverberations within the network, resulting in sustained low firing rates. Neurons outside the signal input region (colored light blue for excitatory neurons and yellow for inhibitory neurons) also exhibited spiking activity during both periods, indicating activation via synaptic connections from spiking neighbors. Figure 2f shows a heatmap of spike counts for excitatory neurons on the 2D sheet. Elevated activity is observed in the central region, corresponding to neurons directly driven by the thalamic input. In addition, surrounding neurons—although not directly stimulated—exhibit spiking due to recurrent connectivity within the network, reflecting reverberatory activity.

In addition to the sustained activity arising from reverberations, we explored the firing variability of the neurons. Figure 2g shows the Fano factor computed for time windows of size 50, 100, 150, and 200 ms around stimulus onset. A key signature of the spiking activity of cortical neurons is their high Fano factors. A Fano factor of 1 is characteristic of a Poisson process, where the interspike intervals follow an exponential distribution and spike times are unpredictable. Such a process automatically arises in balanced networks where a large population of excitatory and inhibitory neurons is sparsely connected with moderately strong synapses12,13. Cortical neurons often generate Fano factors slightly above 1, which can typically arise when the activity of neurons embedded in a balanced network is modulated over time14, e.g., through slow reverberations. Further, the Fano factor of cortical neurons typically shows a sharp reduction when neural responses are aligned to external stimuli, decisions, or actions15,16, as they constrain the set of possible neural activity patterns. Our simulated networks replicated these key signatures. In the absence of a stimulus, neurons in the network generated spiking activity with Fano factors slightly above 1. Following the stimulus onset, we observed a sharp decrease in the Fano factor. Overall, our biologically inspired spiking network replicated several properties of spike trains of cortical neurons.
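The Fano factor computation is simple to reproduce; below is a minimal sketch (function and variable names are ours, not the paper's code) that estimates a sliding-window Fano factor from spike-time arrays and recovers a value near 1 for a Poisson process:

```python
import numpy as np

def fano_factor(spike_trains, window_ms, t_start, t_end, step_ms=10):
    """Sliding-window Fano factor: variance/mean of spike counts
    across trains, at each window position."""
    centers = np.arange(t_start + window_ms / 2, t_end - window_ms / 2, step_ms)
    ff = np.full(len(centers), np.nan)
    for i, c in enumerate(centers):
        lo, hi = c - window_ms / 2, c + window_ms / 2
        counts = np.array([np.sum((st >= lo) & (st < hi)) for st in spike_trains])
        if counts.mean() > 0:
            ff[i] = counts.var(ddof=1) / counts.mean()
    return centers, ff

# Homogeneous Poisson spike trains: expected Fano factor ~1
rng = np.random.default_rng(0)
trains = [np.sort(rng.uniform(0, 2000, rng.poisson(40))) for _ in range(200)]
_, ff = fano_factor(trains, window_ms=100, t_start=0, t_end=2000)
print(np.nanmean(ff))  # close to 1
```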

Single-population systems

We applied multifractal detrended fluctuation analysis (MFDFA) to estimate the long-range memory and higher-order statistical behavior of neuronal spiking. Similar to characterizing whether a random walker's trajectory exhibits memoryless or long-range memory behavior, MFDFA first detrends the interspike time series \({\left\{{x}_{t}\right\}}_{t=1}^{T}\) over different segments of size s, and then calculates scale-dependent fluctuations. The overall multifractal q-order fluctuation function \({F}_{MFDFA}(q,s)\) is then estimated by aggregating the segment-wise fluctuations at each scale s using q-order statistical moments. Positive and negative q values amplify and focus on large and small local fluctuations, respectively. Multifractal time series \({\left\{{x}_{t}\right\}}_{t=1}^{T}\) exhibit a power-law relationship between the overall fluctuation function and the scale of the time window, \({F}_{MFDFA}(q,s)\propto {s}^{H(q)}\). We exploit the generalized q-order Hurst exponent H(q), together with the multifractal spectrum defined by the q-order singularity exponent α(q) and the q-order singularity dimension f(α(q)) derived from H(q), as the multifractal metrics to characterize the complexity of the neuronal spiking dynamics (more details of the multifractal analysis are provided in the “Methods” section).

To examine the multifractal properties of neuronal spiking dynamics and their relationship with varying connection densities in neuronal networks, we generated networks with different topologies by scaling the connection probabilities. We varied the amplitude of the Gaussian distribution for the linking probability of the excitatory-to-excitatory cell connections (\({\alpha }_{excitatory\to excitatory}\)) as 0.07, 0.11, and 0.15. These networks included the same number of excitatory and inhibitory neurons as in Fig. 2a. Recurring stimuli, modeled as right-skewed log-normal functions, were applied to neurons inside the region [6, 25] × [6, 25] with input strengths (A = 5K, 10K, 15K, 20K, 25K, 30K), while additive white noise was applied to all neurons regardless of their locations. We collected interspike intervals (ISIs) from neuronal spiking data and performed multifractal analysis on the ISI time series of all excitatory cells in the neuron sheet (see the code sketch following this paragraph). Figure 3a displays the log–log relationship of the q-order fluctuation \({F}_{MFDFA}(q,s)\) as a function of the scale s for an excitatory cell located in the center of the neuron sheet. The straight lines represent the power-law fits, and the corresponding q-order Hurst exponents are indicated in the legend. A power-law relationship between the fluctuations and the scales is evident for positive q values, while it is weaker for negative q values due to limited data. Therefore, in the remainder of the paper, we focus on the multifractal pattern for positive q values. Supplementary Fig. S1 further shows the relationship between the q-order fluctuation \({F}_{MFDFA}(q,s)\) and the scale s for neurons at different locations in the sheet.
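Concretely, the ISI series feeding the analysis is just the sequence of differences between consecutive spike times; a minimal sketch with hypothetical spike times follows (the MFDFA routine itself is sketched in the “Methods” section):

```python
import numpy as np

# Hypothetical spike times (ms) of one excitatory cell over a 500-s simulation
rng = np.random.default_rng(1)
spike_times_ms = np.sort(rng.uniform(0, 500_000, 5_000))

isi = np.diff(spike_times_ms)  # successive interspike intervals: input to MFDFA
print(isi.mean(), isi.std())
```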

Fig. 3: Multifractal analysis of neuronal spiking dynamics captures variations in network connectivity while being robust to changes in the thalamic input strength.

a A power-law relationship between the q-order fluctuations and the scale indicates a multifractal structure for spiking dynamics of excitatory neurons of the network in Fig. 2. Axes are plotted using a base-2 logarithmic scale. The average q-order Hurst exponent (b) and the multifractal spectrum (c) of the excitatory cells under different lateral connection probabilities and thalamic input strengths. Curves cluster based on connection probability, not thalamic input strength. d Spike counts of excitatory neurons scale linearly with the intensity of thalamic inputs. Spike counts are calculated for the whole simulation with a duration of 500 s. In contrast to the large changes in spike counts, the q-order Hurst exponent (e) and multifractal spectrum (f) are largely invariant to the thalamic input strength. Yet, both multifractal metrics reliably capture changes in network topology (lateral connection probabilities). The average q-order Hurst exponent (g) and multifractal spectrum (h) of Erdös-Rényi (ER), Barabási-Albert (BA), and Watts-Strogatz (WS) network models with varying thalamic input strengths.

Figure 3b, c shows the average q-order Hurst exponent and multifractal spectrum of all excitatory cells in the system. Each curve is calculated from a simulation with a different combination of network connectivity and input signal strength. For clarity, we display only selected network connectivities and input strengths. Line color indicates the peak edge probability of inhibitory-to-excitatory cell connections, \({\alpha }_{inhibitory\to excitatory}\), and line style represents the strength of input stimuli. Lines with the same color group together, indicating that the ISIs of neuronal networks with the same connectivity have similar multifractal patterns, regardless of the intensity of stimuli.

To investigate whether multifractal analysis can inform us about network density and assess its robustness to various stimulus strengths, we present several multifractal metrics in Fig. 3e, f. These metrics were measured for a range of network connectivities (\({\alpha }_{inhibitory\to excitatory}\)) and input strengths (A). In line with the power-law behavior of the q-order fluctuations as a function of the scale magnitude (shown in Fig. 3a), we observe that the q-order Hurst exponent and q-order singularity dimensions, shown in Fig. 3b, c, respectively, exhibit a non-uniform, non-linear behavior. This behavior confirms the existence of multifractality in the spiking activity of individual neurons. Additionally, Fig. 3d illustrates a linear relationship between the average total spike count of all excitatory cells and the input strength, highlighting the sensitivity of firing rates to input stimuli.

Figure 3e, f shows the right tip of the q-order Hurst exponent, H(q = 5), and the left tip of the multifractal spectrum, corresponding to the q-order singularity exponent α(q = 5), for varying strengths of the input signals under different network connectivities (\({\alpha }_{inhibitory\to excitatory}\) = 0.07, 0.11, 0.15). Both metrics separate cleanly by network connectivity while remaining nearly constant across input strengths, in contrast to the input-driven spike counts in Fig. 3d. In other words, the spike count analysis fails to distinguish changes in network connectivity from changes in input strengths, whereas the multifractal analysis can do so.

In the simulated networks, the connection probability decreases with the distance between neurons. This structure is equivalent to a modified Erdös-Rényi (ER) model incorporating spatial information. We also expanded our analyses to include modified Barabási-Albert (BA)17 and Watts-Strogatz (WS)18 models, which simulate scale-free and small-world network architectures, respectively, while preserving the biological property that neurons are more likely to connect to nearby cells. All three network classes (ER, BA, and WS) have identical network densities and consist of Izhikevich spiking neurons. The results, presented in Fig. 3g, h, show that networks with different thalamic input strengths cluster within each class, similar to the patterns observed in Fig. 3b, c, and that networks from different classes separate. These findings further support that the MFDFA measures are robust to variations in input strength and sensitive to network connectivity structure. Details are provided in the Supplementary Information.
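The construction of the spatially modified generators is detailed in the Supplementary Information; as a rough, density-matched starting point, the sketch below builds plain (non-spatial) ER, BA, and WS graphs with networkx. The network size and density are illustrative, and the spatial bias toward nearby cells would still need to be layered on top:

```python
import networkx as nx

n, density = 1125, 0.05                    # illustrative, not the paper's values
m_total = int(density * n * (n - 1) / 2)   # target edge count for all classes

er = nx.gnm_random_graph(n, m_total, seed=0)                      # Erdös-Rényi
ba = nx.barabasi_albert_graph(n, m_total // n, seed=0)            # scale-free
ws = nx.watts_strogatz_graph(n, 2 * (m_total // n), 0.1, seed=0)  # small-world

for name, g in [("ER", er), ("BA", ba), ("WS", ws)]:
    print(name, f"density = {nx.density(g):.4f}")  # roughly matched densities
```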

Multi-population systems

The brain comprises multiple circuits distributed across different regions, each with specific functions. To investigate and understand the multifractal phenomena in a multi-circuit system, we simulated a modular neuronal network consisting of two subnetworks with distinct connectivities. Each subnetwork comprised 900 excitatory cells and 225 inhibitory cells on a 30 × 30 neuronal sheet. The maximum amplitudes of the Gaussian distribution of excitatory-to-excitatory cell connections for subnetwork 1 and subnetwork 2 were set to 0.07 and 0.15, respectively. Inter-subnetwork connections were established with a much lower probability of 0.025 and had a low connection weight (w = 2), resulting in a neuronal network with a community structure19.

The two subnetworks received separate thalamic inputs with different strengths, as illustrated in Fig. 4a. We validated the community structure by providing a non-zero input signal to subnetwork 1 with additive Gaussian noise, while subnetwork 2 only received the noise from its thalamic inputs (Fig. 4b). Figure 4c shows the spiking activity of neurons in the two-population system. Dark blue and red points represent the excitatory and inhibitory neurons in subnetwork 1, while light blue and yellow dots represent the spikes for excitatory and inhibitory neurons in subnetwork 2. At stimulus onset (t = 0 ms), both subnetworks elicited sporadic action potentials. As input to subnetwork 1 increased, the spiking activity in this subnetwork increased with a latency of  ~100 ms. Subnetwork 2 was then ignited by the reverberation from subnetwork 1 via the inter-subnetwork connections, with a delay of  ~20 ms (right edge of box in the raster plot).

Fig. 4: Multifractal analysis primarily detects variations within neurons’ native circuits and is minimally influenced by adjacent network structures.

a Schematic of a network that comprises two interconnected subnetworks of spiking neurons. Subnetwork structures follow the same principles as in Fig. 2. Subnetworks receive distinct thalamic inputs. The connection probability between the subnetworks is 0.025. b, c Activation of one subnetwork by thalamic inputs reverberates to the other subnetwork through their connections. d, e Average multifractal spectrum of all excitatory cells in subnetwork 1 and subnetwork 2 for variations of thalamic input strength into the two subnetworks. The curves cluster based on input strength to the neurons' native subnetwork. f, g Quantified multifractal metrics of subnetwork 1 vary minimally with input strength to subnetwork 2 (f). Similarly, the multifractal spectrum of subnetwork 2 is maximally sensitive to input strength to subnetwork 2 and minimally influenced by input to subnetwork 1 (g).

We further varied the thalamic input strength for each subnetwork (A = 0K, 2.5K, 5K, 15K). As before, the input signal, along with additive Gaussian noise, was applied to all excitatory and inhibitory cells, regardless of their locations in the neuronal sheet. To investigate the multifractal behavior in this two-population system, Fig. 4d, e shows the average q-order singularity dimension f(α(q)) as a function of the q-order singularity exponent α(q), calculated from higher-order fluctuations of the ISI (spiking) trajectories of excitatory cells in the two subnetworks. Line styles depict different input strengths for subnetwork 1, and line colors represent different input strengths for subnetwork 2. For each subnetwork, we observe a grouping of f(α(q)) curves based on input strength to that subnetwork only. This suggests that the multifractal pattern of ISIs from a given subnetwork in a community structure is dominated more by its own activation than by inputs from the other communities.

To quantitatively measure the discrepancy among the various multifractal spectrum curves, we use the q-order singularity exponent at q = 0, i.e., α(q = 0), as a multifractal metric (Fig. 4f, g). This corresponds to the x-axis coordinate of the right tip of the multifractal spectrum. Figure 4f shows the relationship between α(q = 0) of the multifractal spectrum for subnetwork 1 and the intensity of input applied to subnetwork 2. Each line represents a certain strength of the input stimuli on subnetwork 1. Notably, there is no overlap in the range of α(q = 0) across different input strengths, indicating that the multifractal analysis on the selected subnetwork is primarily affected by the magnitude of the signal input on the same subnetwork, and is only marginally influenced by the neuronal activity in the interconnected subnetwork. In addition, the slope of the curves decreases as the strength of input on subnetwork 1 increases, suggesting that the impact of subnetwork 2 on the multifractal pattern of subnetwork 1 diminishes as subnetwork 1 is stimulated with stronger input signals. A similar observation can be made when calculating α(q = 0) for subnetwork 2, as shown in Fig. 4g. This observation further supports the proposal that multifractal analysis can robustly characterize the spiking behavior of a network controlled by an input signal in partially observed cases where only one subnetwork can be accessed in a multi-population system.

Goal-directed networks

The diverse topologies of neuronal networks enable the brain to implement complex computations. However, capturing these diverse network structures with a single generating rule remains a challenge. Furthermore, the direct relationship between a network's structure and its emergent functionality remains unclear. Despite these challenges, we can train artificial neuronal networks to perform diverse computations essential for goal-directed, cognitive behavior20. We can then explore the relationship between the topology and activity of these networks through the formalism developed in the previous sections.

We trained recurrent, spiking neural networks (SNNs) with the First-Order, Reduced and Controlled Error (FORCE) algorithm21,22. Each network consisted of 1000 leaky integrate-and-fire (LIF) neurons (Fig. 5a). The dynamics of neuronal activity were shaped by recurrent synaptic weights, which enable communication among neurons, as well as by input signals and feedback. The output of the SNN was computed as the inner product of the output weights and the vector of neuronal firing rates. The recurrent synaptic weights and output weights of the SNN were trainable, while we kept the input weights and feedback weights fixed for simplicity (these weights were generated as Gaussian random variables). Trainable weights were updated to minimize the mean squared error between the network output and a target signal (Tension package in Python23). Although these networks are trained to perform functional tasks, our focus remains on analyzing their structural connectivity rather than functional connectivity. That is, we study the intrinsic neuronal wiring rather than statistical dependencies inferred from neural activity patterns.
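FORCE training itself was handled by the Tension package; to convey the principle, here is a minimal rate-based (non-spiking) FORCE sketch in the style of Sussillo and Abbott, where a recursive-least-squares rule updates the readout. The network size, time constants, toy target, and update cadence are illustrative assumptions; the paper trains spiking LIF networks and also updates recurrent weights:

```python
import numpy as np

rng = np.random.default_rng(0)
N, dt, tau = 500, 1e-3, 1e-2          # illustrative size and time constants
g, alpha, T = 1.5, 1.0, 5000          # chaos gain, RLS regularizer, steps

J = g * rng.standard_normal((N, N)) / np.sqrt(N)  # recurrent weights
w_out = np.zeros(N)                               # trainable readout weights
w_fb = rng.uniform(-1.0, 1.0, N)                  # fixed feedback weights
P = np.eye(N) / alpha                             # RLS inverse correlation

target = np.sin(2 * np.pi * np.arange(T) * dt)    # toy target signal
x = 0.5 * rng.standard_normal(N)                  # network state
z = 0.0                                           # network output

for t in range(T):
    r = np.tanh(x)                                # unit activations ("rates")
    z = w_out @ r                                 # readout
    if t % 2 == 0:                                # periodic RLS update
        Pr = P @ r
        k = Pr / (1.0 + r @ Pr)
        P -= np.outer(k, Pr)
        w_out += (target[t] - z) * k              # reduce output error
        z = w_out @ r                             # output after the update
    x += dt / tau * (-x + J @ r + w_fb * z)       # leaky recurrent dynamics

print(abs(w_out @ np.tanh(x) - target[-1]))       # final-step tracking error
```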

Fig. 5: Goal-directed networks trained to perform distinct computations have distinct single-neuron spiking dynamics arising from differences in connectivity patterns.

a Recurrent SNN architecture. A one-dimensional input is projected to the recurrently connected units within the SNN. The output is fed back to the network. We trained SNNs to perform integration, differentiation, or delayed replication of inputs. b Representative trials of three trained SNNs. Networks successfully performed the computations they were trained for. c Kolmogorov–Smirnov distance of node-based fractal dimensions of recurrent weights between pairs of trained SNNs. Distances of positive (top) and negative (bottom) recurrent weights are shown separately. Networks are ordered by their trained computations. d Same as (c) but for the outer product of output and feedback weights. e Distributions of the generalized Hurst exponent at q = 2 of excitatory units in the SNNs performing different computations. f Average multifractal spectrum of the interspike intervals of SNNs performing different computations. g Network connectivity patterns are closely correlated with single-unit spiking dynamics. The x-axis shows the averaged Kolmogorov–Smirnov distance of the node-based fractal dimensions of trained SNNs from the 10 SNNs performing delayed replication. The y-axis shows the difference in the Hurst exponent of SNNs from the representative SNN performing delayed replication. Note the positive correlation of network connectivity and single neurons' spiking dynamics, as well as the clustering of SNNs in the scatter plot based on their trained computations.

We trained three groups of SNNs, each dedicated to a specific computational task: integration, differentiation, or delayed replication (see the “Methods” section). To account for the variability that arises from different starting values for synaptic weights, we trained 10 SNNs with different starting weights for each of the three target computations. The trained networks successfully performed integration, differentiation, or delayed replication of their inputs, as illustrated by representative trials from three example networks in Fig. 5b (the output of each SNN in blue matches well with the target signal in red). The mean absolute error averaged over the 10 trained SNNs for integration, differentiation, and delayed replication was low (0.1356, 0.0025, and 0.2537, respectively).

Using the ISIs of individual neurons in each SNN, we computed their multifractal properties. Figure 5e shows the distribution of the q-order Hurst exponent of neurons for networks performing different computations (calculated at q = 2). Figure 5f illustrates the average multifractal spectrum for positive q values. The multifractal patterns differentiated among the networks trained for different tasks, indicating that different computations are associated with distinct multifractal properties in spiking statistics.

How well do these multifractal properties of single-neuron spiking activity reflect differences in network connectivity? We addressed this question by quantifying the topological similarity of different SNNs and then calculating how well it mapped to similarities of q-order Hurst exponents. The exact connection matrices of networks trained to perform the same computation (e.g., integration) are quite distinct, as the input and output weights of individual neurons are rarely replicated across networks. However, networks performing the same computation must share higher-level connectivity patterns that enable the target computation. To compare these connectivity patterns across SNNs, we calculated the node-based fractal dimension, which captures the scale-dependent topology of complex networks24 (see the “Methods” section). We calculated the node-based fractal dimension separately for the positive and negative recurrent synaptic weights and for the outer product of the output weights and feedback weights. Then, we quantified the distance between the connectivity patterns of pairs of SNNs as the Kolmogorov–Smirnov test statistic of the distributions of node-based fractal dimensions. Figure 5c, d shows these pairwise distances for our 30 trained SNNs, grouped by their computations: integration, differentiation, and delayed replication. The node-based fractal dimension successfully captured similar connectivity patterns of networks performing similar computations and differences between networks performing different computations. Following this success, we quantified whether differences in the Hurst exponent of single-neuron ISIs across SNNs correlated with the Kolmogorov–Smirnov distance of node-based fractal dimensions. Figure 5g shows a correlation between the dissimilarity of spiking patterns and the dissimilarity of network connectivity (Spearman correlation coefficient r = 0.87). This finding indicates that the multifractal features of spiking activity are not only influenced by the function performed by the network but also reflect differences in its underlying structural organization. Together, these results suggest that multifractal analysis captures information about both network function and connectivity.
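The pairwise distance itself is a standard two-sample Kolmogorov–Smirnov statistic; a minimal sketch using scipy, with synthetic fractal-dimension distributions standing in for two trained SNNs (the values are illustrative, not from the paper):

```python
import numpy as np
from scipy.stats import ks_2samp

# Synthetic stand-ins: one node-based fractal dimension per node, for two
# hypothetical trained SNNs (e.g., computed from their positive recurrent
# weight matrices as sketched in the "Methods" section)
rng = np.random.default_rng(0)
d_net_a = rng.normal(2.1, 0.20, 1000)
d_net_b = rng.normal(2.4, 0.25, 1000)

stat, _ = ks_2samp(d_net_a, d_net_b)  # pairwise topological distance
print(f"KS distance between connectivity patterns: {stat:.3f}")
```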

We further conducted analyses using real experimental data from the Visual Coding Neuropixels dataset from the Allen Institute25,26, which provides large-scale electrophysiological recordings from awake mice during visual stimulation. Our analysis focused on four brain regions: the anterior pretectal nucleus (APN), involved in visual processing and sensorimotor integration; the hippocampal CA1 region, critical for memory formation and spatial navigation; the dorsal lateral geniculate nucleus (LGd), the primary thalamic relay for visual information; and the rostrolateral visual cortex (VISrl), involved in higher-order visual processing. Recordings were collected during presentation of several stimulus classes, including “natural scenes” (images of real-world environments), “static gratings” (stationary sinusoidal patterns with varying orientations and spatial frequencies), and “flashes” (brief full-field luminance changes). This diversity allowed us to evaluate multifractal features across distinct circuits and input conditions. Supplementary Fig. S2 shows the results from two experimental sessions. First, distinct region-specific multifractal signatures were evident. The CA1 region consistently exhibited higher Hurst exponents than LGd, VISrl, or APN, suggesting stronger long-range temporal correlations in hippocampal activity. Second, within each brain region, multifractal properties remained remarkably robust across stimulus types. For example, in CA1, the multifractal spectra under natural scenes and static gratings were highly overlapping, despite the very different visual inputs. Third, across all analyzed regions, the q-order Hurst exponents varied nonlinearly with q, confirming the multifractal nature of ISI statistics in biological networks. These results support the utility of MFDFA for capturing intrinsic network-level properties of neural circuits. The differences observed between hippocampal and sensory regions reflect their distinct underlying architectures and functional specializations. Our analysis of real neurophysiological data thus provides strong empirical validation for our theoretical framework: MFDFA features are robust to variations in external inputs and sensitive to differences in network connectivity. Details are provided in the Supplementary Information.

Discussion

The spiking activity of neurons in brain circuits is intricately related to and controlled by the topology of neural circuits, the network of synaptic connections through which neurons communicate. Several past studies have established that the wiring diagram of networks shapes the spiking of neurons1,2,3,4,5,6. However, it remains underexplored which features of network connectivity patterns could be inferred from the spiking activity. We know that without sparsity assumptions, inference of the exact network topology from the covariance of spiking activity across neurons is an ill-defined problem27,28. We also know that for small networks (e.g., triplet interactions) connected with strong synapses, connection strengths can be approximated under simplifying assumptions29. Further, approximate inference of the connectivity statistics within a network may be possible with a statistical physics-inspired approach that captures the causal information flow in the network30,31. However, a metric that can establish differences in the underlying network topology based on the spiking statistics of individual neurons has been lacking up to now. Such a metric, if possible, would substantially advance our ability to distinguish network architectures, reducing the need for complex and costly experiments that directly measure synaptic connections in large neural circuits.

In this paper, we establish the feasibility of a metric that captures network topology patterns based on the spiking activity of individual neurons (Figs. 3 and 4). The key intuition is that the network topology creates a multitude of paths for signal propagation, causing reverberations that alter higher-order statistical features of spiking activity. Because of these higher-order statistical features, neuronal spiking in the brain is non-linear, non-stationary, and non-Markovian with long-range temporal correlations characterized by multiscale dynamics and multifractal properties32,33,34. The key mechanism underlying our approach is the observation that neuronal circuits exhibit intrinsic connectivity patterns that are rich in recurrence. These recurrent connections create reverberatory dynamics: spikes generated within the circuit propagate and influence future spiking activity over extended timescales. As a result, interspike interval (ISI) sequences carry long-range temporal dependencies that reflect the underlying connectivity structure. Multifractal detrended fluctuation analysis (MFDFA) is well-suited to capture such dependencies. By analyzing interspike interval fluctuations across multiple timescales and orders of statistical moments, MFDFA reveals higher-order temporal structure beyond what is accessible through simpler second-order or short-range statistics. These higher-order structures are critical for distinguishing network architectures (Fig. 3). The q-order Hurst exponent provides a more sensitive differentiation of network architectures at larger q, highlighting distinctions that remain subtle or undetectable for the second-order Hurst exponent at q = 2, or the lower range 0 < q < 2.

Multifractal detrended fluctuation analysis (MFDFA) has previously been applied to neuronal spiking data, demonstrating its ability to capture spike response tuning35, characterize neuronal responses to optogenetic activation36, and reveal multifractal firing patterns related to memory processing in hippocampal spike trains32. In contrast to these earlier studies, we focus on how MFDFA of spiking data can reveal the underlying structural connectivity of neural circuits. Specifically, we show that two multifractal measures–the generalized Hurst exponent and the multifractal spectrum–efficiently capture the higher-order statistical patterns of spiking dynamics, enabling us to distinguish hidden structural features of neural circuits based on single-neuron activity.

Moreover, we show that for a well-known connectivity motif in the mammalian cortex – laterally connected excitatory and inhibitory neurons receiving feedforward inputs – our multifractal metrics are sensitive to the statistical properties of recurrent connections and rather insensitive to the input strengths (Figs. 3 and 4). Finally, we show that when the topology of spiking neural networks is altered to acquire different functions, our multifractal metrics capture the functional changes of topology, while being largely insensitive to non-functional, idiosyncratic differences caused by pre-training topological differences or variability of training trajectory (Fig. 5). These achievements put our multifractal metrics in stark contrast with the more traditional single-unit or population-level firing rate analyses, which are strongly affected by external inputs to the circuit and are equally sensitive to connectivity patterns that support functional and non-functional features of ensemble activity. A fruitful next step would be to extend our multifractal metrics to population-level activity to further boost their sensitivity to functionally relevant connectivity patterns.

The full characterization of synaptic connections and overall circuit topology remains challenging due to the immense scale and complexity of brain networks, as well as limitations in simultaneously measuring spiking activity and network connectivity. Our analyses suggest that electrophysiological data–even from single neurons or ensembles–contain rich information about underlying connectivity and may be leveraged to infer coarse-grained features of network topology, offering a complementary approach to existing anatomical methods.

A remarkable insight from our study is that it is unnecessary to record from all or even a large fraction of neurons to establish differential functional topologies between circuits. Even recordings from relatively small subsets of neurons can reveal meaningful distinctions. For example, in our analysis of the Allen Institute’s dataset, the neurons from each brain region represented less than one percent of the total neuronal population. Crucially, we also show that the topological signatures embedded in the spiking dynamics of individual neurons preferentially represent the computations carried out by the circuit. The connectivity of individual neurons in two circuits performing the same function (e.g., integration) could be substantially different, but at a network level, the circuits share connectivity patterns that implement their target computation. Interestingly, the average Hurst exponents of neurons reflect the topological features that shape the computation, and are rather insensitive to the variations in single-neuron connectivity patterns within and across networks (Fig. 5g).

Our multifractal analysis of neuronal spiking dynamics, combined with our multifractal topological analysis of neuronal connection strengths, forges a link between the spatial patterns in complex neural networks and their temporal activity patterns. Given that complex networks are prevalent in nature, our insights hold potential for broader applications, extending to social and biological networks of comparable complexity. An exciting future direction involves leveraging our methodology to glean deeper insights into both neuronal and non-neuronal network structures. This could open additional possibilities for controlling and manipulating network dynamics and functionality.

While our study examined networks with distinct connectivity rules and computational tasks, it did not address variability in network size or dynamically evolving architectures, which introduce additional complexity to network architectures and their representations. In the brain, for instance, connectivity patterns of neurons can change across developmental stages or during aging, even when performing the same task. Understanding how multifractal or other measures of neuronal spiking respond under such biologically realistic, time-varying conditions remains an important direction for future investigations. Future research directions should also include a more comprehensive study of multifractal patterns in real-world datasets, where neural activity is recorded in complex scenarios involving noise, background activity, and partial observations. Examining the robustness and generalizability of the MFDFA method under these conditions will be critical for advancing its application to empirical neuroscience data.

Methods

Neuronal networks

To investigate the interplay between the topological structure of a neuronal network and the emerging multifractal structure of the spiking activity of its individual neurons, we studied artificial neuronal networks consisting of spiking excitatory and inhibitory units connected by weighted synapses. We explored two classes of spiking neural networks. The first class mimicked a topological structure known to exist in biological circuits of the mammalian cortex. The second class included recurrent networks trained to perform key computations utilized during cognitive behaviors such as decision-making.

The spiking networks inspired by the known cortical topology included Ne = 900 excitatory units and Ni = 225 inhibitory units distributed uniformly on a 2D grid of size [30 × 30] to mimic the connectivity structure in a simplified cortical sheet, adapted from ref. 9. The connection probability of biological neurons drops as a function of the distance between neurons. We simulated this principle by connecting units in our network according to a Gaussian connection-probability profile \({P}_{ij}=\alpha \exp \left(-{d}_{ij}^{2}/(2{\sigma }^{2})\right)\), where dij is the Euclidean distance between units i and j. Connection probabilities between different unit types (excitatory to excitatory, excitatory to inhibitory, inhibitory to excitatory, or inhibitory to inhibitory) were differentially parameterized to match experimental results for the connection probability of excitatory pyramidal cells and fast-spiking interneurons9,37. In all these networks, if a synaptic connection existed between two units, the connection weight was defined as \(w(i,j)=\frac{32}{1+{d}_{ij}}\). This weight assignment ensured that neurons that were physically closer to each other also had stronger connections.
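A minimal sketch of this construction follows (for a single population on the 30 × 30 grid; α and σ are placeholders for the values in Table 1):

```python
import numpy as np

rng = np.random.default_rng(0)
side = 30
coords = np.array([(i, j) for i in range(side) for j in range(side)], float)
d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)

alpha, sigma = 0.15, 3.0                      # placeholders for Table 1 values
p = alpha * np.exp(-d ** 2 / (2 * sigma ** 2))
np.fill_diagonal(p, 0.0)                      # no self-connections

adj = rng.random(p.shape) < p                 # Bernoulli draw of synapses
w = np.where(adj, 32.0 / (1.0 + d), 0.0)      # inverse-distance weights
print(int(adj.sum()), "synapses; mean weight", round(float(w[adj].mean()), 2))
```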

To test the sensitivity of spiking dynamics of individual neurons to changes in the network topology, we modestly varied connection probabilities between excitatory–excitatory and inhibitory–excitatory unit pairs, as elaborated in Table 1. Specifically, we simulated three connectivity regimes by varying \({\alpha }_{excitatory\to excitatory}\). To ensure that networks remained in a balanced excitation-inhibition regime, we chose \({\alpha }_{inhibitory\to excitatory}\) for each network to be \(\frac{{N}_{e}}{{N}_{i}}\times {\alpha }_{excitatory\to excitatory}\). This ensured that each excitatory unit received balanced excitation and inhibition from its upstream units. Connection weights were defined as above.

Table 1 Gaussian parameters of neuronal connections

Individual neurons in the network were governed by the Izhikevich model, which offers an efficient way to simulate the membrane voltage of spiking neurons with Hodgkin–Huxley-type dynamics10,11. The Izhikevich model is well-suited for this purpose, as it captures a wide range of biologically realistic spiking behaviors, including bursting, spike-frequency adaptation, and rebound spiking—properties that are important for studying intrinsic network dynamics in sensory circuits. We simulated the activity of each network for 500 s with a temporal precision of 1 ms (500,000 timestamps).
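For reference, the Izhikevich model reduces to two coupled update equations with a reset rule; a minimal sketch with the standard regular-spiking parameters (the paper's exact parameter choices may differ):

```python
import numpy as np

def simulate_izhikevich(I, a=0.02, b=0.2, c=-65.0, d=8.0, dt=1.0):
    """Izhikevich model (regular-spiking parameters); I is the input
    current at each 1-ms timestamp. Returns spike times in ms."""
    v, u = -65.0, b * -65.0
    spikes = []
    for t, i_t in enumerate(I):
        v += dt * (0.04 * v ** 2 + 5 * v + 140 - u + i_t)  # membrane voltage
        u += dt * a * (b * v - u)                          # recovery variable
        if v >= 30.0:                                      # spike and reset
            spikes.append(t * dt)
            v, u = c, u + d
    return np.array(spikes)

# Constant drive plus noise produces irregular tonic spiking
rng = np.random.default_rng(0)
I = 5.0 + 2.0 * rng.standard_normal(10_000)
print(len(simulate_izhikevich(I)), "spikes in 10 s")
```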

The simulation period included 10 stimuli separated by random intervals drawn from an exponential distribution. Each stimulus provided an input signal to the network that rose rapidly and decayed slowly according to a log-normal function:

$${I}_{0}(t;{t}_{k}^{onset})=\left\{\begin{array}{ll}A\frac{1}{(t-{t}_{k}^{onset})\sigma \sqrt{2\pi }}\exp (-\frac{{(\ln (t-{t}_{k}^{onset})-\mu )}^{2}}{2{\sigma }^{2}})\quad &t\ge {t}_{k}^{onset}\\ 0\quad \hfill&t < {t}_{k}^{onset}\end{array}\right.,$$
(1)

where μ = 7.5 s and σ = 1 s. The total input signal at time t is the sum over the stimulus onsets \({t}_{k}^{onset}\), \({I}_{signal}(t)={\sum }_{k}{I}_{0}(t;{t}_{k}^{onset})\), where k = 1…10. To mimic stimuli with different strengths, we chose A ranging from 0K to 30K in our experiments, yielding a maximum peak of Isignal(t) of around 10. The exponential distribution that governed the interval between consecutive stimulus onsets was \(p(\frac{1}{10}\Delta {t}^{onset}=\tau ;\lambda )=\lambda \exp (-\lambda \tau )\) with λ = 5 s. Onsets with \({t}^{onset} > 500\) s were discarded from the simulation. To mimic the stochastic nature of inputs in a balanced, high-input regime, we added Gaussian noise, Inoise(t), to the stimulus input. The noise was generated as Inoise(t) = 0.6e, where \(p(e)=\frac{1}{\sqrt{2\pi }}\exp (-\frac{1}{2}{e}^{2})\). The input received by the unit at coordinates (x, y) of the network at discretized times t = 1…500,000 was \({I}_{type}(x,y,t)={s}_{type}\cdot \left(c(x,y)\cdot {I}_{signal}(t)+{I}_{noise}(t)\right)\), where c(x, y) is an indicator function that determines whether the neuron located at coordinates (x, y) receives the stimulus input, and type indicates whether the unit is excitatory or inhibitory. The scaling factors for the excitatory and inhibitory units were \({s}_{excitatory}=5\) and \({s}_{inhibitory}=2\). Figure 2 shows the spike patterns derived from the network.
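A sketch of Eq. (1) follows. We interpret time in milliseconds (an assumption consistent with the several-second decay described in the Results); with A = 30K this yields a peak input of roughly 10, matching the text:

```python
import numpy as np

def lognormal_input(t_ms, onset_ms, A, mu=7.5, sigma=1.0):
    """Sketch of Eq. (1): log-normal stimulus time course. With time in ms
    (an assumption), the input peaks near exp(mu - sigma^2) ~ 665 ms after
    onset and decays over several seconds."""
    dt = np.asarray(t_ms, float) - onset_ms
    out = np.zeros_like(dt)
    pos = dt > 0
    out[pos] = (A / (dt[pos] * sigma * np.sqrt(2 * np.pi))
                * np.exp(-(np.log(dt[pos]) - mu) ** 2 / (2 * sigma ** 2)))
    return out

t = np.arange(0, 10_000)                 # 10 s at 1-ms resolution
I_sig = lognormal_input(t, onset_ms=0, A=30_000)
print(t[I_sig.argmax()], I_sig.max())    # peak latency (ms) and amplitude ~10
```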

Multifractal detrended fluctuation analysis

Multifractal detrended fluctuation analysis (MFDFA), which generalizes the detrended fluctuation analysis (DFA)38, enables us to estimate the multifractal metrics (e.g., generalized Hurst exponent, generalized fractal dimension, Lipschitz–Hölder exponent, and the multifractal spectrum) for non-stationary time series by investigating multiscale fluctuations39,40. In a nutshell, the calculation of the MFDFA consists of the following steps. Given a time series xt, t = 1…T, we first construct its cumulative profile \({X}_{t}^{{\prime} }\) by integrating the mean-subtracted series, \({X}_{t}^{{\prime} }=\mathop{\sum }_{i=1}^{t}\left({x}_{i}- < x > \right)\), where \( < x > =\frac{1}{T}\mathop{\sum }_{t=1}^{T}{x}_{t}\) is the average value of the time series. Next, we divide \({X}_{t}^{{\prime} }\) into Ns consecutive segments of size s. Within each segment j, a linear local trend Yj,t(s) = β1,j(s) + β2,j(s)t is computed via least squares fitting such that \({\beta }_{1,j}(s),{\beta }_{2,j}(s)=\arg \mathop{\min }_{{\beta }_{1,j}(s),{\beta }_{2,j}(s)}\mathop{\sum }_{t=js+1}^{js+s}{\left({X}_{t}^{{\prime} }-{Y}_{j,t}(s)\right)}^{2}\).

The local root-mean square variation for segment j with scale s is calculated as \(F(j,s)=\sqrt{\frac{1}{s}\mathop{\sum }_{t=js+1}^{js+s}{\left({X}_{t}^{{\prime} }-{Y}_{j,t}(s)\right)}^{2}}\). In DFA, the total fluctuation function is defined as \({F}_{DFA}(s)=\sqrt{\frac{1}{{N}_{s}}\mathop{\sum }_{j=1}^{{N}_{s}}F{(j,s)}^{2}}\). Fractal time series exhibit a power-law relationship between \({F}_{DFA}(s)\) and the scale s, in the form \({F}_{DFA}(s)\propto {s}^{H}\), with Hurst exponent H.

In contrast, the MFDFA further generalizes the DFA by computing the q-order fluctuation \({F}_{MFDFA}(q,s)={\left(\frac{1}{{N}_{s}}\mathop{\sum }_{j=1}^{{N}_{s}}F{(j,s)}^{q}\right)}^{\frac{1}{q}}\). Positive q values amplify the effect of local variations with large amplitudes, whereas negative q values amplify local variations with small amplitudes. The scaling behavior is commonly depicted in a log–log plot of \({F}_{MFDFA}(q,s)\) as a function of scale s,

$${F}_{MFDFA}(q,s)\propto {s}^{H(q)},$$
(2)

where H(q) is the q-order generalized Hurst exponent. Note that when q = 2, H(q) is equivalent to the Hurst exponent H obtained by the DFA. H < 0.5 suggests that xt arises from an anti-correlated process; H = 0.5 suggests that xt is equivalent to white noise or a Markovian (memoryless) process; lastly, H > 0.5 suggests that xt arises from a positively correlated or persistent process (i.e., possessing long-range memory or long-range-dependence properties).

To parameterize the structure of a multifractal time series, we further compute the multifractal spectrum, which serves as the distribution of scaling components, measuring the local regularity of the temporal signal. The corresponding q-order singularity exponent α(q) and q-order singularity dimension f(α(q)) are computed as

$$\tau (q)=qH(q)-1$$
(3)
$$\alpha (q)=\frac{d\tau (q)}{dq}$$
(4)
$$f(\alpha (q))=q\alpha (q)-\tau (q)$$
(5)

Spiking neural networks

Inspired by the firing and spiking activity of biological neural systems, spiking neural networks (SNNs) utilize discrete events (spikes) to transmit and exchange information among neurons. The connectivity pattern of a network governs the behavior and functionality of the SNN. To generate networks with different topologies that give rise to diverse functions, we trained SNNs to perform three kinds of computational tasks, namely integration, differentiation, and delayed replication.

Each training sample (both input and target) consisted of a signal sequence lasting 5 s, with 1000 time steps, where each time step spanned 5 ms. The beginning and ending 100 time steps of the input signals were set to 0, and \({\left\{{x}_{t}\right\}}_{t=100}^{900}\) were generated using fractional Brownian motion (fBm)41 with a Hurst exponent of 0.5, equivalent to a Wiener process. We further applied a Savitzky-Golay filter to smooth the input signal with a window size of 75 time steps. The integration target \({\left\{{y}_{t}^{int}\right\}}_{t=1}^{1000}\) was implemented as the running mean \({y}_{t}^{int}=\frac{1}{t}\mathop{\sum }_{{t}^{{\prime} }=1}^{t}{x}_{{t}^{{\prime} }}\). The differentiation target \({\left\{{y}_{t}^{diff}\right\}}_{t=1}^{1000}\) was computed as \({y}_{t}^{diff}=({x}_{t}-{x}_{t-1})*{g}_{t}\), where * is the convolution operation and gt is a one-sided exponential window of size 125 time steps with a center of 0 time steps and a decay parameter of 75 time steps. The convolution ensures that the frequency content of the target signal can be captured by the spiking neural networks. The delayed replication task aimed to reproduce the signal with a 100-step delay (0.5 s), where \({y}_{t}^{delay}={x}_{t-100}\).
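A sketch of the training-signal generation follows (the Savitzky–Golay polynomial order and the fBm increment scaling are assumptions not stated in the text):

```python
import numpy as np
from scipy.signal import savgol_filter

rng = np.random.default_rng(0)
T = 1000

# Input: Wiener process (fBm with H = 0.5) on steps 100-900, zero elsewhere,
# smoothed with a Savitzky-Golay filter of window 75 (polyorder 3 assumed)
x = np.zeros(T)
x[100:900] = np.cumsum(rng.standard_normal(800)) * 0.05
x = savgol_filter(x, window_length=75, polyorder=3)

# Integration target: running mean of the input
y_int = np.cumsum(x) / np.arange(1, T + 1)

# Differentiation target: first difference convolved with a one-sided
# exponential window (size 125, decay 75 time steps)
g = np.exp(-np.arange(125) / 75.0)
y_diff = np.convolve(np.diff(x, prepend=0.0), g)[:T]

# Delayed replication target: input shifted by 100 steps (0.5 s)
y_delay = np.roll(x, 100)
y_delay[:100] = 0.0
```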

For each task, we employed 1000 training samples described above to train the SNN. The spiking neural network (SNN) consists of 1000 leaky integrate-and-fire (LIF) neurons arranged as a single recurrent reservoir. The network does not include distinct feedforward layers; rather, all neurons are part of a fully connected recurrent architecture. There are four sets of weights in the network: (1) input weights that project the external input signal to individual neurons; (2) recurrent weights that define the internal connectivity among neurons; (3) output weights that map neuronal activity to a network-level output; and (4) feedback weights that project the output signal back into the recurrent reservoir. During training, the recurrent and output weights were updated every 50 time steps using the FORCE learning algorithm, implemented via the Tension Python package23. Input and feedback weights remained frozen during training. The LIF model strikes a balance between biological plausibility and computational efficiency and is commonly used in the training of recurrent spiking networks to achieve computational tasks.

Node-based fractal dimension

Node-based fractal dimension measures the scale-dependent topological features of complex (weighted) networks24. For each node v in the network, let Mv(r) denote the number of other nodes whose distance to node v is smaller than a radius r. Using the box-growing method, the self-similarity of the graph at node v is characterized as \({M}_{v}(r)\propto {r}^{{d}_{v}}\), where dv is the node-based fractal dimension for node v, capturing how the graph expands outward from that node.
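A minimal sketch of the box-growing estimate using unweighted hop distances follows (the paper applies it to weighted networks, where distances would instead come from weighted shortest paths):

```python
import numpy as np
import networkx as nx

def node_fractal_dimension(G, v, r_max=6):
    """Count nodes within hop distance r of v and fit the log-log slope
    of M_v(r) ~ r^{d_v}."""
    lengths = nx.single_source_shortest_path_length(G, v, cutoff=r_max)
    radii = np.arange(1, r_max + 1)
    M = np.array([sum(1 for d in lengths.values() if 0 < d <= r) for r in radii])
    mask = M > 0
    slope, _ = np.polyfit(np.log(radii[mask]), np.log(M[mask]), 1)
    return slope

G = nx.watts_strogatz_graph(1000, 10, 0.05, seed=0)
dims = [node_fractal_dimension(G, v) for v in list(G)[:5]]
print(np.round(dims, 2))
```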