Introduction

Resistor networks have essential applications across the natural sciences and electronic circuit engineering. Their reliability, repeatability, and controllability have been used to precisely control the current flow in a system, enabling deep exploration of the electrical and magnetic properties of nanomaterials and devices1. The impact of nonlinear topological circuits on optical and electronic devices has been investigated, grounded in the principles of linear topological circuits2. In condensed matter physics, resistor network models were employed to analyze and calculate the influence of defects on the surface of topological insulators, including the resistance changes they induce3. Andre Geim won the Nobel Prize for pioneering research on graphene4,5. In organic electronics, resistor networks were essential for studying defect regions in organic thin films6. Equivalent circuit models and modeling methods were utilized to analyze and optimize the diffraction patterns of printed circuit board gratings7 and microstrip reflectarray antennas8. In materials science, numerical methods based on resistor network models were used to study solid oxide fuel cells9. To locate damage in carbon-fiber-reinforced polymers (CFRP), resistor network theory was utilized to simulate and calculate resistance changes10. An electrical analysis strategy grounded in resistor network principles allows in-depth investigation, and effective resolution, of the positional distortion encountered during localization in \(\gamma\)-ray imaging systems11. The study of resistor networks therefore holds significant importance and value.

Calculating the potential and the equivalent resistance are pivotal issues in resistor networks. Gustav Kirchhoff pioneered research on this topic in 184712. Wu expressed the resistance between any two nodes in a finite regular lattice network using the Laplacian operator13. Nevertheless, this approach encounters difficulties with irregular lattices. Consequently, Izmailian et al., drawing on the properties of the Laplacian matrix with notable insight, successfully adapted this method to impedance networks14. Following their work, many researchers have explored theoretical solution methods for resistor networks, deepening the knowledge within this field15,16,17,18,19,20,21. Despite these advancements, a notable oversight persists in these investigations of resistor networks: the fact that resistivity is a time-varying parameter has yet to be adequately addressed.

Neural networks demonstrate powerful capabilities in applications22,23,24,25,26. In particular, a recurrent neural network based on a vector-valued error function, known as the zeroing neural network (ZNN, also referred to as the Zhang neural network) and proposed by Zhang in 200227, has received extensive application and development. For example, noise-resistant zeroing neural networks significantly improve the accuracy and operational efficiency of the algorithm in noisy environments28,29,30, and ZNN has been successfully applied to rehabilitation robotics, achieving a technological breakthrough31,32,33. Discrete-time recurrent neural networks have made significant progress in numerical stability, robustness, and accuracy34,35,36,37, and these breakthroughs have led to successful applications in intelligent human-machine control, signal processing, and image processing38,39,40. Compared to traditional algorithms, zeroing neural network algorithms demonstrate superior ability in solving time-varying problems, which makes them well suited to dynamic resistor networks. Recent advancements in ZNN have significantly enhanced its robustness and applicability. For example, the fuzzy-enhanced robust discrete-time ZNN (DZNN) improves the handling of uncertainties in dynamic systems41, while the triple-integral noise-resistant RNN enhances robustness in noisy environments42. Additionally, discrete-time advanced zeroing neurodynamics offers higher precision for complex time-varying problems43. These developments underscore ZNN's growing relevance in solving real-world challenges. Recently, gradient neural networks (GNN) have also shown promise in addressing time-varying problems. For instance, Zhang et al. proposed GNN models with robust finite-time convergence for time-varying systems of linear equations44 and time-varying matrix inversion45. These advancements highlight the potential of GNN in dynamic systems. However, we chose ZNN for our study due to its extensive application and its alignment with the structural characteristics of cobweb resistor networks.

ZNN's versatility is evident across diverse fields. It has been applied to multi-robot tracking and formation for precise control46, mobile localization for improved accuracy in dynamic environments47, and inter-robot management for optimizing sensing and measurement48. In medical IoT, ZNN enhances data processing and device management49, and in portfolio management it optimizes risk-profit tradeoffs under transaction costs50. These applications highlight ZNN's effectiveness in addressing complex, time-varying problems. Based on the unique structural features of cobweb resistance networks, this paper develops a structured zeroing neural network algorithm (referred to below as SZNNCRN) for efficiently solving dynamic cobweb resistance network problems. The main contributions of this paper are summarized as follows:

  1.

    This paper designs the SZNNCRN model for solving the mathematical model of dynamic cobweb resistor networks. This algorithm not only enriches the existing methodology for solving resistor networks but also effectively addresses the dynamic cobweb resistor network solutions, thereby making it closer to practical application scenarios.

  2.

    By deeply exploring the structural properties of the Laplacian matrix, the fast algorithm proposed in this paper significantly saves computational resources and substantially enhances the computational efficiency of SZNNCRN, enabling it to handle larger-scale dynamic cobweb resistor network problems.

  3.

    Comparative experiments fully demonstrate the computational advantages of the SZNNCRN method. Meanwhile, theoretical analysis rigorously proves the convergence of the time-varying SZNNCRN, further validating the reliability and stability of this method.

  4.

    The equivalent resistances obtained using the SZNNCRN method are highly consistent with exact results, which indirectly proves the accuracy of this method. Additionally, this method has been successfully applied to path planning, demonstrating its broad practical value and application prospects.

The structure of this paper is organized as follows: The first section reviews the construction of the cobweb resistor network and designs the dynamic cobweb resistor network. The second section proposes a structured zeroing neural network model applicable to the dynamic cobweb resistance network. The third section develops a fast algorithm for neural unit computation. The fourth section presents theoretical analyses. The fifth section contains numerical simulations. The final section demonstrates two applications of the SZNNCRN model.

Mathematical model and equivalent resistance for cobweb resistance network

Cobweb resistance network

The cobweb resistor network introduced by N. Sh. Izmailian, R. Kenna, and F. Y. Wu is first reviewed14.

Figure 1

An \(m\times n\) cobweb network with \(m=5\) and \(n=8\), where the resistors in the longitudinal and latitudinal directions are denoted by r and s, respectively. The potential zero point is the central point O.

The cobweb grid is a wheel lattice composed of n spokes and m concentric circles, with a total of \(mn+1\) nodes. The resistors in the latitude and longitude directions are denoted by r and s, respectively. Figure 1 shows a cobweb network with \(m=5\), \(n=8\), radial resistance r, and latitudinal resistance s. To calculate the resistance of the cobweb network, the central node O is chosen as node 0, with its potential set to zero.

Derived from Kirchhoff's law, a Laplacian matrix equation, i.e., the mathematical model of the cobweb resistor network,

$$\begin{aligned} {\Delta _{mn}}V = {{\mathcal {I}}}, \end{aligned}$$
(1)

is obtained14, where V and \(\mathcal {I}\) are mn-dimensional vectors, i.e.,

$$\begin{aligned} V= & {\left( {{V_1},{V_2},{V_3}, \ldots , {V_{mn}}} \right) ^T}, \\ {{\mathcal {I}}}= & {\left( {{{{\mathcal {I}}}_1},{{{\mathcal {I}}}_2},{{{\mathcal {I}}}_3}, \ldots , {{{\mathcal {I}}}_{mn}}} \right) ^T}, \end{aligned}$$

\(V_i\) represents the potential of the i-th node, \(\mathcal {I}_i\) represents the current of the i-th node, and \(\Delta _{mn}\) is an \(mn \times mn\) matrix,

$$\begin{aligned} {\Delta _{mn}} = {r^{ - 1}}L_n^\text {per} \otimes {E_m} + {s^{ - 1}}{E_n} \otimes L_m^\text {DN}, \end{aligned}$$
(2)

each element \(c_{i,j}\ (i, j = 1, 2, \ldots ,mn)\) of the matrix \(\Delta _{mn}\) represents the inverse of the resistance between nodes i and j, \(E_n\) and \(E_m\) are the \(n \times n\) and \(m \times m\) identity matrices, respectively, \(r, s \in \mathbb {R}\), and \(L_n^\text {per}\) is the Laplacian operator of a \(1\text {D}\) lattice with periodic boundary conditions,

$$\begin{aligned}L_n^\text {per} = \left( {\begin{array}{*{20}{c}} 2& { - 1}& 0& \cdots & \cdots & 0& { - 1}\\ { - 1}& 2& { - 1}& 0& \ddots & 0& 0\\ 0& { - 1}& 2& { - 1}& \ddots & \ddots & \vdots \\ \vdots & 0& \ddots & \ddots & \ddots & \ddots & \vdots \\ \vdots & \ddots & \ddots & \ddots & \ddots & \ddots & 0\\ 0& \ddots & \ddots & 0& { - 1}& 2& { - 1}\\ { - 1}& 0& \cdots & \cdots & 0& { - 1}& 2 \end{array}} \right) ,\end{aligned}$$

and \(L_m^\text {DN}\) is the \(1\text {D}\) lattice Laplacian operator with Dirichlet-Neumann boundary conditions,

$$\begin{aligned}L_m^\text {DN} = \left( {\begin{array}{*{20}{c}} 2& { - 1}& 0& \cdots & \cdots & 0& 0\\ { - 1}& 2& { - 1}& 0& \ddots & \ddots & 0\\ 0& { - 1}& 2& { - 1}& \ddots & \ddots & \vdots \\ \vdots & 0& \ddots & \ddots & \ddots & \ddots & \vdots \\ \vdots & \ddots & \ddots & \ddots & 2& \ddots & 0\\ 0& \ddots & \ddots & 0& { - 1}& 2& { - 1}\\ 0& 0& \cdots & \cdots & 0& { - 1}& 1 \end{array}} \right) ,\end{aligned}$$

r and s represent the longitudinal and latitudinal resistivity, respectively.
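For readers who wish to experiment numerically, \(\Delta_{mn}\) can be assembled directly from its Kronecker structure in Eq. (2). The following Python sketch is dense and purely illustrative (the function names are ours):

```python
import numpy as np

def laplacian_per(n):
    """1D Laplacian with periodic boundary conditions (L_n^per)."""
    L = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
    L[0, -1] = L[-1, 0] = -1  # periodic wrap-around
    return L

def laplacian_dn(m):
    """1D Laplacian with Dirichlet-Neumann boundary conditions (L_m^DN)."""
    L = 2 * np.eye(m) - np.eye(m, k=1) - np.eye(m, k=-1)
    L[-1, -1] = 1  # Neumann (free) outer end
    return L

def delta_mn(m, n, r, s):
    """Cobweb Laplacian of Eq. (2):
    r^{-1} (L_n^per kron E_m) + s^{-1} (E_n kron L_m^DN)."""
    return (np.kron(laplacian_per(n), np.eye(m)) / r
            + np.kron(np.eye(n), laplacian_dn(m)) / s)
```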

As there is no source or sink of current, the current satisfies the following constraint condition

$$\begin{aligned} \sum _{i=1}^{mn} \mathcal {I}_i = 0. \end{aligned}$$

Dynamic cobweb resistance network

The resistivity of most electrically conductive materials varies with time51. In this paper, a linear model is used to represent this change:

$$\begin{aligned} r(t) = r_0(1 + \rho t), s(t) = s_0(1 + \rho t), \end{aligned}$$
(3)

where \(r_0\) and \(s_0\) denote the initial longitudinal and latitudinal resistivity, respectively. \(\rho\) represents the rate of change of resistivity over time, while t stands for the time variable.

According to (1) and (3), the mathematical model of the dynamic cobweb resistor network is

$$\begin{aligned} {\Delta _{mn}(t)}V(t) = {{\mathcal {I}}}, \end{aligned}$$
(4)

where

$$\begin{aligned} {\Delta _{mn}(t)} = {r^{ - 1}(t)}L_n^\text {per} \otimes {E_m} + {s^{ - 1}(t)}{E_n} \otimes L_m^\text {DN}, \end{aligned}$$
(5)

and V(t) represents the time-varying potential.

Equivalent resistance

The equivalent resistance \(R_{\alpha ,\beta }\) between any two nodes \(\alpha\) and \(\beta\) is defined as follows: connect nodes \(\alpha\) and \(\beta\) to an external battery, with no other nodes connected to an external source, and measure the current \(\mathcal {I}_{\alpha ,\beta }\) flowing through the battery. Let the potentials at nodes \(\alpha\) and \(\beta\) be \(V_{\alpha }\) and \(V_{\beta }\), respectively. According to Ohm's law, \(R_{\alpha ,\beta }\) can be calculated as

$$\begin{aligned} R_{\alpha ,\beta }=(V_{\alpha }-V_{\beta })/\mathcal {I}_{\alpha ,\beta }. \end{aligned}$$
(6)
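Numerically, this definition translates into a single linear solve: inject a unit current at \(\alpha\), extract it at \(\beta\), solve Eq. (1) for the potentials, and take the difference. A minimal sketch, reusing the hypothetical `delta_mn` helper above (node indices are 0-based):

```python
import numpy as np

def equivalent_resistance(m, n, r, s, alpha, beta):
    """R_{alpha,beta} per Eq. (6) with a unit current I_{alpha,beta} = 1."""
    A = delta_mn(m, n, r, s)
    I = np.zeros(m * n)
    I[alpha], I[beta] = 1.0, -1.0     # unit current in at alpha, out at beta
    V = np.linalg.solve(A, I)         # Delta_mn is nonsingular: O is grounded
    return V[alpha] - V[beta]
```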

When the center point O is excluded, the resistance \({R^{\text {cob}}}\left( {{r_1},{r_2}} \right)\) between any two points \({r_1} = \left( {{x_1},{y_1}} \right)\) and \({r_2} = \left( {{x_2},{y_2}} \right)\) in the cobweb network is14

$$\begin{aligned} R^{\text {cob}}(r_1,r_2)= & \frac{2r}{2m + 1} \sum _{i = 0}^{m - 1} \frac{S_1^2 + S_2^2 - 2S_1S_2\cosh \left[ 2\left| x_1 - x_2 \right| \Lambda _i \right] }{\sinh \left( 2\Lambda _i \right) }\coth \left( n\Lambda _i \right) \nonumber \\ & + \frac{2r}{2m + 1}\sum _{i = 0}^{m - 1} \frac{2S_1S_2\sinh \left[ 2\left| x_1 - x_2 \right| \Lambda _i \right] }{\sinh \left( 2\Lambda _i \right) }, \end{aligned}$$
(7)

where

$$\begin{aligned} {\phi _i}&= \frac{{(i + \frac{1}{2})\pi }}{{2m + 1}}, \quad {\Lambda _i} = 2 - 2\cos (2{\phi _i}), \quad {S_1} = \sin (2{y_1}{\phi _i}), \quad {S_2} = \sin (2{y_2}{\phi _i}), \quad i = 0, 1, \ldots , m - 1. \end{aligned}$$

When one point in the cobweb network is the center point O, and another point is \(P = \left( {x,y} \right)\), the resistance \({R^{\text {cob}}}(O,P)\) is14

$$\begin{aligned} {R^\text {cob}}(O,P) = \frac{{2r}}{{2m + 1}}\sum \limits _{i = 0}^{m - 1} {\frac{{\coth (n{\Lambda _i})}}{{\sinh \left( {2{\Lambda _i}} \right) }}{{\sin }^2}\left( {2y{\phi _i}} \right) }, \end{aligned}$$
(8)

where

$$\begin{aligned} {\phi _i} = \frac{{(i + 1/2)\pi }}{{2m + 1}},\quad {\Lambda _i} = 2 - 2\cos (2{\phi _i}),\quad i = 0, 1, \ldots ,m - 1. \end{aligned}$$
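As a cross-check, Eq. (8) can be transcribed directly into code. The sketch below follows the formula exactly as printed, with the summation index running over \(i = 0, \ldots, m-1\); it should be validated against the tabulated exact values before use:

```python
import numpy as np

def r_cob_center(m, n, r, y):
    """Direct transcription of Eq. (8): resistance between the center O
    and P = (x, y); x drops out by rotational symmetry."""
    i = np.arange(m)
    phi = (i + 0.5) * np.pi / (2 * m + 1)
    Lam = 2 - 2 * np.cos(2 * phi)
    terms = np.sin(2 * y * phi) ** 2 / (np.tanh(n * Lam) * np.sinh(2 * Lam))
    return 2 * r / (2 * m + 1) * terms.sum()
```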

Structured zeroing neural network for solving time-varying Laplacian linear equation system

First, the vector-valued error function is defined

$$\begin{aligned} E(t) = {\Delta _{mn}(t)}V(t) - {{\mathcal {I}}}\mathrm{{,}} \end{aligned}$$
(9)

where \(\Delta _{mn}(t)\) is given in (5) and \(V(t) \in \mathbb {R}^{mn}\). If every element of the error function E(t) is zero, then V(t) is the exact solution of Eq. (4).

To drive E(t) toward zero, the time derivative of the error function is designed to follow the negative direction of the activated error27,

$$\begin{aligned} \frac{\text {d}E(t)}{\text {d}t}= - \Gamma (t){{\mathcal {F}} }(E(t)), \end{aligned}$$
(10)

where the parameter \(\Gamma (t) \in {\mathbb {R}^{mn \times mn}}\) is a positive definite matrix used to control the convergence rate and \({{\mathcal {F}} }\left( \cdot \right) :{\mathbb {R}^{mn}} \rightarrow {\mathbb {R}^{mn}}\) is a vector-valued activation function.

To ensure that the SZNNCRN model possesses good convergence capabilities, we take \(\Gamma (t) = \gamma {[\Delta _{mn}(t){\Delta _{mn}^T(t)} + \lambda {E_{mn}}]^k}\), so that the convergence rate can be controlled by adjusting the design parameters \(\gamma , \lambda , k\). Substituting the error function (9) into (10) and expanding \(\mathop {E(t)}\limits ^ \bullet = {\Delta _{mn}}(t)\mathop {V(t)}\limits ^ \bullet + \mathop {{\Delta _{mn}}(t)}\limits ^ \bullet V(t)\) yields the SZNNCRN model

$$\begin{aligned} {\Delta _{mn}}(t)\mathop {V(t)}\limits ^ \bullet = - \Gamma (t){{\mathcal {F}}}\left( {{\Delta _{mn}}(t)V(t) - {{\mathcal {I}}}} \right) - \mathop {{\Delta _{mn}}(t)}\limits ^ \bullet V(t), \end{aligned}$$
(11)

where \(E_{mn}\) is the identity matrix of order mn, and the design parameters \(\lambda ,\gamma ,k\) satisfy \(\gamma > 0, \lambda \ge 1,k \ge 1\). The activation function \({{\mathcal {F}}}\left( \cdot \right)\) significantly impacts the performance of a neural network. The following are two widely utilized types of activation functions (compact code forms are sketched after the list).

  1.

    Monotonically increasing odd activation functions:

    (a)

      Linear activation function (LAF)52

      $$\begin{aligned} f({u}) = {u}. \end{aligned}$$
    (b)

      Bipolar sigmoid activation function (BSAF)52

      $$\begin{aligned} f(u) = \frac{{1 - \exp ( - \zeta u)}}{{1 + \exp ( - \zeta u)}}, \end{aligned}$$

      where \(\zeta \ge 1.\)

    (c)

      Power sigmoid activation function (PSAF)52

      $$\begin{aligned} f(u) = \left\{ \begin{array}{ll} u^p, & \text {if } |u| \ge 1 \\ \frac{{1 + \exp (-q)}}{{1 - \exp (-q)}} \frac{{1 - \exp (-q u)}}{{1 + \exp (-q u)}}, & \text {if } |u| < 1 \end{array} \right. , \end{aligned}$$

      where \(q > 2,p = 2s + 1,s = 1,2,3 \ldots\)

    (d)

      Smooth power sigmoid function (SPSF)52

      $$\begin{aligned} f({u}) = \frac{1}{2}{u} + \frac{{1 + \exp ( - q)}}{{1 - \exp ( - q)}}\frac{{1 - \exp ( - qu)}}{{1 + \exp ( - qu)}}, \end{aligned}$$

      where \(q > 2.\)

  2.

    Activation functions of the form \({{\mathcal {F}}}\left( x \right) = x \cdot \Psi \left( x \right)\), where \(\Psi \left( x \right) > 0\) for all x:

    (a)

      Hyperbolic Tangent (Tanh)53

      $$\begin{aligned} f(u) = \tanh (u). \end{aligned}$$
    (b)

      Swish53

      $$\begin{aligned} f(u) = {u} \cdot \text {sigmoid}(u). \end{aligned}$$
    (c)

      Gaussian error linear unit (GELU)53

      $$\begin{aligned} f(u) = {u} \cdot \Phi (u), \end{aligned}$$

      where \(\Phi (u)\) is the standard normal cumulative distribution function.

    (d)

      Mish53

      $$\begin{aligned} f(u) = u \cdot \tanh (\ln (1 + {e^u})). \end{aligned}$$
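For reference, the elemental forms listed above can be collected in a few lines of Python; the parameter defaults are illustrative choices satisfying the stated constraints (tanh is available directly as `np.tanh`):

```python
import numpy as np
from scipy.special import erf

def laf(u):                        # linear
    return u

def bsaf(u, zeta=2.0):             # bipolar sigmoid
    return (1 - np.exp(-zeta * u)) / (1 + np.exp(-zeta * u))

def psaf(u, q=3.0, p=3):           # power sigmoid (p = 2s + 1)
    c = (1 + np.exp(-q)) / (1 - np.exp(-q))
    return np.where(np.abs(u) >= 1, u ** p,
                    c * (1 - np.exp(-q * u)) / (1 + np.exp(-q * u)))

def spsf(u, q=3.0):                # smooth power sigmoid
    c = (1 + np.exp(-q)) / (1 - np.exp(-q))
    return 0.5 * u + c * (1 - np.exp(-q * u)) / (1 + np.exp(-q * u))

def swish(u):                      # u * sigmoid(u)
    return u / (1 + np.exp(-u))

def gelu(u):                       # u * standard normal CDF
    return u * 0.5 * (1 + erf(u / np.sqrt(2)))

def mish(u):                       # u * tanh(softplus(u))
    return u * np.tanh(np.log1p(np.exp(u)))
```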

Remark 1

The SZNNCRN model is a recurrent neural network. It follows from Eq. (11) that the dynamics of the i-th (with \(i=1,2,\dots , mn\)) neuron can be formulated as

$$\begin{aligned} \mathop {{V_i}}\limits ^ \bullet = \sum \limits _{j = 1}^{mn} {{p_{i,j}}\mathop {{V_j}}\limits ^ \bullet } - \gamma \sum \limits _{j = 1}^{mn} {{q_{i,j}}{f_j}} - \sum \limits _{j = 1}^{mn} {{\mu _{i,j}}{V_j}}, \end{aligned}$$
(12)

where

$$\begin{aligned} {f_j} = f\left( {\sum \limits _{k = 1}^{mn} {{\eta _{j,k}}{v_k} - {b_j}} } \right) . \end{aligned}$$
(13)

Here \(V_i\) denotes the i-th entry of the neural state vector V(t) and the i-th neuron of the model (11); \(p_{i,j}\), \(q_{i,j}\), and \(\mu _{i,j}\) denote the ij-th elements of the matrices \(E_{mn}- \Delta _{mn}(t)\), \(\left[ \Delta _{mn}(t) \Delta _{mn}^T(t) + \lambda E_{mn} \right] ^k\), and \(\mathop {\Delta _{mn}(t)}\limits ^ \bullet\), respectively; \(\eta _{j,k}\) denotes the jk-th element of the matrix \(\Delta _{mn}(t)\); \(b_j\) denotes the j-th element of \({{\mathcal {I}}}\); and \(f(\cdot )\) is the elemental function of \(\mathcal {F}(\cdot )\). A schematic diagram of the neural network structure of the SZNNCRN model (11) is illustrated in Fig. 2.

Figure 2

Circuit schematic of the SZNNCRN (11) in the form of (12).
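Before turning to the fast algorithms, it is worth noting that Eq. (11) can be integrated directly with a forward-Euler scheme. The sketch below uses dense linear algebra purely for exposition and assumes the hypothetical helpers `delta_mn`, `laplacian_per`, and `laplacian_dn` from the earlier sketch; the fast algorithms of the next section replace these \(O(N^3)\) operations in practice:

```python
import numpy as np

def szn_step(V, t, dt, m, n, I, gamma=10.0, lam=3.0, k=2,
             f=lambda u: u, rho=0.004, r0=1.0, s0=1.0):
    """One forward-Euler step of the SZNNCRN dynamics (11);
    the default activation is the linear one, f(u) = u."""
    r, s = r0 * (1 + rho * t), s0 * (1 + rho * t)            # Eq. (3)
    D = delta_mn(m, n, r, s)                                 # Eq. (5)
    # d(Delta_mn)/dt via d(1/r)/dt = -r0*rho/r^2 (likewise for s)
    Ddot = (np.kron(laplacian_per(n), np.eye(m)) * (-r0 * rho / r**2)
            + np.kron(np.eye(n), laplacian_dn(m)) * (-s0 * rho / s**2))
    Gamma = gamma * np.linalg.matrix_power(D @ D.T + lam * np.eye(m * n), k)
    Vdot = np.linalg.solve(D, -Gamma @ f(D @ V - I) - Ddot @ V)
    return V + dt * Vdot
```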

Fast algorithm for SZNNCRN

Fast algorithms play a crucial role in computer science. Their notable advantage is the ability to significantly enhance time efficiency, reduce the consumption of computational resources, and effectively meet the requirements of large-scale data processing. This section thoroughly explores and exploits the unique structural properties of \(\Delta _{mn}(t)\) to design an efficient algorithm tailored to SZNNCRN.

Fast algorithm for \(\Delta _{mn}(t)V(t)\)

Given the matrix \(\Delta _{mn}(t)\) and a time-varying vector V(t), at a fixed moment \(t^*\) let \(v = V (t^*) = [V_1(t^*),V_2(t^*),\dots ,V_{mn}(t^*)]^T\). To compute the product \(\Delta _{mn}(t^*)v\), the terms \((L_n^\text {per} \otimes {E_m})v\) and \(({E_n} \otimes L_m^\text {DN})v\) can be computed separately. \(L_n^\text {per} \otimes {E_m}\) is a circulant matrix, determined entirely by its first column, and \({E_n} \otimes L_m^\text {DN}\) is a block diagonal matrix. Relying on the structure of these matrices, and inspired by fast algorithms for specially structured matrices54,55,56,57,58,59,60,61,62,63,64, efficient methods for computing the matrix-vector products are presented below. \(L_n^\text {per} \otimes {E_m}\) is generated by circularly shifting its first column c, where c is of the form

$$\begin{aligned} c = {\left\{ \begin{array}{ll} {[2, \underbrace{0, \ldots , 0}_{m - 1}, -1, \underbrace{0, \ldots , 0}_{m - 1}]^T}, & \text {if } n = 2 \\ {[2, \underbrace{0, \ldots , 0}_{m - 1}, -1, \underbrace{0, \ldots , 0}_{mn - 2m - 1}, -1, \underbrace{0, \ldots , 0}_{m - 1}]^T}, & \text {if } n > 2 \end{array}\right. }. \end{aligned}$$

The efficient method for computing \((L_n^\text {per} \otimes {E_m})v\) is presented in the following algorithm.

Algorithm 1

Fast algorithm for matrix-vector multiplication \((L_n^{\text {per}} \otimes {E_m})v\)
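A minimal NumPy realization of this idea (a sketch under our own naming, not a reproduction of the algorithm figure) exploits the fact that a circulant matrix-vector product is a circular convolution, computable with the FFT:

```python
import numpy as np

def first_column(m, n):
    """First column c of L_n^per kron E_m, as defined above."""
    c = np.zeros(m * n)
    c[0] = 2.0
    c[m] = -1.0
    c[(n - 1) * m] = -1.0  # coincides with the previous entry when n = 2
    return c

def circ_matvec(c, v):
    """y = Cv for the circulant matrix C with first column c,
    via the FFT in O(N log N)."""
    return np.real(np.fft.ifft(np.fft.fft(c) * np.fft.fft(v)))

# usage: circ_matvec(first_column(m, n), v) equals (L_n^per kron E_m) @ v
```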

The product \(({E_n} \otimes L_m^\text {DN})v\) is essentially n independent products of the sub-block \(L_m^\text {DN}\) with the vectors \(\xi _i\),

$$\begin{aligned} ({E_n} \otimes L_m^\text {DN})v = \left[ {\begin{array}{*{20}{c}} {L_m^\text {DN}\xi _1}\\ {L_m^\text {DN}\xi _2}\\ \vdots \\ {L_m^\text {DN}\xi _n} \end{array}} \right] , \end{aligned}$$

where \({\xi _i} = {[{v_{(i-1)m + 1}},{v_{(i-1)m + 2}},\ldots ,{v_{(i-1)m + m}}]^T}, \quad i = 1,2, \ldots , n.\)

Therefore, if \(z_i=L_m^\text {DN}\xi _i\) is computed, \(({E_n} \otimes L_m^\text {DN})v\) can be obtained by sequentially combining the n \(z_i\)’s.

Algorithm 2

Fast algorithm for \(({E_n} \otimes L_m^{\text {DN}})v\)
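In the same spirit, the block-diagonal product needs only the tridiagonal stencil of \(L_m^\text{DN}\) applied to each length-m sub-vector, an O(N) operation; again a sketch under our own naming:

```python
import numpy as np

def dn_block_matvec(v, m, n):
    """y = (E_n kron L_m^DN) v in O(N): apply the stencil of L_m^DN
    to each of the n sub-vectors xi_i."""
    X = v.reshape(n, m)        # row i-1 holds xi_i
    Z = 2.0 * X
    Z[:, -1] -= X[:, -1]       # last diagonal entry of L_m^DN is 1, not 2
    Z[:, :-1] -= X[:, 1:]      # superdiagonal -1
    Z[:, 1:] -= X[:, :-1]      # subdiagonal -1
    return Z.reshape(-1)
```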

Based on Algorithms 1 and 2 and the equation

$$\begin{aligned} {\Delta _{mn}}(t)V(t) = {r^{ - 1}}(t)\left( L_n^\text {per} \otimes {E_m}\right) V(t) + {s^{ - 1}}(t)\left( {E_n} \otimes L_m^\text {DN}\right) V(t), \end{aligned}$$

the efficient computation of \(\Delta _{mn}(t)V(t)\) is achieved. The calculation of \(\mathop {{\Delta _{mn}}(t)}\limits ^ \bullet V(t)\) requires first computing the derivative of \(\Delta _{mn}(t)\); the remaining steps are the same as for \(\Delta _{mn}(t)V(t)\).

Fast algorithm for \({\left[ {\Delta _{mn}(t){\Delta _{mn}^T(t)} + \lambda {E_{mn}}} \right] ^k}{{\mathcal {F}}}\left( \cdot \right)\)

Given the matrix \({\left[ {\Delta _{mn}(t){\Delta _{mn}^T(t)} + \lambda {E_{mn}}} \right] ^k}\) and the activated error vector \({{\mathcal {F}}}\left( \cdot \right) = {\left[ {{f_1}\left( \cdot \right) ,{f_2}\left( \cdot \right) , \cdots, {f_{mn}}\left( \cdot \right) } \right] ^T}\), a fast algorithm for computing \({\left[ {\Delta _{mn}(t){\Delta _{mn}^T(t)} + \lambda {E_{mn}}} \right] ^k}{{\mathcal {F}}}\left( \cdot \right)\) is designed below, based on the hidden structural characteristics of the matrix.

The Laplacian matrix \(\Delta _{mn}(t)\) is similar to a diagonal matrix via the transition matrix \({F_n} \otimes S_m^{VI}\):

$$\begin{aligned}&{\left( {{F_n} \otimes S_m^{VI}} \right) ^{ - 1}}{\Delta _{mn}(t)}\left( {{F_n} \otimes S_m^{VI}} \right) \\&\quad = \left( {F_n^{ - 1} \otimes {{\left( {S_m^{VI}} \right) }^{ - 1}}} \right) \left( {{r^{ - 1}(t)}L_n^{\text {per}} \otimes {E_m} + {s^{ - 1}(t)}{E_n} \otimes L_m^{\text {DN}}} \right) \left( {{F_n} \otimes S_m^{VI}} \right) \\&\quad = {r^{ - 1}(t)}F_n^{ - 1}L_n^{\text {per}}{F_n} \otimes {\left( {S_m^{VI}} \right) ^{ - 1}}S_m^{VI} + {s^{ - 1}(t)}F_n^{ - 1}{F_n} \otimes {\left( {S_m^{VI}} \right) ^{ - 1}}L_m^{\text {DN}}S_m^{VI} \\&\quad = {r^{ - 1}(t)}{\Lambda _n} \otimes {E_m} + {s^{ - 1}(t)}{E_n} \otimes {\Lambda _m} \\&\quad = {\Lambda _{mn}(t)}, \end{aligned}$$
(14)

where \({F_n} = \left( {{f_{j,k}}} \right) _{j,k = 1}^n\) is the fast Fourier transform (FFT) matrix,

$$\begin{aligned} {f_{j,k}} = \frac{1}{{\sqrt{n} }}{e^{\frac{{2\pi i(j - 1)\left( {k - 1} \right) }}{n}}},\quad 1 \le j,k \le n,\quad i = \sqrt{-1}, \end{aligned}$$

\(S_m^{VI}\) is the type-VI discrete sine transform (DST-VI) matrix,

$$\begin{aligned} S_m^{VI} = \frac{1}{{\sqrt{2m + 1} }}\left( {\sin \frac{{\left( {2j - 1} \right) k\pi }}{{2m + 1}}} \right) _{j,k = 1}^m, \end{aligned}$$

the matrices \(L_n^{\text {per}}\) and \(L_m^{\text {DN}}\) are similar to the diagonal matrices \({\Lambda _n}\) and \({\Lambda _m}\), respectively,

$$\begin{aligned} \Lambda _n = \text {diag}(\lambda _0, \lambda _1, \cdots , \lambda _j, \cdots , \lambda _{n-1}), \quad \lambda _j = 2 - 2\cos \left( \frac{2j\pi }{n}\right) , \quad j = 0, 1, \cdots , n-1, \\ \Lambda _m = \text {diag}(\lambda _0', \lambda _1', \cdots , \lambda _k', \cdots , \lambda _{m-1}'), \quad \lambda _k' = 2 - 2\cos \left( \frac{(2k+1)\pi }{2m+1}\right) , \quad k = 0, 1, \cdots , m-1, \end{aligned}$$

and the matrix \(\Delta _{mn}(t)\) is similar to the diagonal matrix \({\Lambda _{mn}(t)}\),

$$\begin{aligned}{\Lambda _{mn}(t)} = \text {diag}({\lambda _{0,0}(t)},{\lambda _{0,1}(t)}, \cdots ,{\lambda _{n - 1,m - 1}(t)}), \quad {\lambda _{j,k}(t)} = {r^{ - 1}(t)}{\lambda _j} + {s^{ - 1}(t)}{\lambda _k'}. \end{aligned}$$

Based on Eq. (14), \(\Delta _{mn}(t)\) can be expressed as

$$\begin{aligned} {\Delta _{mn}(t)}&= \left( {{F_n} \otimes S_m^{VI}} \right) {\Lambda _{mn}(t)}{\left( {{F_n} \otimes S_m^{VI}} \right) ^{ - 1}} \\&= \left( {{F_n} \otimes {E_m}} \right) \left( {{E_n} \otimes S_m^{VI}} \right) {\Lambda _{mn}(t)}\left( {F_n^{ - 1} \otimes {E_m}} \right) \left( {{E_n} \otimes {{\left( {S_m^{VI}} \right) }^{ - 1}}} \right) . \end{aligned}$$
(15)

Then, the k-th power of the matrix \(\Delta _{mn}(t){\Delta ^T _{mn}(t)} + \lambda {E_{mn}}\) can be obtained

$$\begin{aligned} {\left[ {\Delta _{mn}(t){\Delta ^T _{mn}(t)} + \lambda {E_{mn}}} \right] ^k} = \left( {{F_n} \otimes {E_m}} \right) \left( {{E_n} \otimes S_m^{VI}} \right) \textrm{M }^k(t)\left( {F_n^{ - 1} \otimes {E_m}} \right) \left( {{E_n} \otimes {{\left( {S_m^{VI}} \right) }^{ - 1}}} \right) , \end{aligned}$$

where \(\mathrm{M}(t) = \Lambda _{mn}^2(t) + \lambda {E_{mn}}\) is a diagonal matrix,

$$\begin{aligned} \mathrm{M (t)} = \left[ {\begin{array}{*{20}{c}} {\lambda _{0,0}^2(t) + \lambda }& 0& \cdots & 0\\ 0& {\lambda _{0,1}^2(t) + \lambda }& \ddots & \vdots \\ \vdots & \ddots & \ddots & 0\\ 0& \cdots & 0& {\lambda _{n - 1,m - 1}^2(t) + \lambda } \end{array}} \right] . \end{aligned}$$

To calculate the matrix-vector multiplication \({\left[ {\Delta _{mn}(t){\Delta ^T _{mn}(t)} + \lambda {E_{mn}}} \right] ^k}{{\mathcal {F}}}\left( \cdot \right)\), the vector \({\delta } = \left( {{E_{n}} \otimes {{\left( {S_m^{VI}} \right) }^{ - 1}}} \right) {{\mathcal {F}}}\left( \cdot \right)\) is first calculated, i.e.

$$\begin{aligned} {\delta } = \left[ {\begin{array}{*{20}{c}} {{{\left( {S_m^{VI}} \right) }^{ - 1}}{\chi _1}}\\ {{{\left( {S_m^{VI}} \right) }^{ - 1}}{\chi _2}}\\ \vdots \\ {{{\left( {S_m^{VI}} \right) }^{ - 1}}{\chi _n}} \end{array}} \right] , \end{aligned}$$

where

$$\begin{aligned} {\chi _j} = {[{f_{(j-1)m + 1}},{f_{(j-1)m + 2}}, \cdots ,{f_{(j-1)m + m}}]^T}, \quad j = 1,2, \cdots, n. \end{aligned}$$
(16)

When \({\delta }_j={{{\left( {S_m^{VI}} \right) }^{ - 1}}{\chi _j}}\) is computed, \(\left( {{E_n} \otimes {{\left( {S_m^{VI}} \right) }^{ - 1}}} \right) {{\mathcal {F}}}\left( \cdot \right)\) can be obtained by sequentially combining the n vectors \(\delta _j\). It is worth noting that \({{{\left( {S_m^{VI}} \right) }^{ - 1}}{\chi _j}}\) can be computed efficiently by the IDST-VI.

Based on the decomposition of the matrix \(\Delta _{mn} (t)\), the calculations are performed sequentially from right to left. \(b = \left( {F_n^{ - 1} \otimes {E_m}} \right) \delta\) can be computed quickly by m applications of the IFFT to the vectors \({\vartheta }_j\), where

$$\begin{aligned} {\vartheta _j} = {[\delta _j,\delta _{j + m}, \cdots ,\delta _{j + (n - 1)m}]^T}, \quad j = 1,2, \cdots, m. \end{aligned}$$
(17)

Then, since \(\mathrm M^k(t)\) is diagonal, its product with the vector b reduces to element-wise vector-vector multiplication.

Similarly, following the same vector decomposition as in (16) and (17), sequentially executing the DST-VI and FFT and arranging the resulting vectors in order yields a fast algorithm for \({\left[ {\Delta _{mn}(t){\Delta ^T _{mn}(t)} + \lambda {E_{mn}}} \right] ^k}{{\mathcal {F}}}\left( \cdot \right)\).

Algorithm 3

Fast algorithm for matrix-vector multiplication \({\left[ {\Delta _{mn}(t)\Delta _{mn}^T(t) + \lambda {E_{mn}}} \right] ^k}{{\mathcal {F}}}\left( \cdot \right)\)
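The following sketch assembles these pieces: it applies the two similarity transforms around the diagonal scaling \(\mathrm{M}^k(t)\). For transparency the sine transform is applied here as an explicit eigenvector matrix of \(L_m^\text{DN}\) (equivalent to the DST-VI/IDST-VI pair up to normalization), and the names are ours; a production implementation would substitute fast DST kernels to reach \(O(N\log_2 N)\):

```python
import numpy as np

def dn_eigvecs(m):
    """Eigenvectors of L_m^DN: column k is sin(j(2k+1)pi/(2m+1)), j = 1..m."""
    j = np.arange(1, m + 1)[:, None]
    k = np.arange(m)[None, :]
    return np.sin(j * (2 * k + 1) * np.pi / (2 * m + 1))

def gamma_apply(Fvec, m, n, r, s, lam, k):
    """[Delta(t) Delta^T(t) + lam E]^k F(.) via the diagonalization above."""
    U = dn_eigvecs(m)
    Uinv = np.linalg.inv(U)
    lj = 2 - 2 * np.cos(2 * np.pi * np.arange(n) / n)                  # eig L_n^per
    lk = 2 - 2 * np.cos((2 * np.arange(m) + 1) * np.pi / (2 * m + 1))  # eig L_m^DN
    d = ((lj[:, None] / r + lk[None, :] / s) ** 2 + lam) ** k  # diagonal of M^k(t)
    X = Fvec.reshape(n, m)
    X = np.fft.fft(X, axis=0, norm="ortho")    # (F_n^{-1} kron E_m): unitary DFT
    X = X @ Uinv.T                             # (E_n kron U^{-1}): inverse DSTs
    X = d * X                                  # diagonal scaling by M^k(t)
    X = X @ U.T                                # (E_n kron U)
    X = np.fft.ifft(X, axis=0, norm="ortho")   # (F_n kron E_m)
    return np.real(X.reshape(-1))
```

A quick consistency check against the dense product, e.g. `np.linalg.matrix_power(D @ D.T + lam * np.eye(m * n), k) @ Fvec` with `D = delta_mn(m, n, r, s)`, confirms the decomposition on small grids.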

Comparison of computational efficiency

Since Algorithm 1 and Algorithm 2 require \(O(N\text {log}_2N)\) and O(N) operations, respectively, where \(N = mn\)65, the matrix-vector product \(\Delta _{mn}v\) can be obtained in \(O(N\text {log}_2N)\), lower than the \(O(N^2)\) of direct computation in MATLAB. Algorithm 3, based on the FFT, DST-VI, IFFT, and IDST-VI66, has a complexity of \(O(N\text {log}_2N)\), while the complexity of direct computation is \(O(N^3)\). Hence, the right-hand side of the implicit dynamic Eq. (11) is evaluated efficiently in \(O(N\text {log}_2N)\) operations instead of the \(O(N^3)\) of the original MATLAB computation. The SZNNCRN solver thus reduces computational complexity and saves storage space, enabling large-scale resistor network computations.

Figure 3

Comparing the CPU runtime of IMRNN and SZNNCRN.

Figure 4

Comparing the CPU runtime of SZNNCRN and others.

Numerical simulations were conducted on resistor networks of different scales using the SZNNCRN model (11) and the IMRNN52 method. The same design parameters were employed for both models: \(\gamma = 3, \lambda = 2\), \(k = 2\); a linear activation function was selected; and the simulation duration was set to \(t = 0.3\) seconds, by which time the models had converged. Resistor networks of different sizes were calculated, with longitudinal resistivity \(r(t) = 1 + 0.004 t\) and latitudinal resistivity \(s(t) = 1 + 0.004 t\). Figure 3 presents a comparative analysis of the runtime performance of IMRNN and SZNNCRN as a line graph. Under identical hardware configurations, parameter settings, and operating conditions, the computational efficiency of SZNNCRN surpasses that of IMRNN by a factor of approximately 400 to 3000. Furthermore, as the network size grows, the speed advantage of SZNNCRN becomes increasingly pronounced.

Table 1 Variable changes in special cases.

Figure 4 compares the computational speed of the SZNNCRN method against the ZNN27 and SZDN67 methods, which are commonly used for solving general linear systems of equations. The detailed parameter configurations are provided in Table 1, where t represents the time parameter after the model reaches a steady state. In the experiments, resistor networks of various sizes were computed, and the runtime comparisons among the methods across different scales are presented as line charts. The charts reveal the following: for resistor networks of all sizes considered, the SZNNCRN method consistently computes faster than the ZNN and SZDN methods, and as the scale of computation increases, its advantage in computational efficiency becomes increasingly pronounced.

Table 2 Variable changes in special cases.
Table 3 Variable changes in special cases.

Tables 2 and 3 present the CPU runtime required for solving dynamic cobweb resistor networks of various sizes using different methods. Through analysis of the information in the table, the following conclusions can be drawn:

  1.

    The runtime of the SZNNCRN method consistently outperforms that of the other methods, demonstrating higher computational efficiency.

  2.

    As the computational scale gradually increases, the computational efficiency of the SZNNCRN method becomes increasingly prominent.

  3.

    Under the same computational resources and time constraints, the SZNNCRN method is capable of effectively handling larger-scale dynamic cobweb resistor network models, indicating its robust capability in dealing with complex problems.

Convergence analysis

Theorem 1

For a lattice \({\Delta _{mn} (t)} \in {\mathbb {R}^{mn \times mn}}\) of the resistive network and the current vector \(\mathcal {I} \in {\mathbb {R}^{mn}}\), if a monotonically increasing odd activation function \({{\mathcal {F}}}\left( \cdot \right)\) is used, then the state vector V(t) of the SZNNCRN, starting from any randomly generated initial state V(0), always globally converges to the theoretical solution \({V^*} (t) = \Delta _{mn}^{ - 1}(t){{\mathcal {I}} }\) of Eq. (4).

Proof

Given a dynamic cobweb lattice \({\Delta _{mn}(t)} \in {\mathbb {R}^{mn \times mn}}\) and current vector \({{\mathcal {I}}} \in {\mathbb {R}^{mn}}\), let \(E(t) = \Delta _{mn}(t)V(t) - {{\mathcal {I}}}\) be the error function (9); E(t) vanishes exactly when V(t) equals the theoretical solution \({V^*}(t) = \Delta _{mn}^{ - 1}(t){{\mathcal {I}}}\), so as E(t) approaches zero the state vector V(t) approaches \({V^*}(t)\). Moreover, differentiating E(t) gives

$$\begin{aligned} \frac{{\text {d}E(t)}}{{\text {d}t}} = {\Delta _{mn}(t)}\frac{{\text {d}V(t)}}{{\text {d}t}} + \mathop {{\Delta _{mn}}(t)}\limits ^ \bullet V(t). \end{aligned}$$

Substituting Eq. (11) into this expression yields

$$\begin{aligned} \mathop E\limits ^. (t) = - \gamma {({ \Delta _{mn}(t){\Delta ^T _{mn}(t)}} + \lambda {E_{mn}})^k}{{\mathcal {F}}}\left( {E(t)} \right) . \end{aligned}$$

Define the following Lyapunov candidate function L(E(t), t)

$$\begin{aligned} L(E(t),t) = \frac{1}{2}\left\| {E(t)} \right\| _F^2, \end{aligned}$$

then

$$\begin{aligned} L(E(t),t) = \frac{1}{2}\left\| {E(t)} \right\| _F^2 = \frac{1}{2}\text {tr}\left( {E{{(t)}^T}E(t)} \right) . \end{aligned}$$

It is evident that, under the condition of \(E(t) \ne 0\), the time derivative of the function L(E(t), t) with respect to t can be derived

$$\begin{aligned} \frac{{\text {d}L(E(t),t)}}{{\text {d}t}}&= \frac{1}{2}\frac{{\text {d}\,\text {tr}\left( {E{{(t)}^T}E(t)} \right) }}{{\text {d}t}} \\&= \text {tr}\left( {E{{(t)}^T}\mathop E\limits ^ \cdot (t)} \right) \\&= - \gamma \, \text {tr}\left( {E{{(t)}^T}{{\left( {{\Delta _{mn}(t)}{\Delta _{mn}^T(t)} + \lambda E_{mn}} \right) }^k}{{\mathcal {F}}}\left( {E(t)} \right) } \right) . \end{aligned}$$
(18)

Observe that the inequality

$$\begin{aligned} {\lambda _{\min }}(A)\text {tr}(B) \le \text {tr}(AB) \le {\lambda _{\max }}(A)\text {tr}(B) \end{aligned}$$

holds when the matrix A is symmetric and the matrix B is symmetric positive semi-definite, where \(\lambda _{\min }(A)\) and \(\lambda _{\max }(A)\) represent the minimum and maximum eigenvalues of the matrix \(A\), respectively.

Since \({\Delta ^T _{mn}(t)}\) and \({\Delta _{mn}(t)}\) have the same eigenvalues, it can be concluded that

$$\begin{aligned} \frac{{\text {d}L(E(t),t)}}{{dt}}&\le - \gamma {\left( {{\alpha ^2(t)} + \lambda } \right) ^k}\text {tr}\left( {{{(E(t))}^T}{{\mathcal {F}}}\left( {E(t)} \right) } \right) \\&= - \gamma {\left( {{\alpha ^2(t)} + \lambda } \right) ^k}\sum \limits _{j = 1}^m {\sum \limits _{i = 1}^n {{e_{ij}}f({e_{ij}})} }, \end{aligned}$$
(19)

where \(\alpha (t) = {\lambda _{\min }}({\Delta _{mn}(t)})\).

Let \(f( \cdot )\) be the elemental function of the monotonically increasing odd activation function \({{\mathcal {F}}}\left( \cdot \right)\). Hence, \(f( \cdot )\) satisfies the following conditions

$$\begin{aligned} f(u)\left\{ {\begin{array}{*{20}{c}} {> 0{ }, \quad \text {if} \; { }u > 0},\\ { = 0{ }, \quad \text {if} \; { }u = 0},\\ {< 0{ }, \quad \text {if} \; { }u < 0}. \end{array}} \right. \end{aligned}$$

Given that \(e_{ij}\) denotes an element of the error function \(E\left( t \right)\), it can be deduced that

$$\begin{aligned} {e_{ij}}f({e_{ij}})\left\{ {\begin{array}{*{20}{c}} { > 0{ }, \quad \text {if} \; { }{e_{ij}} \ne 0},\\ { = 0{ }, \quad \text {if} \; { }{e_{ij}} = 0}. \end{array}} \right. \end{aligned}$$

In conclusion

$$\begin{aligned} \frac{{\text {d}L(E(t),t)}}{{\text {d}t}}\left\{ {\begin{array}{*{20}{c}} { < 0 , \quad \text {if} \; E(t) \ne 0},\\ { = 0 , \quad \text {if} \; E(t) = 0}. \end{array}} \right. \end{aligned}$$

Taking into account the inequality above and Lyapunov's stability theorem, it can be ascertained that for any arbitrary initial point V(0), E(t) globally converges to zero.

Specifically, when the activation function \({{\mathcal {F}}}\left( \cdot \right)\) is the linear activation function (LAF), Eq. (18) becomes

$$\begin{aligned} \frac{{\text {d}L(E(t),t)}}{{\text {d}t}}&= - \gamma \text {tr}\left( {{{\left( {{{\Delta _{mn}(t)}{\Delta ^T _{mn}(t)}} + \lambda E_{mn}} \right) }^k}{{(E(t))}^T}\left( {E(t)} \right) } \right) \\&\le - \gamma {({\alpha ^2(t)} + \lambda )^k}\text {tr}\left( {{{(E(t))}^T}\left( {E(t)} \right) } \right) \\&= - 2\gamma {({\alpha ^2(t)} + \lambda )^k}L(E(t),t). \end{aligned}$$

Upon analyzing the aforementioned differential inequality

$$\begin{aligned} \mathop L\limits ^ \cdot (t) \le - 2\gamma {\left( {{\alpha ^2(t)} + \lambda } \right) ^k}L(t). \end{aligned}$$

It can be inferred that

$$\begin{aligned} L(t) \le L(0)\exp ( - 2\gamma {({\alpha ^2(t)} + \lambda )^k}t). \end{aligned}$$

When the SZNNCRN model utilizes a linear activation function, its convergence rate is

$$\begin{aligned} ECR = \gamma {\left( {{\alpha ^2(t)} + \lambda } \right) ^k}. \end{aligned}$$

\(\square\)

Theorem 2

For a lattice \({\Delta _{mn}(t)} \in {\mathbb {R}^{mn \times mn}}\) of the resistive network and the current vector \(\mathcal {I} \in {\mathbb {R}^{mn}}\), if an activation function of the form \({{\mathcal {F}}}\left( x \right) = x \cdot \Psi \left( x \right)\) with \(\Psi \left( x \right) > 0\) for all x is used, then the state vector V(t) of the SZNNCRN, starting from any randomly generated initial state V(0), always globally converges to the theoretical solution \({V^*}(t) = \Delta _{mn}^{ - 1}(t){{\mathcal {I}}}\) of Eq. (4).

Proof

Let \(f( \cdot )\) be the elemental function of the activation function \({{\mathcal {F}}}\left( \cdot \right)\), which takes the form \({f}\left( x \right) = x \cdot \varphi \left( x \right)\), where \(\varphi \left( x \right) > 0\) is the elemental function of \(\Psi \left( x \right)\). Referring to Eq. (19), the following is obtained,

$$\begin{aligned} \frac{{dL(E(t),t)}}{{dt}} \le - \gamma {\left( {{\alpha ^2(t)} + \lambda } \right) ^k}\sum \limits _{j = 1}^m {\sum \limits _{i = 1}^n {e_{ij}^2\varphi \left( {{e_{ij}}} \right) }. } \end{aligned}$$
(20)

It can be deduced that

$$\begin{aligned} \frac{{\text {d}L(E(t),t)}}{{\text {d}t}}\left\{ {\begin{array}{*{20}{c}} { < 0, \quad \text {if} \; E(t) \ne 0,}\\ { = 0, \quad \text {if} \; E(t) = 0.} \end{array}} \right. \end{aligned}$$

Given the inequality above and Lyapunov's stability theorem, it can be ascertained that for any arbitrary initial point V(0), E(t) globally converges to zero. \(\square\)

Numerical examples

In this section, utilizing Eq. (11) and Algorithms 1, 2, and 3, the proposed SZNNCRN model is numerically simulated. These simulations further corroborate the model's superior convergence performance and assist in selecting optimal parameters.

Figure 5

The temporal evolution of the function V(t) as depicted by its trajectory curve.

Figure 5 depicts the temporal evolution of the potential V(t) in a \(2 \times 3\) resistor network for different activation functions and rates of resistivity change \(\rho\); the six curves shown represent the elements of V(t). The network's design parameters are \(\gamma =10\), \(\lambda =3\), and \(k=2\). The figure illustrates the evolution of the electric potential V(t) over time under various conditions. Specifically, when different activation functions are employed, the curves of the potential over time coincide; however, when \(\rho\) varies, the temporal evolution of the potential changes noticeably.

Figure 6

Trajectory curves of \({\left\| {{\Delta _{mn}(t)}V(t) - \mathcal {I}} \right\| _2}\) with time t for different activation functions.

Figure 6 simulates the temporal evolution of the error function \({\left\| {{\Delta _{mn}(t)}V(t) - \mathcal {I}} \right\| _2}\) in a \(3 \times 3\) resistor network with resistors of values \(r=1+0.004t\) and \(s=100(1+0.004t)\). The network’s design parameters are \(\gamma =10\), \(\lambda =3\), and \(k=2\). The network is initialized using an initial potential V(0), and a spectrum of activation functions is utilized. The figure illustrates the convergence of the error under these different activation functions. Both types of activation functions are capable of reducing the error to zero. Under the parameter conditions depicted in Fig. 6, the convergence rates of SPSF, PSAF, BSAF, and LAF are observed to surpass those of the other activation functions evaluated.

Figure 7

Trajectory curves of \({\left\| {{\Delta _{mn}(t)}V(t) - \mathcal {I}} \right\| _2}\) with time t under various parameters.

Figure 7a–c present an analysis of a \(3 \times 3\) resistor network, which employs resistors with values \(r=1+0.004t\) and \(s=100(1+0.004t)\) and uses the linear activation function (LAF). These figures depict the effects of the design parameters \(\gamma , \lambda ,\) and k on the convergence performance of the model. The remaining parameters of each subplot are as follows: in Fig. 7a, \(\lambda = 3, k = 2\); in Fig. 7b, \(\gamma = 10, k = 2\); in Fig. 7c, \(\gamma = 3, \lambda = 5\). The analysis reveals that, within the limitations of the hardware, increasing these design parameters accelerates convergence. Note that while the convergence rate depends on the choice of design parameters, the final converged solution remains the same.

Figure 8 simulates the time evolution of the error function \({\left\| {{\Delta _{mn}(t)}V(t) - \mathcal {I}} \right\| _2}\) under different initial states in a \(3 \times 3\) resistor network with resistors of values \(r=1+0.004t\) and \(s=100(1+0.004t)\). The design parameters of the network are \(\gamma =10\), \(\lambda =3\) and \(k=2\). The figure illustrates the convergence of the error for different initial states. From any initial state, the error converges to zero, indicating that the convergence of the model is not affected by the initial state.

Figure 8

Trajectory curves of \({\left\| {{\Delta _{mn}(t)}V(t) - \mathcal {I}} \right\| _2}\) with time t under different initial states.

Applications

Solving for equivalent resistance

The numerical method SZNNCRN is applied to solve the Laplacian matrix Eq. (1) to precisely determine the potential distribution \(V^*\). Additionally, building upon formula (6), the method is further applied to calculate the equivalent resistance between any two points in the cobweb resistance network. Hence, the SZNNCRN technique is efficient in calculating the potential distribution and extends its applicability to the determination of the equivalent resistance value.

Table 4 Accuracy analysis of equivalent resistance for networks.

Without loss of generality, the activation function in the SZNNCRN model is the linear activation function (LAF), with the design parameters taken as \(\lambda = 3, k = 5,\gamma = 10\). Table 4 compares the equivalent resistance obtained by the exact formula (7) with that obtained by the SZNNCRN (11) method. The results show that the solutions of the SZNNCRN method are consistent with those obtained from the exact formulas.

The SZNNCRN method offers a practical and feasible approach for directly calculating the equivalent resistance of cobweb resistor networks. In contrast to exact formulas, this method efficiently circumvents intricate numerical derivation steps, yielding precise numerical solutions. Moreover, it can determine the numerical equivalent resistance for specific networks where an exact solution is unavailable.

An application in path planning

Path planning68,69,70 is a major research focus in artificial intelligence and robotics71,72,73. In practical applications, path planning is often conducted in specialized environments74,75,76,77, which brings new challenges to the process. This section performs preliminary path planning on cobweb maps.

Algorithm 4

Path planning algorithm.

The cobweb resistor network exhibits distinctive structural and nodal distribution features. Within this network, the entry and exit points of the current coincide with the nodes of highest and lowest potential, respectively, and the potential descends irregularly from regions of high potential to regions of low potential. This characteristic is harnessed for path planning: from the potential solution produced by the SZNNCRN model, the potential gradient can be computed, identifying the direction in which the potential decreases most rapidly, which in turn determines the path. The starting point corresponds to the node with the highest potential and the endpoint to the node with the lowest potential. Obstacles are mapped onto the potential distribution view, and the potential at each obstacle location is set to a very large value so that the path avoids it.

Leveraging the principles of potential distribution and gradient descent for path generation constitutes the core concept of this application study on path planning. First, based on the observed cobweb environment with obstacles, a grid-based discretization is applied, accurately positioning the obstacles at network nodes. Second, neglecting the obstacles, a resistor network model corresponding exactly to the shape of the physical grid is constructed. Next, the SZNNCRN method computes the potential at each node of the resistor network, from which a detailed distribution map of the node potentials is built, visually representing their spatial distribution across the network. Then, to account for the obstacles, the nodes corresponding to obstacles are assigned weighted potentials, yielding a weighted potential distribution map that better reflects the environment. Based on this weighted map, the gradient descent algorithm determines the optimal path for robot path planning within this complex environment. Finally, the computed path is mapped back onto the cobweb-like environmental map with obstacles, enabling efficient navigation for robots and related mobile entities within complex environments. The specific steps and flowchart of the pathfinding algorithm are presented in Algorithm 4, and a minimal code sketch of the descent step follows.
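To illustrate the descent step at the heart of Algorithm 4, the following sketch (data layout and names are ours) walks greedily from the start node toward the goal along the steepest potential drop, with obstacle nodes weighted out:

```python
import numpy as np

def plan_path(V, adjacency, start, goal, obstacles, penalty=1e6):
    """Greedy steepest-descent path on SZNNCRN node potentials V.
    `adjacency` maps each node index to its cobweb-grid neighbours."""
    W = V.copy()
    W[list(obstacles)] = penalty          # weight obstacle nodes out
    path, node = [start], start
    while node != goal:
        nxt = min(adjacency[node], key=lambda u: W[u])  # steepest descent
        if W[nxt] >= W[node]:             # no descending neighbour: stop
            break
        path.append(nxt)
        node = nxt
    return path
```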

Figure 9

Initial cobweb map with obstacles.

Figure 10

Potential distribution diagram without obstacles.

Figure 11

Path-planning in a node-weighted potential distribution diagram.

Figure 12

Robot path planning in cobweb map with obstacles.

Figure 9 shows a \(30\times 16\) cobweb resistance network with \(r = s = 1\), in which the current enters at \(d_1(19,13)\) and exits at \(d_2(5,4)\); arbitrary obstacles are added to form the initial environmental map. Based on Algorithm 4, the potential V is calculated, and the resulting potential distribution is shown in Fig. 10. In Fig. 11, the robot starts at the starting point and follows the path of steepest potential descent within the graph, eventually reaching the endpoint. The resulting path plan for this network map is depicted in Fig. 12.

Conclusions

The present study designs a structured zeroing neural network algorithm for solving the Laplacian system of cobweb resistor networks. The algorithm harnesses the unique structural properties of the Laplacian matrix to develop an efficient computational method for the neurons, specifically attuned to such problems. Comparative analysis with existing algorithms highlights the significant speed advantage of the proposed method. Rigorous theoretical analysis demonstrates the convergence of the method, and numerical experiments further validate the accuracy of the theoretical findings. Building on this foundation, the method is applied to calculate the equivalent resistance between any two points within the cobweb resistor network, and it is further extended to robot path planning in specialized environments. However, due to hardware limitations, our method currently lacks the practical computational capacity for ultra-large-scale resistance networks. Additionally, while our approach is specifically tailored to the cobweb resistance network, it is not directly applicable to other types of resistance networks. In the future, we will continue to explore fast algorithms for other resistive networks.