Introduction

Let \(\mathbb {E}\) be a subset of a uniformly convex Banach space \(\mathbb {X}\). A mapping \(\mathscr {G}:\mathbb {E}\rightarrow \mathbb {E}\) is called a generalized \((\alpha ,\beta )\)-nonexpansive mapping if for all \(x,y\in \mathbb {E}\) there exist \(\alpha ,\beta \in [0,1)\) with \(\alpha +\beta <1\) such that

$$\begin{aligned} \frac{1}{2}&\Vert x-\mathscr {G}x\Vert \le \Vert x-y\Vert \implies \\&\Vert \mathscr {G}x-\mathscr {G}y\Vert \le \alpha \Vert x-\mathscr {G}y\Vert +\alpha \Vert y-\mathscr {G}x\Vert \\&\qquad +\beta \Vert x-\mathscr {G}x\Vert +\beta \Vert y-\mathscr {G}y\Vert +(1-2\alpha -2\beta )\Vert x-y\Vert \end{aligned}$$
(1)

The class of generalized \((\alpha ,\beta )\)-nonexpansive mappings is broad and includes many well-known nonexpansive-type mappings, such as:

  1.

    Nonexpansive mappings: \(\Vert \mathscr {G}x-\mathscr {G}y\Vert \le \Vert x-y\Vert\).

  2.

    Suzuki nonexpansive mappings1:

    $$\frac{1}{2}\Vert x-\mathscr {G}x\Vert \le \Vert x-y\Vert \implies \Vert \mathscr {G}x-\mathscr {G}y\Vert \le \Vert x-y\Vert .$$
  3.

    Generalized \(\alpha\)-nonexpansive mappings2:

    $$\Vert \mathscr {G}x - \mathscr {G}y\Vert \le \alpha \Vert x-\mathscr {G}y\Vert + \alpha \Vert y-\mathscr {G}x\Vert + (1-2\alpha )\Vert x-y\Vert .$$
  4.

    Reich–Suzuki nonexpansive mappings3:

    $$\tfrac{1}{2}d(x,\mathscr {G}x)\le d(x,y) \implies d(\mathscr {G}x,\mathscr {G}y) \le \alpha d(\mathscr {G}x,x) + \alpha d(\mathscr {G}y,y) + (1-2\alpha )d(x,y).$$

Hence, generalized \((\alpha ,\beta )\)-nonexpansive mappings provide a unified framework that extends these important classes.

Remark 1.1

When \(\alpha =\beta = 0\), the definition reduces to Suzuki’s condition (C). However, the converse does not hold (see3). Thus, the class of generalized \((\alpha ,\beta )\)-nonexpansive mappings properly contains the Suzuki nonexpansive mappings.
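To make the definition concrete, inequality (1) can be verified numerically on a finite grid. The following Python sketch is illustrative only: the mapping \(\mathscr {G}x=x/2\), the grid, and the tolerance are our own choices, not taken from the text; with \(\alpha =\beta =0\) the check reduces to Suzuki’s condition (C).

```python
def satisfies_condition(G, alpha, beta, points, tol=1e-12):
    """Check inequality (1) for a real-valued mapping G on a finite grid:
    whenever 0.5*|x - G(x)| <= |x - y|, require
    |G(x) - G(y)| <= alpha*(|x - G(y)| + |y - G(x)|)
                   + beta*(|x - G(x)| + |y - G(y)|)
                   + (1 - 2*alpha - 2*beta)*|x - y|."""
    for x in points:
        for y in points:
            if 0.5 * abs(x - G(x)) <= abs(x - y):
                lhs = abs(G(x) - G(y))
                rhs = (alpha * (abs(x - G(y)) + abs(y - G(x)))
                       + beta * (abs(x - G(x)) + abs(y - G(y)))
                       + (1 - 2 * alpha - 2 * beta) * abs(x - y))
                if lhs > rhs + tol:
                    return False
    return True

# G(x) = x/2 is nonexpansive, hence generalized (alpha, beta)-nonexpansive;
# with alpha = beta = 0 the check is exactly Suzuki's condition (C).
grid = [i / 50 for i in range(51)]
print(satisfies_condition(lambda x: x / 2, 0.0, 0.0, grid))  # True
```

Of course, a grid check can only refute the condition, not prove it; it is a quick sanity test, not a substitute for a proof.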

Fixed point theory, a central component of nonlinear analysis, provides essential tools for establishing the existence and uniqueness of solutions to a wide range of mathematical problems. Its applications are especially prominent in the study of fractional differential equations (FDEs), which model complex phenomena involving memory effects and non-local interactions4,5,6,7,8,9. A standard approach for analyzing FDEs is to convert them into equivalent integral equations using operators such as the Riemann–Liouville or Caputo fractional integrals. Once in integral form, fixed point theorems can be applied to study existence, uniqueness, and approximation of solutions10.

Beyond abstract analysis, fixed point theory plays an important role in modeling real-world systems, including epidemic processes. In SEIR-type models, population dynamics are described through compartments—susceptible (S), exposed (E), infected (I), recovered (R)—and extensions that incorporate additional groups such as vaccinated individuals (see11,12,13). Incorporating fractional derivatives enhances these models by capturing memory effects and historical dependencies, making them suitable for diseases with long incubation periods, complex transmission mechanisms, or sustained immunity.

Fixed point methods ensure the existence and uniqueness of solutions to these fractional-order epidemic models and support the development of iterative schemes for approximating compartment populations over time. Such numerical techniques enable simulation and forecasting of disease dynamics and evaluation of intervention strategies (see, e.g.,14,15,16,17,18,19,45), providing a quantitative foundation for public health decision-making.

Fixed point theory also finds applications in other emerging areas, including fractional-order complex-valued neural networks20, orthogonal interpolative iterative mappings21, multivalued nonlinear dominated mappings22, modeling in partial differential equations23 and multivalued operators involving nonlinear contractions (see, e.g.,24,25,45).

The aim of this paper is to introduce a new fixed point iterative scheme that converges faster than existing schemes in the literature for the class of generalized \((\alpha ,\beta )\)-nonexpansive mappings in uniformly convex Banach spaces. We establish weak and strong convergence, \(\mathscr {G}\)-stability and almost \(\mathscr {G}\)-stability, and data dependence results. The rate of convergence is demonstrated through a numerical example comparing several iterative methods. Applications are provided to the approximation of boundary value problems via Green’s functions and to a fractional-order SEIR epidemic model. These results extend and generalize various related findings in the literature.

The paper is organized as follows: section “Preliminaries” contains preliminaries. Section “Main results” presents the main results, including convergence, stability, rate of convergence, and data dependence. Section “Numerical example and rate of convergence” provides a numerical example. Section “Application to the SEIR epidemic model” discusses the application to the SEIR epidemic model. Section “Conclusion” concludes the paper.

Preliminaries

Fixed point iterative schemes are essential numerical tools for approximating solutions to equations for which exact analytical solutions are difficult or impossible to obtain. These schemes aim to find a point \(x\) such that \(\mathscr {G}(x) = x\), where \(\mathscr {G}\) is a given function or operator. The iterative approach typically starts with an initial guess \(x_0\) and generates a sequence \(\{x_n\}\) using a recurrence relation, often of the form \(x_{n+1} = \mathscr {G}(x_n)\), that converges to the fixed point under suitable conditions. The development of fixed point iterative schemes has its roots in Banach’s Contraction Mapping Principle (1922), which provided a rigorous foundation for convergence under contraction conditions in complete metric spaces. This principle led to the classical Picard iteration26, widely used in solving ordinary differential equations. Over time, increasingly general iterative schemes have been developed. Modern advances in fixed point iterative schemes focus on improving convergence rates, enhancing robustness, and applying them to nonlinear and non-compact operators. These developments are particularly significant in functional analysis, optimization, and numerical solutions of integral and differential equations.
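As a concrete illustration of the recurrence \(x_{n+1}=\mathscr {G}(x_n)\), the following minimal Python sketch runs Picard iteration for \(\mathscr {G}(x)=\cos x\); the mapping and starting point are illustrative choices, not taken from the text.

```python
import math

def picard(G, x0, n_iter=100):
    """Picard iteration x_{n+1} = G(x_n)."""
    x = x0
    for _ in range(n_iter):
        x = G(x)
    return x

# cos is a contraction near its unique real fixed point (~0.739085),
# so the iterates converge from any real starting point.
x_star = picard(math.cos, 1.0)
print(abs(x_star - math.cos(x_star)) < 1e-10)  # True
```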

Some such iterative schemes are outlined in the sequel. For instance, in 2011, Sahu and Petrusel27 introduced the S iteration scheme, defined as

$$\begin{aligned} {\left\{ \begin{array}{ll} x_0\in \mathbb {E}\\ y_n=(1-\alpha _n)x_n+\alpha _n\mathscr {G}x_n\\ x_{n+1}=\mathscr {G}y_n,\,\, n\in \mathbb {N}, \end{array}\right. } \end{aligned}$$
(2)

where \(\{\alpha _n\}\) is a real sequence.
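Scheme (2) can be transcribed directly into Python. In the sketch below, the contraction \(\mathscr {G}x=(x+2/x)/2\) (whose fixed point is \(\sqrt{2}\)) and the constant control choice \(\alpha _n=0.5\) are illustrative assumptions.

```python
def s_iteration(G, x0, alpha=0.5, n_iter=40):
    """S iteration (2): y_n = (1-a)x_n + a*G(x_n); x_{n+1} = G(y_n)."""
    x = x0
    for _ in range(n_iter):
        y = (1 - alpha) * x + alpha * G(x)
        x = G(y)
    return x

G = lambda x: (x + 2 / x) / 2   # fixed point: sqrt(2)
print(abs(s_iteration(G, 5.0) - 2 ** 0.5) < 1e-12)  # True
```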

Gürsoy et al.28 introduced the Picard-S iterative scheme in 2014, defined as follows:

$$\begin{aligned} {\left\{ \begin{array}{ll} x_0=x\in \mathbb {E}\\ z_n=(1-\beta _n)x_n+\beta _n\mathscr {G}x_n\\ y_n=(1-\alpha _n)\mathscr {G}x_n+\alpha _n\mathscr {G}z_n\\ x_{n+1}=\mathscr {G}y_n,\,\, n\in \mathbb {N}. \end{array}\right. } \end{aligned}$$
(3)

Also in 2014, Abbas and Nazir29 introduced a three-step iterative process, called the AK iterative scheme, defined as follows:

$$\begin{aligned} {\left\{ \begin{array}{ll} x_0=x\in \mathbb {E}\\ z_n=(1-\gamma _n)x_n+\gamma _n\mathscr {G}x_n\\ y_n=(1-\beta _n)\mathscr {G}x_n+\beta _n\mathscr {G}z_n\\ x_{n+1}=(1-\alpha _n)\mathscr {G}y_n+\alpha _n\mathscr {G}z_n,\,\, n\in \mathbb {N} \end{array}\right. } \end{aligned}$$
(4)

where \(\{\alpha _n\}\), \(\{\beta _n\}\) and \(\{\gamma _n\}\) are sequences in (0, 1).

In 2018, Ullah and Arshad30 introduced the M iterative scheme as follows:

$$\begin{aligned} {\left\{ \begin{array}{ll} x_0\in \mathbb {E}\\ z_n=(1-\alpha _n)x_n+\alpha _n\mathscr {G}x_n,\\ y_n=\mathscr {G}z_n\\ x_{n+1}=\mathscr {G}y_n,\,\, n\in \mathbb {N}, \end{array}\right. } \end{aligned}$$
(5)

where \(\{\alpha _n\}\) is a real sequence in [0, 1]. The scheme was used to prove weak and strong convergence theorems for Suzuki generalized nonexpansive mappings in the framework of uniformly convex Banach spaces.
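A Python sketch of the M iteration (5) follows; the mapping \(\mathscr {G}x=(x+3)^{1/3}\) (whose fixed point solves \(x^3=x+3\)) and the constant choice \(\alpha _n=0.7\) are illustrative assumptions, not choices made in the text.

```python
def m_iteration(G, x0, alpha=0.7, n_iter=50):
    """M iteration (5): z_n = (1-a)x_n + a*G(x_n); y_n = G(z_n);
    x_{n+1} = G(y_n)."""
    x = x0
    for _ in range(n_iter):
        z = (1 - alpha) * x + alpha * G(x)
        y = G(z)
        x = G(y)
    return x

G = lambda x: (x + 3) ** (1 / 3)   # fixed point solves x**3 = x + 3
x_star = m_iteration(G, 1.0)
print(abs(x_star ** 3 - x_star - 3) < 1e-9)  # True
```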

Furthermore, in 2021, Abdeljawad et al.31 defined the JA iterative scheme as follows:

$$\begin{aligned} {\left\{ \begin{array}{ll} x_0\in \mathbb {E}\\ z_n=(1-\beta _n)x_n+\beta _n\mathscr {G}x_n\\ y_n=\mathscr {G}z_n\\ x_{n+1}=\mathscr {G}[(1-\alpha _n)\mathscr {G}x_n+\alpha _n\mathscr {G}y_n],\,\, n\in \mathbb {N}, \end{array}\right. } \end{aligned}$$
(6)

where \(\{\alpha _n\}, \{\beta _n\}\in [0,1]\) are sequences of real values.

Motivated by the question of whether there exists a robust iterative scheme that extends and generalizes the schemes outlined above in approximating fixed points of generalized \((\alpha ,\beta )\)-nonexpansive mappings, we introduce the following new fixed point iterative scheme:

$$\begin{aligned} {\left\{ \begin{array}{ll} x_0\in \mathbb {E}\\ w_n=\mathscr {G}x_n\\ z_n=(1-\beta _n)x_n+\beta _n\mathscr {G}w_n\\ y_n=\mathscr {G}[(1-\alpha _n)\mathscr {G}w_n+\alpha _n\mathscr {G}z_n]\\ x_{n+1}=\mathscr {G}y_n,\,\,n\in \mathbb {N} \end{array}\right. } \end{aligned}$$
(7)

The motivations for the steps of (7) are as follows:

  • The choice of \(x_0\) initiates the iteration from an arbitrary point of the space.

  • The point \(w_n=\mathscr {G}x_n\) represents a direct application of the operator, providing a first-level approximation to the fixed point.

  • The convex combination \(z_n=(1-\beta _n)x_n+\beta _n\mathscr {G}w_n\) blends the current iterate with a more advanced evaluation of \(\mathscr {G}\), introducing stability and allowing better control of the convergence path through the sequence \(\{\beta _n\}\).

  • The term \(y_n=\mathscr {G}\!\left[ (1-\alpha _n)\mathscr {G}w_n+\alpha _n\mathscr {G}z_n\right]\) introduces a second convex combination, this time between two transformed points, followed by another application of \(\mathscr {G}\). This step accelerates convergence by incorporating deeper information about the operator’s behavior.

  • Finally, \(x_{n+1}=\mathscr {G}y_n\) generates the next iterate, ensuring that each step remains anchored to the operator \(\mathscr {G}\) and systematically pushes the sequence toward the fixed point set.
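The steps above can be transcribed directly into code. The following Python sketch runs scheme (7) with constant control sequences \(\alpha _n=\beta _n=0.5\) and the test mapping \(\mathscr {G}x=\cos x\); both are illustrative assumptions rather than choices made in the text.

```python
import math

def new_scheme(G, x0, alpha=0.5, beta=0.5, n_iter=25):
    """Proposed iteration (7) with constant control sequences."""
    x = x0
    for _ in range(n_iter):
        w = G(x)                                   # w_n = G x_n
        z = (1 - beta) * x + beta * G(w)           # first convex combination
        y = G((1 - alpha) * G(w) + alpha * G(z))   # second combination, then G
        x = G(y)                                   # x_{n+1} = G y_n
    return x

x_star = new_scheme(math.cos, 1.0)
print(abs(x_star - math.cos(x_star)) < 1e-12)  # True
```

Because each outer step applies \(\mathscr {G}\) several times, the residual \(|x_n-\cos x_n|\) shrinks much faster per iteration here than under plain Picard iteration, which is consistent with the acceleration claimed for the scheme.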

Lemma 2.132

Let \(\{\mu _n\}\) be a nonnegative real sequence for which there exists \(n_0\in \mathbb {N}\) such that, for all \(n\ge n_0\), the following inequality is satisfied:

$$\begin{aligned} \mu _{n+1}\le (1-\varphi _n)\mu _n+\varphi _n\wp _n \end{aligned}$$

where \(\varphi _n\in (0,1)\) for all \(n\in \mathbb {N}\), \(\sum\nolimits _{n=0}^{\infty }\varphi _n=\infty\) and \(\wp _n\ge 0\) for all \(n\in \mathbb {N}\). Then,

$$\begin{aligned} 0\le \limsup \limits _{n\rightarrow \infty }\mu _n\le \limsup \limits _{n\rightarrow \infty }\wp _n. \end{aligned}$$
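A quick numerical illustration of the lemma (our own, not part of the source): take the worst case of equality with \(\varphi _n=0.1\) and \(\wp _n=2+1/(n+1)\), so that \(\limsup _{n}\wp _n=2\), and observe that \(\mu _n\) settles near 2.

```python
def mu_sequence(n_max, mu0=10.0, phi=0.1):
    """Iterate mu_{n+1} = (1 - phi)*mu_n + phi*p_n with p_n = 2 + 1/(n+1)."""
    mu = mu0
    for n in range(n_max):
        mu = (1 - phi) * mu + phi * (2 + 1 / (n + 1))
    return mu

# limsup of p_n is 2, and mu_n approaches 2, as Lemma 2.1 predicts.
print(abs(mu_sequence(2000) - 2) < 0.01)  # True
```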

Definition 2.133

A mapping \(\mathscr {G}:\mathbb {E}\rightarrow \mathbb {E}\) with \(\mathcal {F}(\mathscr {G})\ne \emptyset\) is said to satisfy condition (I) if there exists a nondecreasing function \(h:[0,\infty )\rightarrow [0,\infty )\) such that \(h(0)=0\) and \(h(s)>0\) for all \(s>0\), and

$$\Vert x-\mathscr {G}x\Vert \;\ge \; h\bigl (\rho (x,\mathcal {F}(\mathscr {G}))\bigr ) \qquad \text {for all } x\in \mathbb {E},$$

where

$$\rho (x,\mathcal {F}(\mathscr {G}))=\inf \{\Vert x-\varpi ^*\Vert :\,\varpi ^*\in \mathcal {F}(\mathscr {G})\}$$

denotes the distance from \(x\) to the fixed point set \(\mathcal {F}(\mathscr {G})\).

Proposition 2.134

Let \(\mathscr {G}\) be a generalized \((\alpha ,\beta )\)-nonexpansive mapping on a subset \(\mathbb {E}\) of a Banach space \(\mathbb {X}\). Then,

  1.

    if \(\mathscr {G}\) has at least one fixed point, then for all \(\varpi ^*\in \mathcal {F}(\mathscr {G})\),

    $$\begin{aligned} \Vert \mathscr {G}\varpi ^*-\mathscr {G}x\Vert \le \Vert \varpi ^*-x\Vert , \end{aligned}$$

    for all \(x\in \mathbb {E}\),

  2.

    for \(x,y\in \mathbb {E}\),

    $$\begin{aligned} \Vert x-\mathscr {G}y\Vert \le \Big (\frac{3+\alpha +\beta }{1-\alpha -\beta }\Big )\Vert x-\mathscr {G}x\Vert +\Vert x-y\Vert , \end{aligned}$$
    (8)

    holds,

  3.

    if \(\mathbb {X}\) satisfies Opial’s condition, then \(\{x_n\}\subseteq \mathbb {E}\), \(x_n\rightharpoonup \varpi ^*\), \(\Vert x_n-\mathscr {G}x_n\Vert \rightarrow 0\) \(\Rightarrow\) \(\mathscr {G}\varpi ^*=\varpi ^*\) holds.

Lemma 2.235

Let \(\mathbb {X}\) be a uniformly convex Banach space and \(\{\alpha _n\}_{n=0}^{\infty }\) a sequence of real numbers such that \(0<a\le \alpha _n\le b<1\) for all \(n\ge 1\) and some \(a, b\in \mathbb {R}\). Let \(\{s_n\}_{n=0}^{\infty }\) and \(\{t_n\}_{n=0}^{\infty }\) be sequences in \(\mathbb {X}\) such that \(\limsup \nolimits _{n\rightarrow \infty }\Vert s_n\Vert \le \lambda\), \(\limsup \nolimits _{n\rightarrow \infty }\Vert t_n\Vert \le \lambda\) and \(\limsup \nolimits _{n\rightarrow \infty }\Vert \alpha _ns_n+(1-\alpha _n)t_n\Vert =\lambda\) for some \(\lambda \ge 0\). Then \(\lim \nolimits _{n\rightarrow \infty }\Vert s_n-t_n\Vert =0\).

Definition 2.236

A Banach space \(\mathbb {X}\) is said to satisfy the Opial condition37 if for each sequence \(\{x_n\}\) in \(\mathbb {X}\), converging weakly to \(p\in \mathbb {X}\), we have

$$\begin{aligned} \limsup \limits _{n\rightarrow \infty }\Vert x_n-p\Vert <\limsup \limits _{n\rightarrow \infty }\Vert x_n-q\Vert , \end{aligned}$$
(9)

for all \(q\in \mathbb {X}\) such that \(p\ne q\).

Lemma 2.338

If \(\rho \in [0,1)\) is a real number and \(\{\epsilon _n\}_{n=0}^{\infty }\) is a sequence of positive numbers such that \(\lim \nolimits _{n\rightarrow \infty }\epsilon _n=0\), then for any sequence of positive numbers \(\{s_n\}_{n=0}^{\infty }\) satisfying \(s_{n+1}\le \rho s_n+\epsilon _n\) \((n=0,1,2,\ldots )\), we have \(\lim \nolimits _{n\rightarrow \infty }s_n=0\).
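The lemma can be illustrated numerically (our own example): take equality in the recursion, the worst case, with the illustrative choices \(\rho =0.9\) and \(\epsilon _n=1/(n+1)\), and observe the slow but definite decay of \(s_n\).

```python
def s_sequence(n_max, rho=0.9, s0=1.0):
    """Iterate s_{n+1} = rho*s_n + eps_n with eps_n = 1/(n+1) -> 0."""
    s = s0
    for n in range(n_max):
        s = rho * s + 1 / (n + 1)
    return s

# s_n -> 0, roughly like eps_n/(1 - rho) for large n.
print(s_sequence(100) > s_sequence(100000))  # True
print(s_sequence(100000) < 1e-3)             # True
```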

Lemma 2.439

Let \(\{m_n\}_{n=0}^\infty\) and \(\{\epsilon _n\}_{n=0}^\infty\) be sequences of nonnegative numbers and \(\delta \in [0,1)\) such that

$$\begin{aligned} m_{n+1}=\delta m_n+\epsilon _n~~n\ge 0. \end{aligned}$$

If \(\sum\nolimits _{n=0}^{\infty }\epsilon _n<\infty\), then \(\sum\nolimits _{n=0}^{\infty }m_n<\infty\).

Definition 2.340

Let \(\alpha >0\) with \(n-1<\alpha <n\), \(n\in \mathbb {N}\). The Caputo fractional derivative of order \(\alpha\) of a function \(x(t)\) is defined as

$${}^{C}_{t_0}\mathscr {D}^{\alpha }_{t}x(t)=\frac{1}{\Gamma (n-\alpha )}\int _{t_0}^{t}(t-u)^{n-\alpha -1}D^{n}x(u)\,du.$$
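For \(0<\alpha <1\) (so \(n=1\)), the definition can be checked numerically against the closed form \({}^{C}\mathscr {D}^{\alpha }t^{2}=2t^{2-\alpha }/\Gamma (3-\alpha )\). In the Python sketch below, the midpoint rule and the grid size are illustrative accuracy choices; the midpoint rule is used because it avoids evaluating the integrable singularity at \(u=t\).

```python
import math

def caputo(dx, t, alpha, t0=0.0, N=200_000):
    """Approximate (1/Gamma(1-alpha)) * int_{t0}^t (t-u)^(-alpha) x'(u) du
    by the midpoint rule, for 0 < alpha < 1 (n = 1); dx is x'."""
    h = (t - t0) / N
    total = 0.0
    for k in range(N):
        u = t0 + (k + 0.5) * h
        total += (t - u) ** (-alpha) * dx(u)
    return h * total / math.gamma(1 - alpha)

alpha, t = 0.5, 1.0
approx = caputo(lambda u: 2 * u, t, alpha)            # x(t) = t**2, x'(u) = 2u
exact = 2 * t ** (2 - alpha) / math.gamma(3 - alpha)  # closed form
print(abs(approx - exact) < 1e-2)  # True
```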

The construction of novel iterative processes is important not only for theoretical enrichment of fixed point theory but also for addressing practical problems where robustness and stability are critical. In particular, the convergence behavior (weak and strong), efficiency in terms of rate of convergence, and sensitivity to data perturbations are central aspects that determine the effectiveness of any new scheme. The incorporation of \(\mathscr {G}\)-stability and almost \(\mathscr {G}\)-stability further ensures that the scheme remains reliable under perturbations, an essential property for applications in real-world dynamical systems.

The significance of the present work lies in developing an iterative process that unifies and extends several established methods, while also demonstrating enhanced stability and convergence properties. Beyond its theoretical contributions, the proposed scheme is applied to a fractional-order SEIR epidemic model, illustrating its relevance in analyzing nonlinear models arising in epidemiology. Thus, the results presented here bridge the gap between abstract fixed point theory and practical applications, underscoring both the mathematical depth and applied value of the approach.

Main results

In this section, we present the core findings of the paper. We establish both weak and strong convergence theorems for the proposed iterative scheme in the setting of uniformly convex Banach spaces. Furthermore, we analyze the rate of convergence to highlight the efficiency of the scheme in comparison with existing methods via numerical example. The data dependence of the scheme is also studied, ensuring robustness of the results under perturbations. Finally, we investigate \(\mathscr {G}\)-stability and almost \(\mathscr {G}\)-stability properties, which confirm the stability of the scheme under various perturbative conditions. These results collectively demonstrate the versatility and effectiveness of the iterative process in fixed point approximation.

Weak and strong convergence

Lemma 3.1

Assume \(\mathbb {E}\) is a nonempty closed convex subset of a Banach space \(\mathbb {X}\). Let \(\mathscr {G}:\mathbb {E}\rightarrow \mathbb {E}\) be a generalized \((\alpha ,\beta )\)-nonexpansive mapping with \(\mathcal {F}(\mathscr {G})\ne \emptyset\). If \(\{x_n\}\) is the sequence generated by the new iterative scheme (7), then \(\lim \nolimits _{n \rightarrow \infty }\Vert x_n-\varpi ^*\Vert\) exists for each \(\varpi ^*\in \mathcal {F}(\mathscr {G})\).

Proof

Let \(\varpi ^*\in \mathcal {F}(\mathscr {G})\). By part (1) of Proposition 2.1, we can invoke (7) to obtain the following estimates at each step:

$$\begin{aligned} \Vert w_n-\varpi ^*\Vert&=\Vert \mathscr {G}x_n-\varpi ^*\Vert \\&\le \Vert x_n-\varpi ^*\Vert \end{aligned}$$
$$\begin{aligned} \Vert z_n-\varpi ^*\Vert&=\Vert (1-\beta _n)x_n+\beta _n\mathscr {G}w_n-\varpi ^*\Vert \\&\le (1-\beta _n)\Vert x_n-\varpi ^*\Vert +\beta _n\Vert \mathscr {G}w_n-\varpi ^*\Vert \\&\le (1-\beta _n)\Vert x_n-\varpi ^*\Vert +\beta _n\Vert w_n-\varpi ^*\Vert \\&=\Vert x_n-\varpi ^*\Vert \end{aligned}$$
$$\begin{aligned} \Vert y_n-\varpi ^*\Vert&=\Vert \mathscr {G}[(1-\alpha _n)\mathscr {G}w_n+\alpha _n\mathscr {G}z_n]-\varpi ^*\Vert \\&\le \Vert [(1-\alpha _n)\mathscr {G}w_n+\alpha _n\mathscr {G}z_n]-\varpi ^*\Vert \\&\le (1-\alpha _n) \Vert \mathscr {G}w_n - \varpi ^*\Vert + \alpha _n \Vert \mathscr {G}z_n - \varpi ^*\Vert \\&\le (1-\alpha _n) \Vert w_n - \varpi ^*\Vert + \alpha _n \Vert z_n - \varpi ^*\Vert \\&\le (1-\alpha _n) \Vert x_n - \varpi ^*\Vert + \alpha _n \Vert x_n - \varpi ^*\Vert \\&= \Vert x_n - \varpi ^*\Vert \end{aligned}$$
$$\begin{aligned} \Vert x_{n+1} - \varpi ^*\Vert&= \Vert \mathscr {G}y_n - \varpi ^*\Vert \\&\le \Vert y_n - \varpi ^*\Vert \\&\le \Vert x_n - \varpi ^*\Vert \end{aligned}$$

Therefore, \(\{\Vert x_n - \varpi ^*\Vert \}\) is non-increasing and bounded below, so \(\lim \nolimits _{n \rightarrow \infty } \Vert x_n - \varpi ^*\Vert\) exists for every \(\varpi ^* \in \mathcal {F}(\mathscr {G})\). This completes the proof. \(\square\)

Lemma 3.2

Let \(\mathscr {G}: \mathbb {E} \rightarrow \mathbb {E}\) be a mapping that satisfies the conditions of the class of generalized \((\alpha ,\beta )\)-nonexpansive mappings on a nonempty closed convex subset of a uniformly convex Banach space \(\mathbb {X}\). Let \(\{x_n\}\) be a sequence generated by the new iterative scheme (7). Then \(\mathcal {F}(\mathscr {G}) \ne \emptyset\) if and only if \(\{x_n\}\) is bounded and \(\lim \nolimits _{n \rightarrow \infty } \Vert \mathscr {G}x_n - x_n\Vert = 0\).

Proof

Assume \(\mathcal {F}(\mathscr {G})\ne \emptyset\) and let \(\varpi ^*\in \mathcal {F}(\mathscr {G})\). Then by Lemma 3.1, \(\lim \nolimits _{n \rightarrow \infty }\Vert x_n - \varpi ^*\Vert\) exists and \(\{x_n\}\) is bounded. For a real value \(m\), we set

$$\begin{aligned} \lim _{n \rightarrow \infty } \Vert x_n - \varpi ^*\Vert = m \end{aligned}$$
(10)

From Lemma 3.1 and (10), we have the following:

$$\begin{aligned} & \limsup \limits _{n \rightarrow \infty } \Vert w_n - \varpi ^*\Vert \le \limsup \limits _{n \rightarrow \infty } \Vert x_n - \varpi ^*\Vert = m \end{aligned}$$
(11)
$$\begin{aligned} & \limsup \limits _{n \rightarrow \infty } \Vert z_n - \varpi ^*\Vert \le \limsup \limits _{n\rightarrow \infty } \Vert x_n - \varpi ^*\Vert = m \end{aligned}$$
(12)

and

$$\begin{aligned} \limsup \limits _{n \rightarrow \infty } \Vert y_n - \varpi ^*\Vert \le \limsup \limits _{n \rightarrow \infty } \Vert x_n - \varpi ^*\Vert \end{aligned}$$
(13)

By hypothesis (1) of Proposition 2.1, we have

$$\begin{aligned} \limsup \limits _{n\rightarrow \infty } \Vert y_n - \varpi ^*\Vert \le \limsup \limits _{n \rightarrow \infty } \Vert x_n - \varpi ^*\Vert = m. \end{aligned}$$
(14)

From the proof of Lemma 3.1, we have that

$$\begin{aligned} \begin{aligned} \Vert x_{n+1} - \varpi ^*\Vert&= \Vert \mathscr {G}y_n -\varpi ^*\Vert \\&\le \Vert y_n - \varpi ^*\Vert \end{aligned} \end{aligned}$$
(15)

Again, from hypothesis (1) of Proposition 2.1, alongside Eq. (10), we have

$$\begin{aligned} m = \liminf \limits _{n \rightarrow \infty } \Vert x_{n+1} - \varpi ^*\Vert \le \liminf \limits _{n \rightarrow \infty } \Vert y_n - \varpi ^*\Vert . \end{aligned}$$
(16)

From (13) and (16), it follows that

$$\liminf \limits _{n \rightarrow \infty } \Vert y_n - \varpi ^*\Vert \le \limsup \limits _{n \rightarrow \infty } \Vert y_n - \varpi ^*\Vert = m,$$

such that

$$\begin{aligned} \lim \limits _{n \rightarrow \infty } \Vert y_n - \varpi ^*\Vert = m, \end{aligned}$$
(17)

and we have

$$\begin{aligned} \Vert y_n - \varpi ^*\Vert&= \Vert \mathscr {G}[(1 - \alpha _n)\mathscr {G}w_n + \alpha _n \mathscr {G}z_n] - \varpi ^*\Vert \\&\le \Vert (1-\alpha _n)\mathscr {G}w_n+\alpha _n\mathscr {G}z_n-\varpi ^*\Vert \\&\le (1 - \alpha _n) \Vert \mathscr {G}w_n - \varpi ^*\Vert + \alpha _n \Vert \mathscr {G} z_n - \varpi ^*\Vert \\&\le (1 - \alpha _n) \Vert w_n - \varpi ^*\Vert + \alpha _n \Vert z_n - \varpi ^*\Vert . \end{aligned}$$

This gives

$$\Vert y_n - \varpi ^*\Vert - \Vert w_n - \varpi ^*\Vert \le \frac{\Vert y_n-\varpi ^*\Vert -\Vert w_n-\varpi ^*\Vert }{\alpha _n}\le \Vert z_n - \varpi ^*\Vert - \Vert w_n - \varpi ^*\Vert ,$$

hence,

$$\begin{aligned} \Vert y_n-\varpi ^*\Vert \le \Vert z_n-\varpi ^*\Vert . \end{aligned}$$

So that,

$$\begin{aligned} m=\liminf \limits _{n \rightarrow \infty } \Vert y_n - \varpi ^*\Vert \le \limsup \limits _{n \rightarrow \infty } \Vert z_n - \varpi ^*\Vert \end{aligned}$$
$$\begin{aligned} m\le \liminf \limits _{n \rightarrow \infty } \Vert z_n - \varpi ^*\Vert . \end{aligned}$$
(18)

Combining (12) and (18), we have

$$\begin{aligned} m=\lim \limits _{n\rightarrow \infty }\Vert z_n-\varpi ^*\Vert . \end{aligned}$$
(19)

Again,

$$\begin{aligned} \Vert z_n-\varpi ^*\Vert&=\Vert (1-\beta _n)x_n+\beta _n\mathscr {G}w_n-\varpi ^*\Vert \\&\le (1-\beta _n)\Vert x_n-\varpi ^*\Vert +\beta _n\Vert \mathscr {G}w_n-\varpi ^*\Vert \\&\le (1-\beta _n)\Vert x_n-\varpi ^*\Vert +\beta _n\Vert w_n-\varpi ^*\Vert \end{aligned}$$

here, this gives

$$\begin{aligned} \Vert z_n-\varpi ^*\Vert -\Vert x_n-\varpi ^*\Vert \le \frac{\Vert z_n-\varpi ^*\Vert -\Vert x_n-\varpi ^*\Vert }{\beta _n}\le \Vert w_n-\varpi ^*\Vert -\Vert x_n-\varpi ^*\Vert , \end{aligned}$$

hence,

$$\begin{aligned} \Vert z_n-\varpi ^*\Vert \le \Vert w_n-\varpi ^*\Vert . \end{aligned}$$

Furthermore,

$$\begin{aligned} m=\liminf \limits _{n \rightarrow \infty }\Vert z_n-\varpi ^*\Vert \le \liminf \limits _{n \rightarrow \infty }\Vert w_n-\varpi ^*\Vert \end{aligned}$$
$$\begin{aligned} m\le \liminf \limits _{n \rightarrow \infty }\Vert w_n-\varpi ^*\Vert . \end{aligned}$$
(20)

Combining (11) and (20), we have

$$\begin{aligned} m=\lim \limits _{n \rightarrow \infty }\Vert w_n-\varpi ^*\Vert . \end{aligned}$$
(21)

From (21), we have

$$\begin{aligned} m=\lim \limits _{n \rightarrow \infty }\Vert w_n-\varpi ^*\Vert&=\lim \limits _{n \rightarrow \infty }\Vert \mathscr {G}x_n-\varpi ^*\Vert \\&\le \lim \limits _{n \rightarrow \infty }\Vert x_n-\varpi ^*\Vert \end{aligned}$$

so that, if the following estimate holds,

$$\begin{aligned} m=\lim \limits _{n \rightarrow \infty }\Vert x_n-\varpi ^*\Vert =\lim \limits _{n \rightarrow \infty }\Vert (1-\mu _n)(x_n-\varpi ^*)+\mu _n(\mathscr {G}x_n-\varpi ^*)\Vert \end{aligned}$$
(22)

(for any sequence \(\{\mu _n\}\) with \(0<a\le \mu _n\le b<1\)), then by applying Lemma 2.2 with \(s_n=x_n-\varpi ^*\) and \(t_n=\mathscr {G}x_n-\varpi ^*\), we have

$$\begin{aligned} \lim \limits _{n \rightarrow \infty }\Vert \mathscr {G}x_n-x_n\Vert =0. \end{aligned}$$

Conversely, we want to show that the fixed point set \(\mathcal {F}(\mathscr {G}) \ne \emptyset\) whenever \(\{x_n\}\) is bounded with

$$\lim _{n \rightarrow \infty } \Vert \mathscr {G} x_n - x_n \Vert = 0.$$

To show that, let \(\varpi ^* \in \mathcal {A}(\mathbb {E}, \{x_n\})\). Applying condition (2) of Proposition 2.1, we have

$$\begin{aligned} \mathcal {R}(\mathscr {G}(\varpi ^*), \{x_n\})&= \limsup _{n \rightarrow \infty } \Vert x_n - \mathscr {G} \varpi ^* \Vert \\&\le \left( \frac{3+\alpha + \beta }{1 - \alpha - \beta } \right) \limsup \limits _{n \rightarrow \infty } \Vert \mathscr {G} x_n - x_n \Vert + \limsup \limits _{n \rightarrow \infty } \Vert x_n - \varpi ^* \Vert \\&= \limsup _{n \rightarrow \infty } \Vert x_n - \varpi ^* \Vert \\&= \mathcal {R}(\varpi ^*, \{x_n\}) \end{aligned}$$

It follows that \(\mathscr {G} \varpi ^* \in \mathcal {A}(\mathbb {E}, \{x_n\})\). Since \(\mathbb {X}\) is uniformly convex, \(\mathcal {A}(\mathbb {E}, \{x_n\})\) is a singleton, so \(\mathscr {G} \varpi ^* = \varpi ^*\) and \(\mathcal {F}(\mathscr {G})\) is nonempty. Hence, the lemma is proved. \(\square\)

The next theorem is the weak convergence result.

Theorem 3.1

Let \(\mathbb {E}\) be a closed convex subset of a uniformly convex Banach space \(\mathbb {X}\), and \(\mathscr {G}:\mathbb {E}\rightarrow \mathbb {E}\) a generalized \((\alpha ,\beta )\)-nonexpansive mapping. Suppose \(\mathcal {F}(\mathscr {G}) \ne \emptyset\) and \(\{x_n\}\) is the sequence generated by the new fixed point iterative scheme (7). Then \(\{x_n\}\) converges weakly to a fixed point of \(\mathscr {G}\) if \(\mathbb {X}\) satisfies Opial’s condition.

Proof

Since every uniformly convex Banach space is reflexive, \(\mathbb {X}\) is reflexive. The sequence \(\{x_n\}\) is bounded, as shown in Lemma 3.2. Hence, there exists a weakly convergent subsequence \(\{x_{n_j}\}\) with weak limit \(s_1\). By Lemma 3.2, \(\lim \nolimits _{j \rightarrow \infty }\Vert \mathscr {G}x_{n_j} - x_{n_j}\Vert = 0\).

Applying condition (3) of Proposition 2.1, the point \(s_1\) is a fixed point of the mapping \(\mathscr {G}\). We now show that \(s_1\) is the weak limit of \(\{x_n\}\), which completes the proof. Suppose, on the contrary, that \(s_1\) is not the weak limit of \(\{x_n\}\); then we can find another weakly convergent subsequence \(\{x_{n_k}\}\) of \(\{x_n\}\) with weak limit \(s_2\) and \(s_1 \ne s_2\). By the same argument as before, condition (3) of Proposition 2.1 shows that \(s_2\) is a fixed point of \(\mathscr {G}\). By Lemma 3.1 and Opial’s condition (9), we have

$$\begin{aligned} \lim \limits _{n \rightarrow \infty } \Vert x_n - s_1\Vert&= \lim \limits _{k \rightarrow \infty } \Vert x_{n_k} - s_1\Vert \\&< \lim \limits _{k \rightarrow \infty } \Vert x_{n_k} - s_2\Vert \\&= \lim \limits _{n \rightarrow \infty } \Vert x_n - s_2\Vert \\&= \lim \limits _{k \rightarrow \infty } \Vert x_{n_k} - s_2\Vert \\&< \lim \limits _{k \rightarrow \infty } \Vert x_{n_k} - s_1\Vert \\&= \lim \limits _{n \rightarrow \infty } \Vert x_n - s_1\Vert . \end{aligned}$$

The strict inequality \(\lim \nolimits _{n \rightarrow \infty } \Vert x_n - s_1\Vert < \lim \nolimits _{n \rightarrow \infty } \Vert x_n - s_1\Vert\) is a contradiction, which arose from the assumption that \(s_1\ne s_2\). Hence \(s_1 = s_2\), so the sequence \(\{x_n\}\) has a unique weak limit \(s_1\). Therefore \(\{x_n\}\) converges weakly to a fixed point of \(\mathscr {G}\), completing the proof. \(\square\)

Here we now present the strong convergence results in different forms.

Theorem 3.2

Let \(\mathbb {X}\) be a uniformly convex Banach space and \(\mathbb {E}\) a nonempty compact convex subset of \(\mathbb {X}\). Suppose \(\mathscr {G}:\mathbb {E}\rightarrow \mathbb {E}\) is a generalized \((\alpha , \beta )\)-nonexpansive mapping with \(\mathcal {F}(\mathscr {G}) \ne \emptyset\). If \(\{x_n\}_{n=0}^\infty\) is the sequence generated by the new iterative scheme (7), then \(\{x_n\}_{n=0}^\infty\) converges strongly to a fixed point \(\varpi ^*\in \mathcal {F}(\mathscr {G})\).

Proof

Since the subset \(\mathbb {E}\) is compact, we can find a subsequence \(\{x_{n_p}\}\) of the sequence \(\{x_n\}\) that converges to a point \(r \in \mathbb {E}\) (i.e., \(x_{n_p} \rightarrow r\)). We want to show that the limit point \(r\) is a fixed point of \(\mathscr {G}\) and the strong limit of the sequence \(\{x_n\}\). To achieve this, we apply Lemma 3.2 and obtain

$$\lim \limits _{p\rightarrow \infty } \Vert \mathscr {G}x_{n_p} - x_{n_p}\Vert = 0.$$

Applying condition (2) of Proposition 2.1, we have

$$\begin{aligned} \Vert x_{n_p} - \mathscr {G}r\Vert \le \left( \frac{3+\alpha +\beta }{1-\alpha -\beta }\right) \Vert x_{n_p} - \mathscr {G}x_{n_p}\Vert + \Vert x_{n_p} - r\Vert \rightarrow 0 \quad \text {as } p \rightarrow \infty . \end{aligned}$$

Hence \(r=\mathscr {G}r\); that is, \(r\) is a fixed point of \(\mathscr {G}\).

By Lemma 3.1, \(\lim \nolimits _{n\rightarrow \infty } \Vert x_n - r\Vert\) exists. Since a subsequence of \(\{x_n\}\) converges to \(r\), the whole sequence \(\{x_n\}\) converges strongly to \(r\). Hence, the proof is complete. \(\square\)

The next theorem is a strong convergence result that does not require \(\mathbb {E}\) to be compact.

Theorem 3.3

Let \(\mathscr {G}\) be a generalized \((\alpha , \beta )\)-nonexpansive mapping on a closed convex subset \(\mathbb {E}\) of a Banach space \(\mathbb {X}\). If the fixed point set \(\mathcal {F}(\mathscr {G}) \ne \emptyset\) and \(\{x_n\}\) is the sequence generated by the new iterative scheme (7), then \(\{x_n\}\) converges strongly to a fixed point of \(\mathscr {G}\) if \(\liminf \nolimits _{n\rightarrow \infty }\rho (x_n, \mathcal {F}(\mathscr {G})) = 0\), where \(\rho (x,\mathcal {F}(\mathscr {G}))\) denotes the distance from \(x\) to the set \(\mathcal {F}(\mathscr {G})\).

Proof

Let \(\varpi ^* \in \mathcal {F}(\mathscr {G})\). From Lemma 3.1, it has already been established that the sequence \(\{\Vert x_n - \varpi ^*\Vert \}\) has a well-defined limit as \(n \rightarrow \infty\). Consequently, the limit of \(\rho (x_n, \mathcal {F}(\mathscr {G}))\) also exists. Now, assume that

$$\begin{aligned} \lim \limits _{n\rightarrow \infty } \rho (x_n, \mathcal {F}(\mathscr {G})) = 0. \end{aligned}$$

Our goal is to construct a Cauchy sequence within \(\mathcal {F}(\mathscr {G})\). To do this, we define a sequence \(\{p_k\} \subseteq \mathcal {F}(\mathscr {G})\) and a corresponding subsequence \(\{x_{n_k}\} \subseteq \{x_n\}\), ensuring that

$$\begin{aligned} \Vert x_{n_k} - p_k\Vert \le \frac{1}{2^k}, \quad \forall k \in \mathbb {N}. \end{aligned}$$

Since, by Lemma 3.1, the sequence \(\{\Vert x_n - p_k\Vert \}_n\) is nonincreasing for each fixed \(p_k \in \mathcal {F}(\mathscr {G})\), it follows that

$$\begin{aligned} \Vert x_{n_{k+1}} - p_k\Vert \le \Vert x_{n_k} - p_k\Vert \le \frac{1}{2^k}. \end{aligned}$$

By applying the triangle inequality, we obtain

$$\begin{aligned} \Vert p_{k+1} - p_k\Vert \le \Vert p_{k+1} - x_{n_{k+1}}\Vert + \Vert x_{n_{k+1}} - p_k\Vert . \end{aligned}$$

Substituting the bounds, we get

$$\begin{aligned} \Vert p_{k+1} - p_k\Vert \le \frac{1}{2^{k+1}}+\frac{1}{2^k}\le \frac{1}{2^{k-1}}, \end{aligned}$$

and since \(\sum \nolimits _{k=1}^{\infty }2^{-(k-1)}<\infty\), the increments \(\Vert p_{k+1}-p_k\Vert\) are summable.

Thus, \(\{p_k\}\) is a Cauchy sequence in \(\mathcal {F}(\mathscr {G})\). Since \(\mathcal {F}(\mathscr {G})\) is closed in \(\mathbb {E}\), the sequence \(\{p_k\}\) converges to some \(\varpi ^* \in \mathcal {F}(\mathscr {G})\). Consequently, we obtain

$$\begin{aligned} \Vert x_{n_k} - \varpi ^*\Vert \le \Vert x_{n_k} - p_k\Vert + \Vert p_k - \varpi ^*\Vert \rightarrow 0 \quad \text {as } k \rightarrow \infty . \end{aligned}$$

Therefore, the subsequence \(\{x_{n_k}\}\) converges strongly to \(\varpi ^*\). Since Lemma 3.1 guarantees the existence of \(\lim \nolimits _{n\rightarrow \infty } \Vert x_n - \varpi ^*\Vert\), it follows that the entire sequence \(\{x_n\}\) converges strongly to \(\varpi ^*\), a fixed point of \(\mathscr {G}\). This completes the proof. \(\square\)

The strong convergence result using condition (I) is given as follows.

Theorem 3.4

Assume that \(\mathscr {G}: \mathbb {E} \rightarrow \mathbb {E}\) is a generalized \((\alpha , \beta )\)-nonexpansive mapping on a convex closed subset \(\mathbb {E}\) of a uniformly convex Banach space \(\mathbb {X}\). Assume that \(\mathscr {G}\) satisfies condition (I). If \(\mathcal {F}(\mathscr {G})\ne \emptyset\) and \(\{x_n\}\) is a sequence generated by the new iterative scheme (7), then \(\{x_n\}\) converges strongly to some fixed point of \(\mathscr {G}\).

Proof

From Lemma 3.2, it has been shown that

$$\begin{aligned} \lim \limits _{n\rightarrow \infty } \Vert x_n - \mathscr {G}x_n\Vert = 0. \end{aligned}$$
(23)

Since \(\mathscr {G}\) satisfies condition (I), it follows that

$$\begin{aligned} \Vert x_n - \mathscr {G}x_n\Vert \ge \mu (\rho (x_n, \mathcal {F}(\mathscr {G}))). \end{aligned}$$
(24)

From (23), we have

$$\begin{aligned} \liminf \limits _{n\rightarrow \infty }\mu (\rho (x_n, \mathcal {F}(\mathscr {G}))) = 0. \end{aligned}$$
(25)

Since \(\mu : [0, \infty ) \rightarrow [0, \infty )\) is a nondecreasing function with \(\mu (0) = 0\) and \(\mu (h)>0\) for all \(h \in (0, \infty )\), it follows from (25) that

$$\begin{aligned} \liminf \limits _{n\rightarrow \infty }\rho (x_n, \mathcal {F}(\mathscr {G})) = 0. \end{aligned}$$
(26)

Since all the hypotheses of the preceding strong convergence theorem are fulfilled, \(\{x_n\}\) converges strongly to some fixed point of \(\mathscr {G}\). \(\square\)

The following example illustrates Theorem 3.4.

Example 1

Let \(\mathbb {X}=\mathbb {R}\) with the usual norm and consider the closed convex subset \(\mathbb {E}=[0,1]\). Define \(\mathscr {G}:\mathbb {E}\rightarrow \mathbb {E}\) by

$$\mathscr {G}(x)=\tfrac{1}{4}x+\tfrac{1}{4}, \quad x\in \mathbb {E}.$$

Clearly, \(\mathscr {G}\) maps \(\mathbb {E}\) into itself. We verify that \(\mathscr {G}\) is a generalized \((\alpha ,\beta )\)-nonexpansive mapping for suitable \(\alpha ,\beta \in [0,1)\) with \(\alpha +\beta <1\). Indeed, for any \(x,y\in \mathbb {E}\) we have

$$|\mathscr {G}x-\mathscr {G}y|=\tfrac{1}{4}|x-y|,$$

and hence the defining inequality of generalized \((\alpha ,\beta )\)-nonexpansiveness holds; indeed, since \(\mathscr {G}\) is a \(\tfrac{1}{4}\)-contraction, it is nonexpansive, and one may take \(\alpha =\beta =0\).

Next, the set of fixed points of \(\mathscr {G}\) is

$$\mathcal {F}(\mathscr {G})=\{x\in [0,1]: \mathscr {G}x=x\}.$$

Solving \(x=\tfrac{1}{4}x+\tfrac{1}{4}\) yields \(x=\tfrac{1}{3}\). Thus \(\mathcal {F}(\mathscr {G})=\{\tfrac{1}{3}\}\), which is nonempty.

Let \(\{x_n\}\) be the sequence generated by the new iterative scheme (7) with arbitrary initial value \(x_0\in [0,1]\). By Theorem 3.4, the sequence \(\{x_n\}\) converges strongly to the unique fixed point \(\tfrac{1}{3}\) of \(\mathscr {G}\).

Hence, this example illustrates the applicability of the theorem in a concrete setting.
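The convergence claimed above can also be observed numerically. The following Python sketch assumes the step structure of scheme (7) mirrors its barred analogue (27), i.e., \(w = \mathscr {G}x\), \(z = (1-\beta )x + \beta \mathscr {G}w\), \(y = \mathscr {G}[(1-\alpha )\mathscr {G}w + \alpha \mathscr {G}z]\), \(x^{+} = \mathscr {G}y\); the parameter values are arbitrary choices:

```python
def G(x):
    # Example 1 mapping: G(x) = x/4 + 1/4, a contraction with fixed point 1/3
    return 0.25 * x + 0.25

def step(x, alpha, beta):
    # One step of scheme (7), with the step structure assumed from (27):
    # w = Gx, z = (1-beta)x + beta*Gw, y = G[(1-alpha)Gw + alpha*Gz], x+ = Gy
    w = G(x)
    z = (1 - beta) * x + beta * G(w)
    y = G((1 - alpha) * G(w) + alpha * G(z))
    return G(y)

x = 0.0                        # arbitrary initial value in [0, 1]
for n in range(30):
    x = step(x, alpha=0.9, beta=0.9)

print(abs(x - 1 / 3))          # distance to the fixed point shrinks towards 0
```

After a handful of steps the iterates are indistinguishable from \(\tfrac{1}{3}\) in double precision.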

Data dependence result

The data dependence result is given as follows.

Theorem 3.5

Suppose \(\mathcal {S}\) is an approximate operator of a generalized \((\alpha ,\beta )\)-nonexpansive mapping \(\mathscr {G}\). Assume \(\{x_n\}_{n=0}^\infty\) is a sequence generated by the new iterative scheme (7) for \(\mathscr {G}\), and define the sequence \(\{\bar{x}_n\}\) generated by the iterative scheme

$$\begin{aligned} {\left\{ \begin{array}{ll} \bar{w}_n=\mathcal {S}\bar{x}_n\\ \bar{z}_n=(1-\beta _n)\bar{x}_n+\beta _n\mathcal {S}\bar{w}_n\\ \bar{y}_n=\mathcal {S}[(1-\alpha _n)\mathcal {S}\bar{w}_n+\alpha _n\mathcal {S}\bar{z}_n]\\ \bar{x}_{n+1}=\mathcal {S}\bar{y}_n,\,\,n\in \mathbb {N} \end{array}\right. } \end{aligned}$$
(27)

corresponding to the approximate operator \(\mathcal {S}\), where \(\{\alpha _n\}\) and \(\{\beta _n\}\) are real sequences in [0, 1] such that \(\frac{1}{2}\le \alpha _n\) for all \(n\in \mathbb {N}\) and \(\sum\nolimits _{n=0}^{\infty }\alpha _n=\infty\). If \(\mathscr {G}\varpi ^*=\varpi ^*\) and \(\mathcal {S}\mu ^*=\mu ^*\) are such that \(\lim \nolimits _{n \rightarrow \infty }\Vert \bar{x}_n-\mu ^*\Vert =0\), then we have \(\Vert \varpi ^*-\mu ^*\Vert \le \frac{9\epsilon }{1-\delta }\), where \(\epsilon >0\) is the maximum admissible error of the approximate operator, i.e., \(\Vert \mathscr {G}x-\mathcal {S}x\Vert \le \epsilon\) for all \(x\in \mathbb {E}\).

Proof

Using (7) and (27), we have

$$\begin{aligned} \Vert w_n-\bar{w}_n\Vert&=\Vert \mathscr {G}x_n-\mathcal {S}\bar{x}_n\Vert \nonumber \\&\le \Vert \mathscr {G}x_n-\mathscr {G}\bar{x}_n+\mathscr {G}\bar{x}_n-\mathcal {S}\bar{x}_n\Vert \nonumber \\&\le \Vert \mathscr {G}x_n-\mathscr {G}\bar{x}_n\Vert +\Vert \mathscr {G}\bar{x}_n-\mathcal {S}\bar{x}_n\Vert \nonumber \\&\le \Vert x_n-\bar{x}_n\Vert +\epsilon \end{aligned}$$
(28)
$$\begin{aligned} \Vert z_n-\bar{z}_n\Vert&=\Vert (1-\beta _n)x_n+\beta _n\mathscr {G}w_n-(1-\beta _n)\bar{x}_n-\beta _n\mathcal {S}\bar{w}_n\Vert \\&\le (1-\beta _n)\Vert x_n-\bar{x}_n\Vert +\beta _n\Vert \mathscr {G}w_n-\mathcal {S}\bar{w}_n\Vert \\&\le (1-\beta _n)\Vert x_n-\bar{x}_n\Vert +\beta _n\Vert \mathscr {G}w_n-\mathscr {G}\bar{w}_n\Vert +\beta _n\Vert \mathscr {G}\bar{w}_n-\mathcal {S}\bar{w}_n\Vert \\&\le (1-\beta _n) \Vert x_n - \bar{x}_n\Vert + \beta _n \Vert w_n - \bar{w}_n\Vert + \beta _n \epsilon \\&\le (1-\beta _n) \Vert x_n - \bar{x}_n\Vert + \beta _n (\Vert x_n - \bar{x}_n\Vert + \epsilon ) + \beta _n\epsilon \\&= \Vert x_n - \bar{x}_n\Vert + 2\beta _n\epsilon . \end{aligned}$$
$$\begin{aligned} \Vert y_n - \bar{y}_n\Vert&= \Vert \mathscr {G}[(1-\alpha _n)\mathscr {G}w_n + \alpha _n\mathscr {G}z_n] - \mathcal {S}[(1-\alpha _n)\mathcal {S}\bar{w}_n + \alpha _n\mathcal {S}\bar{z}_n]\Vert \\&\le \Vert \mathscr {G}[(1-\alpha _n)\mathscr {G}w_n + \alpha _n\mathscr {G}z_n] -\mathscr {G}[(1-\alpha _n)\mathcal {S}\bar{w}_n + \alpha _n \mathcal {S}\bar{z}_n]\Vert \\&\quad + \Vert \mathscr {G}[(1-\alpha _n)\mathcal {S}\bar{w}_n + \alpha _n \mathcal {S}\bar{z}_n] - \mathcal {S}[(1-\alpha _n)\mathcal {S}\bar{w}_n + \alpha _n\mathcal {S}\bar{z}_n]\Vert \\&\le \Vert (1-\alpha _n)\mathscr {G}w_n +\alpha _n\mathscr {G}z_n-(1-\alpha _n)\mathcal {S}\bar{w}_n-\alpha _n \mathcal {S}\bar{z}_n\Vert +\epsilon \\&\le (1-\alpha _n)\Vert \mathscr {G}w_n - \mathcal {S}\bar{w}_n\Vert + \alpha _n\Vert \mathscr {G}z_n - \mathcal {S}\bar{z}_n\Vert + \epsilon \\&\le (1-\alpha _n)\Vert \mathscr {G}w_n - \mathscr {G}\bar{w}_n\Vert + (1-\alpha _n)\Vert \mathscr {G}\bar{w}_n - \mathcal {S}\bar{w}_n\Vert + \alpha _n \Vert \mathscr {G}z_n - \mathscr {G}\bar{z}_n\Vert \\&\quad + \alpha _n \Vert \mathscr {G}\bar{z}_n - \mathcal {S}\bar{z}_n\Vert \\&\le (1-\alpha _n)\Vert w_n-\bar{w}_n\Vert + (1-\alpha _n) \epsilon + \alpha _n\Vert z_n-\bar{z}_n\Vert + \alpha _n\epsilon \\&= (1-\alpha _n) \Vert w_n - \bar{w}_n\Vert + \alpha _n \Vert z_n - \bar{z}_n\Vert + \epsilon \\&\le (1-\alpha _n) \{\Vert x_n - \bar{x}_n\Vert + \epsilon \} + \alpha _n\{\Vert x_n - \bar{x}_n\Vert + 2\beta _n \epsilon \} + \epsilon \\&= (1-\alpha _n) \Vert x_n - \bar{x}_n\Vert + (1-\alpha _n)\epsilon + \alpha _n \Vert x_n - \bar{x}_n\Vert + 2\alpha _n \beta _n \epsilon + \epsilon \\&= \Vert x_n - \bar{x}_n\Vert + 2\alpha _n\beta _n \epsilon + (1 - \alpha _n)\epsilon + \epsilon . \end{aligned}$$
$$\begin{aligned} \Vert x_{n+1} - \bar{x}_{n+1}\Vert&= \Vert \mathscr {G}y_n - \mathcal {S}\bar{y}_n\Vert \\&\le \Vert \mathscr {G}y_n - \mathscr {G}\bar{y}_n\Vert + \Vert \mathscr {G}\bar{y}_n - \mathcal {S}\bar{y}_n\Vert \\&\le \Vert y_n - \bar{y}_n\Vert + \epsilon \\&\le \Vert x_n - \bar{x}_n\Vert + 2\alpha _n\beta _n\epsilon + (1-\alpha _n)\epsilon + \epsilon +\epsilon \\&= \Vert x_n - \bar{x}_n\Vert + 2\alpha _n\beta _n \epsilon + (1-\alpha _n) \epsilon + 2\epsilon . \end{aligned}$$

Since \(\alpha _n,\beta _n\in (0,1)\) with \(\frac{1}{2}\le \alpha _n\), we have \(\alpha _n\beta _n<1\) and \(1-\alpha _n\le \alpha _n\), so that

$$\begin{aligned} \Vert x_{n+1} - \bar{x}_{n+1}\Vert&\le \Vert x_n - \bar{x}_n\Vert + 2\epsilon + \alpha _n \epsilon + 2\epsilon \\&= \Vert x_n - \bar{x}_n\Vert + \alpha _n \epsilon + 4\epsilon \\&\le \Vert x_n - \bar{x}_n\Vert + \alpha _n\epsilon + 8\alpha _n\epsilon \\&= \Vert x_n - \bar{x}_n\Vert + \alpha _n(1-\delta )\frac{9\epsilon }{1-\delta }, \\ \end{aligned}$$

where we used \(4\epsilon = 4(1-\alpha _n+\alpha _n)\epsilon \le 8\alpha _n\epsilon\), which holds since \(\frac{1}{2}\le \alpha _n\).

Let \(\mu _n:= \Vert x_n - \bar{x}_n\Vert\), \(\Phi _n:= \alpha _n (1-\delta )\) and \(\wp _n:= \frac{9\epsilon }{1-\delta }\).

From Lemma 2.1, it follows that \(0\le \limsup \nolimits _{n \rightarrow \infty } \Vert x_n - \bar{x}_n\Vert \le \limsup \nolimits _{n \rightarrow \infty } \frac{9\epsilon }{1-\delta } = \frac{9\epsilon }{1-\delta }\).

From the strong convergence result, it is clear that \(\lim \nolimits _{n \rightarrow \infty } \Vert x_n - \varpi ^*\Vert = 0\).

By hypothesis, \(\lim \nolimits _{n \rightarrow \infty } \Vert \bar{x}_n - \mu ^*\Vert = 0\); hence we clearly have that

$$\begin{aligned} \Vert \varpi ^* - \mu ^*\Vert \le \frac{9\epsilon }{1-\delta }. \end{aligned}$$

Thereby completing the proof. \(\square\)
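The bound of Theorem 3.5 can be illustrated with a concrete pair of operators. The choice of \(\mathcal {S}\) below is hypothetical: it shifts the Example 1 contraction by \(\epsilon /2\), so that \(\Vert \mathscr {G}x - \mathcal {S}x\Vert \le \epsilon\) for all \(x\); the value \(\delta = 0.25\) is likewise an arbitrary choice in \((0,1)\):

```python
# Hypothetical illustration of the data-dependence bound of Theorem 3.5.
eps = 1e-3                      # approximation error: |G(x) - S(x)| <= eps

def G(x):
    return 0.25 * x + 0.25      # fixed point varpi* = 1/3

def S(x):
    # approximate operator: shifted by eps/2, so |G(x) - S(x)| = eps/2 <= eps
    return 0.25 * x + 0.25 + eps / 2

fp_G = 1 / 3                            # solves x = G(x)
fp_S = (0.25 + eps / 2) / 0.75          # solves x = S(x)

gap = abs(fp_G - fp_S)                  # actual distance between fixed points
delta = 0.25                            # arbitrary delta in (0, 1)
bound = 9 * eps / (1 - delta)           # bound asserted by Theorem 3.5

print(gap <= bound)
```

Here the actual gap is \(\tfrac{2\epsilon }{3}\), comfortably within the theoretical bound.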

\(\mathscr {G}\)-stability and almost \(\mathscr {G}\)-stability results

The following theorems indicate the \(\mathscr {G}\)-stability and almost \(\mathscr {G}\)-stability results.

Theorem 3.6

Let \(\mathbb {X}\) be a Banach space and \(\mathscr {G}: \mathbb {E}\rightarrow \mathbb {E}\) be a generalized \((\alpha , \beta )\)-nonexpansive mapping with a fixed point \(\varpi ^*\in \mathcal {F}(\mathscr {G}) \ne \emptyset\). If the sequence \(\{x_n\}_{n=0}^\infty\) generated by the iterative scheme (7) converges to the fixed point \(\varpi ^*\), then the iterative scheme (7) is \(\mathscr {G}\)-stable.

Proof

Let \(\{s_n\}_{n=0}^\infty\) be an arbitrary sequence in \(\mathbb {E}\), and write the new iterative scheme (7) as \(x_{n+1} = f(\mathscr {G}, x_n)\), which converges to the fixed point \(\varpi ^*\).

Set \(\epsilon _n = \Vert s_{n+1} - f(\mathscr {G}, s_n)\Vert\). We are to show that \(\lim \nolimits _{n \rightarrow \infty } \epsilon _n = 0\) if and only if \(\lim \nolimits _{n \rightarrow \infty } \Vert s_n - \varpi ^*\Vert = 0\).

Suppose first that \(\lim \nolimits _{n \rightarrow \infty } \epsilon _n = 0\). Then

$$\begin{aligned} \Vert s_{n+1} - \varpi ^*\Vert&= \Vert s_{n+1} - f(\mathscr {G}, s_n) + f(\mathscr {G}, s_n) - \varpi ^*\Vert \nonumber \\&\le \Vert s_{n+1} - f(\mathscr {G}, s_n)\Vert + \Vert f(\mathscr {G}, s_n) - \varpi ^*\Vert \nonumber \\&\le \epsilon _n + \Vert f(\mathscr {G}, s_n) - \varpi ^*\Vert \nonumber \\&\le \epsilon _n + \Vert \mathscr {G}y_n - \varpi ^*\Vert \nonumber \\&\le \epsilon _n + \Vert y_n - \varpi ^*\Vert \end{aligned}$$
(29)

Next,

$$\begin{aligned} \Vert y_n - \varpi ^*\Vert&= \Vert \mathscr {G}[(1-\alpha _n)\mathscr {G}w_n + \alpha _n \mathscr {G}z_n] - \varpi ^*\Vert \nonumber \\&\le \Vert (1-\alpha _n)\mathscr {G}w_n + \alpha _n\mathscr {G}z_n - \varpi ^*\Vert \nonumber \\&\le (1-\alpha _n) \Vert \mathscr {G} w_n - \varpi ^*\Vert + \alpha _n \Vert \mathscr {G}z_n - \varpi ^*\Vert \nonumber \\&\le (1-\alpha _n) \Vert w_n - \varpi ^*\Vert + \alpha _n \Vert z_n - \varpi ^*\Vert , \end{aligned}$$
(30)

but,

$$\begin{aligned} \Vert z_n - \varpi ^*\Vert&= \Vert (1-\beta _n) s_n + \beta _n \mathscr {G}w_n - \varpi ^*\Vert \nonumber \\&\le (1-\beta _n) \Vert s_n - \varpi ^*\Vert + \beta _n \Vert \mathscr {G}w_n - \varpi ^*\Vert \nonumber \\&\le (1-\beta _n) \Vert s_n - \varpi ^*\Vert + \beta _n \Vert w_n - \varpi ^*\Vert , \end{aligned}$$
(31)

and,

$$\begin{aligned} \Vert w_n - \varpi ^*\Vert&= \Vert \mathscr {G}s_n - \varpi ^*\Vert \nonumber \\&\le \Vert s_n - \varpi ^*\Vert \end{aligned}$$
(32)

Putting (32) in (31), we have

$$\begin{aligned} \Vert z_n - \varpi ^*\Vert&\le (1-\beta _n) \Vert s_n - \varpi ^*\Vert + \beta _n \Vert s_n - \varpi ^*\Vert \nonumber \\&= \Vert s_n - \varpi ^*\Vert . \end{aligned}$$
(33)

Putting (32) and (33) in (30), we have

$$\begin{aligned} \Vert y_n - \varpi ^*\Vert&\le (1-\alpha _n) \Vert s_n - \varpi ^*\Vert + \alpha _n \Vert s_n - \varpi ^*\Vert \nonumber \\&= \Vert s_n - \varpi ^*\Vert , \end{aligned}$$
(34)

Putting (34) in (29), we have

$$\Vert s_{n+1} - \varpi ^*\Vert \le \epsilon _n + \Vert s_n - \varpi ^*\Vert .$$

From Lemma 2.3, we have

$$\begin{aligned} \lim \limits _{n \rightarrow \infty } \Vert s_n - \varpi ^*\Vert = 0. \end{aligned}$$

Conversely, assume that \(\lim \nolimits _{n \rightarrow \infty } \Vert s_n - \varpi ^*\Vert = 0\), then

$$\begin{aligned} \epsilon _n&= \Vert s_{n+1} - f(\mathscr {G}, s_n)\Vert \nonumber \\&= \Vert s_{n+1} - \varpi ^* + \varpi ^* - f(\mathscr {G}, s_n)\Vert \nonumber \\&\le \Vert s_{n+1} - \varpi ^*\Vert + \Vert \varpi ^* - f(\mathscr {G}, s_n)\Vert \nonumber \\&= \Vert s_{n+1} - \varpi ^*\Vert + \Vert \mathscr {G}y_n- \varpi ^*\Vert \nonumber \\&\le \Vert s_{n+1} - \varpi ^*\Vert + \Vert y_n - \varpi ^*\Vert \nonumber \\&= \Vert s_{n+1} - \varpi ^*\Vert + \Vert \mathscr {G}[(1-\alpha _n)\mathscr {G}w_n + \alpha _n\mathscr {G}z_n] - \varpi ^*\Vert \nonumber \\&\le \Vert s_{n+1} - \varpi ^*\Vert + \Vert (1-\alpha _n)\mathscr {G}w_n + \alpha _n\mathscr {G}z_n - \varpi ^*\Vert \nonumber \\&\le \Vert s_{n+1} - \varpi ^*\Vert + (1-\alpha _n) \Vert \mathscr {G}w_n - \varpi ^*\Vert + \alpha _n \Vert \mathscr {G}z_n - \varpi ^*\Vert \nonumber \\&\le \Vert s_{n+1} - \varpi ^*\Vert + (1-\alpha _n) \Vert w_n - \varpi ^*\Vert + \alpha _n \Vert z_n - \varpi ^*\Vert \nonumber \\&\le \Vert s_{n+1} - \varpi ^*\Vert + (1-\alpha _n) \Vert \mathscr {G}s_n - \varpi ^*\Vert + \alpha _n \Vert (1-\beta _n)s_n + \beta _n \mathscr {G}w_n - \varpi ^*\Vert \nonumber \\&\le \Vert s_{n+1} - \varpi ^*\Vert + (1-\alpha _n) \Vert s_n - \varpi ^*\Vert + \alpha _n(1-\beta _n) \Vert s_n - \varpi ^*\Vert + \alpha _n\beta _n \Vert \mathscr {G}w_n - \varpi ^*\Vert \nonumber \\&\le \Vert s_{n+1} - \varpi ^*\Vert + (1-\alpha _n) \Vert s_n - \varpi ^*\Vert + \alpha _n(1-\beta _n) \Vert s_n - \varpi ^*\Vert + \alpha _n\beta _n \Vert s_n - \varpi ^*\Vert \nonumber \\&= \Vert s_{n+1} - \varpi ^*\Vert + \Vert s_n - \varpi ^*\Vert \end{aligned}$$
(35)

Taking the limit as \(n \rightarrow \infty\) on both sides and using the fact that \(\lim \nolimits _{n \rightarrow \infty } \Vert s_n - \varpi ^*\Vert = 0\), we obtain \(\lim \nolimits _{n \rightarrow \infty } \epsilon _n = 0\).

Therefore the iterative scheme (7) is \(\mathscr {G}\)-stable. \(\square\)
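The stability notion proved above can be observed numerically: if the iterates of (7) are contaminated by perturbations \(\epsilon _n \rightarrow 0\), the perturbed sequence still approaches the fixed point. A minimal sketch, assuming the step structure of (7) as reconstructed from (27) and using the Example 1 mapping:

```python
import random

def G(x):
    return 0.25 * x + 0.25              # Example 1 mapping, fixed point 1/3

def f(x, alpha=0.9, beta=0.9):
    # f(G, x): one step of scheme (7), step structure assumed from (27)
    w = G(x)
    z = (1 - beta) * x + beta * G(w)
    y = G((1 - alpha) * G(w) + alpha * G(z))
    return G(y)

random.seed(0)
s = 0.9                                  # arbitrary start in [0, 1]
for n in range(1, 60):
    eps_n = random.uniform(-1.0, 1.0) / n**2   # perturbations with eps_n -> 0
    s = f(s) + eps_n                     # s_{n+1} = f(G, s_n) + perturbation

print(abs(s - 1 / 3))                    # perturbed iterates still approach 1/3
```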

Next, we consider the almost \(\mathscr {G}\)-stability result.

Theorem 3.7

Let \(\mathbb {X}, \mathbb {E}\) and \(\mathscr {G}\) be as in Theorem 3.6, with \(\mathscr {G}\) a generalized \((\alpha , \beta )\)-nonexpansive mapping and \(\mathcal {F}(\mathscr {G}) \ne \emptyset\). Then the iterative scheme (7) is almost \(\mathscr {G}\)-stable; that is, \(\sum\nolimits _{n=0}^{\infty }\epsilon _n<\infty\) implies \(\lim \nolimits _{n \rightarrow \infty }\Vert s_n-\varpi ^*\Vert =0\).

Proof

Let \(\{s_n\}_{n=0}^\infty\) be an approximate sequence of \(\{x_n\}_{n=0}^\infty\) in \(\mathbb {E}\). Assume that the iterative scheme (7) is represented as \(x_{n+1} = f(\mathscr {G}, x_n)\), which converges to a fixed point \(\varpi ^* \in \mathcal {F}(\mathscr {G}) \ne \emptyset\), and \(\epsilon _n = \Vert s_{n+1} - f(\mathscr {G}, s_n)\Vert\), \(\forall n \in \mathbb {N}\). It is our aim to prove that \(\sum\nolimits _{n=0}^\infty \epsilon _n < \infty\) implies \(\lim \nolimits _{n \rightarrow \infty } \Vert s_n - \varpi ^*\Vert = 0\).

Let \(\sum\nolimits _{n=0}^\infty \epsilon _n < \infty\), then using (7), we have

$$\begin{aligned} \Vert s_{n+1} - \varpi ^*\Vert&= \Vert s_{n+1} - f(\mathscr {G}, s_n) + f(\mathscr {G}, s_n) - \varpi ^*\Vert \\&\le \Vert s_{n+1} - f(\mathscr {G}, s_n)\Vert + \Vert f(\mathscr {G}, s_n) - \varpi ^*\Vert \\&\le \epsilon _n + \Vert \mathscr {G} y_n - \varpi ^*\Vert \\&\le \epsilon _n + \Vert y_n - \varpi ^*\Vert \\&\le \epsilon _n + \Vert \mathscr {G}[(1-\alpha _n)\mathscr {G}w_n + \alpha _n\mathscr {G}z_n] - \varpi ^*\Vert \\&\le \epsilon _n + \Vert (1 - \alpha _n) \mathscr {G}w_n + \alpha _n\mathscr {G}z_n - \varpi ^* \Vert \\&\le \epsilon _n + (1 - \alpha _n) \Vert \mathscr {G}w_n - \varpi ^* \Vert + \alpha _n \Vert \mathscr {G}z_n - \varpi ^* \Vert \\&\le \epsilon _n + (1 - \alpha _n) \Vert w_n - \varpi ^* \Vert + \alpha _n \Vert z_n - \varpi ^* \Vert \\&\le \epsilon _n + (1 - \alpha _n) \Vert \mathscr {G} s_n - \varpi ^* \Vert + \alpha _n \Vert (1 - \beta _n)s_n + \beta _n\mathscr {G}w_n - \varpi ^* \Vert \\&\le \epsilon _n + (1 - \alpha _n) \Vert s_n - \varpi ^* \Vert + \alpha _n (1 - \beta _n) \Vert s_n - \varpi ^* \Vert + \alpha _n \beta _n \Vert \mathscr {G} w_n - \varpi ^* \Vert \\&\le \epsilon _n+(1-\alpha _n)\Vert s_n-\varpi ^*\Vert +\alpha _n(1-\beta _n)\Vert s_n-\varpi ^*\Vert +\alpha _n\beta _n\Vert w_n-\varpi ^*\Vert \\&= \epsilon _n+\Vert s_n-\varpi ^*\Vert -\alpha _n\beta _n\Vert s_n-\varpi ^*\Vert +\alpha _n\beta _n\Vert w_n-\varpi ^*\Vert \\&\le \epsilon _n+\Vert s_n-\varpi ^*\Vert -\alpha _n\beta _n\Vert s_n-\varpi ^*\Vert +\alpha _n\beta _n\Vert \mathscr {G}s_n-\varpi ^*\Vert \\&\le \epsilon _n+\Vert s_n-\varpi ^*\Vert . \end{aligned}$$

Set \(m_n = \Vert s_n - \varpi ^* \Vert .\)

So that,

$$m_{n+1} \le m_n + \epsilon _n.$$

Since \(\sum\nolimits _{n=0}^{\infty } \epsilon _n < \infty\), Lemma 2.4 yields \(\lim _{n \rightarrow \infty } m_n = 0\), that is,

$$\lim \limits _{n \rightarrow \infty } \Vert s_n - \varpi ^* \Vert = 0.$$

Hence, the proof is complete. \(\square\)

Numerical example and rate of convergence

The rate of convergence of the proposed scheme, in comparison with existing iterative methods in the literature, is illustrated through numerical examples. The results are presented in both tabular and graphical forms, providing a quantitative assessment of the comparative performance of the iterative methods. Furthermore, Example 2 analytically demonstrates how the conditions of a generalized \((\alpha ,\beta )\)-nonexpansive mapping can be verified.

Example 2

Define \(\mathscr {G}: [0,1] \rightarrow [0,1]\) by

$$\mathscr {G}(x) = {\left\{ \begin{array}{ll} 1-x, & \text {if } x \in \left[ 0, \frac{1}{8} \right) , \\ \frac{x+7}{8}, & \text {if } x \in \left[ \frac{1}{8},1\right] . \end{array}\right. }$$

We shall prove that \(\mathscr {G}\) is a generalized \((\alpha ,\beta )\)-nonexpansive mapping for some \(\alpha , \beta \ge 0\) with \(\alpha + \beta < 1\). We divide the proof into three cases.

Case 1: If \(0 \le x, y < \frac{1}{8}\), then we have \(\mathscr {G}(x) = 1-x\) and \(\mathscr {G}(y) = 1-y\). Thus,

$$|\mathscr {G}(x) - \mathscr {G}(y)| = |(1-x) - (1-y)| = |x - y| < \tfrac{1}{8}.$$

We choose \(\alpha = \frac{1}{4}, \beta = \frac{1}{4}\), so that \(1 - 2\alpha - 2\beta = 0\). Since \(x + y < \frac{1}{4}\), we have \(|x - \mathscr {G}(y)| = |y - \mathscr {G}(x)| = 1-x-y\), \(|x - \mathscr {G}(x)| = 1-2x\) and \(|y - \mathscr {G}(y)| = 1-2y\), and therefore

$$\begin{aligned} \frac{1}{4} |x - \mathscr {G}(y)|&+ \frac{1}{4} |y - \mathscr {G}(x)| + \frac{1}{4} |x - \mathscr {G}(x)| + \frac{1}{4} |y - \mathscr {G}(y)| + (1 - 2\alpha - 2\beta )|x - y|\\&= \frac{1}{4} \big [(1-x-y) + (1-x-y) + (1-2x) + (1-2y)\big ]\\&= 1 - x - y> \frac{3}{4}> |x - y| \\&= |\mathscr {G}(x) - \mathscr {G}(y)|. \end{aligned}$$

Case 2: If \(x, y \ge \frac{1}{8}\), then \(\mathscr {G}(x) = \frac{x+7}{8}\) and \(\mathscr {G}(y) = \frac{y+7}{8}\), so that

$$|\mathscr {G}(x) - \mathscr {G}(y)| = \left| \frac{x+7}{8} - \frac{y+7}{8} \right| = \frac{|x - y|}{8}.$$

Again take \(\alpha = \frac{1}{4}, \beta = \frac{1}{4}\). Since \(\mathscr {G}(x) \ge x\) on \(\left[ \frac{1}{8},1\right]\), we have \(|x - \mathscr {G}(x)| = \frac{7(1-x)}{8}\) and \(|y - \mathscr {G}(y)| = \frac{7(1-y)}{8}\), and since \(|x-y| \le (1-x)+(1-y)\),

$$\frac{1}{4}|x - \mathscr {G}(x)| + \frac{1}{4}|y - \mathscr {G}(y)| = \frac{7(1-x)}{32} + \frac{7(1-y)}{32} \ge \frac{7}{32}|x-y| \ge \frac{|x - y|}{8} = |\mathscr {G}(x) - \mathscr {G}(y)|.$$

As the remaining terms on the right-hand side of (1) are nonnegative, the defining inequality holds.

Case 3: If \(x < \frac{1}{8}\) and \(y \ge \frac{1}{8}\), then \(\mathscr {G}(x) = 1-x\) and \(\mathscr {G}(y) = \frac{y+7}{8}\). We have

$$|\mathscr {G}(x) - \mathscr {G}(y)| = \left| 1-x - \frac{y+7}{8} \right| = \frac{|1-8x-y|}{8} \le \frac{1}{8}.$$

On the other hand, with \(\alpha = \frac{1}{4}, \beta = \frac{1}{4}\), the single term

$$\frac{1}{4}|x - \mathscr {G}(x)| = \frac{1-2x}{4}> \frac{3}{16}> \frac{1}{8} \ge |\mathscr {G}(x) - \mathscr {G}(y)|,$$

and the remaining terms on the right-hand side of (1) are nonnegative, so the defining inequality holds in this case as well.

Thus, \(\mathscr {G}\) is a generalized \((\frac{1}{4}, \frac{1}{4})\)-nonexpansive mapping.
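The three cases above can be double-checked by brute force: the following sketch evaluates the defining inequality (1), premise included, over a uniform grid of pairs \((x, y)\) with \(\alpha = \beta = \frac{1}{4}\):

```python
def G(x):
    # Example 2 mapping
    return 1 - x if x < 1 / 8 else (x + 7) / 8

alpha = beta = 0.25
ok = True
N = 200
for i in range(N + 1):
    for j in range(N + 1):
        x, y = i / N, j / N
        if 0.5 * abs(x - G(x)) <= abs(x - y):   # premise of (1)
            lhs = abs(G(x) - G(y))
            rhs = (alpha * abs(x - G(y)) + alpha * abs(y - G(x))
                   + beta * abs(x - G(x)) + beta * abs(y - G(y))
                   + (1 - 2 * alpha - 2 * beta) * abs(x - y))
            ok = ok and lhs <= rhs + 1e-12      # small tolerance for rounding
print(ok)
```

The check passes for every grid pair, consistent with the case analysis.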

Remark 4.1

The restriction of \(\mathscr {G}\) to the unit interval \([0,1]\) is fundamental. This interval is invariant under \(\mathscr {G}\) and contains its unique fixed point, \(\varpi ^* = 1\). This makes \([0,1]\) the natural domain for studying the convergence of the iterative sequences considered in this paper.

Example 3

Let \(\mathbb {E}=[0,1]\) be a closed and convex subset of \(\mathbb {X}\). The mapping \(\mathscr {G}:[0,1]\rightarrow [0,1]\) is defined as

$$\begin{aligned} \mathscr {G}(x)=\frac{e^{\frac{x}{2}}-1}{2} \end{aligned}$$

and is a generalized \((\alpha ,\beta )\)-nonexpansive mapping: since \(|\mathscr {G}'(x)| = \frac{e^{x/2}}{4} \le \frac{\sqrt{e}}{4} < 1\) on \([0,1]\), the mapping \(\mathscr {G}\) is a contraction, hence nonexpansive, and the defining inequality holds with \(\alpha = \beta = 0\). We compute the mapping numerically to measure the rate of convergence of our iterative scheme in comparison with other existing schemes in the literature. The results are shown in Table 4 and the corresponding figures.
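A minimal numerical comparison for this mapping, assuming the step structure of scheme (7) as reconstructed from (27) and arbitrary parameter values \(\alpha _n = \beta _n = 0.9\), with plain Picard iteration as a baseline:

```python
import math

def G(x):
    # Example 3 mapping; |G'(x)| = e^(x/2)/4 <= sqrt(e)/4 < 1 on [0, 1]
    return (math.exp(x / 2) - 1) / 2

def new_step(x, alpha=0.9, beta=0.9):
    # One step of scheme (7), step structure assumed from (27)
    w = G(x)
    z = (1 - beta) * x + beta * G(w)
    y = G((1 - alpha) * G(w) + alpha * G(z))
    return G(y)

x_picard = x_new = 0.9                  # common starting point
for k in range(10):
    x_picard = G(x_picard)              # Picard baseline
    x_new = new_step(x_new)

# Both approach the fixed point 0; the new scheme, applying G several
# times per step, is far closer after the same number of steps.
print(abs(x_picard), abs(x_new))
```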

Remark 4.2

  1. The convergence order \(p\) is reported as zero in Table 5 when the classical asymptotic formula

    $$p = \lim _{n\rightarrow \infty } \frac{\log |e_{n+1}/e_n|}{\log |e_n/e_{n-1}|}$$

    cannot be reliably evaluated due to the rapid attainment of machine precision or premature stagnation of the error sequence.

  2. The rate constant \(r\) is shown as zero when the estimated limit

    $$r = \lim _{n\rightarrow \infty } \frac{|e_{n+1}|}{|e_n|^{p}}$$

    becomes numerically insignificant or undefined because the errors fall below the floating-point tolerance.

  3. The numerical results for Example 2 are presented in Tables 1 and 2, and Figs. 1 and 2. Correspondingly, the results for Example 3 are reported in Table 4 and Figs. 3 and 4.

  4. Furthermore, to compare the computational efficiency, convergence order, and rate constants of the different schemes, we include Table 3 for Example 2 and Table 5 for Example 3.
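The two formulas above can be estimated directly from an error sequence. The sketch below uses plain Picard iteration on the affine contraction from Example 1 (a hypothetical stand-in, chosen because its exact order \(p = 1\) and rate constant \(r = \frac{1}{4}\) are known in closed form):

```python
import math

def G(x):
    return 0.25 * x + 0.25      # fixed point 1/3; linear map, so p = 1, r = 1/4

fp = 1 / 3
x = 0.0
errors = []
for n in range(12):
    errors.append(abs(x - fp))  # e_n = |x_n - varpi*|
    x = G(x)                    # plain Picard iteration

# Estimate order p and rate constant r from the last three errors
e0, e1, e2 = errors[-3:]
p = math.log(e2 / e1) / math.log(e1 / e0)
r = e2 / e1 ** p
print(p, r)
```

Stopping well before machine precision is reached avoids the degenerate \(p = 0\), \(r = 0\) readings described above.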

Table 1 Comparison of speed of convergence of some iterative scheme for Example 2.
Table 2 Continuation of Table 1.
Fig. 1 Graph corresponding to Tables 1 and 2 combined.

Fig. 2 3D surface plot corresponding to Tables 1 and 2 combined.

Table 3 Convergence characteristics and CPU times (s) for all iterative methods. All methods converged to the fixed point \(\varpi ^{*}=1\) in 20 iterations.
Table 4 Comparison of iteration values of Picard, PicardS, New, and Abbas–Nasir methods.
Table 5 Convergence characteristics and CPU times (s) for Example 3. All methods converged to the fixed point \(\varpi ^{*}=0\).
Fig. 3 3D plots corresponding to Table 4.

Fig. 4 2D plot of values corresponding to Table 4 for Example 3.

Application to the SEIR epidemic model

The SEIR (Susceptible–Exposed–Infectious–Recovered) model is a fundamental epidemiological framework used to study the transmission dynamics of infectious diseases. It refines the classical SIR model by incorporating an Exposed (E) compartment, accounting for the latent period between infection and the onset of infectiousness. This makes it particularly useful for diseases like COVID-19, measles, influenza, and Ebola, where exposed individuals do not immediately transmit the pathogen41,42,43.

In the SEIR model, the population N(t) is partitioned into four compartments: the susceptible class S(t), the exposed class E(t), the infected or infectious class I(t), and the recovered class R(t), where t is the time variable. The dynamics of the SEIR model can be represented as a system of differential equations of integer order:

$$\begin{aligned} \begin{aligned} \frac{dS(t)}{dt}&= \Upsilon - \mu S(t) - \beta \frac{S(t) I(t)}{N(t)}, \\ \frac{dE(t)}{dt}&= \beta \frac{S(t) I(t)}{N(t)} - (\mu + \delta ) E(t), \\ \frac{dI(t)}{dt}&= \delta E(t) - (\gamma + \mu + \alpha ) I(t), \\ \frac{dR(t)}{dt}&= \gamma I(t) - \mu R(t), \end{aligned} \end{aligned}$$
(36)

where the total population is \(N = S + E + I + R \le N_0\) (with \(N_0\) the initial population). Equation (36) is subject to the initial conditions S(0), E(0), I(0), and R(0). Moreover, the parameters are defined as follows:

\(\Upsilon =\) per-capita birth rate

\(\mu =\) per-capita natural death rate

\(\alpha =\) disease-induced average fatality rate

\(\beta =\) probability of disease transmission per contact times the number of contacts per unit time

\(\delta =\) rate of progression from exposed to infectious (so that \(\frac{1}{\delta }\) is the latent period)

\(\gamma =\) recovery rate of infectious individuals (so that \(\frac{1}{\gamma }\) is the infectious period)

Remark 5.112

The SEIR model (36) reduces to the classical SIR model if \(\Upsilon =\mu =0\) and \(\delta \rightarrow \infty\). Furthermore, if \(\Upsilon \ne 0\) and \(\mu \ne 0\), then the model is considered an endemic SIR model.
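The integer-order system (36) can be integrated with a simple forward-Euler sketch. The parameter values below are hypothetical; with \(\Upsilon = \mu = \alpha = 0\) the right-hand sides of (36) sum to zero, so the total population \(N = S + E + I + R\) is conserved, which serves as a sanity check on the implementation:

```python
# Forward-Euler sketch of the integer-order SEIR system (36).
# Parameter values are hypothetical; with Upsilon = mu = alpha = 0 the
# right-hand sides sum to zero, so N = S + E + I + R stays constant.
Upsilon, mu, alpha_f = 0.0, 0.0, 0.0    # birth, natural death, fatality rates
beta, delta, gamma = 0.5, 0.2, 0.1      # transmission, progression, recovery

S, E, I, R = 990.0, 0.0, 10.0, 0.0
N0 = S + E + I + R
dt, T = 0.05, 200.0

t = 0.0
while t < T:
    N = S + E + I + R
    dS = Upsilon - mu * S - beta * S * I / N
    dE = beta * S * I / N - (mu + delta) * E
    dI = delta * E - (gamma + mu + alpha_f) * I
    dR = gamma * I - mu * R
    S, E, I, R = S + dt * dS, E + dt * dE, I + dt * dI, R + dt * dR
    t += dt

print(S + E + I + R, R)   # total conserved; most individuals end recovered
```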

To generalize the classical ODE model in (36), with the aim of capturing memory effects (particularly for diseases with long incubation periods, such as HIV and tuberculosis) and the non-local dynamics of disease spread, we extend (36) to a non-integer order equation of Caputo type. Here we replace \(\frac{d}{dt}\) with \(^C\mathop {{\mathop {{{\,\mathrm{\mathscr {D}}\,}}}\nolimits ^\alpha }}\limits\) for \(\alpha \in (0,1)\) (where the fractional order \(\alpha\) is not to be confused with the fatality rate in (36)), and the system (36) becomes the fractional-order SEIR model:

$$\begin{aligned} _0^C\mathop {{\mathop {{{\,\mathrm{\mathscr {D}}\,}}}\nolimits _t^\alpha }}\limits S(t)&= \Upsilon - \mu S(t) - \beta \frac{S(t) I(t)}{N(t)}, \\ _0^C\mathop {{\mathop {{{\,\mathrm{\mathscr {D}}\,}}}\nolimits _t^\alpha }}\limits E(t)&= \beta \frac{S(t) I(t)}{N(t)} - (\mu + \delta ) E(t), \\ _0^C\mathop {{\mathop {{{\,\mathrm{\mathscr {D}}\,}}}\nolimits _t^\alpha }}\limits I(t)&= \delta E(t) - (\gamma + \mu + \alpha ) I(t), \\ _0^C\mathop {{\mathop {{{\,\mathrm{\mathscr {D}}\,}}}\nolimits _t^\alpha }}\limits R(t)&= \gamma I(t) - \mu R(t), \end{aligned}$$
(37)

where \(_0^C\mathop {{\mathop {{{\,\mathrm{\mathscr {D}}\,}}}\nolimits _t^\alpha }}\limits\) denotes the Caputo fractional derivative of order \(\alpha \in (0, 1)\), subject to the initial conditions:

$$S(0)>0, \quad E(0)>0, \quad I(0)>0, \quad R(0)>0.$$

To achieve our aim of proving the existence of the solution of the system (37), we use the following transformed representation: Let

$$\begin{aligned} h_1(t, S, E, I, R)&= \Upsilon - \mu S(t) - \beta \frac{S(t) I(t)}{N(t)}, \\ h_2(t, S, E, I, R)&= \beta \frac{S(t) I(t)}{N(t)} - (\mu + \delta ) E(t), \\ h_3(t, S, E, I, R)&= \delta E(t) - (\gamma + \mu + \alpha ) I(t), \\ h_4(t, S, E, I, R)&= \gamma I(t) - \mu R(t). \end{aligned}$$
(38)

So that the accompanying initial condition becomes:

$$S(0) = M_1, \quad E(0) = M_2, \quad I(0) = M_3, \quad R(0) = M_4.$$

Consequently, (37) becomes:

$$\begin{aligned} _0^C\mathop {{\mathop {{{\,\mathrm{\mathscr {D}}\,}}}\nolimits _t^\alpha }}\limits S(t)&= h_1(t, S, E, I, R), \\ _0^C\mathop {{\mathop {{{\,\mathrm{\mathscr {D}}\,}}}\nolimits _t^\alpha }}\limits E(t)&= h_2(t, S, E, I, R), \\ _0^C\mathop {{\mathop {{{\,\mathrm{\mathscr {D}}\,}}}\nolimits _t^\alpha }}\limits I(t)&= h_3(t, S, E, I, R), \\ _0^C\mathop {{\mathop {{{\,\mathrm{\mathscr {D}}\,}}}\nolimits _t^\alpha }}\limits R(t)&= h_4(t, S, E, I, R), \end{aligned}$$
(39)

with initial conditions:

$$S(0) = M_1, \quad E(0) = M_2, \quad I(0) = M_3, \quad R(0) = M_4.$$

Suppose by transformation,

$$g(t) = \begin{bmatrix} S(t) \\ E(t) \\ I(t) \\ R(t) \end{bmatrix}, \quad g_0 = \begin{bmatrix} M_1 \\ M_2 \\ M_3 \\ M_4 \end{bmatrix},$$

such that:

$$H(t, g(t)) = \begin{bmatrix} h_1(t, g(t)) \\ h_2(t, g(t)) \\ h_3(t, g(t)) \\ h_4(t, g(t)) \end{bmatrix}.$$

Thereby reducing (39) to:

$$\begin{aligned} {\left\{ \begin{array}{ll} _0^C\mathop {{\mathop {{{\,\mathrm{\mathscr {D}}\,}}}\nolimits _t^\alpha }}\limits g(t) = H(t, g(t)), & \alpha \in (0, 1), \\ g(0) = g_0. \end{array}\right. } \end{aligned}$$
(40)

Equation (40) is the fractional differential equation of Caputo type.

Since we are to show the existence of a solution of (40), we need to write (40) in its equivalent integral form:

$$g(t) = g_0 + \nu (\alpha ) H(t, g(t)) + \hslash (\alpha ) \int _0^tH(s, g(s))ds,$$

where \(\nu (\alpha ) = \frac{1-\alpha }{\Gamma (\alpha )}\) and \(\hslash (\alpha ) = \frac{\alpha }{\Gamma (\alpha )}\).
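A minimal sketch of Picard iteration on this integral form, discretized with the trapezoidal rule. The right-hand side \(H(t, g) = -g\) is a hypothetical choice with Lipschitz constant \(L_H = 1\), and \(\alpha\) and \(T\) are chosen so that \([\nu (\alpha ) + T\hslash (\alpha )]L_H < 1\), making the discrete operator a contraction:

```python
import math

# Picard iteration on the integral form of (40), discretized with the
# trapezoidal rule. H(t, g) = -g is a hypothetical right-hand side with
# Lipschitz constant L_H = 1; alpha and T are chosen so that
# [nu(alpha) + T * hbar(alpha)] * L_H < 1 holds.
alpha, T, M = 0.5, 1.0, 100
nu = (1 - alpha) / math.gamma(alpha)
hbar = alpha / math.gamma(alpha)

def H(t, g):
    return -g

g0 = 1.0
ts = [i * T / M for i in range(M + 1)]
g = [g0] * (M + 1)                       # initial guess

for sweep in range(60):
    h_vals = [H(t, gi) for t, gi in zip(ts, g)]
    cum, acc = [0.0], 0.0                # cumulative trapezoidal integral
    for i in range(M):
        acc += 0.5 * (h_vals[i] + h_vals[i + 1]) * (T / M)
        cum.append(acc)
    g_new = [g0 + nu * h + hbar * c for h, c in zip(h_vals, cum)]
    diff = max(abs(a - b) for a, b in zip(g, g_new))
    g = g_new

print(diff)    # successive-iterate gap; contracts by roughly 0.56 per sweep
```

The gap between successive iterates shrinks geometrically, as the contraction argument of the following lemma predicts.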

Suppose that

$$\mathbb {H} = C([0,T], \mathbb {R}^4)$$

is the Banach space of all continuous functions \(g : [0,T] \rightarrow \mathbb {R}^4\), endowed with the norm

$$\Vert g\Vert _{\infty } = \max _{t \in [0,T]} \Vert g(t)\Vert ,$$

where \(\Vert \cdot \Vert\) denotes the Euclidean norm in \(\mathbb {R}^4\). The following lemma is useful in proving the main theorem of this section.

Lemma 5.113

Assume that the following conditions hold:

  • (\(C_1\)) suppose there exists a constant \(L_H>0\) such that

    $$\begin{aligned} |H(t,g_1(t))-H(t,g_2(t))|\le L_H|g_1-g_2|, \end{aligned}$$

    for all \(g_1, g_2\in \mathbb {H}\) and \(t\in [0,T]\), and

  • (\(C_2\)) \([\nu (\alpha )+T\hslash (\alpha )]L_H<1\).

Then (40) has a unique solution.

Theorem 5.1

Suppose that conditions \((C_1)\) and \((C_2)\) of Lemma 5.1 hold. Suppose that \(\{\alpha _n\}, \{\beta _n\} \subset (0,1)\) are arbitrary sequences of real numbers such that \(\sum\nolimits _{k=0}^\infty \alpha _k\beta _k = \infty\). Then (40) has a unique solution \(\varpi ^*\) and the sequence generated by the new iterative scheme (7) converges to \(\varpi ^*\).

Proof

Suppose that \(\{x_n\}\) is a sequence generated by the new iterative scheme (7). Let \(\mathscr {G}: \mathbb {H} \rightarrow \mathbb {H}\) be an operator defined by

$$\begin{aligned} \mathscr {G}g(t) = g_0 + \nu (\alpha )H(t, g(t)) + \hslash (\alpha ) \int _0^t H(s, g(s)) ds \end{aligned}$$
(41)

We aim to show that \(\{x_n\}\) converges to the fixed point \(\varpi ^*\) as \(n \rightarrow \infty\).

From (7), (41) and the hypotheses of Lemma 5.1, we have

$$\begin{aligned} \Vert w_n - \varpi ^*\Vert&= \Vert \mathscr {G}x_n - \mathscr {G}\varpi ^*\Vert = \max \limits _{t \in [0,T]}|\mathscr {G}x_n(t) - \mathscr {G}\varpi ^*(t)|\nonumber \\&= \max \limits _{t \in [0,T]} \Bigg |\nu (\alpha )\Big (H(t, x_n(t)) - H(t, \varpi ^*(t))\Big ) + \hslash (\alpha ) \int _0^t \Big (H(s, x_n(s)) - H(s, \varpi ^*(s))\Big ) ds\Bigg | \nonumber \\&\le \nu (\alpha )\max _{t \in [0,T]} |H(t, x_n(t)) - H(t, \varpi ^*(t))|\nonumber \\&\qquad + \hslash (\alpha )\max _{t \in [0,T]} \int _0^t \Big |H(s, x_n(s)) - H(s, \varpi ^*(s))\Big |ds \nonumber \\&\le \nu (\alpha )L_H\max _{t \in [0,T]} |x_n(t) - \varpi ^*(t)| + \hslash (\alpha )L_HT\max _{t \in [0,T]} |x_n(t) - \varpi ^*(t)|\nonumber \\&= [\nu (\alpha ) + \hslash (\alpha )T] L_H \Vert x_n - \varpi ^*\Vert \end{aligned}$$
(42)

Using (7) and (42), we have

$$\begin{aligned} \Vert z_n - \varpi ^*\Vert&= \Vert (1 - \beta _n)x_n + \beta _n\mathscr {G}w_n - \varpi ^*\Vert \nonumber \\&\le (1 - \beta _n)\Vert x_n - \varpi ^*\Vert + \beta _n\Vert \mathscr {G}w_n - \mathscr {G}\varpi ^*\Vert \nonumber \\&\le (1 - \beta _n)\Vert x_n - \varpi ^*\Vert + \beta _n\max \limits _{t\in [0,T]}\Bigg |\nu (\alpha )\Big (H(t, w_n(t)) - H(t, \varpi ^*(t))\Big )\nonumber \\&\qquad + \hslash (\alpha )\int _0^t\Big (H(s, w_n(s)) - H(s, \varpi ^*(s))\Big )ds\Bigg |\nonumber \\&\le (1 - \beta _n)\Vert x_n - \varpi ^*\Vert + \beta _n\nu (\alpha )\max \limits _{t\in [0,T]}|H(t, w_n(t)) - H(t, \varpi ^*(t))|\nonumber \\&\qquad + \beta _n\hslash (\alpha )\max \limits _{t\in [0,T]}\int _0^t|H(s, w_n(s)) - H(s, \varpi ^*(s))|ds\nonumber \\&\le (1 - \beta _n)\Vert x_n - \varpi ^*\Vert + \beta _n\nu (\alpha )L_H\Vert w_n - \varpi ^*\Vert + \beta _n\hslash (\alpha )L_HT\Vert w_n - \varpi ^*\Vert \nonumber \\&= (1 - \beta _n)\Vert x_n - \varpi ^*\Vert + \beta _n[\nu (\alpha ) + \hslash (\alpha )T]L_H\Vert w_n - \varpi ^*\Vert \nonumber \\&\le (1 - \beta _n)\Vert x_n - \varpi ^*\Vert + \beta _n[\nu (\alpha ) + \hslash (\alpha )T]^2L_H^2\Vert x_n - \varpi ^*\Vert \qquad \text {(by (42))}\nonumber \\&= \left( 1 - \beta _n\left( 1 - [\nu (\alpha ) + \hslash (\alpha )T]^2L_H^2\right) \right) \Vert x_n - \varpi ^*\Vert . \end{aligned}$$
(43)

Using (7), (42) and (43), and noting that the computation in (42) in fact shows \(\Vert \mathscr {G}u - \mathscr {G}v\Vert \le k\Vert u - v\Vert\) for all \(u, v \in \mathbb {H}\), where \(k := [\nu (\alpha ) + \hslash (\alpha )T]L_H\), we have

$$\begin{aligned} \Vert y_n - \varpi ^*\Vert&= \Vert \mathscr {G}[(1 - \alpha _n)\mathscr {G}w_n + \alpha _n\mathscr {G}z_n] - \mathscr {G}\varpi ^*\Vert \nonumber \\&\le k\Vert (1 - \alpha _n)\mathscr {G}w_n + \alpha _n\mathscr {G}z_n - \varpi ^*\Vert \nonumber \\&\le k\big [(1 - \alpha _n)\Vert \mathscr {G}w_n - \mathscr {G}\varpi ^*\Vert + \alpha _n\Vert \mathscr {G}z_n - \mathscr {G}\varpi ^*\Vert \big ]\nonumber \\&\le k\big [(1 - \alpha _n)k\Vert w_n - \varpi ^*\Vert + \alpha _nk\Vert z_n - \varpi ^*\Vert \big ]\nonumber \\&= k^2(1 - \alpha _n)\Vert w_n - \varpi ^*\Vert + k^2\alpha _n\Vert z_n - \varpi ^*\Vert \nonumber \\&\le k^3(1 - \alpha _n)\Vert x_n - \varpi ^*\Vert + k^2\alpha _n\left( 1 - \beta _n(1 - k^2)\right) \Vert x_n - \varpi ^*\Vert \qquad \text {(by (42) and (43))}\nonumber \\&= \left\{ k^3(1 - \alpha _n) + k^2\alpha _n\left( 1 - \beta _n(1 - k^2)\right) \right\} \Vert x_n - \varpi ^*\Vert . \end{aligned}$$
(44)

Finally, using (7) and (44), we have

$$\begin{aligned} \Vert x_{n+1} - \varpi ^*\Vert&= \Vert \mathscr {G}y_n - \mathscr {G}\varpi ^*\Vert \le k\Vert y_n - \varpi ^*\Vert \nonumber \\&\le \left\{ k^4(1 - \alpha _n) + k^3\alpha _n\left( 1 - \beta _n(1 - k^2)\right) \right\} \Vert x_n - \varpi ^*\Vert . \end{aligned}$$
(45)

From condition \((C_2)\) of Lemma 5.1 we have \(k = [\nu (\alpha )+T\hslash (\alpha )]L_H<1\), so that \(k^4 \le k^3 \le 1\) and \(1 - k^2 = (1 - k)(1 + k) \ge 1 - k\). Consequently,

$$\begin{aligned} \Vert x_{n+1}-\varpi ^*\Vert&\le k^3\left\{ (1 - \alpha _n) + \alpha _n\left( 1 - \beta _n(1 - k^2)\right) \right\} \Vert x_n-\varpi ^*\Vert \\&= k^3\left\{ 1 - \alpha _n\beta _n(1 - k^2)\right\} \Vert x_n-\varpi ^*\Vert \\&\le \left[ 1 - \alpha _n \beta _n \left( 1 - [\nu (\alpha ) + \hslash (\alpha )T]L_H\right) \right] \Vert x_n-\varpi ^*\Vert \end{aligned}$$

By induction, we obtain

$$\begin{aligned} \Vert x_{n+1} - \varpi ^*\Vert \le \Vert x_0 - \varpi ^*\Vert \prod _{k=0}^{n} \{1 - \alpha _k \beta _k (1 - L_H [\nu (\alpha ) + \hslash (\alpha )T]) \} \end{aligned}$$
(46)

Recall that \(\alpha _k, \beta _k \in (0, 1)\) for all \(k \in \mathbb {N}\), so that, by condition \((C_2)\) of Lemma 5.1,

$$\begin{aligned} 1 - \alpha _k \beta _k (1 - L_H [\nu (\alpha ) + \hslash (\alpha )T]) < 1. \end{aligned}$$

Since \(\sum\nolimits _{k=0}^{\infty } \alpha _k\beta _k = \infty\) and, by an elementary inequality, \(1 - x \le e^{-x}\) for all \(x \ge 0\), we infer from (46) that

$$\begin{aligned} \Vert x_{n+1} - \varpi ^*\Vert \le \Vert x_0 - \varpi ^*\Vert e^{-(1 - L_H[\nu (\alpha ) + \hslash (\alpha )T]) \sum _{k=0}^{n} \alpha _k \beta _k}. \end{aligned}$$

Hence \(\lim \nolimits _{n \rightarrow \infty } \Vert x_n - \varpi ^*\Vert = 0\). This completes the proof. \(\square\)
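The final step of the proof rests on the elementary bound \(1 - x \le e^{-x}\) together with the divergence of \(\sum \alpha _k\beta _k\). A quick numerical sanity check (ours; the constant \(c\) and the choice \(\alpha _k\beta _k = 1/(k+2)\) are illustrative) confirms that the product in (46) stays below the exponential bound and tends to zero:

```python
import math

c = 0.3                     # plays the role of 1 - L_H[nu(alpha) + hslash(alpha)T], assumed in (0, 1)
prod, partial_sum = 1.0, 0.0
bound_holds = True
for k in range(2000):
    ab = 1.0 / (k + 2)      # alpha_k * beta_k: terms in (0, 1) with a divergent series
    prod *= 1.0 - c * ab    # the product on the right of (46)
    partial_sum += ab
    # termwise 1 - x <= e^{-x} gives prod <= exp(-c * partial_sum)
    bound_holds = bound_holds and prod <= math.exp(-c * partial_sum) + 1e-15
```

Because the partial sums grow without bound, the exponential factor (and hence the product) can be made arbitrarily small, which is exactly the mechanism behind the conclusion of the theorem.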

Remark 5.2

The fixed point iterative scheme (7) transforms the SEIR fractional system into an integral operator problem. By repeatedly applying the operator to an initial guess, it produces a sequence that converges to the solution of the fractional model.
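For intuition, here is a toy illustration (ours) of that process. We assume the four-stage form of scheme (7) used in the proof — \(w_n = \mathscr {G}x_n\), \(z_n = (1-\beta _n)x_n + \beta _n\mathscr {G}w_n\), \(y_n = \mathscr {G}[(1-\alpha _n)\mathscr {G}w_n + \alpha _n\mathscr {G}z_n]\), \(x_{n+1} = \mathscr {G}y_n\) — and apply it to the scalar contraction \(\mathscr {G}x = \cos x\) instead of the SEIR integral operator:

```python
import math

def scheme_step(G, x, a_n, b_n):
    """One step of the four-stage iteration (form assumed from the proof of Theorem 5.1)."""
    w = G(x)                                  # w_n = G x_n
    z = (1 - b_n) * x + b_n * G(w)            # z_n = (1 - beta_n) x_n + beta_n G w_n
    y = G((1 - a_n) * G(w) + a_n * G(z))      # y_n = G[(1 - alpha_n) G w_n + alpha_n G z_n]
    return G(y)                               # x_{n+1} = G y_n

G = math.cos          # toy contraction on [-1, 1]; its fixed point is the Dottie number
x = 1.0
for n in range(30):
    x = scheme_step(G, x, 0.5, 0.5)           # alpha_n = beta_n = 0.5
```

After a handful of steps the iterate agrees with the fixed point \(\cos p = p \approx 0.7390851\) to machine precision; for the SEIR model, \(\mathscr {G}\) would instead be the integral operator (41) acting on \(\mathbb {H}\).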

Conclusion

In this study, we introduced a new fixed point iterative scheme for generalized \((\alpha ,\beta )\)-nonexpansive mappings in Banach spaces. The proposed scheme effectively generalizes and extends several existing iterative methods in the literature, including Picard and those defined in (2), (3), (4), (5), and (6), thereby demonstrating its broad applicability and improved convergence behavior. We rigorously established both weak and strong convergence theorems under suitable conditions and provided a comparative rate of convergence analysis through a numerical example, supported by both graphical and tabular representations.

Additionally, we examined the data dependence of the iterative scheme and proved its \(\mathscr {G}\)-stability and almost \(\mathscr {G}\)-stability, thereby confirming the robustness of the method in addressing perturbations and iterative uncertainties. Two significant applications were also presented to highlight the utility of the developed scheme: one involving the solution of nonlinear boundary value problems via Green’s function, and another applying the scheme to a fractional-order SEIR epidemic model formulated through the Caputo fractional derivative.

These results underscore the theoretical and practical significance of the new iterative method in fixed point theory and nonlinear analysis. Future research directions may include:

  1. 1.

    Further generalization of the proposed scheme to multivalued or non-self mappings;

  2. 2.

    Applications of the scheme to optimal control problems and machine learning frameworks;

  3. 3.

    Extending the convergence analysis to more generalized metric or modular function spaces;

  4. 4.

    Investigating stochastic variants of the scheme for uncertain and data-driven systems;

  5. 5.

    Exploring hybridization of the method with numerical optimization techniques for broader computational efficiency.

Open questions. While the results in this paper establish a solid foundation, several intriguing questions remain open for exploration:

  • Can the proposed scheme be adapted to settings where the Banach space is replaced by a modular, probabilistic, or fuzzy normed space?

  • How does the scheme behave under high-dimensional or infinite-dimensional optimization problems, such as those arising in functional data analysis?

  • Is it possible to establish explicit error estimates and bounds for the rate of convergence beyond asymptotic comparisons?

  • Can the stability results be extended to cover broader perturbation classes, such as stochastic noise or adversarial disturbances?

  • How effectively can the scheme be integrated into computational platforms for real-time simulation of dynamical systems, such as epidemic or control models?