Introduction

PID controllers are widely employed in manufacturing industries for process control because of their effectiveness, robustness, and durability [1]. A PID controller regulates system stability, settling time, and response error through three parameters: proportional gain (\({k}_{p}\)), integral gain (\({k}_{i}\)), and derivative gain (\({k}_{d}\)). Proper tuning is critical in industrial settings, as it minimizes settling time, steady-state error, overshoot, and rise time for efficient performance. PID controllers are valued for maintaining setpoints despite disturbances, with broad applications from chemical reactors to motor speed control. Their simplicity makes them widely implementable and foundational for advanced control methods. The proportional term stabilizes the system, the integral term eliminates steady-state error, and the derivative term damps the rate of change of the error, reducing overshoot and settling time [1].

The main limitation of PID controllers lies in optimally tuning the proportional (\({k}_{p}\)), integral (\({k}_{i}\)), and derivative (\({k}_{d}\)) gains, a complex and time-consuming task in nonlinear systems where poor tuning can cause instability. The tuning of PID controller parameters can be classified into three main categories: analytical methods, rule-based methods, and numerical methods [2]. The Ziegler–Nichols (ZN) method, the most common analytical approach for PID tuning, is widely used but often fails to deliver optimal performance [3].

Rule-based methods, which use heuristic or empirical rules, are often derived from experience or experimental data, such as fuzzy logic tuning and expert systems [4]. Rule-based and analytical PID tuning methods are limited by simplified models, fixed parameters, and reliance on heuristics or expertise, making them less accurate, adaptable, and consistent for real-world systems.

Numerical methods for PID tuning, such as meta-heuristic algorithms, are computational optimization techniques that systematically search for the best controller parameters by minimizing a defined performance criterion [5]. Numerical methods overcome the limits of analytical and rule-based models by efficiently solving complex, nonlinear, and time-varying systems. Meta-heuristic algorithms enhance this by stochastically exploring the search space to quickly find near-optimal solutions. Meta-heuristic algorithms imitate search strategies inspired by physics, human behavior, or nature. For example, the Sine–Cosine Optimization algorithm (SCA) [6] explores the search space by guiding agents toward the best-known regions using sine and cosine functions. Similarly, the Particle Swarm Optimization algorithm (PSO) [7] mimics the flocking behavior of birds in nature to optimize solutions. Metaheuristic algorithms have demonstrated significant effectiveness in addressing a wide range of complex engineering optimization problems across various domains, including bioinformatics [8,9,10], electric motor design [11], solar energy systems [12,13,14], and passive suspension system optimization [15], among others [16]. Numerous metaheuristic techniques have been proposed in the literature for controller design.

As shown in Fig. 1, the meta-heuristic algorithm optimizes the parameters of the three PID controllers to minimize the rise time, overshoot, settling time, or the difference between the desired response (r(t)) and the actual response (y(t)). Eq (1) describes the integral of the absolute error (IAE), which represents the absolute difference between r(t) and y(t).

Fig. 1
figure 1

PID parameters tuning using a meta-heuristic algorithm [17]

$$IAE= {\int }_{0}^{\infty }\left|y\left(t\right)-r(t)\right| dt$$
(1)
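As a concrete illustration, the sketch below evaluates Eq. (1) numerically for sampled signals; the step size, simulation horizon, and example response are assumptions for demonstration, not values taken from this work.

```python
import numpy as np

def iae(r, y, dt):
    """Integral of Absolute Error (Eq. 1), approximated as a Riemann sum
    over uniformly sampled reference r(t) and response y(t)."""
    return float(np.sum(np.abs(np.asarray(y) - np.asarray(r))) * dt)

# Example: a unit-step reference tracked by a hypothetical first-order response.
t = np.arange(0.0, 10.0, 0.01)        # assumed horizon and sampling step
r = np.ones_like(t)                   # desired response r(t)
y = 1.0 - np.exp(-2.0 * t)            # illustrative actual response y(t)
print(f"IAE = {iae(r, y, dt=0.01):.4f}")
```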

In the literature, various enhancements have been proposed for meta-heuristic approaches to optimize PID controller design. In [18], a PIDN controller was optimized with the Artificial Rabbits Optimization (ARO) algorithm for electric furnace temperature control, using adaptive tuning to improve accuracy and reduce overshoot. Comparative studies on DC motor control show that metaheuristic approaches, particularly Genetic Algorithms, outperform alternatives such as GWO, PSO, and ACO in improving rise time, settling time, and mean square error [19]. A study applied the Dung Beetle Optimizer (DBO) and Ant-Lion Optimizer (ALO) to cascaded PID and FOPID controllers for Switched Reluctance Motor (SRM) speed control, achieving faster convergence and lower computational complexity than conventional methods [20]. In [21], a PID-F controller optimized with the Spider Wasp Optimizer (SWO) was proposed for temperature control in continuous stirred tank reactors (CSTRs), addressing challenges of nonlinearity and time delays. In addition, several other meta-heuristic algorithms have been applied to optimize PID controller parameters to enhance the performance of DC motors, including Invasive Weed Optimization (IWO) [22], the Flower Pollination Algorithm (FPA) [23], the Firefly Algorithm [24], and Grey Wolf Optimization (GWO) [25]. For other applications, such as voltage regulator control, the Teaching Learning Based Optimization (TLBO) algorithm has been employed to optimize PID controller parameters [26]. Additionally, Differential Evolution (DE) and its enhanced variant, PSODE, have been used to optimize PID settings for three liquid level tank systems [27]. Other approaches include Constrained Particle Swarm Optimization (CPSO) [28], dynamic Particle Swarm Optimization (dPSO) [29], the opposition-based Henry Gas Solubility optimization algorithm (OBL-HGS) [30], and the improved Whale Optimization Algorithm (IWOA) [31]. According to the No-Free-Lunch (NFL) theorem [32], no single optimization algorithm can achieve optimal performance across all types of engineering problems. Consequently, enhanced variants of the previously listed metaheuristic algorithms have been developed to improve PID controller performance.

This research study addresses the problem of optimizing PID controller parameters using the Artificial Satellite Search Algorithm (ASSA) [33], under the assumption of ideal, noise-free dynamic systems. ASSA is a new physics-based metaheuristic inspired by satellite dynamics. The ASSA models candidate solutions as satellites that adjust their positions to find optimal solutions by simulating both Medium Earth Orbit (MEO) and Low Earth Orbit (LEO) trajectories. This dual-orbit approach enhances the algorithm’s ability to explore and exploit the search space efficiently. The algorithm models the equilibrium between gravitational and centrifugal forces that is crucial for stable satellite orbits. Within this framework, fundamental operators such as gravitational force, mass, position, and velocity dictate satellite trajectories around Earth. Consequently, in the ASSA, each candidate solution (satellite) dynamically forms a unique relationship with Earth over time, thereby promoting more efficient exploration and exploitation of the search space.

ASSA achieves strong exploration capability through a well-integrated design combining several advanced mechanisms. It uses a logistic chaotic map for diverse initialization, adaptive parameters (β and γ) for dynamic, non-linear orbital fluctuations, and a time-decaying gravitational constant to shift smoothly from exploration to exploitation. Additionally, quantum-inspired qubits introduce probabilistic updates, and an orbit control mechanism alternates between global (MEO) and local (LEO) searches, ensuring broad coverage, flexibility, and escape from local optima, which makes ASSA highly effective for complex and high-dimensional optimization problems.

Despite its strengths, ASSA has two key limitations. First, its strong focus on exploration—driven by chaotic maps, adaptive parameters (β and γ), qubits, and orbit control—can reduce its ability to exploit promising regions, potentially slowing convergence in problems that require fine-tuning. Second, its greedy selection strategy, which always accepts better solutions, limits flexibility and increases the risk of premature convergence, as it cannot escape local optima like other algorithms that allow occasional acceptance of worse solutions.

To address the limitations of the original ASSA, an enhanced methodology is proposed by incorporating three key components: a memory mechanism, an evolutionary operator, and a stochastic local search; these mechanisms were previously applied successfully to enhance the grey wolf optimizer [34]. The primary objective of these enhancements is to improve the balance between exploration and exploitation during the optimization process. The memory mechanism maintains a separate population that stores the best solutions found throughout the search, ensuring valuable solutions are not lost and enabling a more focused search around promising areas. Simultaneously, the evolutionary operator (based on Differential Evolution principles) guides the population with adaptive mutation and crossover, encouraging diversity in the early stages and fine-tuning solutions in later iterations through a dynamically controlled scaling factor.

In addition, a stochastic local search is employed to intensively refine high-quality solutions within the memory population. By adaptively generating trial solutions around the nearest neighbors of selected individuals, the local search further strengthens the algorithm’s exploitation capability while stabilizing convergence behavior. Together, these mechanisms effectively overcome the drawbacks of greedy selection and premature convergence, often observed in basic metaheuristic frameworks. The integration of these strategies results in a more robust and accurate optimization method, making the enhanced ASSA highly suitable for solving complex, multi-modal problems such as PID controller parameter tuning.

The key contributions of this work are outlined as follows:

  1. (1)

    ASSA optimized PID controller parameters for improved control performance.

  2. (2)

    MEASSA enhances ASSA with memory, evolutionary, and local search strategies.

  3. (3)

    MEASSA was benchmarked on three control systems against leading optimization algorithms.

The remainder of this paper is structured as follows: Sect. 2 provides the Artificial Satellite Search Algorithm. Section 3 describes the proposed enhanced version (MEASSA). Experimental results and a comprehensive discussion are presented in Sect. 4. Finally, the conclusions and key findings of the proposed work are summarized in Sect. 5.

Artificial satellite search algorithm (ASSA)

The ASSA simulates fundamental physics principles by establishing hypothetical orbits for Earth and its satellites to represent the search space [35]. In this model, candidate solutions (satellites) experience varying conditions relative to Earth, which symbolizes the optimal solution, across different time instances. This dynamic interaction ultimately facilitates more efficient exploration and exploitation of the search space. The ASSA employs two primary satellite strategies for navigating the search space: MEO search, where satellites are positioned distantly from Earth to facilitate exploration, and LEO search, placing satellites closer to Earth for effective exploitation. Figure 2 illustrates how factors such as a satellite’s position and mass, its gravitational attraction to Earth, and its orbital velocity collectively influence the satellite’s trajectory relative to the optimal solution (represented by Earth).

Fig. 2
figure 2

The simulation of satellite-like movements in the optimization domain [33]

In ASSA, each candidate solution, represented as a satellite, traverses an elliptical orbit with the “Earth” (representing the optimal solution) at one focus. Like other population-based meta-heuristics, ASSA begins with an initial set of satellite solutions whose fitness is evaluated. The algorithm then iterates, refining solutions based on an objective function, with the best solution in each iteration serving as the current Earth. Significantly, the distance of each satellite from Earth is dynamically adjusted to reflect the passage of time within the optimization process.

ASSA’s mathematical model

The following subsections present the mathematical model of ASSA.

Initialization process

Traditional random initialization of satellite populations often leads to slow convergence and a heightened risk of entrapment in local optima due to insufficient initial diversity. To overcome these drawbacks and reduce the likelihood of premature convergence, this study replaces the conventional random generation of satellites with a logistic chaotic map [36], as formulated in Eq. (2).

$$S_{i + 1} = \omega * S_{i} *\left( {1 - S_{i} } \right), 0 \le S_{i} \le 1$$
(2)

where \(\omega\) represents a constant parameter, \({S}_{i}\) denotes the logistic chaotic position value corresponding to the ith satellite, and \({S}_{0}\in (0,1)\). Following the fitness evaluation of the initial population, ASSA designates the global best solution (\({S}_{best}\)), functionally represented as the Earth (\({S}_{E}\)).
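A minimal sketch of this initialization is shown below, assuming \(\omega =4\) (the fully chaotic regime), a short warm-up of the map, and a linear scaling of the chaotic values to the search-space bounds; these choices are illustrative rather than prescribed by ASSA.

```python
import numpy as np

def chaotic_init(n_satellites, dim, lower, upper, omega=4.0, seed=None):
    """Population initialization with the logistic chaotic map (Eq. 2).
    omega = 4 is the fully chaotic regime (assumed); the chaotic values in
    (0, 1) are then scaled to the search-space bounds."""
    rng = np.random.default_rng(seed)
    s = rng.uniform(0.05, 0.95, size=(n_satellites, dim))  # S0 in (0, 1)
    for _ in range(50):                                     # iterate the map to de-correlate
        s = omega * s * (1.0 - s)
    return lower + s * (upper - lower)

pop = chaotic_init(n_satellites=30, dim=3, lower=0.0, upper=10.0, seed=1)
print(pop.shape)  # (30, 3) candidate PID gain vectors
```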

Gravitational force

Eq (3) provides the calculation for the gravitational force between a satellite (\({S}_{i}\)) and the Earth (\({S}_{best}\)).

$${F}_{i(t)}= {G}_{(t)}*\left(\frac{{M}_{i}*{M}_{E}}{{\overline{R} }^{2}+\varepsilon }\right)*{r}_{(\text{0,1})}$$
(3)

G(t) is an exponentially decaying function, defined by Eq (4), which controls search precision over time (t). \({M}_{i}\) and \({M}_{E}\) denote the inertia masses of the satellite \({S}_{i}\) and Earth \({S}_{E}\), respectively, calculated using Eq (5) and Eq (6). \(\overline{R }\) represents the Euclidean distance between \({S}_{i}\) and \({S}_{E}\), computed via Eq (7) and Eq (8).

$${G}_{(t)}={G}_{0}*{e}^{(-\alpha *\left(\frac{t}{T}\right))}$$
(4)

The constants \({G}_{0}\) and \(\alpha\) are fixed parameters whose values are tuned via a sensitivity analysis performed during the numerical experiments.

$${M}_{i(t)}= \frac{{fit}_{i(t)}-{worst}_{(t)}}{\sum_{i=1}^{NP}({fit}_{i\left(t\right)}-{worst}_{\left(t\right)})}$$
(5)
$${M}_{E(t)}=\left(\frac{{best}_{i\left(t\right)}-{worst}_{\left(t\right)}}{\sum_{i=1}^{NP}\left({fit}_{i\left(t\right)}-{worst}_{\left(t\right)}\right)}\right)*{r}_{(\text{0,1})}$$
(6)

Here, \({fit}_{i(t)}\) denotes the fitness value of satellite i. In the context of minimization problems, the \({worst}_{(t)}\) refers to the maximum fitness value, while \({best}_{i\left(t\right)}\) represents the minimum fitness value.

$${R}_{(t)}= \sqrt{\sum_{j=1}^{dim}{({S}_{best}-{S}_{i\left(t\right)})}^{2}}$$
(7)
$${\overline{R} }_{(t)}=\frac{{R}_{i(t)}-up({R}_{i\left(t\right)})}{up\left({R}_{i\left(t\right)}\right)-low({R}_{i\left(t\right)})}$$
(8)

Here, \(dim\) represents the problem’s dimension, while \(up({R}_{i\left(t\right)})\) and \(low({R}_{i\left(t\right)})\) denote the upper and lower bounds of the Euclidean distance between satellite \({S}_{i}\) and Earth \({S}_{E}\), respectively.
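The snippet below sketches how Eqs. (3)–(8) could be combined into a single force evaluation; the values of \({G}_{0}\), \(\alpha\), and \(\varepsilon\), and the simplified min–max scaling used for \(\overline{R}\), are assumptions rather than settings reported by the original algorithm.

```python
import numpy as np

def gravitational_force(S_i, S_best, fit_i, fit, t, T, lb, ub,
                        G0=100.0, alpha=20.0, eps=1e-12):
    """Hedged sketch of Eqs. (3)-(8): time-decaying pull of the Earth (best
    solution) on satellite i, for a minimization problem."""
    rng = np.random.default_rng()
    fit = np.asarray(fit, dtype=float)
    G_t = G0 * np.exp(-alpha * t / T)                         # Eq. (4): decaying gravitational constant
    worst, best = fit.max(), fit.min()
    denom = np.sum(fit - worst) - eps                         # keep the denominator non-zero
    M_i = (fit_i - worst) / denom                             # Eq. (5): satellite inertia mass
    M_E = (best - worst) / denom * rng.random()               # Eq. (6): Earth inertia mass
    R = np.linalg.norm(np.asarray(S_best) - np.asarray(S_i))  # Eq. (7): Euclidean distance
    R_bar = R / (np.linalg.norm(np.asarray(ub) - np.asarray(lb)) + eps)  # simplified scaling of Eq. (8)
    return G_t * (M_i * M_E) / (R_bar ** 2 + eps) * rng.random()         # Eq. (3)
```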

Medium earth orbit (MEO) search

In the MEO phase, satellites are positioned at a considerable distance from Earth to enable efficient coverage of the entire search space. To enhance the exploration capabilities during this phase, the ASSA incorporates an adaptive factor (β) that simulates the natural variation in the satellite-to-Earth distance. This dynamic behavior evolves over time and is quantitatively defined by Eq (9).

$$\beta ={({e}^{{r}_{\left(\text{0,1}\right)}*\gamma })}^{-1}$$
(9)

where γ is a linearly decreasing control parameter ranging from 1 to −2, which is computed according to Eq (10)

$$\gamma =1+({r}_{\left(\text{0,1}\right)}*(\updelta -1))$$
(10)

where δ is a cyclic control parameter that gradually decreases from −1 to −2 over τ cycles throughout the entire optimization process, and is determined using Eq (11)

$$\delta =-1-(\frac{t\%\frac{T}{\tau }}{\tau })$$
(11)

where t denotes the current iteration number, T represents the maximum number of iterations, and τ indicates the total number of cycles within the entire optimization process.

Figure 3 illustrates the fluctuations of the adaptive factor across iterations. Higher values of the δ parameter correspond to broader exploration regions covered by a satellite, whereas lower δ values indicate a more focused search in the vicinity of the current best solution.

Fig. 3
figure 3

Simulation of the adaptive factors β and γ conducted over two independent runs [33]

This principle is further enhanced by allowing the satellite-to-Earth distance to fluctuate randomly, as defined by Eq (12), to improve the exploration capability of ASSA when calculating the satellite’s position relative to the Earth.

$${S}_{Mi(t+1)}={S}_{Mi(t)}*{q}_{\left[\text{0,1}\right]}+\left(1-{q}_{\left[\text{0,1}\right]}\right)*\left({S}_{mean\left(t\right)}+\beta *\left({S}_{mean\left(t\right)}-{S}_{Mi(t)}\right)\right)$$
(12)

where \({S}_{mean\left(t\right)}\) represents the average of three solutions: \({S}_{i}\), \({S}_{best}\), and \({S}_{a}\); \({S}_{a}\) is a randomly selected solution from the population; β is an adaptive factor; and q [0, 1] denotes a qubit that transitions between states (0) and (1), contributing to the enhancement of the optimization process. According to the principles of quantum mechanics, a qubit can exist in a superposition of states 0 and 1 simultaneously. In the MEO search phase, the updated solution \({S}_{Mi(t+1)}\) is evaluated against the current solution, and the superior one is selected as the new global best solution \({S}_{best}\).
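A hedged sketch of the adaptive factor (Eqs. (9)–(11)) and the MEO update (Eq. (12)) follows; the number of cycles τ and the realization of the qubit as a fair 0/1 coin are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng()

def beta_factor(t, T, tau=4):
    """Adaptive factor beta of Eqs. (9)-(11); tau (number of cycles) is assumed."""
    delta = -1.0 - ((t % (T / tau)) / tau)        # Eq. (11): cyclic control parameter
    gamma = 1.0 + rng.random() * (delta - 1.0)    # Eq. (10)
    return np.exp(rng.random() * gamma) ** -1.0   # Eq. (9)

def meo_update(S_i, S_best, population, t, T):
    """Sketch of the MEO position update (Eq. 12)."""
    S_a = population[rng.integers(len(population))]   # random peer satellite
    S_mean = (S_i + S_best + S_a) / 3.0               # mean of the three solutions
    q = float(rng.random() < 0.5)                     # qubit collapsed to {0, 1} (assumed realization)
    beta = beta_factor(t, T)
    return S_i * q + (1.0 - q) * (S_mean + beta * (S_mean - S_i))
```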

Low earth orbit (LEO) search

LEO artificial satellite search is influenced by MEO satellites, which have a broader coverage area. The velocity of a satellite depends on its location relative to the Earth. Kepler’s second law (the law of equal areas) and Kepler’s third law (the law of harmonies), outlined in Eq (13) and Eq (14), govern the principles that determine the initial velocity and semimajor axis (ai) of the ith satellite orbit at time t.

$${V}_{i(t)}=\sqrt{{G}_{0}*\left({M}_{i}+{M}_{E}\right)*\left|\frac{2}{R+\varepsilon }-\frac{1}{{a}_{i}+\varepsilon }\right|}$$
(13)
$${a}_{i(t)}=\sqrt[3]{\frac{{{T}_{i}}^{2}*{G}_{\left(t\right)}*({M}_{i}+{M}_{E})}{4{\pi }^{2}}}*{r}_{(\text{0,1})}$$
(14)

where \({T}_{i}\) denotes the orbital period of the ith satellite, randomly generated according to a normal distribution; and \({r}_{(\text{0,1})}\) is a uniformly distributed random value in the range (0,1), introduced to enhance the diversity of the semimajor axis. The velocity of LEO satellites during their movement away from the Earth is determined by scaling the initial velocity by the distance between a randomly selected solution and the current solution, to gradually reduce the satellite’s velocity. However, a major limitation is the lack of diversity among solutions, which may restrict the satellites’ ability to escape local optima over time, as the current solution continues to evolve. To address this limitation, ASSA incorporates a step size derived from the range between the lower and upper bounds of the optimization problem.

Additionally, when satellites approach the Earth, their velocity is computed by multiplying the initial velocity by the distance between the current solution and a randomly selected solution. This mechanism enhances the diversification of ASSA’s search strategies. As a result, although this approach promotes exploration, it may also lead to reduced population diversity over time, potentially causing a decrease in velocity during the optimization process. To preserve satellite velocity throughout the optimization process and prevent stagnation in local minima, an additional step is incorporated based on the distance between the lower and upper bounds of the search space. This enhancement is implemented using Eq. (15) and Eq (16).

$${v}_{i(t)}={r}_{(\text{0,1})}*{V}_{i\left(t\right)}*\left({S}_{a\left(t\right)}-{S}_{Li\left(t\right)}\right)+{r}_{\left(\text{0,1}\right)}*q\_dir*\left(1-{\overline{R}}_{\left(t\right)}\right)*({U}_{B}-{L}_{B})$$
(15)
$${v}_{i(t)}={r}_{(\text{0,1})}*{V}_{i\left(t\right)}*\left({S}_{Li\left(t\right)}-{S}_{b\left(t\right)}\right)+\left(1-{r}_{\left(\text{0,1}\right)}\right)*{V}_{i\left(t\right)}*\left({S}_{a\left(t\right)}-{S}_{b\left(t\right)}\right)+ {r}_{\left(\text{0,1}\right)}*q*\left(1-{\overline{R}}_{\left(t\right)}\right)*({U}_{B}-{L}_{B})$$
(16)

where \({S}_{b}\) and \({S}_{a}\) denote two solutions randomly selected from the population; \({U}_{B}\) and \({L}_{B}\) represent the upper and lower bounds of the search space, respectively; and \(q\_dir\) is the qubit direction operator employed in Eq (17) to enhance the orbital movement of satellites. This operator is utilized to alter the search direction, thereby increasing the likelihood that satellites effectively scan the search space. Consistent with the principles of quantum mechanics, the qubit can exist in a superposition of both 0 and 1 states simultaneously, enabling a more flexible and probabilistic exploration behavior.

$${q}_{dir} = {q}_{\left[\text{0,1}\right]}*\underset{dir}{\to }$$
(17)

where \({q}_{\left[\text{0,1}\right]}\) represents a qubit capable of transitioning between the quantum states (0) and (1), to enhance the optimization process by introducing probabilistic behavior and promoting exploration. This process involves comparing two random values: if the first \({r}_{(\text{0,1})}\) is greater than the second, the qubit is assigned a value of 1; otherwise, it is set to 0. The symbol \(\underset{dir}{\to }\) represents the rotational direction of a satellite around the Earth, which can be either counterclockwise or clockwise. If \({r}_{(\text{0,1})}\) < 0.5, \(\underset{dir}{\to }\) is set to 1 (indicating counterclockwise rotation); otherwise, it is set to −1 (indicating clockwise rotation).

Assigning a new position to a satellite involves an additional step size, calculated as the product of the distance between the current satellite and the Earth and the gravitational force. This adjustment enables ASSA to effectively exploit the regions surrounding the current best solution, leading to improved performance with fewer function evaluations. Typically, satellite velocity serves as the primary search operator in ASSA when a satellite is moving away from the Earth. However, this velocity is influenced by the Earth’s gravitational force, which aids in the fine-tuned exploitation of areas near the optimal solution. As a satellite approaches Earth, its velocity increases substantially, enabling it to counteract the intensifying gravitational pull. In this scenario, velocity acts as a mechanism to escape local optima, particularly when the best-so-far solution (i.e., the Earth) corresponds to a local minimum. Therefore, the gravitational attraction of the Earth functions as an exploitation operator, guiding ASSA to challenge the current best solution in pursuit of potentially superior alternatives. This behavior is mathematically described in Eq (18).

$${S}_{Li\left(t+1\right)}={S}_{Li\left(t\right)}+\underset{dir}{\to }* {v}_{\left(t\right)}+\left(1-{q}_{\left[\text{0,1}\right]}\right)*{F}_{i\left(t\right)}* \left({S}_{a\left(t\right)}-{S}_{Li\left(t\right)}\right)$$
(18)
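The sketch below illustrates the qubit direction operator (Eq. (17)) and the LEO position update (Eq. (18)), taking the velocity \(v_{i}\) from Eq. (15) or (16) and the force \(F_{i}\) from Eq. (3) as given; the 0/1 realization of the qubits is again an assumption.

```python
import numpy as np

def qubit_direction(rng=np.random.default_rng()):
    """Qubit direction operator (Eq. 17): a {0,1} qubit times a rotation
    direction of +1 (counter-clockwise) or -1 (clockwise)."""
    q = 1.0 if rng.random() > rng.random() else 0.0
    direction = 1.0 if rng.random() < 0.5 else -1.0
    return q * direction, direction

def leo_update(S_i, v_i, F_i, population, rng=np.random.default_rng()):
    """Sketch of the LEO position update (Eq. 18); v_i comes from Eq. (15)
    or (16) and F_i from Eq. (3)."""
    _, direction = qubit_direction(rng)
    S_a = population[rng.integers(len(population))]   # random peer satellite
    q = float(rng.random() < 0.5)                     # qubit weighting the gravity term
    return S_i + direction * v_i + (1.0 - q) * F_i * (S_a - S_i)
```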

Orbit control mechanism

Satellites operate concurrently within the same orbit and across different orbits to collaboratively search for the desired target solution. Initially, MEO satellites determine their positions within the MEO orbit, while LEO satellites actively search for improved solutions. To simulate this coordinated behavior, an orbit control mechanism, denoted as \({c}_{iter}\), is introduced. This variable varies randomly between 0 and 1 over successive iterations, ensuring that each search cycle is executed efficiently. The orbit control function is mathematically defined in Eq (19), and the variation in the orbit control function across progressive iterations is illustrated graphically in Fig. 4.

$${c}_{iter}=\left|\left(2*{r}_{\left(\text{0,1}\right)}-1\right)*(\frac{t}{T})\right|$$
(19)

where t denotes the current iteration number, and T represents the total number of iterations in the optimization process.

Fig. 4
figure 4

Illustration of the orbit control computation [33]

The main advantages of ASSA, which motivate us to use it, are as follows:

  1.

    Diversity of Generated Solutions Using Logistic Chaotic Map

The initialization phase is crucial in metaheuristic algorithms. ASSA enhances it by using a logistic chaotic map Eq (2) instead of random initialization, introducing structured randomness that ensures diverse satellite positions. This boosts early exploration and reduces the risk of premature convergence and local optima entrapment.

  2.

    Dynamic Adjustment of the Gravitational Constant over time

Eq. (4) in ASSA dynamically reduces the gravitational constant over time, controlling the attraction between satellites and the global best solution. This exponential decay enables a strategic shift from exploration in early iterations to exploitation later, enhancing convergence by balancing global and local search.

  3.

    Adaptive parameters β and γ

In the MEO search phase, ASSA uses adaptive parameters β and γ (from Eq (9) and Eq (10)) to simulate orbital fluctuations. These parameters control how far a satellite deviates from the average of itself, a random peer, and the best solution. This cyclic, non-linear adaptation enhances exploration diversity and dynamically adjusts the search intensity based on progress, preventing stagnation.

  4.

    Incorporation of quantum-inspired principles

A key feature of ASSA is its use of quantum-inspired qubits (Eq. 12), which can exist in superposition, enabling probabilistic position updates. The qubit value q [0,1] controls the influence of current, peer, and best solutions. This stochastic behavior enhances exploration and helps the algorithm escape local optima by encouraging diverse and flexible search paths.

  5.

    Strong Exploration Capability

ASSA’s strong exploration ability stems from its integrated design: a chaotic map for diverse initialization, time-varying gravitational control, adaptive parameters (β and γ), and an orbit control mechanism that switches between global (MEO) and local (LEO) search. Combined with quantum-inspired qubits, these components ensure high diversity, effective local escape, and broad search coverage—ideal for complex, high-dimensional problems.

However, the main drawbacks of ASSA are as follows:

  1.

    High Diversity May Limit Exploitation

ASSA’s emphasis on exploration (through chaotic maps, adaptive parameters (β and γ), qubits, and orbit control) can limit its ability to exploit. While these mechanisms help escape local optima, they may hinder convergence in problems requiring fine-tuning. Excessive variation may lead the algorithm to continue exploring when focused exploitation would yield faster and more precise results.

  2.

    Greedy Selection Strategy

Another limitation of ASSA is its greedy selection strategy, where it always accepts better solutions without allowing worse ones. While the greedy strategy in metaheuristics helps accelerate convergence by always accepting better solutions, it often leads to significant drawbacks such as premature convergence, reduced population diversity, and limited exploration of the search space. By focusing solely on immediate improvement, the algorithm may get trapped in local optima, especially in complex or multi-modal landscapes, and lacks the flexibility to explore suboptimal regions that could lead to better solutions later.

The proposed MEASSA

To overcome the main drawbacks of ASSA, a memory mechanism for preserving the best historical solutions, an evolutionary operator, and a stochastic local search are embedded, as described in the following subsections.

Evolutionary operators

The evolutionary operator is inspired by the mutation and crossover mechanisms of DE, and it is applied to the explorer swarm (the main working population). Its main idea is to evolve individuals by combining information with the best solution and other members of the population. The evolutionary operator implements the DE/best/1/bin strategy.

The mutation operator is applied according to Eq (20)

$${{V}_{j}}^{(t+1)}={S}_{j}+ F ({S}_{best}-{S}_{j})$$
(20)

where \({S}_{best}\) and \({S}_{j}\) represent the best solution and a randomly selected solution, respectively, and \(F\), as defined in Eq (21), is a dynamic scaling factor developed to enhance the balance between the exploration and exploitation phases.

$$F={F}_{min}+\left({F}_{max}-{F}_{min}\right)\frac{T-t}{T}$$
(21)

where \({F}_{max}\) and \({F}_{min}\) represent the maximum and minimum values of \(F\), and \(T\) and \(t\) denote the total number of iterations and the current iteration, respectively.

The crossover operator is performed by mixing \({V}_{j}\) and \({S}_{j}\) as shown in Eq (22)

$${{U}_{j}}^{(t+1)}= \left\{\begin{array}{ll}{{V}_{j}}^{(t+1)}, & \text{if } rand\le Pc \\ {{S}_{j}}^{\left(t\right)}, & \text{otherwise}\end{array}\right.$$
(22)

where \({U}_{j}\) is the trial individual and \(Pc\) is the crossover probability.

The selection is performed according to Eq (23)

$${{S}_{j}}^{(t+1)}= \left\{\begin{array}{ll}{{U}_{j}}^{(t+1)}, & \text{if } func({{U}_{j}}^{\left(t+1\right)} )\le func({{S}_{j}}^{\left(t\right)} ) \\ {{S}_{j}}^{\left(t\right)}, & \text{otherwise}\end{array}\right.$$
(23)

The main benefit of embedding the evolutionary operator is improved exploration in the early stages through a high scaling factor, enabling the algorithm to escape local optima, while gradually shifting toward exploitation in later stages by narrowing the search around the best solutions. This operator maintains population diversity, prevents premature convergence, and ensures only better offspring are retained through greedy selection. By generating competitive solutions and adapting the search strategy over time, the evolutionary operator plays a critical role in balancing exploration and exploitation, resulting in faster convergence and improved solution accuracy.
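A compact sketch of this operator (Eqs. (20)–(23)) is given below, using the \(F_{max}=0.9\), \(F_{min}=0.2\), and \(Pc=0.9\) settings reported in the experimental section; the component-wise crossover mask is an implementation assumption.

```python
import numpy as np

def evolutionary_operator(pop, fitness, func, t, T,
                          F_min=0.2, F_max=0.9, Pc=0.9, rng=np.random.default_rng()):
    """Sketch of the embedded evolutionary operator (Eqs. 20-23) with the
    linearly decaying scaling factor of Eq. (21)."""
    F = F_min + (F_max - F_min) * (T - t) / T          # Eq. (21): dynamic scaling factor
    best = pop[np.argmin(fitness)]                     # current best solution S_best
    for j in range(len(pop)):
        V = pop[j] + F * (best - pop[j])               # mutation, Eq. (20)
        mask = rng.random(pop.shape[1]) <= Pc          # crossover, Eq. (22)
        U = np.where(mask, V, pop[j])
        f_U = func(U)
        if f_U <= fitness[j]:                          # greedy selection, Eq. (23)
            pop[j], fitness[j] = U, f_U
    return pop, fitness
```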

Memory mechanism

The original ASSA keeps no personal best or memory for saving the best solutions found so far during the iterations. In MEASSA, the current population acts as the explorer population, while a memory population of equal size stores the better solutions found during the iterations. After each iteration, the memory population is updated with the better value from the corresponding explorer population.

The main benefit of this mechanism is enhancing the exploitation behavior of the algorithm by preventing the loss of good solutions, promoting intensive search around promising areas, and avoiding the drawbacks of the greedy selection strategy.
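The memory update itself reduces to an element-wise comparison between the explorer and memory populations, as in the following minimal sketch (minimization assumed).

```python
import numpy as np

def update_memory(mem_pop, mem_fit, pop, fit):
    """Sketch of the memory update: after each iteration, every memory slot
    keeps the better (lower-cost) of its stored solution and the corresponding
    explorer solution."""
    improved = fit < mem_fit
    mem_pop[improved] = pop[improved]
    mem_fit[improved] = fit[improved]
    return mem_pop, mem_fit
```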

Local search

The stochastic local search significantly enhances exploitation by intensively refining the neighborhood around high-quality solutions, specifically the top 50% of solutions in the memory swarm. By generating trial solutions based on the position of a solution and its nearest neighbor, local search enables directionally adaptive local exploration, helping the algorithm to fine-tune solutions with greater accuracy.

This targeted search avoids redundant exploration of low-quality areas, making the search process more efficient and focused. Additionally, the local search stabilizes convergence in later iterations by reducing unstable behavior and complements the global exploration introduced by the evolutionary operator and memory mechanism, ultimately improving the algorithm’s ability to locate and converge on the global optimum.

The stochastic local search is performed by finding the nearest solution (Sn) to the current solution (Si) in the memory population based on the Euclidean distance. Then a temporary solution is generated according to Eq (24):

$${{S}_{TP}}^{Mem}={{S}_{i}}^{Mem}+ rand \left(\text{0,1}\right)* ({{S}_{i}}^{Mem}-{{S}_{n}}^{Mem})$$
(24)

If the cost function of the generated temporary solution is better than that of the current solution in the memory population, it replaces the current one; otherwise, it is discarded.
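A minimal sketch of this procedure, applied to the top 50% of the memory population as described above, could look as follows; the iteration order and the handling of the nearest-neighbour search are implementation assumptions.

```python
import numpy as np

def stochastic_local_search(mem_pop, mem_fit, func, rng=np.random.default_rng()):
    """Sketch of the stochastic local search (Eq. 24): refine the top 50% of
    memory solutions around their nearest neighbour (Euclidean distance)."""
    order = np.argsort(mem_fit)                           # best solutions first
    for i in order[: len(mem_pop) // 2]:
        d = np.linalg.norm(mem_pop - mem_pop[i], axis=1)
        d[i] = np.inf                                     # exclude the solution itself
        n = np.argmin(d)                                  # nearest neighbour index
        trial = mem_pop[i] + rng.random() * (mem_pop[i] - mem_pop[n])   # Eq. (24)
        f_trial = func(trial)
        if f_trial < mem_fit[i]:                          # replace only if better
            mem_pop[i], mem_fit[i] = trial, f_trial
    return mem_pop, mem_fit
```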

Algorithm 1
figure a

MEASSA

The computational complexity analysis reveals fundamental differences between the original ASSA and its enhanced version MEASSA.

For both algorithms, the key parameters governing time complexity are N (population size), T (maximum number of iterations), and D (problem dimension, fixed at D=3 for PID controller parameter optimization). The original ASSA exhibits linear complexity of O(T × N × D), which simplifies to O(T × N) given the constant dimensionality of the PID tuning problem. This efficiency stems from ASSA’s core operations, including gravitational force calculations, orbital position updates, and fitness evaluations, all scaling linearly with population size.

In contrast, MEASSA introduces three significant enhancements: evolutionary operators, memory mechanisms, and stochastic local search, which substantially alter its computational profile. The evolutionary operator contributes O(T × N × D) complexity through mutation and crossover operations, while the memory mechanism adds minimal overhead of O(T × N). However, the stochastic local search proves computationally intensive, requiring nearest-neighbor calculations for the top 50% of solutions that yield O(T × N² × D) complexity.

Consequently, MEASSA’s overall time complexity becomes O(T × N² × D), or simplified to O(T × N²), representing a quadratic relationship with population size. This complexity trade-off reflects the fundamental balance in metaheuristic optimization, where MEASSA sacrifices computational efficiency for enhanced solution quality through more intensive exploitation mechanisms, making it particularly suitable for applications where solution accuracy outweighs computational cost considerations.

Experimental results and discussion

The step response characteristics of a controlled process in the time domain include delay time, rise time, peak time, settling time, and overshoot, as illustrated in Fig. 5. These characteristics are defined as follows [1]:

  1. (1)

    Delay Time (td): The time it takes for the response to initially reach 50% of its final value.

  2. (2)

    Peak Time (tp): The duration required for the response to attain its first peak value due to overshoot.

  3. (3)

    Rise Time (tr): The time needed for the response to increase from 10% to 90% of its final value.

  4. (4)

    Settling Time (ts): The time it takes for the response to remain within a specific percentage (typically 2%) of the final value.

  5. (5)

    Overshoot (Mp): The highest value reached by the response curve above the final value, usually expressed as a percentage of the final value.

Fig. 5
figure 5

Time domain specification of controlled process response
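For reference, the sketch below computes the time-domain metrics defined above from a sampled step response; the illustrative response signal and the 2% settling band are assumptions consistent with those definitions.

```python
import numpy as np

def step_metrics(t, y, final_value=1.0, band=0.02):
    """Sketch computing td, tr, tp, ts, and Mp from a sampled step response
    y(t); assumes y eventually approaches final_value."""
    y = np.asarray(y, dtype=float)
    td = t[np.argmax(y >= 0.5 * final_value)]                       # delay time (50% crossing)
    tr = t[np.argmax(y >= 0.9 * final_value)] - t[np.argmax(y >= 0.1 * final_value)]  # rise time 10-90%
    tp = t[np.argmax(y)]                                            # peak time
    Mp = max(0.0, (y.max() - final_value) / final_value) * 100.0    # overshoot (%)
    outside = np.abs(y - final_value) > band * final_value
    ts = t[min(np.where(outside)[0][-1] + 1, len(t) - 1)] if outside.any() else t[0]  # 2% settling time
    return {"td": td, "tr": tr, "tp": tp, "ts": ts, "Mp": Mp}

t = np.arange(0.0, 10.0, 0.001)
y = 1.0 - np.exp(-1.2 * t) * np.cos(3.0 * t)   # illustrative under-damped response
print(step_metrics(t, y))
```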

Experimental tests were conducted on three systems. The first involved DC motor speed control, a setup frequently used in numerous related studies [11, 22, 25, 30, 31, 37]. The second system focused on regulating the liquid level in a series of three interconnected tanks [27]. The third and most complex system is a fourth-order transfer function. All simulations were performed under ideal, noise-free conditions to provide a clear baseline for comparing the optimization algorithms. The PID controller was implemented in its standard form without a derivative filter. The fitness function was used to evaluate solutions based on the IAE of Eq (1), which represents the main objective. By targeting IAE, the controller indirectly pushes the system toward shorter rise and settling times, reduced overshoot, and quicker error recovery, although it does not guarantee optimal values of each metric individually. The experimental results were compared with relevant studies, including SCA [6], GWO [25], PSO [38], IWO [22], mJS [39], PSO-ACO [40], OBL-HGS [1] and CMA-ES [41]. The MEASSA algorithm was configured with a population size of 30 (as determined by sensitivity analysis) and a maximum of 50 iterations, corresponding to a stopping criterion of 1500 function evaluations. The evolutionary operator used a dynamically decaying scaling factor (Fmax=0.9, Fmin=0.2) via Eq. (21) and a crossover probability of Pc=0.9 determined via sensitivity analysis. All statistical results are based on 30 independent runs per tested system.

DC motor speed regulator system [25]:

The transfer function of the DC motor closed-loop speed control system with sampling time (Ts = 10 ms) is given in Eq (25), while the state-space representation is shown in Eq (26). The parameter values of the DC motor used in the case study are presented in Table 1 [25].

Table 1 Parameters of DC motor [25]
$${G}_{1}\left(S\right)= \frac{15}{{1.08 s}^{2}+6.1 s+1.63}$$
(25)
$$\left\{\begin{array}{c}{\dot{x}}_{1}\left(t\right)={x}_{2}\left(t\right)\\ {\dot{x}}_{2}\left(t\right)=-1.51 {x}_{1}\left(t\right)-5.65 {x}_{2}\left(t\right)+ u\left(t\right)\\ y\left(t\right)=13.89 {x}_{1}(t)\end{array}\right\}$$
(26)
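The fitness evaluation for this case can be sketched as follows, assuming a unity-feedback loop with an ideal (unfiltered) PID, \(C(s)={k}_{p}+{k}_{i}/s+{k}_{d}s\), wrapped around \({G}_{1}(s)\) of Eq (25) and scored with the IAE of Eq (1); the simulation horizon, step size, and example gains are illustrative only.

```python
import numpy as np
from scipy import signal

def pid_iae(gains, plant_num, plant_den, t_end=5.0, dt=0.01):
    """Sketch of the assumed fitness wiring: close the loop around G(s) with
    an ideal PID C(s) = kp + ki/s + kd*s and return the IAE of the unit-step
    response (Eq. 1)."""
    kp, ki, kd = gains
    c_num, c_den = np.array([kd, kp, ki]), np.array([1.0, 0.0])   # C(s) as a polynomial ratio
    ol_num = np.polymul(c_num, plant_num)                          # open loop C(s)G(s)
    ol_den = np.polymul(c_den, plant_den)
    cl_num = ol_num                                                # closed loop: CG / (1 + CG)
    cl_den = np.polyadd(ol_den, ol_num)
    t = np.arange(0.0, t_end, dt)
    _, y = signal.step(signal.TransferFunction(cl_num, cl_den), T=t)
    return float(np.sum(np.abs(1.0 - y)) * dt)                     # IAE against a unit step

# DC motor plant of Eq. (25): G1(s) = 15 / (1.08 s^2 + 6.1 s + 1.63); gains are placeholders.
print(pid_iae([2.0, 1.0, 0.5], plant_num=[15.0], plant_den=[1.08, 6.1, 1.63]))
```

Any of the metaheuristics compared here could call such a function as its objective; only the returned IAE value matters to the optimizer.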

Table 2 shows the optimal PID controller parameter values for DC motor speed regulation achieved using the MEASSA algorithm, compared against standard ASSA and other related algorithms included in the comparative study. The MEASSA algorithm focuses on optimizing a single objective (IAE) by identifying the PID parameters that minimize this value. Additional performance metrics such as settling time, rise time, and overshoot were also evaluated based on the PID parameters estimated by MEASSA and the other algorithms.

Table 2 Step Response Metrics and Best IAE for Heuristic Algorithms for DC Motor Speed Regulator.

Compared to ASSA and other popular metaheuristic algorithms (e.g., PSO, IWO, CMA-ES, OBL-HGS), MEASSA demonstrated superior performance across all key performance indicators. Specifically, MEASSA achieved the lowest IAE value (9.977) among all tested algorithms, confirming its improved convergence behavior and superior ability to minimize steady-state and transient errors. In contrast, the original ASSA algorithm yielded a higher IAE of 14.501, reflecting its strong exploration capabilities but limited exploitation due to a lack of memory and local refinement.

The improved results of MEASSA are attributed to its balanced exploration–exploitation strategy, achieved through a combination of mechanisms that enhance both global and local search capabilities. The evolutionary operator introduces diversity and prevents the population from getting trapped around local optima, while the memory mechanism maintains a parallel swarm of the best-found solutions, enabling intensified search around promising regions.

Additionally, the stochastic local search refines top-performing solutions for more accurate convergence. Together, these enhancements effectively overcome the limitations of ASSA, such as its greedy selection strategy and overemphasis on exploration. MEASSA introduces adaptive convergence behavior, promoting exploration in early iterations and shifting toward exploitation in later stages through a dynamic scaling factor and targeted local refinement.

The step response plot (Fig. 6) further supports the numerical findings, showing that the MEASSA-controlled system responds quickly and smoothly with minimal overshoot and no oscillation. The bode plot (Fig. 7) illustrates improved phase margin and gain characteristics, indicating enhanced stability and frequency response behavior.

Fig. 6
figure 6

DC Motor Speed Response Over Time (in Seconds)

Fig. 7
figure 7

Bode Plots of the DC Motor System with PID Controller

MEASSA’s explicit goal of minimizing the Integral Absolute Error (IAE) inherently optimizes the trade-offs between overshoot, rise time, and settling time. By balancing global exploration (via its evolutionary operator) with local refinement (via its memory and local search), the algorithm consistently converges to PID gains that avoid extreme combinations, such as very fast rise times with excessive overshoot or minimal overshoot with an inefficiently slow response. This results in the well-balanced transient performance observed in our results. To directly demonstrate the consistency of this outcome, a figure showing the step responses from ten independent runs for the DC motor system is included, where the tight clustering of the curves confirms the reliability of the method.

Figure 8 effectively demonstrates MEASSA’s exceptional consistency in controller tuning, as all ten independent runs produce nearly identical step responses with minimal performance variation. The tight clustering of the curves confirms that the algorithm reliably achieves the key performance metrics of under 3% overshoot, a 0.06-0.07s rise time, and a 0.20-0.25s settling time across all executions. This visual evidence robustly validates that MEASSA does not rely on a single lucky run but consistently generates high-performance, well-balanced PID controllers.

Fig. 8
figure 8

Per-run plot of DC Motor response using MEASSA

Liquid level tank

The MEASSA algorithm was applied to the challenging task of regulating the liquid level of a three-cascaded-tank system (Fig. 9), which exhibits slow dynamics and nonlinear behavior; its transfer function and state-space representation are described in Eq (27) and Eq (28), respectively.

Fig. 9
figure 9

Three Cascaded Tanks Liquid Level Systems [27]

$${G}_{2}\left(S\right)={\left(\frac{1}{4s+0.2}\right)}^{3}=\frac{1}{{64 s}^{3}+{9.6 s}^{2}+0.48 s+0.008}$$
(27)
$$\left\{\begin{array}{c}{\dot{x}}_{1}\left(t\right)={x}_{2}\left(t\right)\\ {\dot{x}}_{2}\left(t\right)={x}_{3}\left(t\right)\\ {\dot{x}}_{3}\left(t\right)=-0.000125 {x}_{1}\left(t\right)-0.0075 {x}_{2}\left(t\right)-0.15 {x}_{3}\left(t\right)+ u\left(t\right)\\ y\left(t\right)=0.015625 {x}_{1}(t)\end{array}\right\}$$
(28)

Table 3 presents a comparative analysis between MEASSA, standard ASSA, and other state-of-the-art metaheuristic algorithms. Among all tested algorithms, MEASSA achieved the lowest IAE (9.0781), outperforming not only ASSA (15.033) but also other algorithms like PSO (13.518), mJS (13.879), and CMA-ES (10.884). This substantial reduction in error reflects MEASSA’s superior ability to minimize deviations from the reference signal throughout the simulation period. While some algorithms, such as PSO-ACO and CMA-ES demonstrated faster settling times, these were often accompanied by excessive overshoot (e.g., PSO-ACO with 160.06% overshoot), which indicates instability and poor tuning for such a sensitive system.

Table 3 Step Response Metrics and Best IAE for Heuristic Algorithms for Liquid Level Tank

MEASSA, in contrast, achieved a more balanced response, maintaining moderate overshoot (55.966%) and acceptable rise time (4.92 sec), while still improving accuracy. Although MEASSA’s settling time (82.04 sec) was slightly longer than some competitors’ (e.g., GWO at 61.75 sec), this delay is justified by the more stable and controlled output response observed in Fig. 10. The results indicate that MEASSA avoids aggressive tuning that can lead to system instability, making it more suitable for slow-response systems like the liquid tank.

Fig. 10
figure 10

Liquid Level Tank Response Over Time (in Seconds)

In addition to time-domain performance, the frequency response of the PID-controlled liquid level tank system, as illustrated in Fig. 11, offers further validation of MEASSA’s effectiveness. The Bode plot presents both the magnitude and phase response of the system, which are essential for evaluating stability margins and dynamic behavior in response to frequency-varying inputs.

Fig. 11
figure 11

Bode Plots of the Liquid Level Tank with PID Controller

Compared to controllers designed using other algorithms, the MEASSA-tuned PID exhibits a smoother gain roll-off and a more favorable phase margin, which implies better robustness and less susceptibility to instability due to high-frequency noise or system disturbances. Specifically, the controlled system maintains adequate phase lag across the mid-to-high frequency range, helping prevent excessive phase shift that could lead to oscillations or instability.

Additionally, MEASSA’s design ensures that the gain crossover frequency occurs at a point where both gain and phase margins are balanced. This suggests a well-tuned control system that responds effectively to setpoint changes while resisting disturbances and maintaining system robustness. In contrast, PID parameters derived from algorithms with high overshoot or erratic time-domain behavior (e.g., PSO-ACO, IWO) may reflect sharper magnitude transitions or abrupt phase drops, indicating weaker frequency stability.

Therefore, Fig. 11 complements the time-domain findings by confirming that MEASSA not only minimizes tracking error (IAE) but also designs controllers with stronger frequency stability characteristics, making it a more reliable choice for practical implementations in liquid level systems.

Fourth order system

The fourth-order process described in Eq (29) represents a more complex, higher-dimensional control challenge compared to the previous systems, and its state-space representation is given in Eq (30). This complexity often leads to increased difficulty in tuning PID parameters for stable and accurate performance.

$${G}_{3}\left(S\right)=\frac{s+4}{{ s}^{4}+{12 s}^{3}+{21 s}^{2}+30 s}$$
(29)
$$\left\{\begin{array}{c}{\dot{x}}_{1}\left(t\right)={x}_{2}\left(t\right)\\ {\dot{x}}_{2}\left(t\right)={x}_{3}\left(t\right)\\ {\dot{x}}_{3}\left(t\right)={x}_{4}\left(t\right)\\ {\dot{x}}_{4}\left(t\right)=-30 {x}_{2}\left(t\right)-21 {x}_{3}\left(t\right)-12 {x}_{4}\left(t\right)+ u\left(t\right)\\ y\left(t\right)=4 {x}_{1}\left(t\right)+ {x}_{2}\left(t\right)\end{array}\right\}$$
(30)

Table 4 presents a comparative analysis of MEASSA and several well-established algorithms. The MEASSA algorithm delivered the lowest IAE value (9.697) across all algorithms tested, demonstrating a notable improvement in steady-state and transient performance for this high-order system.

Table 4 Step Response Metrics and Best IAE for Fourth Order System

In contrast, the original ASSA algorithm yielded a significantly higher IAE (16.280), reinforcing the importance of the enhancements introduced in MEASSA, especially in complex scenarios.

Although some algorithms like ASSA and OBL-HGS achieved slightly faster settling times (7.2794 sec and 7.686 sec, respectively), they did so at the cost of higher overshoot and poorer tracking accuracy. For instance, PSO showed a relatively quick rise time (0.7592 sec), but its overshoot reached 72.259%, which can cause instability and poor controller robustness. MEASSA, while not the fastest in terms of rise or settling time (1.178 sec and 11.1927 sec, respectively), achieved a more balanced performance with controlled overshoot (43.307%) and high solution accuracy.

Algorithms like PSO-ACO and mJS offered good trade-offs between overshoot and rise time but could not match MEASSA’s optimization of the objective function. MEASSA’s position reflects an optimal balance, prioritizing error minimization (IAE) without compromising stability, making it more suitable for control applications where accuracy and reliability are more critical than speed alone.

MEASSA’s robust performance in this higher-order system stems from the synergistic effect of its evolutionary operator, memory mechanism, and stochastic local search. The evolutionary operator enabled effective global search in the early stages, generating diverse and competitive PID parameter sets. The memory mechanism ensured that promising solutions were retained, avoiding the loss of high-quality individuals due to greedy selection behavior. As optimization progressed, the stochastic local search refined these top solutions, helping the algorithm adapt to the high-dimensional nature of the fourth-order system and achieve better convergence.

Figure 12 presents the step response of the fourth-order system controlled by the optimized PID parameters using the MEASSA algorithm. The response curve demonstrates smooth and stable behavior, with the system gradually reaching the desired setpoint without sharp oscillations or excessive overshoot. Despite the inherent complexity and sluggishness of a fourth-order process, the response in Fig. 12 confirms that MEASSA successfully tunes the controller to achieve a good compromise between speed and stability.

Fig. 12
figure 12

Fourth Order System Response Over Time (in Seconds)

Unlike the overly aggressive responses produced by algorithms like PSO or GWO, which suffer from high overshoot and faster but unstable settling, MEASSA achieves a more controlled rise to the setpoint. The shape of the curve reflects moderate rise time and damped oscillations, aligning well with the IAE and overshoot values reported in Table 4. This confirms that MEASSA provides a reliable control strategy capable of handling the complex dynamics of high-order systems, without inducing instability.

Thus, Fig. 12 visually supports the numerical findings, emphasizing that MEASSA maintains steady convergence, low error, and robust dynamic performance in the time domain, all key indicators of a well-designed PID controller.

The Bode plot of the fourth-order system in Fig. 13 further supports MEASSA’s performance advantages. The frequency response exhibits a controlled gain margin and a smooth phase roll-off, reflecting good system robustness and frequency stability. Compared to the designs produced by other algorithms, MEASSA’s PID controller avoids excessive gain spikes or rapid phase drops, indicating that it achieves a more stable behavior across a wide frequency range. This is particularly important in high-order systems where poor frequency response can amplify noise or lead to oscillations.

Fig. 13
figure 13

Bode Plots of the Fourth Order System with PID Controller

Based on Table 5, the comprehensive statistical analysis across three distinct control systems shows that the proposed MEASSA demonstrates unequivocal superiority over competing meta-heuristic methods. MEASSA consistently achieves the lowest mean Integral Absolute Error (IAE) values (10.8, 9.8, and 10.5 for the DC Motor, Liquid Level, and Fourth Order systems, respectively) while also maintaining the smallest standard deviations of 0.7, 0.6, and 0.7. This combination of optimal performance and minimal variability indicates that MEASSA is not only the most accurate but also the most robust and reliable algorithm, effectively balancing exploration and exploitation to avoid local minima and deliver consistent, high-quality PID controller tuning solutions across diverse system dynamics. This consistent pattern across all three engineering systems strongly validates MEASSA’s general applicability and its enhanced capability for robust automatic controller design.

Table 5 Mean and Standard Deviation for IAE using various meta-heuristics for the three systems

A sensitivity analysis was performed to evaluate the impact of the population size (N), the scaling factor range (F), and the crossover probability (Pc) on MEASSA’s performance. The Fourth-Order System was used as the testbed due to its complexity, and the mean Integral Absolute Error (IAE) over 30 independent runs was the primary performance metric. Table 6 shows the results of the sensitivity analysis.

Table 6 Sensitivity Analysis for N, F and Pc

The population size dictates the diversity of the search. We tested several values while keeping Fmax=0.9, Fmin=0.2, and Pc=0.9 fixed. While a population of N=20 converges quickly, it results in higher error and instability due to insufficient exploration. Conversely, N=100 is slow and offers no accuracy benefit despite its stability, suggesting inefficient over-exploration. The optimal balance is achieved with N=30 and N=50, with N=30 being the recommended choice as it provides the best trade-off between computational speed, solution accuracy, and stability.

The scaling factor F controls the magnitude of the mutation. We tested different ranges for Fmax and Fmin, keeping N=30 and Pc=0.9 constant. A low F configuration lacks the power to escape local optima, while a high F is too disruptive for fine-tuning near the optimum. The medium configuration (Fmax=0.9, Fmin=0.2) optimally balances initial exploration with subsequent exploitation, confirming it as the recommended setting.

The crossover probability Pc controls the inheritance of parameters from the mutant vector. We tested different values with N=30 and F=(0.9,0.2). A low Pc (0.7) is too conservative and hinders diversity, while Pc=1.0 makes the search overly random and unstable. A value of Pc=0.9 optimally balances the introduction of new genetic material with the retention of parental information, confirming it as the recommended value.

Convergence analysis

Based on the convergence curves presented in Fig. 14 for the three dynamic systems, the enhanced MEASSA algorithm demonstrates a clear and consistent superiority over the original ASSA and other benchmark algorithms. The curves for MEASSA exhibit a steeper initial descent, indicating a faster convergence rate towards a lower objective function value (IAE). This rapid improvement in the early iterations can be attributed to the effective synergy of its new components: the evolutionary operator promotes a diverse and effective global search, while the memory mechanism immediately preserves promising solutions, preventing the loss of progress that can occur with a purely greedy strategy. Furthermore, MEASSA does not stagnate but continues to refine its solutions, steadily driving the IAE to a significantly lower final value than its competitors. This sustained exploitation is largely due to the stochastic local search, which intensively refines high-quality solutions in the memory population during the later stages of the optimization process.

Fig. 14
figure 14

Convergence Curves for Dynamic Systems: (a) DC Motor, (b) Liquid Level Tanks System, (c) Fourth Order System

In contrast, the original ASSA, while showing a reasonable convergence trend, is consistently outperformed by MEASSA across all test systems. Its curve often plateaus at a higher IAE value, highlighting its limitation of excessive exploration and a lack of sophisticated exploitation mechanisms. The greedy selection strategy of ASSA, which always accepts better solutions but lacks a memory to guide a more focused search, appears to lead to premature convergence on sub-optimal solutions. The performance gap between MEASSA and ASSA visually validates the success of the proposed enhancements in achieving a better balance between exploration and exploitation. Meanwhile, the other algorithms, such as PSO and GWO, typically show slower convergence speeds and converge to higher error levels, further emphasizing MEASSA’s robust and efficient optimization capability for PID controller tuning.

Execution time

The experiments were performed on a Windows 10 Pro desktop computer with a mid-range Intel Core i5-4210U dual-core processor (1.70 GHz base, 2.40 GHz boost) and 8 GB of RAM, a modest general-purpose configuration.

Based on the execution time data provided in Table 7, it is evident that the computational cost of the algorithms generally increases with the complexity of the control system, from the DC Motor to the more challenging Fourth-Order System. The proposed MEASSA algorithm consistently requires the longest execution time across all three systems (57.5s, 86.3s, and 115.0s, respectively), being slightly but consistently slower than its predecessor, ASSA, due to the added overhead of its memory mechanism, evolutionary operator, and stochastic local search components. This places MEASSA in the highest tier of computational demand alongside CMA-ES and ASSA, confirming the expected trade-off where enhanced exploitation capabilities and superior solution quality, as demonstrated by its lower IAE values in the manuscript, come at the cost of increased runtime. Despite this, the marginal time increase from ASSA to MEASSA is relatively small (approximately 2.5-5 seconds), suggesting that the performance benefits gained from the enhancements are well worth the minor additional computational investment.

Table 7 Execution Time of MEASSA versus Relevant Algorithms

Statistical analysis

To rigorously substantiate the empirical performance advantages demonstrated in the time-domain and frequency-domain analyses, a comprehensive statistical evaluation is employed. This study utilizes the non-parametric Wilcoxon signed-rank test to statistically compare the proposed MEASSA algorithm against all benchmark metaheuristics, including the original ASSA. The test is applied to the results obtained from the three distinct control systems: the DC motor speed regulator, the three-tank liquid level system, and the fourth-order system, to determine the statistical significance of the observed performance differences in minimizing the Integral Absolute Error (IAE).

This analysis aims to conclusively determine whether MEASSA’s superior convergence and precision are statistically significant and consistent across diverse dynamic challenges, thereby providing a robust validation of its efficacy. The null hypothesis (H₀) is that there is no significant difference between the performance of MEASSA and the compared algorithm. A p-value < 0.05 (typically) leads to the rejection of H₀, indicating a statistically significant difference.
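The test itself can be reproduced with standard tooling, as in the sketch below; the paired IAE samples shown are synthetic placeholders for illustration, not the values behind Table 8.

```python
import numpy as np
from scipy.stats import wilcoxon

# Paired IAE samples over 30 independent runs (illustrative values only).
rng = np.random.default_rng(0)
iae_meassa = 10.8 + 0.7 * rng.standard_normal(30)
iae_rival = 14.5 + 1.0 * rng.standard_normal(30)

stat, p_value = wilcoxon(iae_meassa, iae_rival)   # H0: no difference between paired samples
print(f"Wilcoxon statistic = {stat:.2f}, p-value = {p_value:.2e}")
print("Reject H0 (significant difference)" if p_value < 0.05 else "Fail to reject H0")
```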

According to Table 8, the Wilcoxon signed-rank test analysis conclusively demonstrates the statistically significant superiority of the MEASSA algorithm over all competitors, though with varying degrees of confidence. While MEASSA’s performance advantage is most pronounced and highly significant (p ~ 1e-4) against simpler algorithms like SCA, GWO, and its predecessor ASSA, it also maintains a clear and statistically significant edge (p < 0.05) over its closest competitors, mJS and PSO-ACO, which show the smallest yet still definitive p-values (p ~ 1e-2 to 1e-3). Furthermore, the highly significant holistic p-value aggregating results across all three control systems confirms that MEASSA’s enhanced performance is not an artifact of a specific problem but a robust and reliable characteristic, solidifying its status as a superior optimizer for PID controller tuning across diverse dynamic systems.

Table 8 Wilcoxon Signed-Rank Test p-values

Conclusion

This study developed and validated MEASSA, a significantly enhanced version of the Artificial Satellite Search Algorithm, for the precise and robust tuning of PID controllers. The integration of a memory mechanism, an evolutionary operator, and a stochastic local search effectively remedied the core limitations of the original ASSA: its excessive exploration and greedy selection strategy. This synergistic combination fostered a superior balance between global exploration and local exploitation, guiding the search more efficiently toward high-quality solutions. Comprehensive experimental results on three distinct control systems (a DC motor, a liquid level system, and a challenging fourth-order system) consistently demonstrated MEASSA’s superiority. The algorithm achieved the lowest Integral Absolute Error (IAE) values, outperforming a wide range of established and state-of-the-art meta-heuristics. Furthermore, analyses in both the time and frequency domains confirmed that MEASSA-optimized PID controllers provide not only superior reference tracking but also improved transient performance (reduced overshoot, faster settling) and enhanced stability margins. The statistical significance of these results, confirmed by the Wilcoxon signed-rank test, solidifies MEASSA’s reliability and robustness.

Future research may extend MEASSA in the following directions:

  • Evaluate MEASSA performance under noisy conditions with derivative filtering and real-time validation

  • Validating the MEASSA-tuned controllers on a hardware-in-the-loop (HIL) platform to confirm their performance under real-time conditions

  • Extend MEASSA to handle multi-objective formulations (e.g., minimizing both IAE and overshoot simultaneously).

  • Explore the integration of MEASSA with intelligent controllers such as Fuzzy-PID or Neural-PID to further improve adaptability to nonlinear systems.

  • Test the algorithm on broader benchmark datasets and real-world applications, including robotics, biomedical systems, and renewable energy control systems.