Abstract
Engineering design optimization problems are often characterized by high dimensionality, complex constraints, and multimodal search landscapes, which pose significant challenges to conventional metaheuristic algorithms. Although the Whale Optimization Algorithm (WOA) has demonstrated competitive performance, it still suffers from premature convergence, limited population diversity, and an imbalanced exploration-exploitation mechanism in complex optimization scenarios. To overcome these limitations, this paper proposes a Geometric Whale Optimization Algorithm (ESTGWOA), in which multiple geometric strategies are systematically embedded into the canonical WOA framework to enhance population initialization, search guidance, and position update behaviors. By incorporating geometric-based mechanisms, ESTGWOA effectively improves search space coverage and strengthens the coordination between global exploration and local exploitation. Comprehensive experiments on 23 benchmark functions demonstrate that ESTGWOA, with an overall effectiveness of 97.10%, outperforms competing algorithms on benchmark functions of 30, 50, and 100 dimensions. Simulations on a series of constrained engineering design problems further demonstrate that ESTGWOA consistently outperforms selected state-of-the-art metaheuristic algorithms. Quantitative results show that ESTGWOA achieves superior average fitness values and lower standard deviations in most cases, with statistical significance verified by Wilcoxon rank-sum and Friedman tests. Furthermore, qualitative analyses of search history, population diversity, and convergence behavior confirm the robustness and stability of the proposed approach. These results indicate that ESTGWOA is a reliable and effective optimizer for complex continuous engineering design problems.
Introduction
Metaheuristic algorithms are a class of algorithms used to solve complex optimization problems by employing search and optimization techniques to find the best or near-optimal solutions. Over the past few decades, metaheuristic algorithms have been developed, widely researched, and applied. Due to the complexity and diversity of many real-world problems, traditional exact algorithms often struggle to find optimized solutions within a reasonable time frame. In contrast, metaheuristic algorithms search through the problem space by leveraging the specific structure and domain knowledge of the problem, progressively approaching the optimal solution with minimal computational resources. The core of metaheuristic algorithms lies in Exploration and Exploitation. Exploration refers to thoroughly exploring the entire search space, as the optimal solution could be located anywhere within it. Exploitation involves utilizing effective information as much as possible, as superior solutions often exhibit certain correlations. These correlations are exploited to gradually adjust and guide the search from the initial solution to the optimal solution. In general, metaheuristic algorithms aim to strike a balance between Exploration and Exploitation.
With the development of computer science, more and more researchers have focused on designing better metaheuristic algorithms to address complex optimization problems more effectively. As research has deepened, metaheuristic algorithms have evolved into diverse theoretical and application branches. Based on their underlying inspiration mechanisms, they can generally be categorized into four groups, summarized in Table 1: Evolutionary Algorithms (EAs), Swarm Intelligence Algorithms (SIAs), Physics-Based Algorithms (PBAs), and Human-Based Algorithms (HBAs).
Evolutionary Algorithms are inspired by Darwin’s theory of natural selection. Representative examples include the Genetic Algorithm (GA)1 and Differential Evolution (DE)2, which evolve populations through selection, crossover, and mutation operators, demonstrating strong global exploration capability.
Swarm Intelligence Algorithms emulate the social cooperation behaviors of animal groups3,4,5,6. For instance, in 1992, the Italian scholar Marco Dorigo proposed the Ant Colony Optimization (ACO) algorithm based on the study of ant behavior7. In 1995, the American social psychologist Kennedy and the electrical engineer Eberhart, inspired by bird foraging behavior, proposed the Particle Swarm Optimization (PSO) algorithm8. In 2014, S. Mirjalili et al. proposed the Grey Wolf Optimizer (GWO), which simulates the social structure and hunting process of wolf packs9. Recently proposed algorithms further expand this category. In 2019, Heidari et al. proposed the Harris Hawks Optimization (HHO) algorithm by mimicking the hunting behavior of Harris hawks10. In 2021, H. Jia et al., inspired by the parasitic behavior of the remora, proposed the Remora Optimization Algorithm (ROA)11. In 2022, Trojovská et al. proposed the Zebra Optimization Algorithm (ZOA), which imitates the foraging behavior of zebras and their defense strategies against predator attacks12. In 2024, S. Fu et al. proposed the Red-billed Blue Magpie Optimizer (RBMO), inspired by the predation and food-storing behaviors of the red-billed blue magpie, which exhibits strong global search capabilities13. In 2025, T. Hamadneh et al. proposed the Salamander Optimization Algorithm (SOA)14, designed around the remarkable biological characteristics of salamanders, including their regenerative abilities, adaptability to diverse environments, and efficient movement strategies. Also in 2025, X. Wang, inspired by the social behaviors of bighorn sheep, proposed the Bighorn Sheep Optimization Algorithm (BSOA)15.
Physics-Based Algorithms are rooted in natural laws of energy transfer and physical dynamics16,17,18,19. For example, Simulated Annealing (SA) controls the trade-off between global and local search through a temperature-cooling process20; the Attraction-Repulsion Optimization Algorithm (AROA) models gravitational and repulsive forces21; the Thunderstorm and Cloud Algorithm (TCA) simulates atmospheric processes observed during thunderstorms22; and the Schrödinger Optimizer (SRA) is motivated by principles of quantum mechanics, specifically Schrödinger's equation and wave-particle duality23.
On the other hand, Human-Based Algorithms are inspired by social learning and cooperative behaviors24,25,26,27. The Teaching-Learning-Based Optimization (TLBO) algorithm models classroom teaching and self-learning processes28; the Soccer League Competition Algorithm (SLCA) is based on competitive team dynamics29; Psychologist Algorithm (PA) is inspired by psychological behaviours observed in human psychotherapy30; Perfumer Optimization Algorithm (POA) mimics the behavior of a perfumer when making a perfume31.
Metaheuristic algorithms provide a universal optimization strategy: a single algorithm can be applied to multiple types of optimization problems without requiring an explicit mathematical model of the problem. In real-world applications, metaheuristic algorithms have been widely used in various fields due to their strong optimization capabilities, including wireless sensor network coverage enhancement32,33, tension/compression spring design34,35,36, welded beam design37, neural network tuning38,39,40,41,42, feature selection43,44,45, urban planning46,47, path planning48,49, antenna design optimization50, workshop scheduling51, and photovoltaic parameter estimation52,53,54. By efficiently handling complex multi-objective and multi-constrained problems, metaheuristic algorithms significantly enhance decision-making efficiency.
However, metaheuristic algorithms also have limitations. They often face challenges in balancing exploration and exploitation, as well as poor population quality in the later stages of iterations. The Particle Swarm Optimization (PSO) algorithm, when applied to complex optimization problems, is prone to premature convergence to a local optimum, halting further exploration for better solutions. The Grey Wolf Optimizer (GWO) is simple in structure and easy to implement, but on complex problems it may also fall into local optima in the later stages of iterations. The Aquila Optimizer, although renowned for its strong exploration and exploitation capabilities, has a complex position update mechanism that is hard to tune and cannot guarantee effectiveness on complex problems. Therefore, improving the balance between exploration and exploitation and enhancing population quality have become significant challenges in improving the performance of metaheuristic algorithms.
As shown in Table 2, in recent years, researchers have continuously explored integrating different methods to improve metaheuristic algorithms. In 2004, Y. Gao et al. introduced chaotic mapping into population initialization, generating higher-quality populations and improving the optimization ability of the PSO algorithm to some extent55. In 2020, Chiwen Qu et al. proposed an improved Harris Hawks Optimization (HHO) algorithm based on information exchange (IEHHO), allowing Harris hawk individuals to exchange information from shared regions, thereby enhancing the collaborative ability of the Harris Hawks to better balance exploration and exploitation56. In 2023, H Jia et al. combined heat transfer and condensation strategies, proposing an improved Snow Ablation Optimizer (SAOHTC) to address the issue of premature convergence to local optima in high-dimensional problems57. In 2024, J. Huang et al. introduced a multi-strategy hybrid BWO algorithm (HBWO), which integrates QOBL, adaptive spiral predation strategies, and the Nelder-Mead method, to better handle complex high-dimensional problems58. These exceptional algorithms, which incorporate various novel improvement strategies, offer new approaches to enhancing metaheuristic algorithms. In 2025, Lu et al. proposed MRBMO by incorporating Good Nodes Set method and LIOBL, for antenna S-parameter optimization50.
The Whale Optimization Algorithm (WOA), proposed by Mirjalili et al. in 2016, is a metaheuristic optimization algorithm inspired by the hunting behavior of humpback whales59. WOA has a relatively simple structure, making it easy to understand and implement. However, WOA has poor capabilities in balancing exploration and exploitation, and the quality of the population tends to degrade significantly with each iteration, leading to stagnation. In complex multi-modal optimization problems, WOA may fail to perform adequate global exploration and can easily converge prematurely to a local optimum. Therefore, in recent years, many scholars have made various attempts to improve the performance of WOA. In 2020, Liu et al. proposed a hybrid WOA based on Levy flight and Differential Evolution (WOA_LFDE), which used Levy flight to enhance search diversity and combined Differential Evolution (DE) strategies to further improve local search capability while retaining elite individuals60. In 2022, S. Chakraborty et al. introduced a novel improved Whale Optimization Algorithm (ImWOA), which incorporated two different exploration strategies for searching food and introduced a new cooperative hunting strategy; ImWOA aimed to address the shortcomings of traditional WOA in solution diversity and the local-optimum problem61. In 2024, Li et al. proposed MISWOA, which incorporated an adaptive nonlinear convergence factor with a variable gain compensation mechanism, adaptive weights, and an advanced spiral convergence strategy, resulting in a significant enhancement in the algorithm's global search capability, convergence velocity, and precision62. In 2025, Gu et al. incorporated Good Nodes Set initialization and a sine-cosine method into WOA, proposing GWOA63. These modifications to WOA have improved its applicability to more complex problems.
Chapter 2 provides a detailed review of current research on engineering design optimization challenges. Chapter 3 presents the principles of the Whale Optimization Algorithm (WOA), along with its advantages and disadvantages. Chapter 4 introduces the proposed ESTGWOA. Chapter 5 validates the performance of ESTGWOA through a series of experiments. Chapter 6 compares ESTGWOA with state-of-the-art (SOTA) metaheuristic algorithms on different engineering design optimization problems to verify its practicality and robustness.
Current research on engineering design optimization challenges
Engineering design optimization has long been a fundamental task across aerospace, mechanical, civil, and structural engineering. Before the emergence of metaheuristic algorithms, engineering design problems were primarily solved using traditional mathematical optimization techniques such as linear programming, nonlinear programming, dynamic programming, gradient-based methods, and exhaustive search strategies. Although these classical techniques offer strong theoretical foundations, they often struggle with highly nonlinear, multimodal, constrained, and black-box engineering problems. Their reliance on gradient information and strict mathematical assumptions makes them unsuitable for complex real-world scenarios, where the design landscapes are typically discontinuous, nonconvex, and computationally expensive to evaluate.
With the development of metaheuristic algorithms, researchers gained access to a new class of flexible and powerful optimization tools inspired by natural processes, biological systems, and physical phenomena. Algorithms such as Genetic Algorithms (GA), Particle Swarm Optimization (PSO), Differential Evolution (DE), and later more advanced strategies like Grey Wolf Optimizer (GWO) and Whale Optimization Algorithm (WOA) demonstrated strong global search capabilities and robustness. These methods require no gradient information and can handle complex constraints, making them particularly suitable for engineering design optimization involving uncertain environments, noisy models, and intricate design spaces. Metaheuristics quickly became the mainstream approach for tackling challenging engineering design tasks.
In recent years, the integration of neural networks and deep learning has further expanded the landscape of engineering optimization. Surrogate-assisted optimization, reinforcement learning-driven design, and deep surrogate modeling techniques have been widely employed to accelerate computation and improve solution quality. These approaches can approximate computationally expensive engineering models, enabling rapid evaluations and providing new ways to explore high-dimensional design spaces. Despite their effectiveness, neural network-based optimization often requires large datasets, extensive training, and careful hyperparameter tuning, and may still fall short in global search capability compared with well-designed metaheuristics.
Overall, metaheuristic algorithms maintain unique advantages in engineering design, particularly because of their balance between exploration and exploitation, their independence from gradient information, and their ability to escape local optima in complex search landscapes. These strengths have motivated the continuous development of enhanced metaheuristic variants tailored for engineering applications.
The Whale Optimization Algorithm (WOA) is a bio-inspired metaheuristic algorithm known for its simple structure and ease of implementation. However, its performance in engineering design optimization has been less than satisfactory. We therefore propose a novel multi-strategy geometric WOA (ESTGWOA), aimed at improving the performance of WOA and enabling it to compete with state-of-the-art (SOTA) metaheuristic algorithms, while exploring additional possibilities for WOA in engineering design optimization. This paper explores the effectiveness and applicability of ESTGWOA in engineering design optimization, aiming to provide a new optimizer for the field.
WOA
Encircling prey
Humpback whales can identify the location of prey and encircle them. Since the optimal position in the search space is unknown, the WOA assumes that the current best candidate solution represents the target prey or is close to the optimal solution. After defining the best whale, called the elite, the other whales attempt to update their positions toward the elite. This behavior is represented by Eqs. (1) and (2):

\(D = |C \cdot X^*(t) - X(t)|\)  (1)

\(X(t+1) = X^*(t) - A \cdot D\)  (2)

where t is the current iteration; A and C are coefficient vectors; \(X^*\) is the position of the elite; X is the position of the whale; D is the distance between the whale and the elite.
If a better solution is found in an iteration, i.e., the fitness value of X is smaller than that of \(X^*\), the current position X becomes the new \(X^*\). The formulae for vectors A and C are given below:

\(A = 2a \cdot r - a\)  (3)

\(C = 2r\)  (4)

where r is a vector of random numbers in [0, 1] and a is a convergence factor that linearly decreases from 2 to 0 during the iterations; the calculation of the convergence factor a is shown in Eq. (5):

\(a = 2\left(1 - \dfrac{t}{T}\right)\)  (5)
where t is the current number of iterations; T is the maximum number of iterations.
Fig. 1 illustrates the variation of convergence factor a.
Variation of convergence factor a during the iteration.
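For concreteness, the encircling-prey update of the standard WOA (Eqs. (1)-(5)) can be sketched in Python as below; the function and variable names are ours, not part of the original formulation:

```python
import numpy as np

def encircle_prey(X, X_star, t, T, rng=None):
    """One encircling-prey step of the standard WOA (Eqs. (1)-(5)).

    X      -- current whale position (1-D array)
    X_star -- position of the elite (best solution so far)
    t, T   -- current and maximum iteration counts
    """
    if rng is None:
        rng = np.random.default_rng()
    a = 2.0 * (1.0 - t / T)            # convergence factor, decreases linearly 2 -> 0
    A = 2.0 * a * rng.random(X.shape) - a   # coefficient vector A = 2a*r - a
    C = 2.0 * rng.random(X.shape)           # coefficient vector C = 2r
    D = np.abs(C * X_star - X)              # distance to the elite
    return X_star - A * D                   # updated position
```

Note that at t = T the factor a (and hence A) vanishes, so the whale lands exactly on the elite, which illustrates the shift from exploration to exploitation.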
Spiral updating
This method first computes the straight-line distance \(D'\) between the whale position X and the prey position \(X^*\), as shown in Eq. (6), and then uses a logarithmic spiral equation to simulate the whale spiraling upward to encircle the prey, as shown in Eq. (7):

\(D' = |X^*(t) - X(t)|\)  (6)

\(X(t+1) = D' \cdot e^{bl} \cdot \cos (2\pi l) + X^*(t)\)  (7)
where \(X^*\) is the position of the elite; X is the position of the whale; b is a constant defining the shape of the logarithmic helix, usually set to 1; \(a_1\) is a parameter that varies linearly within [-2, -1]; t is the current number of iterations; T is the maximum number of iterations; Rand is a random number between 0 and 1; the spiral coefficient l takes values in [-2, 1].
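The spiral update of Eqs. (6)-(7) can be sketched as follows (a minimal illustration; names are ours):

```python
import numpy as np

def spiral_update(X, X_star, t, T, b=1.0, rng=None):
    """Spiral position update of the standard WOA (Eqs. (6)-(7))."""
    if rng is None:
        rng = np.random.default_rng()
    D_prime = np.abs(X_star - X)            # straight-line distance to the prey
    a1 = -1.0 - t / T                       # varies linearly from -1 to -2
    l = (a1 - 1.0) * rng.random() + 1.0     # spiral coefficient, l in [-2, 1]
    return D_prime * np.exp(b * l) * np.cos(2.0 * np.pi * l) + X_star
```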
Search-for-prey
When a whale moves past the location of the prey, it abandons its previous direction and searches randomly in other directions for other prey, in order to avoid becoming trapped in a local optimum. A whale individual is then randomly selected from the population, and the search explores the area around its position. The whale's search-for-prey behavior is modeled as follows:

\(D = |C \cdot X_{rand} - X(t)|\)

\(X(t+1) = X_{rand} - A \cdot D\)
where C is a coefficient vector calculated in Eq. (4); \(X_{rand}\) is a random whale chosen from the current population; A is a coefficient vector calculated in Eq. (3).
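This exploration step, which replaces the elite with a randomly chosen whale, can be sketched as:

```python
import numpy as np

def search_for_prey(X, X_rand, A, C):
    """Exploration step of the standard WOA: the whale moves relative to a
    randomly selected individual X_rand instead of the elite."""
    D = np.abs(C * X_rand - X)   # distance to the random whale
    return X_rand - A * D
```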
Population initialization
Like most metaheuristic algorithms, WOA initializes its population with pseudo-random numbers:

\(X_{i,j} = lb + Rand \cdot (ub - lb)\)

where \(X_{i,j}\) is the randomly generated position of the \(i^{th}\) individual in the \(j^{th}\) dimension; ub and lb are the upper limit and lower limit of the problem; Rand is a random number between 0 and 1.
This approach, while simple and direct, often results in poor diversity and uneven distribution of solutions, which can lead to inefficiency in the search process.
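The pseudo-random initialization can be written in a few lines (names ours):

```python
import numpy as np

def random_init(N, dim, lb, ub, rng=None):
    """Pseudo-random population: X = lb + Rand * (ub - lb), elementwise."""
    if rng is None:
        rng = np.random.default_rng()
    return lb + rng.random((N, dim)) * (ub - lb)
```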
The pseudo-code of the WOA is shown in Algorithm 1 and the flowchart of WOA is shown in Fig. 2.
Flowchart of WOA.
Advantages and disadvantages of WOA
The structure of WOA is relatively simple, with few parameters, making it easy to understand and implement. By dynamically adjusting the convergence factor a, WOA focuses on global exploration in the early stages of the search and on local exploitation in the later stages, which helps it avoid becoming trapped in local optima. However, WOA also has notable drawbacks. Despite this balancing mechanism, WOA may not perform sufficient global exploration on multi-modal optimization problems, leading to premature convergence to local optima, especially in complex search spaces. For some problems, WOA's convergence speed is slow, particularly in the later stages, when population diversity decreases significantly and the search stagnates. Furthermore, WOA is sensitive to the initial parameter settings, especially the convergence factor a that controls exploration and exploitation. There is therefore significant room for improvement in WOA.
In the Geometric Whale Optimization Algorithm (ESTGWOA) proposed in this paper, we used Good Nodes Set method to generate a uniformly distributed population, introduced a redesigned Elite Guided Search (EGS) strategy to replace the original search-for-prey strategy, and proposed a redesigned Spiral-based Encircling Prey (SEP) strategy to improve the logic of the original encircling prey strategy. We also introduced a Triangular-based Spiral Hunting (TSH) strategy to enhance the original spiral updating strategy, designed a new update mechanism of the convergence factor a to balance the exploration and exploitation, and incorporated a newly-designed Hybrid Gaussian Mutation based on Differential Evolution to help escape from local optima.
ESTGWOA
Good nodes set initialization
The standard WOA initializes its population using pseudo-random numbers, as illustrated on the left side of Fig. 3. Although this strategy is straightforward and easy to implement, it frequently yields populations with limited diversity and uneven spatial coverage. Such deficiencies may cause individuals to cluster, thereby weakening the algorithm’s search efficiency and global exploration ability.
To overcome these limitations, this study employs the Good Nodes Set (GNS) initialization method64,65,66, which generates more uniformly distributed candidate solutions. Originating from the work of Chinese mathematician Loo-keng Hua, GNS provides a systematic way to produce evenly spaced points. Its most notable benefit is that the distribution quality remains consistent regardless of dimensionality, making it suitable for both low-dimensional and high-dimensional optimization tasks. Consequently, applying GNS can enhance the initial population quality and significantly strengthen the exploration phase of WOA. The population generated by GNS for N=300 is presented on the right side of Fig. 3, showing a notably more homogeneous layout compared to the pseudo-random counterpart and effectively avoiding concentration of individuals.
Pseudo-random Number Initialization and Good Nodes Set Initialization.
Let \(U^D\) denote a unit hypercube in a D-dimensional Euclidean space. The construction of a Good Nodes Set is defined by Eq. (13):
where \(\{x\}\) denotes the fractional part of x, M is the total number of generated nodes, and r is a deviation parameter greater than zero. The constant \(C(r,\varepsilon )\) depends only on r and \(\varepsilon\), where \(\varepsilon\) is an arbitrarily small positive constant.
Each element p(k) in \(P_r^M\) is referred to as a Good Node. Given the upper and lower bounds of the \(i^{th}\) dimension of the search space, \(x_{max}^i\) and \(x_{min}^i\), respectively, the mapping from the unit hypercube to the actual search domain is performed using Eq. (14):
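A Good Nodes Set initializer can be sketched as below. The prime-based choice of the generating vector r (a frequently used construction) is an assumption on our part, since Eq. (13) admits several variants; the mapping into [lb, ub] follows Eq. (14):

```python
import numpy as np

def _smallest_prime(min_p):
    """Smallest prime p >= min_p (simple trial division)."""
    p = max(min_p, 2)
    while True:
        if all(p % q for q in range(2, int(p ** 0.5) + 1)):
            return p
        p += 1

def good_nodes_init(N, dim, lb, ub):
    """Good Nodes Set initialization (one common construction, assumed here):
    r_j = 2*cos(2*pi*j / p), with p the smallest prime such that (p-3)/2 >= dim;
    the k-th node is frac(k * r), then mapped into [lb, ub]."""
    p = _smallest_prime(2 * dim + 3)
    j = np.arange(1, dim + 1)
    r = 2.0 * np.cos(2.0 * np.pi * j / p)        # generating vector
    k = np.arange(1, N + 1).reshape(-1, 1)
    nodes = np.mod(k * r, 1.0)                   # good nodes in the unit hypercube
    return lb + nodes * (ub - lb)                # map to the search domain
```

Unlike pseudo-random sampling, the node layout is deterministic and its uniformity does not deteriorate with dimensionality, which matches the property of GNS emphasized above.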
Elite guided searching (EGS) strategy
The original WOA searches for prey by randomly selecting a whale individual, which increases the algorithm’s diversity to some extent. However, this mechanism makes it difficult to achieve an effective balance between global exploration and local exploitation. As a result, the search trajectories of individuals often lack clear directionality and regularity, which may lead to premature convergence to local optima. Moreover, the reliance on randomly selected individuals limits the effective utilization of population information, especially when population diversity decreases in later iterations. This randomness may also cause instability and large performance fluctuations, particularly when the population size is small.
To address these shortcomings, this paper proposes a novel Elite Guided Searching (EGS) strategy, as illustrated in Fig. 4. The EGS strategy is directly inspired by cooperative hunting and predator-prey behaviors observed in the natural world, where individuals dynamically adjust their movements by learning from the most successful hunter (elite) in the group.
Elite Guided Searching mechanism.
The modeling of the Elite Guided Searching strategy is shown below:
where t is the current iteration number; T is the maximum number of iterations; X represents the position of the whale individual; \(X^*\) represents the position of the elite; \(X_R\) represents the average position of all whale individuals.
The Elite Guided Searching strategy uses the position of the current best solution and the average position of the whale population to guide individuals toward the optimum. As the iterations progress, the reliance on the best solution gradually decreases, allowing the algorithm to balance exploration and exploitation naturally: exploration capability is maintained in the early stages, while exploitation is strengthened in the later stages, avoiding premature exploitation early in the search. By also referencing the average position of the population, the strategy ensures that the movement direction of each whale is influenced not only by the leader but also by the distribution of the entire population. The population's information is thereby used more comprehensively, and the movement of individuals is governed by the population's structural characteristics rather than by random behavior, which avoids excessive or insufficient interaction among individuals and improves convergence speed and accuracy. Finally, compared with the original WOA, which explores unknown regions by randomly selecting a whale individual, the Elite Guided Searching strategy focuses on the relative positions of individuals and their distance from the current best solution. This reduces the impact of randomness on performance and gradually guides the population toward the optimal solution, enhancing the algorithm's stability and consistency.
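The EGS idea can be illustrated with the following sketch. We do not reproduce the paper's exact Eqs. (15)-(16); the decaying weight w and the two random coefficients below are our assumptions, chosen only to reflect the described behavior (decaying elite influence plus a pull toward the population mean):

```python
import numpy as np

def egs_update(X, X_star, X_R, t, T, rng=None):
    """Illustrative Elite Guided Searching step (not the paper's exact equations).

    X      -- current whale position
    X_star -- elite (current best) position
    X_R    -- average position of the population
    """
    if rng is None:
        rng = np.random.default_rng()
    w = 1.0 - t / T                     # elite influence decays over iterations (assumed)
    r1 = rng.random(X.shape)
    r2 = rng.random(X.shape)
    # pull toward the elite (decaying) plus a pull toward the population mean
    return X + w * r1 * (X_star - X) + r2 * (X_R - X)
```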
Spiral-based encircling prey (SEP) strategy
From Eq. (2), it can be seen that the encircling prey strategy of the original WOA primarily updates the position of the whale based on the distance between the whale and the prey. Although this approach allows for a gradual convergence towards the optimal solution, in complex and multi-modal problems, it may lead to an overly rapid convergence in the early stages, causing the algorithm to get trapped in local optima. In the original WOA, due to the simplicity of the encircling prey strategy, the position update is solely dependent on the current positions of the prey and the whale, which may lead to insufficient exploration during the local search and make the algorithm sensitive to the initial positions.
To address these limitations, this paper draws inspiration from the spiral flight mechanism and introduces a Spiral-based Encircling Prey mechanism, modeled as shown in Eqs. (17) to (23).
Fig. 5 is the simulation of Spiral flight. The schematic of the Spiral-based Encircling Prey is illustrated in Fig. 6.
The simulation of spiral flight.
Spiral-based Encircling Prey mechanism. The whales move in the spiral shape to attack the prey.
where A and C are coefficient vectors; Z and s are the spiral coefficients; Rand denotes a random number between 0 and 1; \(S_s\) is the step size of spiral flight.
The Spiral-based Encircling Prey strategy incorporates the concept of spiral flight, introducing periodicity and randomness into the position update process. The spiral motion enables a whale not only to approach the prey but also to maintain a degree of spatial exploration during the approach, preventing premature convergence to local optima; the nonlinear trajectory helps the algorithm escape local optima and enhances its global search capability. The strategy also introduces a cosine oscillation term, which deepens the whale's exploration of the local space while it approaches the prey. This deeper exploration helps locate better local solutions and prevents early convergence in complex problems. The resulting movement trajectories are more diversified and complex, which improves the algorithm's adaptability and robustness across a wide range of optimization problems and mitigates the drawbacks of the original WOA.
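A rough sketch of the SEP idea follows. It is not the paper's Eqs. (17)-(23): the spiral coefficients Z and s are not reproduced, and the particular forms of the shrinking step \(S_s\) and the cosine oscillation term are our assumptions, included only to show how a periodic disturbance can be layered on the encircling update:

```python
import numpy as np

def sep_update(X, X_star, A, C, t, T, rng=None):
    """Illustrative Spiral-based Encircling Prey step (assumed forms, not the
    authors' exact formulation)."""
    if rng is None:
        rng = np.random.default_rng()
    theta = 2.0 * np.pi * rng.random()    # random phase of the spiral
    S_s = (1.0 - t / T) * rng.random()    # spiral step size, shrinking over time (assumed)
    D = np.abs(C * X_star - X)            # distance to the elite, as in Eq. (1)
    # encircling move plus a cosine oscillation around the approach path
    return X_star - A * D + S_s * np.cos(theta) * D
```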
Triangular-based spiral hunting (TSH) strategy
From Eq. (7), we can see that the spiral updating strategy of the original WOA primarily relies on position updates around the current best solution, resulting in a high dependency on the current optimum and insufficient fine-tuning ability when approaching it. Although this strategy incorporates some randomness, its range is relatively limited, lacking more complex multidimensional exploration and the ability to escape from local optima, which increases the likelihood of becoming trapped. To address these limitations, this paper proposes a novel Triangular-based Spiral Hunting strategy to replace the original spiral updating strategy, as follows:
where \(r_1\) is the scaling factor; r is the random scaling factor; Rand denotes a random number between 0 and 1; \(L_1\) represents the straight-line distance between the whale and the prey; \(L_2\) is the random step size; \(\gamma\) is a random angle; L is the triangular walk step size, as illustrated in Fig. 7.
Compared to the original spiral updating strategy, the Triangular-based Spiral Hunting strategy introduces a dynamic scaling random disturbance factor that gradually decreases as the iterations progress, guiding individuals to explore widely in the early stages and to focus on refined search in the later stages. By computing a random angle through a cosine function, the strategy introduces more asymmetric disturbances, increasing the uncertainty of position updates; this prevents all individuals from converging onto a single spiral path and enhances the randomness and directional diversity of the search. The Triangular-based Spiral Hunting strategy allows the population to wander around the optimal position while converging towards it, thereby improving the algorithm's local optimization capability.
Step size L of Triangular Walk. \(L_1\) represents the straight-line distance between the whale and the prey; \(L_2\) is the random step size; \(\gamma\) is a random angle; L is the triangular walk step size.
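The triangular walk step in Fig. 7 can be sketched as below. We assume the law of cosines combines \(L_1\), \(L_2\), and the random angle \(\gamma\) into the step length L (a natural reading of the triangle construction); the paper's scaling factors \(r_1\) and r are not reproduced:

```python
import numpy as np

def triangular_step(X, X_star, rng=None):
    """Illustrative triangular-walk step length (assumed law-of-cosines form)."""
    if rng is None:
        rng = np.random.default_rng()
    L1 = np.linalg.norm(X_star - X)       # straight-line distance whale -> prey
    L2 = L1 * rng.random()                # random step size, at most L1 (assumed)
    gamma = 2.0 * np.pi * rng.random()    # random angle
    # third side of the triangle, opposite gamma (law of cosines)
    return np.sqrt(L1 ** 2 + L2 ** 2 - 2.0 * L1 * L2 * np.cos(gamma))
```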
Hybrid Gaussian mutation based on differential evolution
The standard position update strategy in WOA primarily relies on biomimetic principles, such as prey encirclement and spiral updating. These strategies typically focus on searching in the vicinity of the current best solution, and especially in the later stages of iterations, WOA may gradually converge to a specific region, leading to a decreased ability to explore the search space. By introducing a mutation strategy after the position update, new disturbances can be introduced into the solution space, creating a certain degree of variation that allows whale individuals to escape from the current local optimum region and explore a broader solution space.
Furthermore, for algorithms like WOA that rely on the best solution in the population to guide the search, a mutation strategy can help individuals escape from locked regions, preventing the algorithm from converging to a local optimum. This is particularly important when solving complex problems with multiple local optima. By incorporating a mutation strategy into WOA, the limitations that may arise during the later stages of convergence can be compensated for, ensuring that WOA continues to explore rather than prematurely converging to a sub-optimal solution. In ESTGWOA, we introduce a newly designed Hybrid Gaussian Mutation based on Differential Evolution.
First, we generate an intermediate solution \(X'_{new}\) by Differential Evolution.
where \(X_i\) represents the current position of the individual; \(X_D\), \(X_E\), \(X_F\) and \(X_G\) represent the positions of the four different individuals randomly selected from the population respectively; F is the factor controlling the scaling of the difference vector, which is calculated as in Eq. (32).
where Rand denotes a random number between 0 and 1.
Then, execute the Hybrid Gaussian Mutation on the generated intermediate solution \(X'_{new}\):
where \(\alpha\) is the weighting coefficient that controls the mixing ratio of the two Gaussian disturbances; \(G_1 \sim N(0,\sigma _1)\) is the first Gaussian-distributed random disturbance with standard deviation \(\sigma _1\); \(G_2 \sim N(0,\sigma _2)\) is the second Gaussian-distributed random disturbance with standard deviation \(\sigma _2\). In this study, we set \(\alpha\)=0.5, \(\sigma _1\)=0.1 and \(\sigma _2\)=0.5.
Then, after applying the mutation operation to the intermediate solution, boundary checks and adjustments are necessary to prevent the population from degenerating. If the solution exceeds the upper or lower bounds, it is set to the corresponding boundary value. The boundary checking method is as follows:
where ub and lb represent the upper and lower bounds of the problem respectively.
Finally, if the fitness of the mutated solution \(X_{new}\) is better than that of the original solution \(X_i\), then use \(X_{new}\) to replace \(X_i\).
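The full pipeline described above (DE step, hybrid Gaussian disturbance, boundary clamping, and greedy selection) can be sketched as follows. Since Eqs. (31)-(34) are not reproduced in this excerpt, the exact DE combination and the form F = Rand are assumptions for illustration.

```python
import numpy as np

def hybrid_gaussian_de_mutation(X, i, fitness, lb, ub,
                                alpha=0.5, sigma1=0.1, sigma2=0.5):
    """Sketch of the Hybrid Gaussian Mutation based on Differential Evolution.

    Assumptions (the paper's Eqs. (31)-(34) are not reproduced here):
    a standard DE difference step over four distinct individuals with
    scaling factor F = Rand in [0, 1], and an additive alpha-weighted
    mix of the two Gaussian disturbances G1 ~ N(0, sigma1), G2 ~ N(0, sigma2).
    """
    N, D = X.shape
    # Select four distinct individuals, all different from individual i
    candidates = [j for j in range(N) if j != i]
    d, e, f, g = np.random.choice(candidates, size=4, replace=False)
    F = np.random.rand()  # scaling factor of the difference vector (assumed F = Rand)
    # Differential-evolution intermediate solution X'_new (assumed DE form)
    X_prime = X[i] + F * (X[d] - X[e]) + F * (X[f] - X[g])
    # Hybrid Gaussian mutation: weighted mix of two disturbance scales
    G1 = np.random.normal(0.0, sigma1, size=D)
    G2 = np.random.normal(0.0, sigma2, size=D)
    X_new = X_prime + alpha * G1 + (1 - alpha) * G2
    # Boundary check: out-of-range components are set to the boundary value
    X_new = np.clip(X_new, lb, ub)
    # Greedy selection: keep the mutant only if it improves the fitness
    if fitness(X_new) < fitness(X[i]):
        X[i] = X_new
    return X[i]
```

The greedy replacement at the end guarantees that the mutation never degrades an individual, so the operator adds diversity without sacrificing the best solutions found so far.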
In the later iterations, WOA typically focuses on utilizing existing solutions for local search to further refine the accuracy of the solution. Introducing this Hybrid Gaussian Mutation based on Differential Evolution can enhance the diversity of the population. By incorporating random disturbances with different scales from Gaussian distributions, it allows the whale population, upon finding a relatively optimal solution, to guide some individuals out of the current region and explore a broader solution space. This improves the algorithm’s global search ability and effectively prevents premature convergence to local optima. Such a mutation operation also helps the algorithm fine-tune the positions of solutions, increasing the final convergence accuracy and preventing the exact position of the optimal solution from being overlooked.
Redesign of convergence factor a
Although the integration of the above strategies improves the performance of WOA, the convergence factor a defined in the traditional WOA no longer meets the specific requirements of ESTGWOA. In the original scheme, a decreases linearly from 2 to 0, so the algorithm’s search behavior changes only gradually throughout the entire iteration process; toward the end of the iterations the reduction is minimal, which slows convergence. Therefore, this paper proposes a new update mechanism for the convergence factor a based on the Sigmoid function, to balance global exploration and local exploitation. The computation is given by Eq. (35). Fig. 8 compares the proposed Sigmoid-based update with the original linear decrement.
where t is the current number of iterations; T is the maximum number of iterations; k is the scaling factor of Sigmoid function.
The Sigmoid function exhibits an ’S-shaped’ curve, characterized by slower variation at both ends and rapid change near the midpoint. When applied to parameter updating, this shape produces a gradual decline during the early iterations, a sharp drop in the middle phase, and a slower reduction toward the end. Such a non-linear adjustment pattern effectively models a more sophisticated transition process, allowing the algorithm to display distinct convergence behaviors at different stages. Specifically, the slow early decrease broadens exploration, the mid-phase acceleration enhances convergence speed, and the later mild reduction preserves exploratory potential while strengthening local refinement.
This dynamic adjustment mechanism achieves a more effective balance between global exploration and local exploitation, offering flexible convergence characteristics, mitigating premature convergence, improving robustness across diverse search scenarios, and enhancing the precision of the final solution. The scaling factor k in Eq. (35) governs the evolution of A and therefore directly influences the exploration-exploitation trade-off. The optimal value of k will be determined experimentally in Experiment 5.
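The behavior described above can be sketched in code. Since Eq. (35) is not reproduced in this excerpt, the sketch assumes a Sigmoid mapping that decreases from approximately 2 at t = 0 to approximately 0 at t = T, centred at the mid-iteration t = T/2, with k scaling the steepness.

```python
import math

def convergence_factor(t, T, k=20):
    """Sketch of the Sigmoid-based convergence factor a (Eq. 35, assumed form).

    Decreases slowly at first, drops sharply near t = T/2, and flattens
    toward the end, matching the S-shaped behavior described in the text.
    """
    return 2.0 / (1.0 + math.exp(k * (t / T - 0.5)))
```

With a large k, a stays near 2 during early exploration, crosses 1 exactly at the mid-phase, and approaches 0 in the late exploitation phase, in contrast to the uniform slope of the original linear decrement.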
Comparison between the proposed convergence factor a and the original a. The light blue straight line represents the original convergence factor a; the blue, green, orange, and red curves represent the proposed convergence factor a for (k=5), (k=10), (k=15) and (k=20), respectively66.
The pseudo-code of the complete ESTGWOA is provided in Algorithm 2 and the flowchart of ESTGWOA is shown in Fig. 9.
ESTGWOA
Flowchart of ESTGWOA.
Time complexity analysis
Time complexity of WOA
Assume that the time complexity of initialization in WOA is O(ND). During each iteration, the time complexity of boundary checking is O(ND), the time complexity of fitness evaluation is O(ND), and the total time complexity of position updates is O(ND). Therefore, the total time complexity per iteration is O(ND). If the algorithm iterates T times, the total time complexity is calculated as:
Total Time Complexity of WOA = Initialization + T * (time complexity per iteration) = O(ND) + T * O(ND) = \(O(T*ND)\)
Time complexity of ESTGWOA
Assume that the time complexity of initialization in ESTGWOA is O(ND). During each iteration, the time complexity of boundary checking is O(ND), the time complexity of fitness evaluation is O(ND), and the total time complexity of position updates is O(ND). Therefore, the total time complexity per iteration is O(ND). If the algorithm iterates T times, the total time complexity is calculated as:
Total Time Complexity of ESTGWOA = Initialization + T * (time complexity per iteration) = O(ND) + T * O(ND) = \(O(T*ND)\)
In summary, the time complexities of ESTGWOA and WOA are the same, both \(O(T*ND)\).
Experiments and analysis
The experimental environment for this study is as follows: Windows 11 (64-bit), Intel(R) Core(TM) i5-8300H CPU @ 2.30GHz processor, 8GB RAM, and Matlab R2023a as the simulation platform. To verify the performance and effectiveness of ESTGWOA, the following five experiments were designed. The algorithm is tested on a set of 23 selected benchmark functions, as shown in Table 3, and engineering design optimization tests are conducted in Chapter 6.
-
Parameter sensitivity analysis experiment The four values of the scaling factor k in Eq. (35) were tested on the selected 23 benchmark functions in CEC2005 to determine the optimal value of k that best balances the exploration and exploitation capabilities of the ESTGWOA67.
-
Ablation study Six improved strategies were sequentially removed from the ESTGWOA, and an ablation study was conducted on the benchmark functions;
-
Qualitative analysis experiment A qualitative analysis experiment was performed by applying ESTGWOA on the 23 benchmark functions to comprehensively evaluate the performance, robustness and exploration-exploitation balance of ESTGWOA in different kinds of optimization problems, by assessing agents’ search behavior, exploration-exploitation ratio and population diversity;
-
Comparative experiment ESTGWOA was compared with other SOTA MAs on the benchmark functions with a dimension of 30;
-
Scalability experiment ESTGWOA was compared with the SOTA MAs on the benchmark functions in higher dimensions of 50 and 100.
Parameter sensitivity analysis experiment
In WOA, the convergence factor a controls the balance between the exploration and exploitation phases of the algorithm. The traditional linear update approach indicates that as the iterations progress, the convergence factor a linearly decreases from 2 to 0, implying a strong exploration phase in the early iterations and a gradually increasing exploitation phase in the later iterations. However, when a more precise adjustment of the exploration-exploitation balance is required, the linearly varying convergence factor a may limit the adaptability of WOA, making it difficult to balance exploration and exploitation effectively. To address this issue, the update mechanism of convergence factor a based on a Sigmoid function was proposed in this study, replacing the linear update approach and further enhancing WOA’s ability to balance exploration and exploitation. However, since the trend of the Sigmoid-based convergence factor a depends on the value of k, the selection of k is crucial in determining WOA’s balance between exploration and exploitation.
The objective of this experiment is to adjust the value of k to influence the trend of convergence factor a, thereby identifying the optimal value of k for the algorithm and enabling a more flexible adjustment of the switch between the exploration and exploitation phases. We selected k=10, k=15, k=20, and k=25 for comparison, with the corresponding trends of convergence factor a shown in Fig. 8. ESTGWOA with each of these values of k was tested on the 23 benchmark functions using the Friedman test, and the results are presented in Table 4. The results indicate that, among the 23 benchmark functions, ESTGWOA with k=20 performed the best overall. Therefore, the scaling factor k=20 was chosen for the Sigmoid-based updating mechanism in this study.
Ablation study
In the ablation study, we removed each of the six improvement strategies from ESTGWOA in turn, as follows:
-
ESTGWOA1: ESTGWOA with the Good Nodes Set Initialization replaced by pseudo-random number initialization;
-
ESTGWOA2: ESTGWOA with the update method of convergence factor a replaced by that of the original WOA;
-
ESTGWOA3: ESTGWOA with the Elite Guided Search strategy replaced by the original WOA search-for-prey strategy;
-
ESTGWOA4: ESTGWOA with the Spiral-based Encircling Prey strategy replaced by the original WOA encircling prey strategy;
-
ESTGWOA5: ESTGWOA with the Triangular-based Spiral Hunting Strategy replaced by the spiral updating strategy of the original WOA;
-
ESTGWOA6: ESTGWOA without the Hybrid Gaussian Mutation based on Differential Evolution.
Furthermore, the number of iterations was set to T=500 and the population size to N=30 for all experiments. Each algorithm was run 30 times on the 23 benchmark functions for the performance analysis. Parametric and non-parametric results of the algorithms are recorded in Table 5, and the iteration curves are shown in Fig. 10.
From Fig. 10, it could be observed that the Good Nodes Set Initialization produced a more uniformly distributed population, which was advantageous for solving complex multi-modal problems such as F21-F23, significantly improving the algorithm’s convergence accuracy and stability. In the Elite Guided Search strategy, as the iterations progressed, the movement of the whale individuals gradually decreased their dependence on the optimal solution, allowing the algorithm to more naturally balance exploration and exploitation throughout the iterations. This improved the convergence speed of WOA on F1-F4 and F9-F11 and increased solution accuracy on problems such as F5, F6 and F12-F15. The Spiral-based Encircling Prey strategy introduced a certain level of periodicity and randomness through the oscillation term, allowing the algorithm to continually escape local optima. The Triangular-based Spiral Hunting Strategy yielded faster convergence speed and higher convergence accuracy on both simple uni-modal problems such as F1-F4 and complex problems like F9-F11, quickly converging to the optimal value. Additionally, the Sigmoid-based updating method for convergence factor a, as proposed in this study, endowed ESTGWOA with a better balance between global exploration and local exploitation, further enhancing solution accuracy; it allowed the algorithm to focus on local exploitation in the later stages of the iterations, continuously searching for better solutions. The novel Hybrid Gaussian Mutation based on Differential Evolution increased population diversity, enabling the algorithm to escape from local optima more effectively. As shown in Table 5, the average Friedman value of ESTGWOA is 3.3290, ranking first, which indicates that the complete ESTGWOA is the optimal configuration.
Iteration curves of the algorithms in ablation study.
Qualitative analysis experiment
In the qualitative analysis experiments, the maximum number of iterations was fixed at \(T = 500\), and the population size was set to \(N = 30\). Under these settings, ESTGWOA was independently executed on 23 benchmark test functions listed in Table 3 to investigate its search history, exploration and exploitation ratio, and population diversity. To enhance interpretability and comparative analysis, the corresponding landscapes and iteration curves for each test function are also illustrated. The qualitative results are summarized in Fig. 11, Fig. 12, and Fig. 13, which mainly consist of the following aspects:
-
Landscapes of the benchmark functions;
-
Search history of the whale population;
-
Exploration and exploitation ratio curves;
-
Population diversity curves;
-
Iterative convergence curves of ESTGWOA.
The search history diagram illustrates the spatial distribution of individuals throughout the optimization process. In this visualization, red markers denote the global best solution, whereas blue trajectories represent the movement paths of individual agents. As observed, ESTGWOA is capable of thoroughly covering the search space during the optimization process.
For uni-modal benchmark problems (e.g., F1-F6), ESTGWOA converges rapidly, enabling individuals to locate optimal solutions within a limited number of iterations, which leads to a swift aggregation of agents around the optimum. In contrast, when addressing more challenging landscapes (e.g., F7-F8, F14, and F17-F23) characterized by numerous local optima, ESTGWOA emphasizes extensive global exploration during the initial phases, followed by intensified local refinement in later stages. Consequently, the majority of search trajectories span a broad region of the solution space while gradually clustering near high-quality solutions.
From the perspective of exploration-exploitation coordination, ESTGWOA exhibits strong adaptive regulation throughout the iterative process. For functions F1-F13, the algorithm maintains a high exploratory tendency at the beginning, then progressively strengthens exploitation, reflecting its robust global optimization capability. Conversely, for functions F17-F23, a stronger exploitation behavior is observed in the early iterations, which then decreases steadily over time, highlighting the algorithm’s effectiveness in balancing global and local search strategies.
Moreover, for highly multi-modal functions such as F5-F8 and F12-F23, the population diversity curve of ESTGWOA fluctuates markedly while sustaining relatively high diversity levels. This behavior indicates that the algorithm effectively prevents premature convergence by avoiding excessive individual aggregation. Overall, ESTGWOA demonstrates superior performance in terms of search coverage, convergence efficiency, and robustness, achieving an effective balance between global exploration and local exploitation.
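Population diversity can be quantified in several ways; the paper’s exact diversity metric is not reproduced in this excerpt, so the sketch below uses one common assumed measure: the mean Euclidean distance of the individuals to the population centroid.

```python
import numpy as np

def population_diversity(X):
    """One common diversity measure (an assumption, not the paper's
    stated metric): mean Euclidean distance to the population centroid.

    X is an (N, D) array holding the positions of N individuals.
    """
    centroid = X.mean(axis=0)
    return float(np.mean(np.linalg.norm(X - centroid, axis=1)))
```

Plotting this quantity per iteration yields diversity curves of the kind shown in Figs. 11-13: high, fluctuating values indicate continued exploration, while a collapse toward zero signals that the population has aggregated.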
Results of qualitative analysis experiment (F1-F8).
Results of qualitative analysis experiment (F9-F16).
Results of qualitative analysis experiment (F17-F23).
Comparative experiment
To further assess the performance of ESTGWOA, we conducted comparative experiments involving several well-known MAs, including the Remora Optimization Algorithm (ROA)11, Zebra Optimization Algorithm (ZOA)12, GWO9, Attraction-Repulsion Optimization Algorithm (AROA)21, HHO10, MWOA68, MSWOA69, IPSO48, and WOA. These methods were tested on the benchmark suite listed in Table 3. Algorithm descriptions are provided in Table 6, while their corresponding parameter settings are summarized in Table 7. For consistency, all algorithms were executed with a fixed number of iterations (T=500) and a population size of N=30. Each approach was independently executed 30 times across the 23 benchmark problems. Performance metrics, including average fitness (Ave), standard deviation (Std), p-values from the Wilcoxon rank-sum test, and Friedman rankings, were collected for comprehensive evaluation. The outcomes of these experiments are presented in Fig. 14, Table 8, and Table 9.
Parametric analysis
The parametric experimental results demonstrated that ESTGWOA outperformed all other algorithms comprehensively, with a significant improvement in overall performance compared to the original WOA. From Fig. 14 and Table 8, it can be seen that, in solving the 23 benchmark functions, ESTGWOA performed the best among all algorithms in terms of both mean and standard deviation. For the majority of functions, ESTGWOA quickly found the optimal solution, exhibiting fast convergence and high accuracy, which demonstrates its good adaptability and robustness in handling various types of problems.
In the performance evaluation of algorithms, the average fitness and standard deviation are commonly used to measure convergence and stability. However, they do not, on their own, indicate whether observed performance differences are meaningful, so relying solely on them for comparison has certain limitations. Therefore, non-parametric tests, such as the Wilcoxon rank-sum test and the Friedman test, are often introduced; these statistical tests provide deeper analysis and reliability verification.
Non-parametric analysis
In this section, non-parametric statistical methods are employed to evaluate the significance of performance differences among the compared algorithms. Specifically, the Wilcoxon rank-sum test is used to perform pairwise comparisons between ESTGWOA and each of the baseline methods, assessing whether the observed differences are statistically significant. As shown in Table 9, ESTGWOA had significant differences compared to ROA, GWO and AROA across all benchmark functions. However, ESTGWOA did not show significant differences compared to ZOA, HHO, MWOA, MSWOA and IPSO on F9-F11, as these algorithms found the optimal value at similar speeds. Similarly, ESTGWOA did not show significant differences from MWOA on F1 and F3, as both algorithms quickly converged to the optimal value.
Additionally, the Friedman test is conducted to provide a global ranking of all algorithms based on their performance across multiple benchmark functions. These non-parametric approaches are suitable for algorithm evaluation since they do not assume normality in the data distributions. Based on the average Friedman values of each algorithm, Table 9 shows that ESTGWOA had the lowest average Friedman value of 1.7949, ranking first and far surpassing the second-place ZOA (4.2152). MSWOA, ROA, HHO, WOA and GWO had average Friedman values of 5.2833, 5.3333, 5.5920, 5.8058 and 5.9406, ranking third, fourth, fifth, sixth and seventh, respectively. IPSO, MWOA and AROA had average Friedman values of 6.0993, 6.4442 and 8.4913, ranking eighth, ninth and tenth.
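The average Friedman values reported above are mean ranks: on each benchmark function the algorithms are ranked from 1 (best) to k, with ties assigned their average rank, and the ranks are averaged per algorithm. A minimal numpy sketch of this mean-rank computation (the significance test itself would additionally use the Friedman chi-square statistic):

```python
import numpy as np

def average_friedman_ranks(scores):
    """Average Friedman rank per algorithm.

    `scores` is an (n_functions, n_algorithms) array of results to be
    minimized. On each function, algorithms are ranked 1 (best) to k,
    tied values receive their average rank, and the per-function ranks
    are averaged column-wise. Lower mean rank means better overall.
    """
    scores = np.asarray(scores, dtype=float)
    ranks = np.empty_like(scores)
    for r, row in enumerate(scores):
        order = np.argsort(row, kind="stable")
        rank_row = np.empty(len(row))
        rank_row[order] = np.arange(1, len(row) + 1)
        # Tied values share the average of their ranks
        for v in np.unique(row):
            mask = row == v
            if mask.sum() > 1:
                rank_row[mask] = rank_row[mask].mean()
        ranks[r] = rank_row
    return ranks.mean(axis=0)
```

For instance, an algorithm that ranks first on every function would obtain a mean rank of 1.0, which is why ESTGWOA’s value of 1.7949 places it far ahead of the other methods.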
In summary, ESTGWOA demonstrated the best overall performance among all algorithms, showing strong competitiveness compared to other excellent MAs.
Iteration curves of different algorithms in comparison experiment.
Scalability experiment
Among the 23 benchmark functions, F1-F13 are scalable in dimensionality, while F14-F23 are limited to fixed dimensions. To assess the adaptability of ESTGWOA to varying problem scales and complexities, the dimensions of F1-F13 were extended to 50 and 100, whereas F14-F23 retained their original dimensionality. Comparative experiments were conducted between ESTGWOA and several other algorithms, including ZOA, ROA, GWO, AROA, HHO, MWOA, MSWOA, IPSO, and WOA, under settings of Dim=50 and Dim=100. The parameters used for each method are detailed in Table 7, with the iteration count set to T=500 and population size to N=30. Each algorithm was independently executed 30 times on the test suite. For performance evaluation, the Wilcoxon rank-sum test and Friedman test were applied, and the outcomes are presented in Table 10.
The experimental results indicate that ESTGWOA exhibited superior scalability. In both 50- and 100-dimensional tasks, it ranked first in the Friedman evaluation, as shown in Table 10. Furthermore, the Wilcoxon test revealed statistically significant differences between ESTGWOA and the compared state-of-the-art (SOTA) methods in both dimensional settings. These results confirm ESTGWOA’s robust optimization ability and highlight its strong competitiveness among advanced MAs.
A comprehensive summary of performance, measured by the overall effectiveness (OE) metric, is shown in Table 11. Here, w, t, and l represent the number of wins, ties, and losses respectively. OE scores were calculated using Eq. (36) as referenced in70.
where N is the total number of tests; L is the total number of losing tests for each algorithm.
The results demonstrate that ESTGWOA, with an OE of 97.10%, outperformed other algorithms on benchmark functions with different dimensions of Dim=30, Dim=50, and Dim=100. It proved to be the most effective algorithm among the competitors.
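Assuming Eq. (36) takes the usual form OE = (N - L) / N * 100%, the metric is straightforward to compute; the example counts below are illustrative only (the actual win/tie/loss counts per algorithm are given in Table 11).

```python
def overall_effectiveness(N, L):
    """Overall effectiveness (OE), assumed form of Eq. (36):
    the percentage of tests in which the algorithm did not lose,
    where N is the total number of tests and L the number of losses."""
    return (N - L) / N * 100.0
```

As an illustrative check, losing 2 out of 69 tests yields an OE of approximately 97.10%, matching the magnitude reported for ESTGWOA (whether these are the actual counts is an assumption).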
Engineering design optimization challenges
In modern engineering, design optimization plays a critical role in enhancing system performance, reducing costs, and satisfying a wide range of constraints. Engineering design problems are often characterized by nonlinearity, multimodality, strong constraints, and complex variable interactions. Traditional optimization techniques such as gradient-based methods and linear programming often struggle to address these challenges due to limitations like the curse of dimensionality, local optima entrapment, and reliance on gradient or problem-specific information.
MAs have gained increasing attention as robust alternatives, owing to their global search ability, independence from derivative information, and flexibility across diverse problem domains. Algorithms such as PSO8, DE2 and GWO9 have been widely applied to solve complex engineering design problems by mimicking natural phenomena or evolutionary processes.
In order to ensure transparency and reproducibility, the mathematical formulations of all seven engineering design problems considered in this study are explicitly provided in this section, including decision variables, objective functions, and nonlinear constraints. These problems are not only well-established benchmarks in the optimization literature but also originate from real-world engineering contexts such as automotive systems (multi-disk clutch brake), energy infrastructure (gas transmission compressor), chemical processes (reactor network), and structural/mechanical design (I-beam, piston lever, truss, and spring). Their inherent nonlinearity, multimodality, and constraint-coupling reflect the complexity faced in real-world industrial optimization tasks, which are difficult to address effectively with traditional gradient-based or mathematical programming methods. This justifies the adoption of metaheuristic algorithms such as ESTGWOA for robust and scalable optimization.
Despite its popularity due to simplicity and ease of implementation, WOA has shown only moderate performance in many engineering optimization tasks59. Its limitations, such as slow convergence and premature stagnation, have restricted its broader applicability in this domain. Therefore, the primary objective of this study is to propose an enhanced variant of WOA, namely ESTGWOA, to improve optimization performance and further explore the untapped potential of WOA in engineering design optimization.
To rigorously validate the proposed algorithm, this chapter investigates a set of representative engineering design problems, including the multi-disk clutch brake, gas transmission compressor, reactor network, I-beam, piston lever, three-bar truss and tension/compression spring design. These case studies span structural design, mechanical systems, and energy optimization, providing a comprehensive benchmark to assess the robustness and effectiveness of ESTGWOA under constrained and high-complexity conditions. To handle the constraints inherent in these problems, the penalty function method is adopted, transforming constrained optimization problems into unconstrained ones by incorporating constraint violations into the objective function.
All engineering design optimization problems investigated in this study are constrained optimization problems involving nonlinear inequality constraints. To handle these constraints in a unified and implementation-consistent manner, a static penalty function approach is adopted. Specifically, constraint violations are incorporated into the objective function through a quadratic penalty term.
The penalty function is defined as:
where n indicates the number of inequality constraints; \(g_i({\textbf{x}}) \le 0\) denotes the \(i^{th}\) inequality constraint. The penalized fitness function is then formulated as:
where \(f({\textbf{x}})\) represents the original objective function.
This penalty mechanism assigns increasingly large fitness values to infeasible solutions, thereby discouraging constraint violations and guiding the population toward the feasible region during the optimization process. The same constraint handling strategy is consistently applied to all engineering design problems to ensure fairness and comparability of the optimization results.
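The static quadratic-penalty scheme described above can be sketched as follows; the penalty coefficient rho is an assumed value, as the paper does not state it in this excerpt.

```python
def penalized_fitness(f, constraints, x, rho=1e6):
    """Static quadratic-penalty fitness, as described in the text.

    `constraints` is a list of functions g_i with g_i(x) <= 0 when the
    constraint is satisfied. Violations are squared and summed, then
    added to the objective scaled by rho (an assumed coefficient).
    """
    violation = sum(max(0.0, g(x)) ** 2 for g in constraints)
    return f(x) + rho * violation
```

Feasible solutions are evaluated by the original objective alone, while infeasible ones receive a rapidly growing surcharge, which steers the population toward the feasible region without requiring any gradient information.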
Multi-disk clutch brake
The multi-disk clutch brake (MDCB), as shown in Fig. 15, is a critical mechanical component widely used in automotive transmissions and industrial machinery to transmit torque and control rotational motion. The design of an MDCB involves optimizing several geometric and operational parameters to achieve desired performance while minimizing size, weight, or material cost under multiple constraints.
The structure of a multiple-disc clutch brake.
In this study, the MDCB design problem is formulated as a constrained optimization problem with five decision variables:
-
\(X_1\): inner radius of the friction disks;
-
\(X_2\): outer radius;
-
\(X_3\): thickness of the disks;
-
\(X_4\): actuating force applied to engage the clutch;
-
\(X_5\): number of friction surfaces.
The objective is typically to minimize the total mass or cost of the clutch brake system while ensuring sufficient torque transmission capacity and compliance with geometric and physical constraints. Due to the nonlinear and constrained nature of the problem, MAs offer an effective solution approach. The objective function for the MDCB design problem can be described as:
Variable:
Minimize:
Subject to:
Where:
Variable range:
In this study, we conducted comparative tests between ESTGWOA and ZOA, ROA, GWO, AROA, HHO, MWOA, MSWOA, IPSO and WOA. The configuration parameters for all algorithms are detailed in Table 7. The iteration count was uniformly set at T=500, with a population size of N=30. To ensure reliability, each algorithm was executed independently 30 times on the multi-disk clutch brake design problem. The performance metrics, including Ave and Std, were recorded for evaluation. The experimental results are shown in Fig. 16 and Table 12. As shown in Table 12, ESTGWOA demonstrated significantly superior optimization accuracy and stability compared to other algorithms. This indicated that ESTGWOA had a substantial advantage in handling such problems.
Iteration curves of the algorithms in MDCB design.
Gas transmission compressor
The gas transmission compressor (GTC), as shown in Fig. 17, plays a vital role in energy infrastructure, responsible for transporting natural gas from production facilities to end users through a network of pipelines and compressor stations35. The design of an efficient gas transmission system requires a careful balance between construction costs, operational efficiency, and safety constraints, as well as compliance with physical and flow-related limitations.
The structure of a gas transmission compressor system.
In this study, the gas transmission system design problem is formulated as a constrained nonlinear optimization task involving four decision variables:
-
\(x_1\): the distance between compressor stations;
-
\(x_2\): the compression ratio, defined as the ratio of inlet to outlet pressure at the compressor;
-
\(x_3\): the inner diameter of the pipeline;
-
\(x_4\): the gas velocity at the output side of the pipeline.
The main objective is to minimize the total annual cost, which typically includes pipeline construction, compressor installation, and operational expenses. The optimization must also satisfy a set of nonlinear constraints related to gas dynamics, pressure loss, pipe capacity, and speed limits. Due to the highly nonlinear and coupled nature of the design parameters, metaheuristic optimization algorithms offer a practical and effective approach for this complex engineering problem. The objective function for the GTC design problem can be described as:
Variable:
Minimize:
Subject to:
Where:
Variable range:
In this study, we conducted comparative tests between ESTGWOA and ZOA, ROA, GWO, AROA, HHO, MWOA, MSWOA, IPSO and WOA. The configuration parameters for all algorithms are detailed in Table 7. The iteration count was uniformly set at T=500, with a population size of N=30. To ensure reliability, each algorithm was executed independently 30 times on the gas transmission compressor design problem. The performance metrics, including Ave and Std, were recorded for evaluation. The experimental results are shown in Fig. 18 and Table 12. As shown in Table 12, ESTGWOA demonstrated significantly superior optimization accuracy and stability compared to other algorithms. This indicated that ESTGWOA had a substantial advantage in handling such problems.
Iteration curves of the algorithms in GTC design.
Reactor network
The reactor network (RNW) design problem, as shown in Fig. 19, is a critical task in chemical process engineering, aiming to determine the optimal arrangement and operating conditions of chemical reactors to enhance overall process efficiency36. It involves strategic decisions regarding the flow distribution, concentration control, and sequencing of reactors to maximize the final product yield or concentration, while satisfying chemical and physical constraints.
The structure of a reactor network.
In this study, the reactor network is modeled with four key decision variables, each representing concentrations at various stages of the reaction process:
-
\(x_1\): reactant concentration entering the first reactor;
-
\(x_2\): product concentration leaving the first reactor;
-
\(x_3\): reactant concentration entering the second reactor;
-
\(x_4\): final product concentration at the output.
The optimization focuses on improving reaction performance through fine-tuning reactor configurations and operating parameters, with consideration for reaction equilibrium, mass conservation, and conversion efficiency. Several nonlinear constraints govern the relationships among stages to ensure physical feasibility and chemical consistency. Given the coupled and nonlinear nature of the system, MAs offer a robust approach to solve this class of problems efficiently.
The reactor network optimization problem is subject to several nonlinear constraints to ensure chemical feasibility and system integrity. Specifically, the constraints \(h_1(x)\) through \(h_4(x)\) are defined as follows:
- \(h_1(x)\): ensures the balance of reactants and products within the first reactor, reflecting stoichiometric consistency;
- \(h_2(x)\): enforces mass conservation between the outlet of the first reactor and the inlet of the second reactor;
- \(h_3(x)\): maintains equilibrium in the reactant concentration levels between the two reactors, accounting for reaction kinetics and system continuity;
- \(h_4(x)\): ensures overall mass conservation between intermediate and final products, preserving system closure and yield accuracy.
These constraints are essential for maintaining realistic chemical behavior in the reactor network and preventing physically infeasible or chemically inconsistent configurations. The objective function for the RNW design problem can be described as:
Variable:
Minimize:
Subject to:
Where:
Variable range:
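Because the equality constraints \(h_1(x)\) through \(h_4(x)\) must hold for a candidate to be chemically meaningful, metaheuristics typically fold them into the fitness via a penalty. A minimal static-penalty sketch follows; the coefficient `mu`, tolerance `eps`, and the toy mass-balance constraint are illustrative assumptions, not the paper's settings:

```python
def penalized_fitness(f, x, h_constraints, g_constraints=(), mu=1e6, eps=1e-4):
    """Static-penalty fitness: equalities h(x)=0 are relaxed to |h(x)| <= eps,
    inequalities g(x) <= 0 are penalized directly; squared violations are
    added to the raw objective so infeasible points rank worse."""
    violation = sum(max(0.0, abs(h(x)) - eps) ** 2 for h in h_constraints)
    violation += sum(max(0.0, g(x)) ** 2 for g in g_constraints)
    return f(x) + mu * violation

# Toy illustration (hypothetical, not the RNW equations):
f = lambda x: -x[3]                # maximize final product concentration
h1 = lambda x: x[0] + x[1] - 1.0   # stand-in mass balance
feasible = penalized_fitness(f, [0.4, 0.6, 0.5, 0.9], [h1])
infeasible = penalized_fitness(f, [0.4, 0.9, 0.5, 0.9], [h1])
```

With this wrapping, an unconstrained swarm update can be applied unchanged while infeasible configurations are driven out of the population.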
In this study, we conducted comparative tests between ESTGWOA and ZOA, ROA, GWO, AROA, HHO, MWOA, MSWOA, IPSO and WOA. The configuration parameters for all algorithms are detailed in Table 7. The iteration count was uniformly set at T=500, with a population size of N=30. To ensure reliability, each algorithm was executed independently 30 times on the reactor network design problem. The performance metrics, including Ave and Std, were recorded for evaluation. The experimental results are shown in Fig. 20 and Table 12. As shown in Table 12, ESTGWOA demonstrated significantly superior optimization accuracy and stability compared to other algorithms. This indicated that ESTGWOA had a substantial advantage in handling such problems.
Iteration curves of the algorithms in RNW design.
I-beam
The I-beam (IB) design optimization problem, as shown in Fig. 21, is a constrained structural design task aimed at minimizing material usage while ensuring mechanical strength35. It involves four decision variables: the web height (\(x_1\)), flange width (\(x_2\)), web thickness (\(x_3\)), and flange thickness (\(x_4\)). Two nonlinear constraints, \(g_1(x)\) and \(g_2(x)\), are imposed to ensure the design meets stress and deflection requirements under loading conditions. The problem reflects typical scenarios in civil and mechanical engineering where lightweight and strength-efficient designs are essential.
The structure of an I-beam.
The objective function for the I-beam design problem can be described as:
Variable:
Maximize:
Subject to:
Where:
Variable range:
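One widely used formulation of the I-beam task in the literature minimizes the beam's vertical deflection under a fixed load subject to a cross-sectional area limit; since the paper's own equations appear in its formula block, the version below is stated as an assumption rather than a reproduction of them:

```python
def ibeam_deflection(x):
    """Vertical deflection objective for the classic I-beam design task
    (common literature formulation; an assumption here).
    x = (x1, x2, x3, x4) = (height, flange width, web thickness,
    flange thickness)."""
    x1, x2, x3, x4 = x
    # Moment of inertia of the I-shaped cross-section
    inertia = (x3 * (x1 - 2 * x4) ** 3) / 12 \
        + (x2 * x4 ** 3) / 6 \
        + 2 * x2 * x4 * ((x1 - x4) / 2) ** 2
    return 5000.0 / inertia

def ibeam_area_constraint(x):
    """g(x) <= 0: cross-sectional area limited to 300 cm^2."""
    x1, x2, x3, x4 = x
    return 2 * x2 * x4 + x3 * (x1 - 2 * x4) - 300.0
```

A candidate is accepted only when the area constraint evaluates non-positive, which is how the algorithms compared in Fig. 22 screen their populations.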
Iteration curves of the algorithms in I-beam design.
In this study, we conducted comparative tests between ESTGWOA and ZOA, ROA, GWO, AROA, HHO, MWOA, MSWOA, IPSO and WOA. The configuration parameters for all algorithms are detailed in Table 7. The iteration count was uniformly set at T=500, with a population size of N=30. To ensure reliability, each algorithm was executed independently 30 times on the I-beam design problem. The performance metrics, including Ave and Std, were recorded for evaluation. The experimental results are shown in Fig. 22 and Table 12. As shown in Table 12, ESTGWOA demonstrated significantly superior optimization accuracy and stability compared to other algorithms. This indicated that ESTGWOA had a substantial advantage in handling such problems.
Piston lever
The piston lever (PL) optimization problem, as shown in Fig. 23, focuses on designing a mechanical lever system that transmits force efficiently65. The problem includes four design variables: \(x_1\) and \(x_2\) represent the primary length and width dimensions that define the overall geometry of the lever; \(x_3\) denotes the radius of the cross-section at the force application point, which directly affects the stress distribution; and \(x_4\) relates to the geometry around the support point. The objective is typically to minimize weight or maximize load-bearing efficiency while satisfying geometric and mechanical constraints.
Structure of a piston lever.
The objective function for the piston lever design problem can be described as:
Variable:
Minimize:
Subject to:
Variable range:
Where:
In this study, we conducted comparative tests between ESTGWOA and ZOA, ROA, GWO, AROA, HHO, MWOA, MSWOA, IPSO and WOA. The configuration parameters for all algorithms are detailed in Table 7. The iteration count was uniformly set at T=500, with a population size of N=30. To ensure reliability, each algorithm was executed independently 30 times on the piston lever design problem. The performance metrics, including Ave and Std, were recorded for evaluation. The experimental results are shown in Fig. 24 and Table 12. As shown in Table 12, ESTGWOA demonstrated significantly superior optimization accuracy and stability on the piston lever design problem compared to other algorithms. This indicated that ESTGWOA had a substantial advantage in handling such problems.
Iteration curves of the algorithms in Piston Lever design.
Three-bar truss
The three-bar truss (TBT) design problem, as shown in Fig. 25, is a well-known benchmark in structural optimization, involving two continuous decision variables: the cross-sectional areas \(x_1\) and \(x_2\)36. The objective is to minimize the total volume of the truss under loading conditions. The problem includes a nonlinear objective function and three nonlinear inequality constraints that ensure stress and displacement limits are not violated. This compact yet challenging design problem serves as a standard test for evaluating the capabilities of MAs in handling nonlinearity and constraints.
The structure of a three-bar truss.
The objective function for the three-bar truss design problem can be described as follows:
Variable:
Minimize:
Subject to:
Where:
Variable range:
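The standard benchmark formulation of the three-bar truss (with load \(P = 2\) kN/cm\(^2\) and stress limit \(\sigma = 2\) kN/cm\(^2\)) is compact enough to sketch directly; it is given here as the common literature version, since the paper's own equations appear in its formula block:

```python
import math

def tbt_volume(x, L=100.0):
    """Total material volume of the three-bar truss; x = (x1, x2)
    are the two cross-sectional areas (standard benchmark form)."""
    x1, x2 = x
    return (2.0 * math.sqrt(2.0) * x1 + x2) * L

def tbt_constraints(x, P=2.0, sigma=2.0):
    """Three nonlinear stress constraints g_i(x) <= 0."""
    x1, x2 = x
    denom = math.sqrt(2.0) * x1 ** 2 + 2.0 * x1 * x2
    g1 = (math.sqrt(2.0) * x1 + x2) / denom * P - sigma
    g2 = x2 / denom * P - sigma
    g3 = 1.0 / (x1 + math.sqrt(2.0) * x2) * P - sigma
    return (g1, g2, g3)
```

At the commonly reported optimum near \(x_1 \approx 0.7887\), \(x_2 \approx 0.4082\), the first stress constraint is active, which is what makes this small problem a sharp test of constraint handling.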
In this study, we conducted comparative tests between ESTGWOA and ZOA, ROA, GWO, AROA, HHO, MWOA, MSWOA, IPSO and WOA. The configuration parameters for all algorithms are detailed in Table 7. The iteration count was uniformly set at T=500, with a population size of N=30. To ensure reliability, each algorithm was executed independently 30 times on the three-bar truss design problem. The performance metrics, including Ave and Std, were recorded for evaluation. The experimental results are shown in Fig. 26 and Table 12. As shown in Table 12, ESTGWOA demonstrated significantly superior optimization accuracy and stability compared to other algorithms. This indicated that ESTGWOA had a substantial advantage in handling such problems.
Iteration curves of the algorithms in Three-bar Truss design.
Tension/compression spring
The tension/compression spring (TCS) design problem, as shown in Fig. 27, is a classical constrained engineering task widely used in mechanical design35,36,66. It involves three decision variables: the wire diameter (d), the mean coil diameter (D), and the number of active coils (N).
- d: the wire diameter;
- D: the mean coil diameter;
- N: the number of active coils.
The structure of a tension/compression spring.
The objective is to minimize the spring’s weight while maintaining structural integrity and mechanical performance. Four nonlinear constraints, denoted as \(g_1(x)\) to \(g_4(x)\), are used to enforce limits on shear stress, free length, deflection, and surge frequency. These constraints ensure that the spring remains safe and functional under operational conditions. The objective function for the tension/compression spring design problem can be described as:
Variable:
Minimize:
Subject to:
Where:
Variable range:
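The standard benchmark form of this problem (stated here as the common literature version, since the paper's own equations appear in its formula block) minimizes \((N+2)Dd^2\) under the four constraints sketched below:

```python
def tcs_weight(x):
    """Spring weight objective (standard benchmark form).
    x = (d, D, N) = wire diameter, mean coil diameter, active coils."""
    d, D, N = x
    return (N + 2.0) * D * d ** 2

def tcs_constraints(x):
    """Four nonlinear constraints g1..g4 (<= 0): deflection, shear
    stress, surge frequency, and outer-diameter limit."""
    d, D, N = x
    g1 = 1.0 - (D ** 3 * N) / (71785.0 * d ** 4)
    g2 = (4.0 * D ** 2 - d * D) / (12566.0 * (D * d ** 3 - d ** 4)) \
        + 1.0 / (5108.0 * d ** 2) - 1.0
    g3 = 1.0 - 140.45 * d / (D ** 2 * N)
    g4 = (D + d) / 1.5 - 1.0
    return (g1, g2, g3, g4)
```

The commonly reported best design, roughly \(d \approx 0.0517\), \(D \approx 0.3567\), \(N \approx 11.29\), leaves the deflection and stress constraints nearly active, so small perturbations in \(d\) easily break feasibility.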
Iteration curves of the algorithms in tension/compression spring design.
In this study, we conducted comparative tests between ESTGWOA and ZOA, ROA, GWO, AROA, HHO, MWOA, MSWOA, IPSO and WOA. The configuration parameters for all algorithms are detailed in Table 7. The iteration count was uniformly set at T=500, with a population size of N=30. To ensure reliability, each algorithm was executed independently 30 times on the tension/compression spring design problem. The performance metrics, including Ave and Std, were recorded for evaluation. The experimental results are presented in Fig. 28 and Table 12. As shown in Table 12, ESTGWOA demonstrated significantly superior optimization accuracy and stability compared to other algorithms. This indicated that ESTGWOA had a substantial advantage in handling such problems.
It is worth noting that although classical optimization techniques (e.g., nonlinear programming, penalty methods) have been applied to these problems in prior studies, their performance is often limited due to the highly non-convex and constrained search spaces. In contrast, ESTGWOA demonstrates robust optimization accuracy and stability across all cases, confirming its suitability for complex engineering applications.
Discussion
ESTGWOA effectively addresses several limitations of the canonical WOA, including premature convergence, low population diversity in later iterations, slow convergence speed, low convergence accuracy, and an imbalance between exploration and exploitation. As demonstrated by the experimental results, ESTGWOA exhibits strong competitiveness when compared with other state-of-the-art optimization algorithms.
From a theoretical perspective, the time complexity analysis presented in Section 4.7 shows that ESTGWOA and WOA share the same asymptotic time complexity, i.e., \(O(T \cdot N \cdot D)\). However, in practice, ESTGWOA generally requires slightly more computational time than the canonical WOA due to the introduction of the Hybrid Gaussian Mutation based on Differential Evolution strategy, which increases the number of arithmetic operations and fitness evaluations per iteration. This additional computational cost represents a trade-off for improved convergence accuracy, robustness, and population diversity.
Consequently, while ESTGWOA may not be the most suitable choice for large-scale real-time optimization tasks with strict time constraints, it demonstrates strong potential for solving complex real-world optimization problems. Its application scope includes mechanical design optimization, path planning, production scheduling, and neural network parameter tuning, where high-dimensional search spaces and practical constraints are commonly encountered. In addition, with appropriate encoding schemes and solution representation, ESTGWOA can be extended to address discrete and combinatorial optimization problems, such as feature selection, job-shop scheduling, and routing problems. ESTGWOA is particularly well suited for problems requiring a balanced exploration-exploitation mechanism, such as three-dimensional traveling salesman problems, manufacturing process simulation, and multi-objective engineering design. Future work will focus on deploying ESTGWOA in practical industrial environments, exploring discrete and combinatorial variants, and further improving its computational efficiency under dynamic and uncertain conditions.
Abbreviations and their corresponding full names used in this paper are listed in Table 13.
Conclusion
ESTGWOA introduced Good Nodes Set Initialization to generate a uniformly distributed whale population, and employed newly-designed strategies including the Elite-guided Searching strategy, Spiral-based Encircling Prey strategy, Triangular-based Spiral Hunting strategy, and Hybrid Gaussian Mutation based on Differential Evolution. ESTGWOA adopted a newly-designed convergence factor \(a\) to balance global exploration and local exploitation. Experimental results demonstrated that ESTGWOA effectively balances exploration and exploitation during the optimization process, achieving high convergence accuracy and fast convergence while maintaining population diversity.
To validate the effectiveness of the improvements in ESTGWOA, three distinct tests were conducted: a parameter sensitivity analysis, an ablation study, and a comparative experiment with SOTA algorithms. ESTGWOA was then applied to seven engineering design optimization challenges, confirming its feasibility in engineering design optimization. ESTGWOA thus provides a novel approach for applying WOA to engineering design.
Data availability
To support the experimental study in this paper, we used the Standard Benchmark Functions. The relevant data has been uploaded to Figshare, and the link for the specific modeling of Standard Benchmark Functions is: https://doi.org/10.6084/m9.figshare.28440863, only for reference and further analysis by the readers. The specific modeling of engineering optimization challenges in this study has been uploaded to Figshare, and the link is: https://figshare.com/articles/thesis/engineering_m/28673777?file=53256305, only for reference and further analysis by the readers.
Code availability
To support the experimental study in this paper, we used the Standard Benchmark Functions. The relevant data has been uploaded to Figshare, and the link for the specific modeling of Standard Benchmark Functions is: https://doi.org/10.6084/m9.figshare.28440863, only for reference and further analysis by the readers67. The specific modeling of engineering optimization challenges in this study has been uploaded to Figshare, and the link is: https://doi.org/10.6084/m9.figshare.28673777.v1, only for reference and further analysis by the readers.
References
Holland, J. H. Genetic algorithms. Sci. Am. 267, 66–73 (1992).
Das, S. & Suganthan, P. N. Differential evolution: A survey of the state-of-the-art. IEEE Trans. Evol. Comput. 15, 4–31 (2010).
Xue, J. & Shen, B. A novel swarm intelligence optimization approach: Sparrow search algorithm. Syst. Sci. Control Eng. 8, 22–34 (2020).
Gao, Y., Wang, J. & Li, C. Escape after love: Philoponella prominens optimizer and its application to 3d path planning. Clust. Comput. 28, 81 (2025).
Gámez, M. G. M. & Vázquez, H. P. A novel swarm optimization algorithm based on hive construction by tetragonula carbonaria builder bees. Mathematics 13, 2721 (2025).
Agushaka, J. O. et al. Greater cane rat algorithm (gcra): A nature-inspired metaheuristic for optimization problems. Heliyon 10, e31629 (2024).
Dorigo, M., Birattari, M. & Stutzle, T. Ant colony optimization. IEEE Comput. Intell. Mag. 1, 28–39 (2007).
Kennedy, J. & Eberhart, R. Particle swarm optimization. In Proceedings of ICNN’95-International Conference on Neural Networks vol. 4, 1942–1948 (IEEE, 1995).
Mirjalili, S., Mirjalili, S. M. & Lewis, A. Grey wolf optimizer. Adv. Eng. Softw. 69, 46–61 (2014).
Heidari, A. A. et al. Harris hawks optimization: Algorithm and applications. Futur. Gener. Comput. Syst. 97, 849–872 (2019).
Jia, H., Peng, X. & Lang, C. Remora optimization algorithm. Expert Syst. Appl. 185, 115665 (2021).
Trojovská, E., Dehghani, M. & Trojovskỳ, P. Zebra optimization algorithm: A new bio-inspired optimization algorithm for solving optimization problems. IEEE Access 10, 49445–49473 (2022).
Fu, S. et al. Red-billed blue magpie optimizer: A novel metaheuristic algorithm for 2d/3d uav path planning and engineering design problems. Artif. Intell. Rev. 57, 134 (2024).
Hamadneh, T. et al. Salamander optimization algorithm: A new bio-inspired approach for solving optimization problems. Int. J. Intell. Eng. Syst 18, 550–561 (2025).
Wang, X. Bighorn sheep optimization algorithm: A novel and efficient approach for wireless sensor network coverage optimization. Phys. Scr. https://doi.org/10.1088/1402-4896/ade378 (2025).
Alibabaei Shahraki, M. Cloud drift optimization algorithm as a nature-inspired metaheuristic. Discov. Comput. 28, 173 (2025).
Hatamlou, A. Black hole: A new heuristic optimization approach for data clustering. Inf. Sci. 222, 175–184 (2013).
Mirjalili, S., Mirjalili, S. M. & Hatamlou, A. Multi-verse optimizer: A nature-inspired algorithm for global optimization. Neural Comput. Appl. 27, 495–513 (2016).
Hashim, F. A., Houssein, E. H., Mabrouk, M. S., Al-Atabany, W. & Mirjalili, S. Henry gas solubility optimization: A novel physics-based algorithm. Futur. Gener. Comput. Syst. 101, 646–667 (2019).
Van Laarhoven, P. J. & Aarts, E. H. Simulated annealing. In Simulated Annealing: Theory and Applications 7–15 (Springer, 1987).
Cymerys, K. & Oszust, M. Attraction-repulsion optimization algorithm for global optimization problems. Swarm Evol. Comput. 84, 101459 (2024).
Zraiqat, A. et al. Thunderstorm and cloud algorithm: A novel parameter-free metaheuristic inspired by atmospheric dynamics for complex optimization tasks. Int. J. Intell. Eng. Syst. 18, 153–167 (2025).
Hussein, N. K. et al. Schrödinger optimizer: A quantum duality-driven metaheuristic for stochastic optimization and engineering challenges. Knowl.-Based Syst. 328, 114273 (2025).
Zraiqat, A. et al. Library and readers algorithm (lra): A novel human-inspired parameter-free metaheuristic for efficient global optimization. Int. J. Intell. Eng. Syst. 18, 526–540 (2025).
Zraiqat, A. et al. Driver and navigator algorithm: A novel parameter-free human-inspired metaheuristic for efficient global optimization’. Int. J. Intell. Eng. Syst. 18, 555–569 (2025).
Zraiqat, A. et al. Court and judge algorithm (cja): A novel human-inspired metaheuristic for engineering optimization. Int. J. Intell. Eng. Syst. 18, 60–72 (2025).
Zraiqat, A. et al. Community-based crisis management algorithm (ccma): A novel parameter-free metaheuristic for complex constrained optimization. Int. J. Intell. Eng. Syst. 18, 593–606 (2025).
Rao, R. V., Savsani, V. J. & Vakharia, D. P. Teaching-learning-based optimization: A novel method for constrained mechanical design optimization problems. Comput. Aided Des. 43, 303–315 (2011).
Moosavian, N. & Roodsari, B. K. Soccer league competition algorithm: A novel meta-heuristic algorithm for optimal design of water distribution networks. Swarm Evol. Comput. 17, 14–24 (2014).
Zraiqat, A. et al. Psychologist algorithm: A human-inspired metaheuristic for solving complex constrained optimization problems. Int. J. Intell. Eng. Syst. 18, 124–137 (2025).
Hamadneh, T. et al. Perfumer optimization algorithm: A novel human-inspired metaheuristic for solving optimization tasks. Int. J. Intell. Eng. Syst. 18, 633–643 (2025).
Wei, J. et al. An enhanced whale optimization algorithm with log-normal distribution for optimizing coverage of wireless sensor networks. arXiv preprint arXiv:2511.15970 (2025).
Wang, X. & Yao, L. Cape lynx optimizer: A novel metaheuristic algorithm for enhancing wireless sensor network coverage. Measurement 256, 118361 (2025).
Paredes, M., Sartor, M. & Masclet, C. An optimization process for extension spring design. Comput. Methods Appl. Mech. Eng. 191, 783–797 (2001).
Wei, J. et al. Lsewoa: An enhanced whale optimization algorithm with multi-strategy for numerical and engineering design optimization problems. Sensors 25, 2054 (2025).
Wei, J., Gu, Y., Lu, B. & Cheong, N. Rwoa: A novel enhanced whale optimization algorithm with multi-strategy for numerical optimization and engineering design problems. PLoS ONE 20, e0320913 (2025).
Dey, V. et al. Optimization of bead geometry in electron beam welding using a genetic algorithm. J. Mater. Process. Technol. 209, 1151–1157 (2009).
Gu, Y., Wei, J. & Cheong, N. Credit card fraud detection based on minikm-svmsmote-xgboost model. In Proceedings of the 2024 8th International Conference on Big Data and Internet of Things 252–258 (2024).
Agrawal, U. K. & Panda, N. Quantum-inspired adaptive mutation operator enabled pso (qamo-pso) for parallel optimization and tailoring parameters of kolmogorov-arnold network. J. Supercomput. 81, 1310 (2025).
Agrawal, U. K., Panda, N., Tejani, G. G. & Mousavirad, S. J. Improved salp swarm algorithm-driven deep cnn for brain tumor analysis. Sci. Rep. 15, 24645 (2025).
Wei, J. et al. Nawoa-xgboost: A novel model for early prediction of academic potential in computer science students. arXiv preprint arXiv:2512.04751 (2025).
Qaraad, M. et al. Comparing ssaleo as a scalable large scale global optimization algorithm to high-performance algorithms for real-world constrained optimization benchmark. IEEE Access 10, 95658–95700 (2022).
Mahapatra, A. K., Panda, N. & Pattanayak, B. K. Quantized orthogonal experimentation ssa (qox-ssa): A hybrid technique for feature selection (fs) and neural network training. Arab. J. Sci. Eng. 50, 1025–1056 (2025).
Mahapatra, A. K., Panda, N. & Pattanayak, B. K. Adaptive dimensional search-based orthogonal experimentation ssa (adox-ssa) for training rbf neural network and optimal feature selection. J. Supercomput. 81, 212 (2025).
Mahapatra, A. K., Panda, N., Mahapatra, M., Jena, T. & Mohanty, A. K. A fast-flying particle swarm optimization for resolving constrained optimization and feature selection problems. Clust. Comput. 28, 91 (2025).
Levi, Y., Bekhor, S. & Rosenfeld, Y. A multi-objective optimization model for urban planning: The case of a very large floating structure. Transp. Res. Part C 98, 85–100 (2019).
Wei, J. et al. Ahrrt: An enhanced rapidly-exploring random tree algorithm with heuristic search for uav urban path planning. Preprints:2025111805 (2025).
Wei, J., Gu, Y., Law, K. E. & Cheong, N. Adaptive position updating particle swarm optimization for uav path planning. In 2024 22nd International Symposium on Modeling and Optimization in Mobile, Ad Hoc, and Wireless Networks (WiOpt) 124–131 (IEEE, 2024).
Xie, Z. et al. Research on uav applications in public administration: Based on an improved rrt algorithm. arXiv preprint arXiv:2508.14096 (2025).
Lu, B. et al. Mrbmo: An enhanced red-billed blue magpie optimization algorithm for solving numerical optimization challenges. Symmetry 17, 1295 (2025).
Käschel, J., Teich, T. & Zacher, B. Real-time dynamic shop floor scheduling using evolutionary algorithms. Int. J. Prod. Econ. 79, 113–120 (2002).
Mahmood, B. S., Hussein, N. K., Aljohani, M. & Qaraad, M. A modified gradient search rule based on the quasi-newton method and a new local search technique to improve the gradient-based algorithm: solar photovoltaic parameter extraction. Mathematics 11, 4200 (2023).
Qaraad, M. et al. Photovoltaic parameter estimation using improved moth flame algorithms with local escape operators. Comput. Electr. Eng. 106, 108603 (2023).
Qaraad, M. et al. Quadratic interpolation and a new local search approach to improve particle swarm optimization: Solar photovoltaic parameter estimation. Expert Syst. Appl. 236, 121417 (2024).
Gao, Y. & Xie, S.-L. Chaos particle swarm optimization algorithm. Comput. Sci. 31, 13–15 (2004).
Qu, C., He, W., Peng, X. & Peng, X. Harris hawks optimization with information exchange. Appl. Math. Model. 84, 52–75 (2020).
Jia, H. et al. Improved snow ablation optimizer with heat transfer and condensation strategy for global optimization problem. J. Comput. Des. Eng. 10, 2177–2199 (2023).
Huang, J. & Hu, H. Hybrid beluga whale optimization algorithm with multi-strategy for functions and engineering optimization problems. J. Big Data 11, 3 (2024).
Mirjalili, S. & Lewis, A. The whale optimization algorithm. Adv. Eng. Softw. 95, 51–67 (2016).
Liu, M., Yao, X. & Li, Y. Hybrid whale optimization algorithm enhanced with lévy flight and differential evolution for job shop scheduling problems. Appl. Soft Comput. 87, 105954 (2020).
Chakraborty, S., Sharma, S., Saha, A. K. & Saha, A. A novel improved whale optimization algorithm to solve numerical optimization and real-world applications. Artif. Intell. Rev. 55, 4605–4716 (2022).
Li, C. et al. Evolving the whale optimization algorithm: The development and analysis of miswoa. Biomimetics 9, 639 (2024).
Gu, Y. et al. Gwoa: A multi-strategy enhanced whale optimization algorithm for engineering design optimization. PLoS ONE 20, e0322494 (2025).
Xiao, C., Cai, Z. & Wang, Y. A good nodes set evolution strategy for constrained optimization. In 2007 IEEE Congress on Evolutionary Computation 943–950 (IEEE, 2007).
Wei, J. et al. Tswoa: An enhanced woa with triangular walk and spiral flight for engineering design optimization. In 2025 8th International Conference on Advanced Algorithms and Control Engineering (ICAACE) 186–194 (IEEE, 2025).
Wei, J. et al. Lswoa: An enhanced whale optimization algorithm with levy flight and spiral flight for numerical and engineering design optimization problems. PLoS ONE 20, e0322058 (2025).
Suganthan, P. N. et al. Problem definitions and evaluation criteria for the cec 2005 special session on real-parameter optimization. KanGAL Rep. 2005005, 2005 (2005).
Anitha, J., Pandian, S. I. A. & Agnes, S. A. An efficient multilevel color image thresholding based on modified whale optimization algorithm. Expert Syst. Appl. 178, 115003 (2021).
Yang, W. et al. A multi-strategy whale optimization algorithm and its application. Eng. Appl. Artif. Intell. 108, 104558 (2022).
Nadimi-Shahraki, M. H., Taghian, S., Mirjalili, S. & Faris, H. Mtde: An effective multi-trial vector-based differential evolution algorithm and its applications for engineering design problems. Appl. Soft Comput. 97, 106761 (2020).
Acknowledgements
The support provided by Macao Polytechnic University (MPU Grant nos: RP/FCA-03/2022; RP/FCA-04/2022; RP/FCA-06/2022; RP/FCA-01/2025) and the Macao Science and Technology Development Fund (FDCT Grant nos: 0044/2023/ITP2; FDCT-MOST 0018/2025/AMJ) enabled us to conduct data collection, analysis, and interpretation, as well as cover expenses related to research materials and participant recruitment. MPU's and FDCT's investment in our work (MPU Submission Code: fca.def5.c31c.0) has significantly contributed to the quality and impact of our research findings.
Funding
This work is supported by the grant from FDCT-MOST (0018/2025/AMJ) and Macao Polytechnic University (RP/FCA-01/2025).
Author information
Authors and Affiliations
Contributions
J.W. conceived and designed the study; J.W. developed the methodology and implemented the software; J.W. performed the validation; R.Z., S.W., Z.L. and W.Z. conducted the formal analysis; J.W. carried out the investigation; Z.W., N.C., S.-K.I., Y.W., and X.Y. provided the resources; W.Z., S.W., Z.L., J.W., Y.L., Y.G., and R.Z. performed the data curation; J.W. prepared the original draft; Z.W., N.C., S.-K.I., Y.W., and X.Y. reviewed and edited the manuscript; W.Z., S.W., Z.L., J.W., Y.L., Y.G., and R.Z. contributed to the visualization; Z.W., N.C., S.-K.I., Y.W., and X.Y. supervised the research and managed the project; N.C., Y.W., and X.Y. acquired the funding. All authors reviewed the manuscript.
Corresponding author
Ethics declarations
Competing interests
The authors declare no competing interests.
Additional information
Publisher’s note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License, which permits any non-commercial use, sharing, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if you modified the licensed material. You do not have permission under this licence to share adapted material derived from this article or parts of it. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by-nc-nd/4.0/.
About this article
Cite this article
Wei, J., Zhang, R., Gu, Y. et al. A Geometric Whale Optimization Algorithm with Triangular Flight for Numerical Optimization and Engineering Design. Sci Rep 16, 8526 (2026). https://doi.org/10.1038/s41598-026-37387-0
Received:
Accepted:
Published:
Version of record:
DOI: https://doi.org/10.1038/s41598-026-37387-0