Abstract
This article presents the Multi-Objective Sinh-Cosh Optimization Algorithm (MOSCHO), a memorization-based extension of the recently proposed Sinh Cosh optimizer (SCHO) to multi-objective optimization. A memorized local optimum is integrated with the global optimal solution to bound the search space and update the positions of candidate solutions when generating non-dominated solutions. The proposed method is tested on unconstrained mathematical functions, the constrained SRN function, and three real-world engineering design applications, a vital challenge given the difficulties of real-world engineering problems. MOSCHO’s performance was evaluated with seven performance metrics against several of the most popular multi-objective optimization algorithms. The results demonstrate MOSCHO’s ability to achieve high convergence and good diversity. Three functions (ZDT3, ZDT4, and MMF14) show the best performance on all tested metrics, five functions on more than 75% of the metrics, two functions on more than 50%, and the remainder on more than 25%. Moreover, SRN and the real-world problems exhibit the best performance on more than 75% of the tested performance metrics.
Introduction
Over the last decades, meta-heuristic algorithms have been used increasingly for complex, non-linear, high-dimensional, large-scale problems. Exact algorithms achieve more accurate solutions because they cover the whole search space, but at the cost of much higher processing time. Meta-heuristic algorithms, in contrast, reach near-optimal solutions for many complicated applications while consuming far less processing time for complex, high-dimensional, large-scale applications, especially complicated engineering applications1.
Meta-heuristic optimization is the process of finding good solutions; such a solution is not necessarily optimal, but bad solutions are systematically discarded. Meta-heuristics are used for a wide range of problems, such as engineering and medical applications, because their many variants can reach solutions good enough to serve as optimal solutions for many complex problems. With modern computer-aided design, researchers and engineers can convert a real system into a computer model, for example a Simulink model of the system architecture containing its variables and constraints, without needing the real system or a prototype for testing. This reduces human interaction and the expense of building full systems or manufacturing design models. Structural properties, constraints, ambiguous weights, operating conditions, manufacturing details, and other system conditions make engineering problems a vast research area. Deterministic optimization algorithms may lead to inconsistent, inefficient, or failed designs, whereas nondeterministic optimization algorithms such as meta-heuristics can solve many different design problems arising from real systems or manufacturing processes2.
A meta-heuristic optimization algorithm handles an application through repeated iterations in which random solutions are improved at each iteration to approach optimal solutions. During this iterative process, the meta-heuristic directs candidate solutions towards optimal solutions and allows them to evade portions of the search space that contain only local optima. Meta-heuristic algorithms rely on methods inspired by nature, mathematical functions, and scientific rules that can avoid local optima, such as Particle Swarm Optimization (PSO)3,4, the Salp Swarm Algorithm (SSA)5, the Whale Optimization Algorithm (WOA), Dynamic Arithmetic Optimization (DAO), the enhanced Red-Billed Blue Magpie Optimization algorithm (MRBMO)6, the Sinh Cosh Optimizer (SCHO)7, and the Artificial Lemming Algorithm8. Metaheuristic algorithms have proven highly effective in solving complex engineering optimization problems, with successful applications spanning diverse fields such as bioinformatics9,10, sequence alignment10,11,12,13,14,15, PID controller optimization16,17,18, solar energy systems19,20,21, fuel cells20,21, and passive suspension systems22. Consequently, these techniques have become a prominent approach for controller design.
Meta-heuristic methods either use a single solution, which is rare and suited only to simpler applications, or many solutions, in population-based methods, which are more popular, effective, and realistic for finding optimal solutions in various applications. They may seek a single optimal solution, in a single-objective optimization algorithm, or multiple related solutions, in a multi-objective optimization algorithm. Single-objective optimization is better at avoiding local optima, consumes less processing time, and has a straightforward structure that is easy to implement with very good precision, but it handles only one objective to be maximized or minimized. Multi-objective optimization handles many conflicting objectives simultaneously, such as maximizing efficiency, minimizing cost, and minimizing environmental impact. It has become useful and popular in many fields, including industrial, engineering, physical, economic, and medical applications.
Multi-objective optimization can be considered a decision-making process that balances multiple objectives simultaneously, whereas single-objective optimization is a special case that represents only one objective, such as cost, and focuses the whole process on it. Multi-objective optimization has gained wide adoption because of the need to handle more complicated, multi-objective applications, especially over the last few decades23. Its goal is to find multiple non-dominated optimal solutions, called Pareto optimal solutions, for which no objective can be improved without deteriorating another.
It is hard, however, to find the exact Pareto-optimal set for all applications, so metaheuristics try to find an approximation, an effective subset of solutions forming a non-dominated Pareto-optimal set. A Pareto-optimal solution is a non-dominated solution: no other solution improves at least one of its objectives without worsening any other. The collection of such solutions forms the Pareto optimal set, whose image in objective space is the Pareto front24, as in Fig. 1.
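As an illustration of these definitions (not code from the paper), a minimal Python sketch of Pareto dominance and non-dominated filtering for minimization problems:

```python
import numpy as np

def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimization):
    a is no worse in every objective and strictly better in at least one."""
    return np.all(a <= b) and np.any(a < b)

def non_dominated(F):
    """Return indices of the non-dominated rows of an (N x n_obj) objective matrix."""
    keep = []
    for i, fi in enumerate(F):
        if not any(dominates(fj, fi) for j, fj in enumerate(F) if j != i):
            keep.append(i)
    return keep

# Example: (cost, emissions) pairs; [1, 5] and [2, 3] are mutually non-dominated,
# while [3, 4] is dominated by [2, 3].
F = np.array([[1.0, 5.0], [2.0, 3.0], [3.0, 4.0]])
print(non_dominated(F))  # -> [0, 1]
```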
The general form of a multi-objective optimization problem is expressed in Eq. (1):

$$\text{Optimize } F(x) = \left[f_1(x), f_2(x), \ldots, f_n(x)\right], \quad x \in S \tag{1}$$

where n ≥ 2 is the number of objectives, \(x = (x_1, x_2, \ldots, x_k)\) is the vector of the problem’s decision variables, k is the dimension of the decision variables, S is the decision space (search space) defined by the problem’s equality and inequality constraints and variable boundaries, and F(x) is the objective vector to be optimized by the multi-objective algorithm, defined by the cost function of each objective, where each objective cost function is continuous and the problem is a constrained multivariable one25.
The No-Free-Lunch (NFL) theorem states that no single algorithm or method is best for solving all optimization problems of different natures and data26. This principle creates an opportunity to improve existing algorithms, or to design new ones, that obtain the best solution for a particular problem or a reliable solution for unconstrained problems. New algorithms may therefore prove effective at finding the best applicable solution for one or many unconstrained and constrained optimization problems27.
Several major categories have been established to handle multi-objective optimization problems: the single-to-multi transformation approach, which aggregates many objectives into a single scalar form; non-dominated sorting methods, which rank solutions according to Pareto dominance; decomposition-based methods, which decompose the problem into sub-problems; and indicator-based methods, which use performance indicators to guide selection.
Many techniques have been established for extending single-objective algorithms to handle multiple objectives simultaneously. The simple, early method is to aggregate objectives with fixed weights, or to optimize only one objective while treating the others as constraints. A more reliable technique is to adapt swarm optimization to generate a distributed set of Pareto optimal solutions, as in multi-objective particle swarm optimization28.
Non-dominated sorting methods rank solutions based on Pareto dominance and crowding strategies, starting with the early non-dominated sorting genetic algorithm (NSGA), then the Pareto Archived Evolution Strategy (PAES), and then the improvement of NSGA with fast non-dominated sorting and crowding distance that resulted in NSGAII29.
In decomposition-based methods, the problem is decomposed into scalar subproblems; each subproblem is handled separately, and the results are combined to obtain the overall solution. Examples include the Multi-Objective Evolutionary Algorithm based on Decomposition (MOEA/D), the Weighted Tchebycheff Method, and Penalty Boundary Intersection (PBI)30.
Finally, indicator-based methods use performance indicators, such as hypervolume, in the selection step, as in the Indicator-Based Evolutionary Algorithm and the S-Metric Selection Evolutionary Multi-Objective Algorithm30.
Many models based on swarm or evolutionary algorithms have been proposed over the years to reach a Pareto set for multi-objective optimization problems. Many algorithms have been extended to solve multiple interfering objectives by finding a set of non-dominated solutions, such as the non-dominated sorting genetic algorithm (NSGAII)24, one of the multi-objective genetic algorithms. The Pareto Archived Evolution Strategy (PAES) was another multi-objective genetic algorithm proposed to solve multi-objective optimization problems. Its process starts by generating many initial candidate solutions, then evaluates them with several objective functions to build an archive of non-dominated solutions. Multi-objective genetic algorithms focus on confining the search space precisely, so they are reliable, effective, population-based approaches for many multi-objective problems31.
The Dynamic Multi-Objective Evolutionary Algorithm is another proposed model, in which the cell population size varies dynamically throughout the process. Its strategies depend on diversity and dominance data to estimate density and rank solutions as a qualitative parameter for deciding whether to add or remove candidate solutions from the cell. Its diversity strategy leads to a good Pareto optimal front under many constraints but, consistent with the NFL theorem, it cannot maintain all problem characteristics across all test problems32.
A distributed cooperative coevolutionary algorithm has the competitive advantage that distributed sub-populations with a shared archive are processed on many computers within the same network to reduce running time33. For interval multi-objective problems, an evolutionary algorithm with a preference polyhedron has been proposed, in which the preference polyhedron and decision makers interact to reach the best performance30. In a decomposition-based archiving approach, varied subspaces of weighting vectors are extracted uniformly from the objective space, and normalized-distance-based Pareto dominance decides whether a subspace solution is accepted; an accepted non-dominated subspace solution is added to the shared diversity archive. This approach speeds convergence and enhances diversity on the various test functions used34. Other interval algorithms with varied local searches, generating good convergence and better distribution of the Pareto optimal set, were proposed by Sun35.
Processing many subproblems of the multi-objective problem simultaneously with neighboring information is the main idea of another decomposition algorithm, which has been tested with several versions of the multi-objective genetic algorithm, such as the local search genetic algorithm and NSGAII36.
For multi-objective big data problems, an automated differential evolution with a local search that enhances exploration has been used for both multi-objective and single-objective problems37. A hybrid of SSA and differential evolution has been used to enhance SSA’s exploitation for multi-objective big data problems38. An adapted inverted-generational-distance indicator method for various multi-objective problems with differently shaped Pareto fronts improved the performance of evolutionary algorithms based on the reference point method39, while the balanced multi-objective optimization algorithm based on reference points was proposed by Abdel-Basset et al.40. Multi-objective interactive evolutionary algorithms based on the preferences of a decision maker have been integrated for many interval problems41. A goal selection repository was integrated into the multi-objective grasshopper algorithm42, and a leader selection method to update non-dominated solutions was integrated into the multi-objective artificial sheep algorithm43.
An alternative repository was integrated into a multi-objective particle swarm optimization algorithm44, while handling a full repository by removing solutions from the most densely populated regions was integrated into a multi-objective Salp swarm optimization algorithm for solving engineering applications5. The crowding distance method for enhancing the diversity of non-dominated solutions, producing different non-dominated solutions for engineering applications, was integrated into the multi-objective sine-cosine optimization algorithm45. A hybrid of PSO and a genetic algorithm was used to enhance local search for better-distributed non-dominated solutions in less crowded regions46, while multi-level thresholding for image segmentation was integrated into the whale optimization algorithm47.
Swarm optimization algorithms can find adequate solutions for simple, complex, and multi-objective problems. The Multi-Objective Sparrow Search Algorithm, one of the most recent multi-objective swarm algorithms, achieved sufficient performance for the carbon fiber drawing process by proposing a scouter strategy while searching for the optimal set, with accelerated convergence and better diversity48. In addition, reference49 provided a comprehensive review of the sparrow search algorithm (SSA), detailing its core principles, key variants, including hybrid, chaotic, and multi-objective versions, and diverse applications in fields such as machine learning and energy systems, while also outlining prospective research directions.
In the same context, many optimization algorithms are proposed for multi-objective problems with multiple search mechanisms, such as the multi-objective harmony search algorithm (MHS)50, multi-objective whale algorithm (MWO)51, the multi-objective crow search algorithm, and the multi-objective flower pollination algorithm.
Despite the proliferation of multi-objective metaheuristics, the pursuit of more efficient and robust optimizers remains a central challenge in the field, as dictated by the No-Free-Lunch theorem26. Many existing algorithms still struggle to achieve an optimal balance between convergence speed and solution diversity, particularly on problems with complex Pareto front geometries, such as those that are disconnected, concave, or non-uniform. Furthermore, the performance of swarm-based algorithms is heavily dependent on their core search mechanisms; while some excel in exploration, they may lack the focused exploitation needed for precise convergence, and vice versa. This persistent trade-off underscores the need for novel algorithms that incorporate more sophisticated and balanced search strategies from their inception.
Recently, the Sinh-Cosh optimizer (SCHO), a single-objective optimization algorithm, was applied to tune the controller of an automatic voltage regulator for greater stability52, and it has also been applied to photovoltaic-thermal power systems53. The arithmetic optimization algorithm has been integrated with SCHO to enhance its performance for better prediction of biological activities54. SCHO has also been applied to aircraft pitch control and steam condenser systems in power plants to enhance the performance of their controllers55,56. In addition, the Sinh-Cosh method has been integrated with the Dung Beetle Optimization algorithm for global optimization problems57.
The Sinh Cosh Optimizer (SCHO)7 represents a recent and innovative entry into the meta-heuristic landscape, distinguished by its foundation in the mathematical properties of the hyperbolic sine and cosine functions. Its promising performance in single-objective optimization stems from a sophisticated architectural design that provides several key advantages over more conventional algorithms.
The primary advantage of SCHO lies in its structured and multi-faceted approach to managing the exploration-exploitation balance. Unlike many algorithms that rely on a single, often linear, transition between these phases, SCHO employs two dedicated phases for both exploration and exploitation. This granular structure allows for more specialized and effective search behaviors: the first phases perform broad, coarse-grained searches, while the second phases conduct intensive, fine-grained searches within promising regions. This is a more nuanced strategy than found in algorithms like PSO or GWO, leading to a more robust search process that is less likely to stagnate at local optima. Furthermore, the adaptive switching mechanism, governed by parameter A, ensures a dynamic and timely transition between these phases throughout the optimization run, rather than being dependent on a simple iteration count.
A second critical advantage is the dynamic bounded search strategy. By leveraging not only the global best solution but also the second-best solution, SCHO intelligently contracts the search space around the most promising regions. This mimics a focused hunting strategy, dramatically accelerating convergence by preventing the dispersion of candidate solutions into unproductive areas of the search space. This is a more focused approach compared to the often fixed or linearly decreasing boundaries in other algorithms. Finally, the use of Sinh and Cosh functions within the position update equations is not merely a novelty; these functions, with their exponential nature, facilitate a wider range of movement dynamics. They can generate more aggressive jumps during exploration for better space coverage and more delicate adjustments during exploitation for precise convergence, offering a search dynamic that is distinct from the linear, trigonometric, or purely random movements prevalent in other swarm-based methods.
The combination of these characteristics (granular phase management, intelligent search-space bounding, and unique mathematical drivers) equips SCHO with a powerful and balanced search capability. It is this demonstrated potential for rapid convergence, coupled with strong resilience to local optima in single-objective problems, that provides the compelling motivation to explore its extension into the multi-objective domain.
SCHO’s combined characteristics and its performance on single-objective optimization problems demonstrate its promise: the balance between exploration and exploitation accelerates convergence within a small number of iterations, the bounded search strategy directs candidate solutions toward the global optimum, and the switching between different exploration and exploitation phases maintains better diversity while accelerating convergence. These characteristics improve the algorithm’s ability to achieve better solutions for complicated problems. Given these advantages, the decision was taken, with a reasonably clear rationale, to extend SCHO to multi-objective problems.
SCHO’s characteristics provide several benefits, as follows:

- The Sinh and Cosh functions are used to direct solutions towards optimal solutions.
- Updating the population depends not only on the global solution but also on a second optimal solution that bounds the search space during the update.
- The bounded strategy resizes the search region to a smaller range around the optimal solution, and the multiple exploration and exploitation phases produce varied candidate solutions that focus on the global optimum and avoid getting trapped in local optima.
- The switching strategy between the exploration and exploitation phases balances the two during the search.
- The combination of these strategies gives SCHO fast convergence and the ability to escape local optimum solutions.
- SCHO is therefore a good candidate for finding optimal solutions to many complicated applications.
In the same context, extending SCHO to multi-objective optimization is expected to yield good solutions for complicated multi-objective problems. Many multi-objective algorithms have already been proposed for engineering, medical, and industrial design problems, so the main question when developing MOSCHO is why another algorithm is needed for real-world applications. The answer rests on the NFL theorem, which indicates that no meta-heuristic algorithm can solve all existing applications with sufficient accuracy, while many design problems still need further research to obtain better trade-offs between the problem’s objectives and its constraints.
Multi-objective problems also impose limitations, such as multiple objectives that must be optimized simultaneously, while testing environments vary with the applications and their constraints. Handling these limitations requires multi-objective algorithms with greater capability. MOSCHO is therefore proposed, based on its characteristics, for design problems, and it has the ability to handle different complex problems.
A memorized multi-objective Sinh-Cosh optimization algorithm (MOSCHO) is therefore the distinct approach proposed in this paper. MOSCHO relies on the Pareto dominance of the achieved non-dominated solutions with a list of leaders, similar to Multi-objective Particle Swarm Optimization (MOPSO)28. In addition, a memorized optimal solution is integrated to bound the search towards optimal solutions for mathematical problems and real applications. The proposed MOSCHO thus combines a leader solution with a memorized local solution when updating the positions of the population, which helps generate solutions close to the global optimum Pareto set (Pareto front). Experiments were performed on several bi-objective and tri-objective well-known multi-objective benchmark problems and on real-world engineering problems. MOSCHO demonstrates its ability to solve complicated multi-objective problems in comparison with several well-known multi-objective algorithms.
The key differentiator of MOSCHO is the integration of this powerful search engine with a ‘memorized’ local search technique. While other multi-objective algorithms use a global leader or a repository, MOSCHO uniquely bounds the search for each candidate solution using both the global leader (from the repository) and its own personal best-found position. This dual guidance system allows for a more nuanced and focused convergence towards the Pareto front while the switching mechanism between multiple exploration and exploitation phases maintains population diversity. Therefore, MOSCHO is not merely another multi-objective variant but a distinct approach that leverages a novel, mathematically-grounded search strategy to simultaneously enhance both convergence and coverage.
The main contributions of this article are as follows:

1. Extending SCHO to multi-objective optimization to propose the new MOSCHO, whose characteristics maintain better diversity and convergence.
2. Proposing memorized local optimal solutions to bound the search strategy and focus solutions towards the global optimum.
3. Testing on the unconstrained mathematical benchmark functions ZDT, DTLZ, CEC2009, and CEC2020, and on both SRN and the welded beam as constrained applications; performance is checked using many metrics covering convergence, coverage, and success performance indicators.
4. Testing the proposed method on two real-world electrical engineering applications: the optimal power flow and the optimal setting of the droop controller.
5. Comparing MOSCHO’s performance with state-of-the-art algorithms such as MHS, MWO, MALO, MSSA, MOPSO, and MOSCA.
The remainder of this article has sections as follows:
Section 2 presents the conventional Sinh-Cosh optimizer. Section 3 presents the proposed approach of a novel memorized multi-objective Sinh Cosh algorithm for multi-objective optimization. Section 4 presents performance metrics and MOSCHO parameter sensitivity. Section 5 presents the results and their analysis. Section 6 presents the conclusion of the work.
Sinh-Cosh algorithm
SCHO7 is a meta-heuristic algorithm inspired by the Sinh and Cosh mathematical functions, proposed in 2023. SCHO performs like other meta-heuristic algorithms: it starts with random positions and updates each one at every iteration towards the global optimal solution, so that at the last iteration it proposes its optimal solution. The search mechanism seeks a balance between the two main processes, exploration and exploitation, while approaching the optimal solution over the iterations.
SCHO performs several distinctive steps, such as switching between variant phases of exploration and exploitation and applying a bounded search strategy. These steps combine to balance exploration and exploitation while updating positions and narrowing the search space towards the optimal solutions. This balance helps solutions escape from local optima. SCHO executes the following steps.
Like all meta-heuristic algorithms, SCHO randomly initializes the positions of multiple candidate solutions (N) within the boundary range between the upper boundary Ub and the lower boundary Lb, where X refers to the position vector of each solution. Each position vector consists of several variables, whose count is the dimension (D) of the position vector, as in Eq. (2):
$$X_{i,j} = Lb_j + rand \times \left(Ub_j - Lb_j\right), \quad i = 1, \ldots, N,\; j = 1, \ldots, D \tag{2}$$

where rand is a random value in the range 0 to 1.
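For concreteness, a minimal Python sketch of this initialization (the seed and array shapes are illustrative, not from the paper):

```python
import numpy as np

def initialize_population(N, D, lb, ub, seed=0):
    """Random initialization within [Lb, Ub] following Eq. (2):
    X[i, j] = Lb[j] + rand * (Ub[j] - Lb[j]), rand drawn uniformly from [0, 1)."""
    rng = np.random.default_rng(seed)
    return lb + rng.random((N, D)) * (ub - lb)

# Example: 100 candidate solutions in a 30-dimensional unit box.
X = initialize_population(N=100, D=30, lb=np.zeros(30), ub=np.ones(30))
print(X.shape)  # (100, 30)
```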
The SCHO algorithm updates positions based on the last position and the global best solution reached at the previous iteration. The two exploration phases are represented by Eqs. (3) and (6), where i is the index of the solution within the population, j is the index of the position variable, t is the current iteration, \(X_{i,j}^{t}\) is the current position of the ith solution at iteration t, \(X_{i,j}^{t+1}\) is the updated position for the next iteration, \(X_{best}^{j}\) is the jth dimension of the best solution obtained so far, and the random values r1 and r2 lie in the range 0 to 1.
\(\epsilon\) is a small positive number in the range 0 to 1 used in the second exploration phase; \(\epsilon = 0.003\) is the value recommended by the experiments of this article, as indicated in Sect. 4.3. \(W_1\) and \(W_2\) are the weight coefficients of exploration phase 1 and phase 2, respectively, as in Eqs. (4) and (7), and the random values r3, r4, r5, and r6 lie in the range 0 to 1. The sensitive coefficients m and u control the accuracy of exploration or exploitation in the first phase, while the parameter n controls the accuracy of exploration in the second phase. The tuning of m, u, and n for this experiment is indicated in Sect. 4.3.
SCHO also has two exploitation phases, as in Eqs. (9) and (11), where r7, r8, r9, r10, and r11 are random values in the range 0 to 1.
The first phases of both exploration and exploitation are based on the Sinh and Cosh formulas through the weight coefficients W1 and W3, respectively.
At each iteration, SCHO’s bounded search strategy mimics the last stage of animal hunting by narrowing the search space according to the available range. SCHO initializes the search space as in Eq. (12) and updates the potential space as in Eq. (13) at each iteration; the updated Ub and Lb are then calculated as in Eqs. (14) and (15) to bound the search space.
where \(\beta\) customizes the initialization of the bounded search space,
where k is a positive value starting at 1, and \(\alpha\) controls the accuracy of exploration and exploitation in the potential space,
where \(X_{second}^{(j)}\) represents the sub-optimal (second-best) solution. Equations (13) and (14) are used to rearrange the limits of the search boundaries.
A meta-heuristic algorithm explores solutions in the early iterations and exploits solutions in the later iterations. A switching step, however, allows SCHO to explore as well as exploit throughout all iterations to avoid local optima. A is a switch parameter that lets SCHO focus on exploration with a small amount of exploitation in the early iterations, and the opposite in the later iterations. A is computed as in Eq. (16).
where p and q control the balance between exploration and exploitation; their tuning is indicated in Sect. 4.3.
So A handles the switching between exploration and exploitation, while T selects between the two phases of exploration and the two phases of exploitation, as in Eq. (17). In the early iterations the algorithm executes the first phase, moving solutions towards optimal solutions, while the second phase performs deep exploration and exploitation within the potential search space. \(BS_k\) bounds the search space of the problem and diversifies solutions in the potential space, and ct is a setting coefficient acting as the switching point between the two phases. SCHO repeats these steps until a stop condition is reached, such as the maximum number of iterations.
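The switching logic can be summarized in a small sketch; the thresholds follow the description above and the steps in Section 3 (A > 3 selects exploration, and the first phases apply while t ≤ T), while the actual update formulas of Eqs. (3)-(11), (16), and (17) are deliberately not reproduced here:

```python
def scho_phase_select(A, t, T):
    """Select which of SCHO's four update rules applies, following the
    switching logic described above (A > 3 -> exploration; t <= T -> first,
    coarse-grained phase; otherwise the second, fine-grained phase)."""
    if A > 3:
        return "exploration_1" if t <= T else "exploration_2"
    return "exploitation_1" if t <= T else "exploitation_2"

# Example: an early iteration with a large switch parameter falls in the
# coarse-grained exploration phase; a late one with small A exploits deeply.
print(scho_phase_select(A=4.2, t=5, T=50))   # -> exploration_1
print(scho_phase_select(A=1.7, t=80, T=50))  # -> exploitation_2
```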
As a result of the different phases, the switching between them, and the bounded search strategy, fast convergence is achieved, while SCHO’s ability to escape local optimum solutions comes from the balance between exploration and exploitation together with the bounded search.
Memorized multi-objective Sinh-Cosh optimization algorithm (MOSCHO)
MOSCHO is a multi-objective variant of the SCHO algorithm7 for solving both constrained and unconstrained multi-objective problems. Two main upgrades are required to extend SCHO to multi-objective optimization. The first is an archive, called the repository, holding up to a maximum number of non-dominated solution vectors for the Pareto optimal set. The second is a leader selection feature. The search mechanism is upgraded to merge these features with the conventional algorithm’s strategies for multi-objective problems. The main functions proposed are similar to those used to extend the Particle Swarm Optimization (PSO) algorithm to multiple objectives28, in which a repository archives the obtained Pareto optimal solutions and leader selection provides the guiding solution28,58.
While the multi-objective algorithm runs, the archive handles several cases. At the start, the repository is initialized with the first generation of non-dominated solutions. The repository then serves as an archive, updated at each iteration to preserve non-dominated optimal solutions using a sorting mechanism, forming a Pareto optimal set containing the new generations of non-dominated solutions. Updating the repository involves checking dominance between the existing archive and the Pareto optimal set from the current iteration to obtain the updated repository, so that the repository holds the proposed non-dominated solutions at the end of the algorithm.
During the iterations, after positions are updated, new non-dominated solutions are obtained and the repository handles two cases. In the first case, the new generation of non-dominated solutions is no larger than the maximum repository size, and the repository absorbs all of the new non-dominated solutions. The second case occurs when the number of new non-dominated solutions exceeds the maximum repository size.
When an updated repository holds more non-dominated solutions than the allowed repository size, a deletion function is invoked to remove solutions from the most densely populated areas of the repository, considering a predetermined number per hypercube, until the maximum allowed repository size is reached. The deletion function also eliminates repository solutions that fall outside the hypercubes.
The leader is a global optimal solution selected from the multiple non-dominated solutions in the repository, chosen from the least crowded area of the search space: a hypercube with lower population density has a higher probability of providing the leader for the next iteration. For both the leader selection function and the deletion function, roulette wheel selection is used with the hypercube probability (Pi) of Eq. (18),

$$P_i = \frac{c}{N_i} \tag{18}$$

where c is an integer constant greater than one (the selection and deletion functions use different constants) and \(N_i\) is the number of solutions in the ith hypercube. Less crowded hypercubes are thus preferred when selecting a new leader.
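A minimal sketch of this roulette-wheel mechanism, assuming Eq. (18) takes the P_i = c/N_i form used in MOPSO/MSSA-style archives (the constant c = 10 and the helper names are illustrative):

```python
import numpy as np

def hypercube_probabilities(counts, c=10):
    """Roulette-wheel weights following Eq. (18), P_i = c / N_i: hypercubes
    holding fewer archive members get a larger share of the wheel."""
    w = c / np.asarray(counts, dtype=float)
    return w / w.sum()

def select_leader_hypercube(counts, c=10, rng=None):
    """Pick the hypercube that will provide the next leader."""
    rng = rng or np.random.default_rng()
    return rng.choice(len(counts), p=hypercube_probabilities(counts, c))

# Three hypercubes holding 1, 4, and 10 non-dominated solutions: the sparsest
# one (index 0) is chosen most often as the leader's hypercube.
print(hypercube_probabilities([1, 4, 10]))  # ~[0.74, 0.19, 0.07]
```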
MOSCHO results from combining several characteristics that develop SCHO for multi-objective optimization. SCHO contributes its two exploration and two exploitation phases, which give fast convergence and good performance on many commonly used single-objective functions. MOSCHO adds a decision-maker method for adding non-dominated solutions to, and deleting them from, the repository; leader selection of an optimal solution; and memorized optimal solutions, the best position obtained by each candidate solution, all integrated with the Sinh-Cosh two exploration and two exploitation phases, the bounded search strategy, and the switching strategy between phases.
SCHO used the second-best global solution to bound the search space when updating the population’s positions. MOSCHO instead uses, as this second solution, a memorized best local solution, which improves the convergence of the non-dominated set towards the Pareto front while retaining most of the problem’s properties and constraints. The memorized best local solution is the best position reached by each member of the population through the search space: for each candidate solution there is a variable holding the best position, in terms of the objective values, reached up to the current iteration.
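A small sketch of this memorized-best update; the dominance-based replacement and the random tie-break are assumptions in the MOPSO style, since the paper does not spell out the rule:

```python
import random
import numpy as np

def dominates(a, b):
    # Pareto dominance for minimization (same helper as in the earlier sketch).
    return np.all(a <= b) and np.any(a < b)

def update_memorized_best(x_new, f_new, x_mem, f_mem):
    """Update one candidate's memorized (personal-best) position and objectives."""
    if dominates(f_new, f_mem):      # the new position is strictly better
        return x_new, f_new
    if dominates(f_mem, f_new):      # the memorized position still dominates
        return x_mem, f_mem
    # Mutually non-dominated: keep either at random (assumed tie-break).
    return (x_new, f_new) if random.random() < 0.5 else (x_mem, f_mem)
```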
MOSCHO involves several functions to reach the global solution, the target of any optimization algorithm. MOSCHO maintains two sets: the first initially contains the random population, and the other contains the non-dominated solutions of the population. MOSCHO updates the two sets and eventually estimates the global optimum solution of the optimization problem according to the following steps (a compact code skeleton of this loop is sketched after the list):
(a) Initialize the MOSCHO population and the MOSCHO search parameters.
(b) Evaluate the objective functions of each population member to determine its fitness.
(c) Check the dominance of the population members to initialize the repository with non-dominated solutions.
(d) Select the leader solution from the repository according to the leader selection features described earlier in this section.
(e) Update the parameter A, which switches between the exploration and exploitation methods, using Eq. (16).
(f) For each iteration, determine the memorized best local solution used to bound the search space, and update Ub and Lb using Eqs. (14) and (15), respectively.
(g) According to the updated value of A, if A is greater than 3, go to the exploration phases.
(h) If A is lower than or equal to 3, go to the exploitation phases.
(i) Then, according to the value of T calculated in Eq. (17), if T is greater than or equal to the current iteration value, update the position according to the first phase of exploration or exploitation as determined in steps (g) and (h); otherwise apply the second phase.
(j) Evaluate the fitness of the population’s objective functions.
(k) Check the dominance of the newly obtained solutions against the non-dominated solutions already in the repository to obtain the new non-dominated set for the repository.
(l) If the number of new non-dominated solutions exceeds the maximum repository size, delete solutions from the repository according to the deletion selection features mentioned earlier in this section until the repository contains at most the maximum number of solutions.
(m) Repeat steps (d) to (l) for the number of iterations of the experiment.
(n) Eventually, the repository holds the suggested non-dominated solutions as the global optimal solutions.
In the same context, the steps select a leader solution, update candidate solutions based on the leader, memorize the local optimal solution, and check dominance to update the repository of the non-dominated optimal set. They also check whether the updated repository exceeds the maximum repository size and, if so, delete solutions so that the repository never holds more than the maximum number of solutions.
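The following Python skeleton mirrors steps (a) to (n); every helper function in it is a hypothetical placeholder named after the step it performs, not an implementation from the paper:

```python
def moscho(problem, N, D, max_iteration, archive_max):
    # Skeleton only: all helpers below are hypothetical placeholders.
    X = initialize_population(N, D, problem.lb, problem.ub)        # step (a)
    F = evaluate(problem, X)                                       # step (b)
    archive = extract_non_dominated(X, F)                          # step (c)
    X_mem, F_mem = X.copy(), F.copy()       # memorized local optima (Sect. 3)
    for t in range(1, max_iteration + 1):
        leader = select_leader(archive)                            # step (d), Eq. (18)
        A, T = switching_parameters(t, max_iteration)              # step (e), Eqs. (16)-(17)
        for i in range(N):
            # step (f): bound Ub/Lb with the leader and the memorized best
            ub_k, lb_k = bound_search_space(leader, X_mem[i])      # Eqs. (19)-(20)
            # steps (g)-(i): pick a phase via A and T, then update the position
            X[i] = update_position(X[i], leader, A, T, t, ub_k, lb_k)
        F = evaluate(problem, X)                                   # step (j)
        X_mem, F_mem = update_memorized_best_all(X, F, X_mem, F_mem)
        archive = merge_non_dominated(archive, X, F)               # step (k)
        if len(archive) > archive_max:                             # step (l)
            archive = truncate_archive(archive, archive_max)       # Eq. (18)
    return archive                                                 # step (n)
```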
Here \(X_{memorized}^{(j)}\) represents the memorized local optimal solution, \(X_{best}^{(j)}\) the leader solution, Ubk the updated upper boundary, Lbk the updated lower boundary, t the current iteration, and max_iteration the maximum number of iterations of the algorithm. Equations (19) and (20) are used to rearrange the limits of the search boundaries towards the global solution. Bounding the search space speeds convergence and distributes the solutions along the true Pareto front. MOSCHO’s steps are illustrated in the pseudo-code of Algorithm (1), and the relation between these steps is indicated in both Algorithm (1) and the MOSCHO flow chart in Fig. 2.

Algorithm 1: Pseudo-code of the MOSCHO algorithm
SCHO’s computational time complexity is O(N²). Although a naive multi-objective extension would have a computational time complexity of O(MN³), the combination of leader selection and the archived repository leads to a computational time complexity of O(MN²), where M is the number of objectives and N is the population size.
Experimental results and discussion
The performance of the proposed MOSCHO algorithm is measured in twenty-four experiments containing nonlinear and non-convex complicated problems. MOSCHO was tested on twenty-two unconstrained multi-objective problems, bi-objective and tri-objective, including ZDT59, DTLZ60, CEC 200959, and CEC202061 test functions. The experiments also include a constrained multi-objective test problem62,63 and a constrained real-world welded beam engineering application64. They are used to check MOSCHO’s applicability and efficiency compared to other well-known multi-objective algorithms.
Before running MOSCHO, several of its parameters required tuning to let MOSCHO handle multi-objective problems. These parameters are responsible for achieving MOSCHO’s best performance on both simple and complicated problems. The tuning was performed on the ZDT problems, and the results are given in Sect. 4.3.
The experiment was conducted on eighteen unconstrained bi-objective and four unconstrained tri-objective mathematical test problems: three ZDT problems, seven CEC2009 problems, nine CEC2020 problems, and three DTLZ problems. Each of the three ZDT problems has two objectives (objective 1 and objective 2), with thirty variables for ZDT1 and ZDT3 and ten variables for ZDT459; the main details and equations of the functions are in Appendix A. The seven CEC 2009 problems have two objectives with two variables each65, detailed in Appendix A. Eight of the CEC2020 problems have two objectives with two variables each, except MMF14, which has three objectives (objectives 1, 2, and 3) with three variables61, detailed in Appendix A. Each of the three DTLZ problems has three objectives, with seven variables for DTLZ1, twelve variables for DTLZ4, and DTLZ6 as the third problem60; details are in Appendix A.
MOSCHO is also tested on two constrained applications: the SRN test problem, which has two objectives with two variables63, and the welded beam design problem, which has two objectives with four variables64; both are detailed in Appendix B. These benchmark mathematical functions include the most recent disconnected, convex, and nonconvex shape functions. MOSCHO is further applied to two electrical real-world applications: the Optimal Power Flow, an electrical design problem with four objectives (objectives 1 to 4) and thirty-four variables66, and the Optimal Setting of the Droop controller, an electrical design problem with three objectives (objectives 1 to 3) and six variables67. These two applications are detailed in Appendix C.
Performance metrics
To evaluate the results of the multi-objective algorithms, many performance metrics are used to analyze the convergence and spread of the repository solutions68,69.
Generational distance (GD)
GD measures the distance between the true Pareto front and the non-dominated optimal set obtained by each approach, making it a convergence indicator for multi-objective algorithms; the algorithm with the smallest value is the best. A common formulation is

$$GD = \frac{\sqrt{\sum_{i=1}^{N} d_i^{2}}}{N}$$

where N is the size of the obtained repository and \(d_i\) is the Euclidean distance, computed in objective space, between the ith obtained Pareto optimal solution and the closest solution of the true Pareto front70.
Spacing (S)
S measures how the obtained solutions are spaced relative to each other; the minimum value indicates the best algorithm71. It can be written as

$$S = \sqrt{\frac{1}{N-1}\sum_{i=1}^{N}\left(\bar{d} - d_i\right)^{2}}$$

where \(\bar{d}\) is the average of all the \(d_i\) distances, with \(d_i\) here taken as the distance from the ith obtained solution to its nearest neighbour in the obtained set.
Inverted generational distance (IGD)
IGD indicates the quality of the obtained Pareto optimal set compared to the true Pareto front; the lowest IGD value determines the best algorithm72:

$$IGD = \frac{\sqrt{\sum_{i=1}^{n_t} {d'_i}^{2}}}{n_t}$$

where \(n_t\) is the number of true Pareto optimal solutions and \(d'_i\) is the Euclidean distance, computed in objective space, between the ith true Pareto optimal solution and the closest solution of the obtained Pareto optimal set.
Hypervolume metric (HV)
HV indicates the algorithm’s performance in terms of both the convergence and the diversity of the obtained Pareto optimal set. HV measures the volume of the union of the hypercubes spanned between each obtained solution and a chosen reference point, so it requires a reference point; the highest HV value indicates the best algorithm70,73.
Diversity (∆)
The diversity metric analyzes the spread of the obtained optimal solutions. It is based on the average Euclidean distance between each obtained Pareto optimal solution and its neighbors, together with the two extreme boundary solutions of each objective:

$$\Delta = \frac{d_f + d_l + \sum_{i=1}^{N-1}\left|d_i - \bar{d}\right|}{d_f + d_l + (N-1)\,\bar{d}}$$

where \(d_f\) and \(d_l\) are the Euclidean distances between the obtained boundary solutions and the extreme solutions29.
Error ratio (ER)
The ER metric measures the proportion of obtained Pareto optimal solutions that fail to lie on the true Pareto front74:

$$ER = \frac{\sum_{i=1}^{n} e_i}{n}$$

where \(e_i\) is 0 if the ith obtained solution lies on the true Pareto front and 1 otherwise, and n is the number of solutions in the obtained Pareto optimal set.
Success counting (SCC)
The SCC metric counts the obtained Pareto optimal solutions that lie on the true Pareto front set72:

$$SCC = \sum_{i=1}^{n} s_i, \qquad s_i = \begin{cases} 1 & \text{if } p_i \in TPF \\ 0 & \text{otherwise} \end{cases}$$

where \(p_i\) is the ith solution of the obtained Pareto optimal set, TPF is the true Pareto front set, and n is the number of solutions in the obtained Pareto optimal set.
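The distance-based metrics above can be computed with a few lines of numpy; this sketch follows the formulations given in this section, and the tolerance used to decide membership on the true front for ER and SCC is an assumption:

```python
import numpy as np

def _min_dists(A, B):
    """Euclidean distance from each row of A to its nearest row in B."""
    return np.array([np.min(np.linalg.norm(B - a, axis=1)) for a in A])

def gd(obtained, true_pf):
    d = _min_dists(obtained, true_pf)          # d_i of the GD definition
    return np.sqrt(np.sum(d ** 2)) / len(obtained)

def igd(obtained, true_pf):
    d = _min_dists(true_pf, obtained)          # d'_i of the IGD definition
    return np.sqrt(np.sum(d ** 2)) / len(true_pf)

def spacing(obtained):
    N = len(obtained)
    d = np.array([np.min(np.linalg.norm(np.delete(obtained, i, axis=0)
                                        - obtained[i], axis=1))
                  for i in range(N)])          # nearest-neighbour distances
    return np.sqrt(np.sum((d.mean() - d) ** 2) / (N - 1))

def success_count(obtained, true_pf, tol=1e-6):
    """SCC: number of obtained points lying (within tol) on the true front."""
    return int(np.sum(_min_dists(obtained, true_pf) <= tol))

def error_ratio(obtained, true_pf, tol=1e-6):
    """ER: fraction of obtained points that miss the true front."""
    return 1.0 - success_count(obtained, true_pf, tol) / len(obtained)
```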
The non-dominated optimal set obtained by each implemented algorithm is compared to the true Pareto front to analyze the convergence and spread of the obtained set and hence the algorithm’s performance. GD, IGD, and HV quantify convergence, S and ∆ determine the coverage of the obtained Pareto optimal solutions, and ER and SCC serve as success performance indicators.
A non-parametric statistical test, Wilcoxon’s Rank-Sum Test (WRT), is used to evaluate the relations between the data sets generated by each algorithm in the comparison. For the same performance metric, WRT expresses the differences between algorithms with the =, +, and - symbols75.
In this experiment, an HP laptop with an Intel(R) Core(TM) i7-8550U CPU at 1.80 GHz and 12 GB RAM, running a 64-bit Windows 10 operating system and MATLAB 2021a, was used to perform the computations.
MOSCHO’s performance analysis results
MOSCHO’s performance was compared to many relevant algorithms from the literature review, such as NSGAII29, the Multi-objective Salp Swarm algorithm (MSSA)5, the Multi-objective Ant Lion optimizer (MALO)62, MOPSO28, MWO51, MHS50, and the Multi-objective Sine-Cosine Algorithm (MOSCA)45. Each algorithm runs for 100 iterations, starts with 100 candidate solutions, and its repository holds at most 100 non-dominated solutions. The performance is tested on twenty-two unconstrained mathematical benchmark functions (eighteen bi-objective and four tri-objective) and on two real-world design problems; both the constrained and unconstrained problems of the experiment have known Pareto front sets, called true Pareto optimal sets.
Many performance metrics are used to validate the efficiency of the multi-objective algorithms, varying between convergence metrics, coverage metrics, and the success performance indicator. The referenced true Pareto optimal sets are used as a reference for the Pareto sets obtained by each algorithm and are required to evaluate most of the performance metrics. To obtain the most accurate value of each performance metric for a given algorithm, the run is repeated fifteen times, and the average (AV) and standard deviation (SD) of the repeated metrics are calculated to validate the performance. The differences among the algorithms are assessed by Wilcoxon’s Rank-Sum Test (WRT) at the 5% significance level.
As noted above, WRT examines the differences between algorithms on the same performance metric using the =, +, and - symbols. The = symbol indicates no difference in performance between the two algorithms. The + symbol indicates a positive difference and is used for metrics where the minimum value is best, while the - symbol indicates a negative difference and is used for metrics where the maximum value is best75.
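In practice, such a comparison can be scripted with SciPy’s rank-sum test; a hedged sketch, where the mapping from test outcome to the =/+/- symbols via medians is an assumption following the convention described above:

```python
import numpy as np
from scipy.stats import ranksums

def wrt_symbol(a, b, alpha=0.05, smaller_is_better=True):
    """Wilcoxon rank-sum comparison of two algorithms' repeated metric values
    at the 5% significance level, reported with the =/+/- convention."""
    _, p = ranksums(a, b)
    if p >= alpha:
        return '='  # no significant difference between the two algorithms
    first_is_better = ((np.median(a) < np.median(b)) == smaller_is_better)
    return '+' if first_is_better else '-'

# Example: fifteen repeated GD values per algorithm (smaller GD is better).
rng = np.random.default_rng(1)
print(wrt_symbol(rng.normal(0.02, 0.005, 15), rng.normal(0.05, 0.005, 15)))  # '+'
```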
Analysis results of non-constrained mathematical benchmark functions
Tables 1 and 2 show the generational distance (GD) metric performance of the compared algorithms for this experiment. GD evaluates the difference between the true Pareto optimal solutions (true PF) and the Pareto optimal solutions obtained by each algorithm. On the GD metric, MOSCHO is superior on eight functions: two ZDT problems (ZDT3 and ZDT4), two CEC2009 functions (UF6 and UF7), three CEC2020 functions (MMF7, MMF10, and MMF14), and two DTLZ problems (DTLZ1 and DTLZ4). For these functions, the obtained solutions lie on the true Pareto front.
For the ZDT problems, MOSCHO performs excellently on ZDT3 and ZDT4, while MOSCA and MOPSO perform better on ZDT1. For the CEC 2009 problems, MOSCHO performs best on UF6 and UF7, together with MOPSO and MOSCA; on UF1 to UF5, MOSCA performs better, along with MOPSO, while MALO and MSSA give better SD of GD and AV of GD, respectively. For the CEC 2020 problems, MOSCHO performs excellently on MMF14 and better on MMF7 and MMF10, while MALO excels on MMF4 and MSSA on MMF8. For the DTLZ problems, MOSCHO performs better on DTLZ1 and DTLZ4.
WRT confirms significant differences for MOSCHO at the 5% level, as in Tables 1 and 2. For the GD metric, the best value is the minimum: MOSCHO has more than 80% positive differences (eighteen functions) compared to MALO, more than 60% positive differences (fourteen functions) compared to MSSA, and more than 65% positive differences (fifteen functions) compared to MOPSO. MOSCHO thus has positive GD results over MALO, MSSA, and MOPSO. The WRT performance of MOSCA is equivalent to that of MOSCHO, with the number of positive values equal to the number of negative values (eleven functions each).
MOSCHO performs excellently on the mentioned functions because its search behavior matches these functions’ landscapes: the memorized solutions drive the search to a tighter bound around the region of the optimal solutions, the switching mechanism balances the variant exploration and exploitation phases, and those phases let MOSCHO alternate exploration and exploitation during the iterative search.

So, the combination of the memorized optimal solution, the global solution, and the different phases of exploration and exploitation achieves fast convergence of the obtained Pareto optimal set (obtained PF) towards the true Pareto front (true PF).
Figures 3, 4, and 5 show the true and obtained Pareto sets for MOSCHO and the compared algorithms MOSCA, MOPSO, MALO, and MSSA on the ZDT1, ZDT3, and ZDT4 problems. MOSCHO shows strong performance and a good distribution of the obtained set, reflecting the agreement between the multi-objective search space and MOSCHO’s search strategies. MOSCHO thus performs well both on functions with several disconnected convex regions and on functions with a single convex region.
Figure 3 shows MOSCHO’s performance, in which the repository of non-dominated solutions achieves fast convergence within the number of iterations used, with the best diversity of the repository compared to the true Pareto front. MOSCA achieves the nearest performance, while the other compared algorithms perform poorly, except MOPSO, which has the better diversity among them. This figure demonstrates MOSCHO’s ability to achieve the best performance on a continuous search space such as the ZDT1 problem.

Figure 4 shows MOSCHO’s performance on separated search spaces: its repository of non-dominated solutions has the fastest convergence and the best diversity, with the obtained Pareto set covering the separated regions. Although MOSCA achieves the nearest performance, the others perform poorly. This figure proves MOSCHO’s ability to achieve the best performance on a separated search space such as the ZDT3 problem.

Figure 5 shows MOSCHO’s performance on the more complicated ZDT4 problem with a continuous search space, where MOSCHO achieves the fastest convergence and the best diversity compared to the other algorithms, proving its ability on more complicated problems with continuous search spaces.
The last three figures indicate that MOSCHO reached the best convergence and the best distribution of repository elements relative to the true Pareto sets within a hundred iterations, with MOSCA performing better than the remaining algorithms. The MOSCHO repository elements are placed and distributed over the whole range of the true Pareto set for the ZDT1, ZDT3, and ZDT4 problems, proving MOSCHO’s compatibility with these problems: its characteristics direct the population towards the Pareto front set based on the equilibrium between exploration and exploitation and on bounding the search space towards the Pareto front.

ZDT3 has several convex regions and a disconnected search space. MOSCHO is the best of the algorithms used in this experiment at reaching the optimal solutions and spreading the obtained PF over the true PF; it focuses the search space in the direction of the optimal solutions early, giving it the fastest convergence among the algorithms used.

ZDT1 has only one convex region in its search space; MOSCHO converges quickly and distributes the obtained PF well over the true PF. ZDT4 is more complicated than ZDT1 and ZDT3, although its search space also has a single convex region, and MOSCHO has the best performance on ZDT4 in both convergence and diversity.

These results stem from SCHO’s characteristics: the variant exploration and exploitation phases enable MOSCHO to reach an optimal solution early, the switching mechanism alternates well between the exploration and exploitation phases, and memorizing the local optimum solution narrows the search space towards the region containing the optimal solution, with these mechanisms repeated throughout MOSCHO’s iterations.

So ZDT1, ZDT3, and ZDT4 each show fast convergence and the best diversity, even though ZDT1 and ZDT4 have a single convex region while ZDT3 has several disconnected convex components. The best performance on ZDT1, ZDT3, and ZDT4 proves MOSCHO’s superiority and the agreement between MOSCHO’s mechanisms and these functions’ search spaces.
Tables 3 and 4 show the statistical results of the inverted generational distance (IGD) metric for all benchmark functions. On the IGD metric, MOSCHO performs best on eight functions: two ZDT problems (ZDT3 and ZDT4), two CEC2009 functions (UF1 and UF6), two CEC2020 functions (MMF13 and MMF14), and two DTLZ problems (DTLZ4 and DTLZ6).
MOSCHO performs excellently on ZDT3 and ZDT4, while MOPSO and MOSCA perform better on ZDT1. MOSCHO performs better on UF1 and UF6, with the other tested algorithms sharing the better performance on the remaining CEC2009 functions. MOSCHO performs excellently on MMF14 and better on MMF13, while MSSA excels on MMF4 and MMF7. For the DTLZ problems, MOSCHO performs better on DTLZ4 and DTLZ6, but MALO performs better on DTLZ1.
The WRT confirms that MOSCHO’s differences exceed the 5% significance level, as shown in Tables 3 and 4. For the IGD metric the best value is the minimum: MOSCHO has more than 70% positive differences (sixteen functions) compared to MALO, up to 60% positive differences (thirteen functions) compared to MSSA, more than 60% positive differences (fourteen functions) compared to MOPSO, with no significant difference on UF3 between MOSCHO and MOPSO, and more than 50% positive differences (eleven functions) compared to MOSCA. MOSCHO therefore has positive IGD results over MALO, MSSA, and MOPSO; the WRT outcome against MOSCA is balanced, with eleven positive and eleven negative differences and no significant difference on MMF14.
These results stem from MOSCHO’s good characteristics: the memorized local search bounds the search space so that solutions converge towards the globally optimal region, the bounded search and the selected leader solutions enhance the convergence of the obtained PF, and switching among the variant exploration and exploitation phases enhances the convergence further. Tables 3 and 4 thus confirm MOSCHO’s good convergence, and the WRT confirms its superior convergence performance.
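The WRT comparisons reported here can be reproduced in outline with SciPy’s signed-rank test. The sketch below assumes paired per-run metric values (fifteen runs per algorithm, as in this study) for a metric where lower is better; the 5% level matches the significance threshold used above.

```python
import numpy as np
from scipy.stats import wilcoxon

def compare_runs(moscho_runs, other_runs, alpha=0.05):
    """Paired Wilcoxon signed-rank test on per-run metric values (lower is better).

    Returns '+' when MOSCHO is significantly better, '-' when significantly
    worse, and '=' when no significant difference is detected.
    """
    a, b = np.asarray(moscho_runs, float), np.asarray(other_runs, float)
    if np.allclose(a, b):
        return '='                       # identical samples: the test is undefined
    _, p = wilcoxon(a, b)
    if p >= alpha:
        return '='
    return '+' if a.mean() < b.mean() else '-'
```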
Figures 6, 7, 8, 9, 10, 11 and 12 show the true and obtained Pareto sets (obtained PF) for MOSCHO and the compared algorithms MOSCA, MOPSO, MALO, and MSSA. MOSCHO shows superior performance, with great convergence and a great distribution of the obtained set relative to the true Pareto set, for all CEC2009 functions except UF2 and UF7, whose complicated, non-uniform objective spaces contain many convex and concave components.
MOSCHO performs well on UF1 through UF6, except UF2. These functions have more than one convex region (apart from UF3), and UF6 additionally has more than one concave region; MOSCHO nevertheless obtains regular search sets for them as a result of the bounded search based on the memorized local solution and a good balance between the exploration and exploitation phases.
The figures illustrate fast convergence and a good distribution of the obtained PF relative to the true PF, a performance that results from the bounded search derived from the memorized local solutions and from the balanced alternation of the exploration and exploitation phases.
MOSCHO’s characteristics are not enough, however, to obtain fast convergence and good diversity on UF2 and UF7, because its search method is not well suited to their many irregular search sets and their complexity. UF2 has the most complex irregular search set, and the other algorithms likewise perform poorly in convergence and distribution on it. For UF7, MOPSO achieves the fastest convergence and a good distribution, unlike MOSCHO, which converges well but distributes less evenly than MOPSO; this is because MOPSO updates its population using a combination of global and local optimal solutions.
Tables 5 and 6 illustrate the statistical results of the spacing (S) performance metric for all mathematical benchmark functions. According to the spacing results, MOSCHO performs best on nine functions: ZDT3 and ZDT4 from the ZDT problems; UF3, UF4, and UF5 from CEC2009; MMF10, MMF13, and MMF14 from CEC2020; and only DTLZ4 from the DTLZ problems.
MOSCHO has excellent performance for ZDT4 and better performance for ZDT3, while MALO, MOPSO, and MOSCA perform better on ZDT1 to ZDT3. For CEC2009, MOSCHO has excellent performance for UF4 and better performance for UF3 and UF5, while MOPSO and MOSCA have excellent performance for UF2 and UF6, respectively. For CEC2020, MOSCHO performs best on MMF10 and MMF14 and better on MMF13, while MOSCA has excellent performance for MMF5 and MMF13. For DTLZ, MOSCHO performs better for DTLZ4 and DTLZ6, with the best results for the remaining DTLZ problems spread among the tested algorithms.
The WRT confirms that MOSCHO’s differences exceed the 5% significance level, as shown in Tables 5 and 6. For the spacing metric the best value is the minimum: MOSCHO has more than 70% positive differences (sixteen functions) compared to MALO, more than 50% positive differences (twelve functions) compared to MSSA, up to 60% positive differences (thirteen functions) compared to MOPSO, with no significant difference on MMF13 between MOSCHO and MOPSO, and more than 40% positive differences (eleven functions) compared to MOSCA. MOSCHO therefore has positive spacing results over MALO, MSSA, and MOPSO.
As a result of the balance between the exploration and exploitation phases and the bounded search that resizes the search space using both the global optimal solution and the memorized best solutions, MOSCHO achieves a great distribution of the obtained Pareto optimal sets, which indicates good coverage.
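The spacing indicator follows Schott’s definition: the standard deviation of each archive member’s distance to its nearest neighbour, so a value of 0 means a perfectly even spread. A generic sketch follows; Euclidean nearest-neighbour distances are used here, whereas Schott’s original formulation uses the L1 norm.

```python
import numpy as np

def spacing(front):
    """Schott's spacing metric: 0 indicates a perfectly uniform distribution."""
    front = np.asarray(front, float)
    d = np.linalg.norm(front[:, None, :] - front[None, :, :], axis=2)
    np.fill_diagonal(d, np.inf)          # ignore self-distances
    nearest = d.min(axis=1)              # each point's distance to its nearest neighbour
    return np.sqrt(((nearest - nearest.mean()) ** 2).sum() / (len(front) - 1))
```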
Figures 13, 14, 15, 16, 17, 18, 19, 20 and 21 show the true and obtained Pareto sets for MOSCHO and the compared algorithms MOSCA, MOPSO, MALO, and MSSA. MOSCHO obtains a good set relative to the true Pareto set; the good convergence and distribution result from MOSCHO’s characteristics while it searches for non-dominated solutions. Owing to the complicated objective spaces of MMF5, MMF12, and MMF14, however, MOSCHO achieves poor convergence there despite a good distribution.
MMF4 has a uniform search space with a single convex region, and MOSCHO achieves the best convergence and coverage. MMF5 has an irregular, two-level search space, yet MOSCHO again achieves the best convergence and coverage among the compared algorithms. MMF7 and MMF8 have irregular objective spaces with more than one convex region, but MOSCHO achieves applicable convergence and a good distribution compared to the other algorithms.
MMF10 has one concave region in a two-level objective space; MOSCHO obtains good convergence and a good distribution, handling MMF10 as if it had a discrete multi-convex objective space. MMF11 and MMF13 have linear, complicated objective spaces, yet MOSCHO achieves the best performance on them.
MMF12 has a multi-convex, discrete objective space, and all tested algorithms perform poorly on it. MMF14 has a three-dimensional convex front, and MOSCHO achieves the best performance among the tested algorithms for this experiment.
Overall, MOSCHO performs well on MMF4 and MMF10 with their continuous objective spaces, and acceptably on MMF5, MMF7, MMF8, and MMF13, whose disconnected objective sets make the objective space complicated. For linear, discrete objective-space problems such as MMF12, MOSCHO keeps a good distribution but converges poorly.
For CEC2020 as a whole, MOSCHO achieves the best performance among the tested algorithms thanks to its integrated strategies, although the most complicated CEC2020 functions would still require more sophisticated algorithms.
Tables 7 and 8 illustrate the statistical results of the hypervolume (HV) metric, computed with respect to a reference point for each bi-objective problem. According to the hypervolume results, MOSCHO performs best on fifteen of the eighteen bi-objective functions: three ZDT problems, four of the seven CEC2009 functions, and eight CEC2020 functions, as shown in Tables 7 and 8.
Specifically, MOSCHO performs better on ZDT1, ZDT3, and ZDT4 of the three ZDT problems; on UF1, UF2, UF4, and UF6 of the seven CEC2009 functions; and on eight CEC2020 functions: MMF4, MMF5, MMF7, MMF8, MMF10, MMF11, MMF12, and MMF13.
The WRT confirms that MOSCHO’s differences exceed the 5% significance level, as shown in Tables 7 and 8. For the hypervolume metric the best value is the maximum: MOSCHO has more than 90% negative differences (seventeen functions) compared to MALO, more than 90% negative differences (seventeen functions) compared to MSSA, more than 65% negative differences (twelve functions) compared to MOPSO, and more than 80% negative differences (fifteen functions) compared to MOSCA. Because higher HV is better, these negative differences reflect MOSCHO’s hypervolume advantage over MALO, MSSA, MOPSO, and MOSCA.
From Tables 7 and 8, the hypervolume metric therefore shows MOSCHO’s superior performance on the fifteen benchmark problems noted above: the mentioned ZDT functions, the processed CEC2009 problems except UF2 and UF7, and the bi-objective CEC2020 problems. MOSCHO achieves great coverage thanks to its integrated characteristics: the memorized best solutions that bound the search space and the balance between the exploration and exploitation phases, which together make it easier to reach optimal solutions with the best diversity of the obtained Pareto sets.
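For the bi-objective problems in Tables 7 and 8, the hypervolume reduces to a sum of rectangle areas between the front and the reference point. The sketch below covers that two-objective minimization case only; the reference point must be dominated by the front members for the result to be meaningful.

```python
import numpy as np

def hypervolume_2d(front, ref):
    """Hypervolume of a bi-objective minimization front w.r.t. reference point `ref`.

    Larger values indicate better combined convergence and coverage.
    """
    pts = np.asarray(front, float)
    pts = pts[np.all(pts <= ref, axis=1)]        # keep points that dominate the reference
    pts = pts[np.argsort(pts[:, 0])]             # sweep along the first objective
    hv, prev_f2 = 0.0, ref[1]
    for f1, f2 in pts:
        if f2 < prev_f2:                         # skip dominated points in the sweep
            hv += (ref[0] - f1) * (prev_f2 - f2) # rectangle contributed by this point
            prev_f2 = f2
    return hv
```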
Figures 22, 23 and 24 show the true and obtained Pareto sets for MOSCHO and the compared algorithms MOSCA, MOPSO, MALO, and MSSA. MOSCHO achieves the best distribution and good convergence of the obtained set relative to the true Pareto set for DTLZ4 and DTLZ6, and good performance for DTLZ1.
DTLZ1, DTLZ4, and DTLZ6 have concave geometries and are processed with three objectives. MOSCHO achieves good coverage and good convergence because of its balanced, integrated characteristics: the switching mechanism balances the exploration and exploitation phases, and the difference between the global solutions and the memorized best local solutions resizes the search space, distributing the obtained solutions uniformly and steering the next generation of the population towards the global optimum, as the following results for the well-known problems (SRN and the welded beam design problem) also show.
Analysis results of the constrained benchmark function SRN and the constrained welded beam design application
The MOSCHO algorithm and the other chosen algorithms are also tested on constrained applications: the SRN problem and the welded beam design application. Design problems are among the most challenging problems; Appendix B details both problems and all their constraints. The SRN problem minimizes two objectives, F1 and F2, simultaneously, subject to two constraints. The welded beam design problem minimizes two objectives, end deflection and beam cost, subject to four constraints. Each test of MOSCHO and the other chosen algorithms is repeated fifteen times to make the results more consistent and reliable.
Table 9 shows the statistical results of the GD performance metric. MOSCHO performs better for the welded beam design problem, while MOSCA performs better for the SRN problem. SRN has only two constraints and is not as complicated as the welded beam application with its four constraints, and MOSCHO obtains fast convergence within only 100 iterations on the more complicated application.
The integrated characteristics of MOSCHO establish its convergence priority through the GD metric in the welded beam engineering design application, and the WRT shows more positive than negative differences (the best GD value is the minimum). These results confirm MOSCHO’s ability to achieve the fastest convergence for complicated engineering design problems such as the welded beam design application.
Table 10 shows the statistical results of the diversity (∆) performance metric. MOSCHO performs better for the SRN problem, while NSGAII and MHS perform better for the welded beam design problem. The WRT results show more positive than negative differences for the ∆ metric, for which the minimum value is best. The great diversity of the obtained non-dominated solutions, reflected in MOSCHO’s ∆ values, confirms the best distribution within 100 iterations on the simpler problem, and MOSCHO retains good diversity on the more complex engineering design application. The ∆ results demonstrate MOSCHO’s best coverage, a result of the combination of its characteristics.
Figure 25 shows the true and obtained Pareto sets for MOSCHO and the compared algorithms MOSCA, MOPSO, MHS, MWO, MALO, NSGAII, and MSSA. MOSCHO achieves better coverage and better convergence for the SRN problem, as its superior ∆ and GD values indicate, and its obtained sets are well distributed relative to the true Pareto front. This performance confirms the benefits of the combined characteristics on a constrained problem, namely the bounded search based on global and memorized optimal solutions and the switching mechanism among the several phases, as illustrated in Tables 9, 10, 11 and 12, while the ER and SCC results further confirm MOSCHO’s superiority.
Table 11 shows the statistical results of the error ratio (ER) performance metric for the constrained applications, SRN and the welded beam design problem. MOSCHO performs better for both applications, placing the largest share of the obtained PF on the true PF. The ER results confirm MOSCHO’s best coverage and convergence as a success indicator on both the simple and the complicated constrained problems, a result of the good convergence and diversity delivered by its combined characteristics.
Table 12 shows the statistical results of the success counting (SCC) performance metric. MOSCHO performs better for both the SRN and the welded beam design problems, with most of MOSCHO’s obtained PF lying on the true PF of both the simple and the complex constrained problems. The SCC results illustrate MOSCHO’s best success performance, resulting from its great convergence and great distribution.
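ER and SCC are complementary success indicators: SCC counts the obtained solutions that lie on the true PF, and ER is the fraction that do not. The sketch below uses the common convention that a solution "lies on" the true front when its distance to it falls below a small tolerance; the tolerance value is an illustrative assumption.

```python
import numpy as np

def success_indicators(obtained, true_pf, tol=1e-3):
    """Error ratio (ER, lower is better) and success counting (SCC, higher is better).

    A solution counts as successful when its distance to the true Pareto front
    is below `tol` (the tolerance is an illustrative choice).
    """
    obtained = np.asarray(obtained, float)
    true_pf = np.asarray(true_pf, float)
    d = np.linalg.norm(obtained[:, None, :] - true_pf[None, :, :], axis=2).min(axis=1)
    scc = int((d <= tol).sum())          # obtained points lying on the true front
    er = 1.0 - scc / len(obtained)       # fraction of points missing the true front
    return er, scc
```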
From Tables 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11 and 12, MOSCHO shows excellent performance on several benchmark functions, such as ZDT3 and ZDT4 from the ZDT problems. For CEC2009, MOSCHO obtains the best value of some performance metrics for every function. For CEC2020, MOSCHO has excellent performance for MMF14 and superior values on most metrics for all functions except MMF1 and MMF2. MOSCHO performs better for DTLZ1, DTLZ4, and DTLZ6. MOSCHO also performs better on the GD, ∆, ER, and SCC metrics of the constrained SRN problem and on the ER and SCC metrics of the welded beam design problem. These results confirm MOSCHO’s suitability for complicated constrained engineering design problems.
Figure 26 shows the true and obtained Pareto sets for MOSCHO and the compared algorithms MOSCA, MOPSO, MHS, MWO, NSGAII, MALO, and MSSA. MOSCHO achieves excellent convergence and an excellent distribution of the obtained set relative to the true Pareto set of the welded beam design application. Together with these figures, Tables 9, 10, 11 and 12 confirm MOSCHO’s superiority as a result of the integrated methods proposed for this experiment, and Tables 11 and 12 confirm that MOSCHO places the largest number of obtained PF solutions on the true PF.
Analysis results for modern real-world design problems
The MOSCHO algorithm and the other chosen algorithms are also tested on modern real-world electrical design applications: Optimal Power Flow66 and Optimal Setting of Droop Controller67. Design problems are among the most challenging problems; Appendix C details both problems. The Optimal Power Flow problem minimizes four objectives, F1, F2, F3, and F4, simultaneously, while the Optimal Setting of Droop Controller problem minimizes three objectives. Each test of MOSCHO and the other chosen algorithms is repeated fifteen times to make the results more consistent and reliable.
Table 13 shows the statistical results of the GD performance metric. MOSCHO performs better for the Optimal Power Flow design problem, while MOPSO and NSGAII perform better for the Optimal Setting of Droop Controller problem. The Optimal Power Flow problem has four objectives and is more complicated than the three-objective Optimal Setting of Droop Controller problem, yet MOSCHO obtains fast convergence within only 100 iterations on this complicated application.
The integrated characteristics of MOSCHO establish its convergence priority through the GD metric in the Optimal Power Flow engineering design application. The WRT shows positive differences against all tested algorithms for the Optimal Power Flow problem and more positive than negative differences for the Optimal Setting of Droop Controller problem (the best GD value is the minimum). These results confirm MOSCHO’s ability to achieve the fastest convergence for complicated design engineering problems such as the Optimal Power Flow application.
Figure 27 shows the best compromise solution (BCS) and the obtained Pareto sets for MOSCHO and the compared algorithms MOSCA, MOPSO, MHS, MWO, NSGAII, MALO, and MSSA. MOSCHO achieves better convergence and a better distribution of the obtained set relative to the BCS of the Optimal Power Flow design application, whereas NSGAII handles the objectives as separate subproblems. Together with these figures, Tables 14, 15 and 16 confirm MOSCHO’s superiority as a result of the integrated methods proposed for this experiment.
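The text does not spell out how the BCS is extracted; a widespread convention, shown below purely as an assumption, is the fuzzy-membership rule, where each objective value is normalized to [0, 1] and the solution with the highest total membership is reported as the compromise.

```python
import numpy as np

def best_compromise_solution(front):
    """Pick a best compromise solution from a minimization front.

    Uses the common fuzzy-membership convention (an assumption, not necessarily
    this study's procedure).
    """
    f = np.asarray(front, float)
    f_min, f_max = f.min(axis=0), f.max(axis=0)
    span = np.where(f_max > f_min, f_max - f_min, 1.0)
    mu = (f_max - f) / span              # membership of each objective, 1 = best
    score = mu.sum(axis=1) / mu.sum()    # normalized membership of each solution
    return int(np.argmax(score))         # index of the BCS within the front
```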
Table 14 shows the statistical results of the IGD performance metric. MOSCHO performs better for the Optimal Power Flow design problem, while MOPSO and MHS perform better for the Optimal Setting of Droop Controller design problem. Again, the four-objective Optimal Power Flow problem is more complicated than the three-objective droop controller problem, yet MOSCHO obtains fast convergence within only 100 iterations.
The integrated characteristics of MOSCHO likewise establish its convergence priority through the IGD metric in the Optimal Power Flow application. The WRT shows positive differences against all tested algorithms for the Optimal Power Flow problem and more positive than negative differences for the Optimal Setting of Droop Controller problem (the best IGD value is the minimum). These results again confirm MOSCHO’s fastest convergence on complicated design engineering problems such as the Optimal Power Flow application.
Figure 28 shows the best compromise solution (BCS) and the obtained Pareto sets for MOSCHO and the compared algorithms MOSCA, MOPSO, MHS, MWO, NSGAII, MALO, and MSSA. MOSCHO achieves better convergence and a better distribution of the obtained set relative to the BCS of the Optimal Setting of Droop Controller design application. Together with these figures, Tables 14, 15 and 16 confirm MOSCHO’s superiority as a result of the integrated methods proposed for this experiment.
Table 15 illustrates the statistical results of the spacing (S) performance metric for the modern real-world electrical applications. According to the spacing results, MOSCHO performs best for the Optimal Power Flow problem, while MHS and MOPSO perform best for the Optimal Setting of Droop Controller.
The WRT confirms that MOSCHO’s differences exceed the 5% significance level, as shown in Table 15. For the spacing metric the best value is the minimum: MOSCHO has 100% positive differences against the other tested algorithms for the Optimal Power Flow problem, and up to 30% positive differences against two of them for the Optimal Setting of Droop Controller problem.
The balance between the exploration and exploitation phases and the bounded search that resizes the search space using both the global optimal solution and the memorized best solutions again give MOSCHO a great distribution of the obtained Pareto optimal sets, indicating good coverage on these real-world problems.
Parameter sensitivity analysis
To extend SCHO to handle multiple-objective problems, several parameters must be tuned to obtain good performance. MOSCHO’s parameter sensitivity is analyzed over 100 iterations with 100 candidate solutions for both the population and the repository.
For the original SCHO algorithm, the sensitivity parameters A, ct, u, m, n, and ε are tuned so that SCHO performs well on multi-objective problems; ZDT1 is the multi-objective problem used for this tuning. Because of randomization, each setting is run three times and evaluated with the GD and IGD metrics.
Parameter (A) is responsible for switching between exploration and exploitation optimization methods. Table 16 shows the sensitivity analysis of parameter (A) for various values, where the best value is 3.
The parameter (ct) is responsible for switching between the two phases of the exploration and exploitation methods. Table 16 summarizes the tuned values, where the best value is 4.5, which produces the best GD and IGD. The parameters (m) and (u) control the accuracy of the 1st phase of exploration and exploitation; their best values for GD and IGD are 0.1 and 0.6, respectively, as shown in Table 17. The parameters (n) and (ε) control the accuracy of the 2nd phase of exploration, while parameter (n) alone controls the 2nd phase of exploitation. The best values of n and ε are 0.7 and 0.003, respectively, which achieve the best GD and IGD values, as shown in Table 18.
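The procedure described above amounts to a small grid search: each candidate value of a parameter is run three times on ZDT1, and the averaged GD and IGD decide the best setting. In the sketch below, run_moscho is a hypothetical driver standing in for the actual solver, and combining GD and IGD by their sum is an illustrative choice.

```python
import numpy as np

def tune_parameter(run_moscho, name, candidates, repeats=3):
    """Sweep one parameter; `run_moscho(**{name: value})` is assumed to run the
    solver on ZDT1 and return the (GD, IGD) pair of the final archive."""
    results = {}
    for value in candidates:
        scores = np.array([run_moscho(**{name: value}) for _ in range(repeats)])
        results[value] = scores.mean(axis=0)     # average (GD, IGD) over repeats
    best = min(results, key=lambda v: results[v].sum())  # both metrics are minimized
    return best, results

# Example sweep of the switching parameter A over candidate values:
# best_A, table = tune_parameter(run_moscho, "A", [1, 2, 3, 4, 5])
```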
The SCHO parameters above are responsible for balancing the exploration and exploitation methods, as mentioned earlier in this section. The following tests tune the parameters of the multi-objective components themselves.
The main parameters of the MOSCHO algorithm are nGrid, α, β, γ, and the mutation parameter (mu), which control the proposed method in this experiment. The nGrid and α parameters serve the hyper-cube method: nGrid represents the number of candidate solutions per hyper-cube, and α is an inflation rate that controls nGrid. The β parameter controls the leader selection method, and the γ parameter controls the deletion selection method. The mu parameter sets the accepted percentage of newly obtained candidate positions. As shown in Tables 19, 20 and 21, the best performance is obtained with nGrid = 25, α = 0.3, β = 3, γ = 1.5, and mu = 0.6.
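The grid-based leader and deletion rules resemble the archive scheme popularized by MOPSO; under that assumption, the sketch below shows how β can act as a selection pressure, giving less crowded hypercubes an exponentially higher chance of supplying the leader.

```python
import numpy as np

def select_leader(grid_index, beta, rng):
    """Roulette-wheel leader selection over archive hypercubes (MOPSO-style sketch).

    grid_index : array mapping each archive member to its hypercube id
    beta       : selection pressure; larger beta favours less crowded cubes
    """
    cubes, counts = np.unique(grid_index, return_counts=True)
    weights = counts.astype(float) ** (-beta)    # fewer occupants -> larger weight
    cube = rng.choice(cubes, p=weights / weights.sum())
    members = np.flatnonzero(grid_index == cube)
    return int(rng.choice(members))              # a random member of the chosen cube
```

A γ-controlled deletion rule would mirror this selection with weights proportional to counts ** gamma, preferring to remove members from the most crowded cubes.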
After tuning the MOSCHO parameters, the parameter settings of all algorithms are aggregated in Table 22.
Table 22 shows the parameter settings of all algorithms used in this experiment; each algorithm has specific parameters that affect how it updates the positions of its candidate solutions. The table also summarizes the common experimental settings: a population of 100, a repository of 100, and a maximum of 100 iterations per experiment. The parameters nGrid, α, β, and γ, added when extending the algorithms to multiple objectives, are used for all algorithms except NSGAII.
Limitations of the proposed MOSCHO method
The proposed MOSCHO method has several main limitations:
- MOSCHO requires careful tuning of its parameters; with more than ten parameters to tune, the sensitivity analysis is time-consuming, as indicated in Sect. 4.3.
- MOSCHO’s performance varies with the problem characteristics, environment, and constraints, as indicated in Sect. 4.2 for both the benchmark functions and the real-world design applications; the performance therefore depends on the testing environment.
- Multi-objective problems are inherently difficult: equilibrium must be maintained across all objectives to obtain a solution that is optimal for every objective simultaneously, as illustrated by the results of the different problems in Sect. 4.2.
- Handling multiple objectives makes meta-heuristic algorithms more complicated.
Conclusion and future works
The proposed MOSCHO algorithm provides excellent solutions for complex benchmark functions across all performance metrics in comparison with relevant works from the literature. MOSCHO simultaneously obtains good convergence and a perfect spread of the archived repository over the Pareto front set. The memorized technique saves the best local position each candidate solution has reached up to the last iteration and uses that saved best solution while updating the position at the current iteration. The memorized multi-objective Sinh-Cosh optimization algorithm converges the repository better towards the known Pareto front set of each benchmark function and real-world engineering design problem, as the convergence, coverage, and success-metric tables illustrate. MOSCHO achieves fast convergence and an excellent spread of repository solutions over the Pareto front, as shown by the spacing and diversity metrics, and it performs better on the success performance indicators, error ratio and success counting, including for the tested real-world problems. One of MOSCHO’s main limitations, however, is that it requires careful tuning of many parameters, and its performance varies with the problem characteristics and the testing environment.
Future works
- Extension of MOSCHO to many-objective problems (more than three objectives).
- Application of MOSCHO to more complex real-world engineering problems.
- Hybridization of MOSCHO with other algorithms to improve performance on specific problem types.
- Improvement of the algorithm’s constraint-handling mechanism.
- Adaptation of MOSCHO for dynamic multi-objective optimization problems.
Data availability
All data generated or analyzed during this study are included in this published article.
Code availability
For inquiries related to the code implementation and design, please contact the corresponding author.
References
Jiang, J., Zhao, Z., Liu, Y., Li, W. & Wang, H. DSGWO: an improved grey Wolf optimizer with diversity enhanced strategy based on group-stage competition and balance mechanisms. Knowl. Based Syst. 250, 109100 (2022).
Khodadadi, N., Talatahari, S. & Dadras Eslamlou, A. MOTEO: a novel multi-objective thermal exchange optimization algorithm for engineering problems. Soft. Comput. 26, 6659–6684 (2022).
Kennedy, J. & Eberhart, R. Particle swarm optimization, in Proceedings of ICNN’95-international conference on neural networks, pp. 1942–1948. (1995).
Wei, J., Gu, Y., Law, K. E. & Cheong, N. Adaptive position updating particle swarm optimization for UAV path planning, in 2024 22nd International Symposium on Modeling and Optimization in Mobile, Ad Hoc, and Wireless Networks (WiOpt), pp. 124–131. (2024).
Mirjalili, S. et al. Salp swarm algorithm: A bio-inspired optimizer for engineering design problems. Adv. Eng. Softw. 114, 163–191 (2017).
Lu, B. et al. MRBMO: an enhanced red-billed blue magpie optimization algorithm for solving numerical optimization challenges. Symmetry 17, 1295 (2025).
Bai, J. et al. A Sinh Cosh optimizer. Knowl. Based Syst. 282, 111081 (2023).
Xiao, Y., Cui, H., Khurma, R. A. & Castillo, P. A. Artificial lemming algorithm: a novel bionic meta-heuristic technique for solving real-world engineering optimization problems. Artif. Intell. Rev. 58, 84 (2025).
Issa, M. Sequence analysis algorithms for bioinformatics Application (GRIN, 2014).
Issa, M., Helmi, A., Abd Elaziz, M. & Bhattacharyya, S. Chaotic Ions Motion Optimization (CIMO) for biological sequences local alignment: COVID-19 as a case study. In Recent Trends in Signal and Image Processing: ISSIP, vol. 1333, p. 1 (2021).
Issa, M. Expeditious Covid-19 similarity measure tool based on consolidated SCA algorithm with mutation and opposition operators. Appl. Soft Comput. 104, 107197 (2021).
Issa, M. & Abd Elaziz, M. Analyzing COVID-19 virus based on enhanced fragmented biological local aligner using improved ions motion optimization algorithm. Appl. Soft Comput. 106683 (2020).
Issa, M., Abd Elbaset, A., Hassanien, A. E. & Ziedan, I. PID controller tuning parameters using meta-heuristics algorithms: comparative analysis. In Machine Learning Paradigms: Theory and Application 413–430 (Springer, 2019).
Issa, M. & Hassanien, A. E. Multiple sequence alignment optimization using meta-heuristic techniques. In Handbook of Research on Machine Learning Innovations and Trends 409–423 (IGI Global, 2017).
Issa, M., Helmi, A. M., Elsheikh, A. H. & Abd Elaziz, M. A biological sub-sequences detection using integrated BA-PSO based on infection propagation mechanism: case study COVID-19. Expert Syst. Appl. 189, 116063 (2022).
Issa, M. Performance optimization of PID controller based on parameters estimation using meta-heuristic techniques: a comparative study. In Metaheuristics in Machine Learning: Theory and Applications 691–709 (Springer, 2021).
Issa, M. Enhanced arithmetic optimization algorithm for parameter estimation of PID controller. Arabian J. Sci. Eng. 48(2), 2191–2205 (2022).
Issa, M. Enhanced artificial satellite search algorithm with memory and evolutionary operator for PID controller parameter Estimation. Sci. Rep. 15, 39551 (2025).
Jordehi, A. R. Parameter Estimation of solar photovoltaic (PV) cells: A review. Renew. Sustain. Energy Rev. 61, 354–371 (2016).
Ghetas, M. & Issa, M. Extracting optimal fuel cell parameters using dynamic fick’s law algorithm with cooperative learning strategy and k-means clustering. Expert Syst. Appl. 262, 125601 (2025).
Issa, M., Elaziz, M. A. & Selem, S. I. Enhanced hunger games search algorithm that incorporates the marine predator optimization algorithm for optimal extraction of parameters in PEM fuel cells. Sci. Rep. 15, 4474 (2025).
Issa, M. & Samn, A. Passive vehicle suspension system optimization using Harris Hawk optimization algorithm. Math. Comput. Simul. 191, 328–345 (2022).
Ehrgott, M. & Wiecek, M. M. Multiobjective programming, Multiple criteria decision analysis: State of the art surveys, vol. 78, pp. 667–708, (2005).
Jangir, P. & Trivedi, I. N. Non–Dominated sorting moth flame optimizer: A novel Multi–Objective optimization algorithm for solving engineering design problems. Eng. Technol. Open. Access. J. 2, 17–31 (2018).
Liang, J. et al. Multimodal multiobjective optimization with differential evolution. Swarm Evol. Comput. 44, 1028–1059 (2019).
Wolpert, D. H. & Macready, W. G. No free lunch theorems for optimization. IEEE Trans. Evol. Comput. 1, 67–82 (1997).
Khodadadi, N., Abualigah, L., El-Kenawy, E. S. M., Snasel, V. & Mirjalili, S. An archive-based multi-objective arithmetic optimization algorithm for solving industrial engineering problems. IEEE Access. 10, 106673–106698 (2022).
Coello, C. C. & Lechuga, M. S. MOPSO: A proposal for multiple objective particle swarm optimization, in Proceedings of the 2002 Congress on Evolutionary Computation. CEC’02 (Cat. No. 02TH8600), pp. 1051–1056. (2002).
Deb, K., Agrawal, S., Pratap, A. & Meyarivan, T. A fast elitist non-dominated sorting genetic algorithm for multi-objective optimization: NSGA-II, in International conference on parallel problem solving from nature, pp. 849–858. (2000).
Gong, D., Sun, J. & Ji, X. Evolutionary algorithms with preference polyhedron for interval multi-objective optimization problems. Inf. Sci. 233, 141–161 (2013).
Knowles, J. & Corne, D. The Pareto archived evolution strategy: a new baseline algorithm for Pareto multiobjective optimisation. In Proceedings of the Congress on Evolutionary Computation-CEC99 (Cat. No. 99TH8406), 98–105 (1999).
Yen, G. G. & Lu, H. Dynamic multiobjective evolutionary algorithm: adaptive cell-based rank and density Estimation. IEEE Trans. Evol. Comput. 7, 253–274 (2003).
Tan, K. C., Yang, Y. & Goh, C. K. A distributed cooperative coevolutionary algorithm for multiobjective optimization. IEEE Trans. Evol. Comput. 10, 527–549 (2006).
Zhang, Y., Gong, D., Sun, J. & Qu, B. A decomposition-based archiving approach for multi-objective evolutionary optimization. Inf. Sci. 430, 397–413 (2018).
Sun, J. et al. Interval multiobjective optimization with memetic algorithms. IEEE Trans. Cybernetics. 50, 3444–3457 (2019).
Zhou, A., Zhang, Q. & Zhang, G. A multiobjective evolutionary algorithm based on decomposition and probability model, in 2012 IEEE Congress on Evolutionary Computation, pp. 1–8. (2012).
Elsayed, S. & Sarker, R. Differential evolution framework for big data optimization. Memetic Comput. 8, 17–33 (2016).
Abd Elaziz, M., Li, L., Jayasena, K. N. & Xiong, S. Multiobjective big data optimization based on a hybrid salp swarm algorithm and differential evolution. Appl. Math. Model. 80, 929–943 (2020).
Tian, Y., Cheng, R., Zhang, X., Cheng, F. & Jin, Y. An indicator-based multiobjective evolutionary algorithm with reference point adaptation for better versatility. IEEE Trans. Evol. Comput. 22, 609–622 (2017).
Abdel-Basset, M., Mohamed, R. & Abouhawwash, M. Balanced multi-objective optimization algorithm using improvement based reference points approach. Swarm Evol. Comput. 60, 100791 (2021).
Gong, D., Ji, X., Sun, J. & Sun, X. Interactive evolutionary algorithms with decision-maker׳ s preferences for solving interval multi-objective optimization problems. Neurocomputing 137, 241–251 (2014).
Mirjalili, S. Z., Mirjalili, S., Saremi, S., Faris, H. & Aljarah, I. Grasshopper optimization algorithm for multi-objective optimization problems. Appl. Intell. 48, 805–820 (2018).
Lai, X., Li, C., Zhang, N. & Zhou, J. A multi-objective artificial sheep algorithm. Neural Comput. Appl. 31, 4049–4083 (2019).
Coello, C. A. C., Pulido, G. T. & Lechuga, M. S. Handling multiple objectives with particle swarm optimization. IEEE Trans. Evol. Comput. 8, 256–279 (2004).
Tawhid, M. A. & Savsani, V. Multi-objective sine-cosine algorithm (MO-SCA) for multi-objective engineering design problems. Neural Comput. Appl. 31, 915–929 (2019).
Mousa, A., El-Shorbagy, M. A. & Abd-El-Wahed, W. F. Local search based hybrid particle swarm optimization algorithm for multiobjective optimization. Swarm Evol. Comput. 3, 1–14 (2012).
El Aziz, M. A., Ewees, A. A., Hassanien, A. E., Mudhsh, M. & Xiong, S. Multi-objective whale optimization algorithm for multilevel thresholding segmentation. In Advances in Soft Computing and Machine Learning in Image Processing 23–39 (Springer, 2017).
Xue, J., Zhang, C., Wang, M. & Dong, X. MOSSA: an efficient swarm intelligent algorithm to solve global optimization and carbon fiber drawing process problems. IEEE Internet of Things Journal (2024).
Xue, J. & Shen, B. A survey on sparrow search algorithms and their applications. Int. J. Syst. Sci. 55, 814–832 (2024).
Pavelski, L. M., Almeida, C. P. & Goncalves, R. A. Harmony search for multi-objective optimization, in 2012 Brazilian symposium on neural networks, pp. 220–225. (2012).
Kumawat, I. R., Nanda, S. J. & Maddila, R. K. Multi-objective whale optimization. In TENCON 2017 IEEE Region 10 Conference, 2747–2752 (2017).
Izci, D. et al. Refined Sinh Cosh optimizer tuned controller design for enhanced stability of automatic voltage regulation. Electr. Eng. 106, 6003–6016 (2024).
Ekinci, S., Izci, D., Bajaj, M. & Blazek, V. Novel application of Sinh Cosh optimizer for robust controller design in hybrid photovoltaic-thermal power systems. Sci. Rep. 15, 2825 (2025).
Ibrahim, R. A. et al. Boosting Sinh Cosh Optimizer and arithmetic optimization algorithm for improved prediction of biological activities for indoloquinoline derivatives. Chemosphere 359, 142362 (2024).
Ekinci, S. et al. Optimized FOPID controller for steam condenser system in power plants using the sinh-cosh optimizer. Sci. Rep. 15, 6876 (2025).
Abualigah, L., Ekinci, S. & Izci, D. Aircraft pitch control via filtered proportional-integralderivative controller design using sinh cosh optimizer. Int. J. Rob. Control Syst. 4(2), 746–757 (2024).
Wang, X. et al. A sinh–cosh-enhanced DBO Algorithm applied to global optimization problems. Biomimetics 9, 271 (2024).
Ray, T., Tai, K. & Seow, K. C. Multiobjective design optimization by an evolutionary algorithm. Eng. Optim. 33, 399–424 (2001).
Zitzler, E., Deb, K. & Thiele, L. Comparison of multiobjective evolutionary algorithms: empirical results. Evolution. Comput. 8, 173–195 (2000).
Huband, S., Hingston, P., Barone, L. & While, L. A review of multiobjective test problems and a scalable test problem toolkit. IEEE Trans. Evol. Comput. 10, 477–506 (2006).
Liang, J., Suganthan, P. N., Qu, B., Gong, D. & Yue, C. Problem Definitions and Evaluation Criteria for the CEC 2020 Special Session on Multimodal Multiobjective Optimization. Technical report, Zhengzhou University (2019).
Mirjalili, S., Jangir, P. & Saremi, S. Multi-objective ant Lion optimizer: a multi-objective optimization algorithm for solving engineering problems. Appl. Intell. 46, 79–95 (2017).
Srinivasan, N. & Deb, K. Multi-objective function optimisation using non-dominated sorting genetic algorithm. Evolutionary Comp. 2, 221–248 (1994).
Ray, T. & Liew, K. A swarm metaphor for multiobjective design optimization. Engineering optimization 34, 141–153 (2002).
Zhang, Q. et al. Multiobjective optimization test instances for the CEC 2009 special session and competition, (2008).
Kumar, A., Jha, B. K., Singh, D. & Misra, R. K. A new current injection based power flow formulation. Electr. Power Compon. Syst. 48, 268–280 (2020).
Kumar, A., Jha, B. K., Dheer, D. K., Misra, R. K. & Singh, D. A nested-iterative Newton-Raphson based power flow formulation for droop-based islanded microgrids. Electr. Power Syst. Res. 180, 106131 (2020).
Nebro, A. J., Durillo, J. J. & Coello, C. A. C. Analysis of leader selection strategies in a multi-objective particle swarm optimizer, in 2013 IEEE congress on evolutionary computation, pp. 3153–3160. (2013).
Knowles, J. D. & Corne, D. W. Approximating the nondominated front using the Pareto archived evolution strategy. Evolution. Comput. 8, 149–172 (2000).
Van Veldhuizen, D. A. & Lamont, G. B. Multiobjective evolutionary algorithm research: A history and analysis, Citeseer (1998).
Schott, J. R. Fault Tolerant Design Using Single and Multicriteria Genetic Algorithm Optimization (Massachusetts Institute of Technology, 1995).
Sierra, M. R. & Coello Coello, C. A. Improving PSO-based multi-objective optimization using crowding, mutation and ε-dominance. In International Conference on Evolutionary Multi-Criterion Optimization, 505–519 (2005).
Zitzler, E. & Thiele, L. Multiobjective evolutionary algorithms: a comparative case study and the strength Pareto approach. IEEE Trans. Evol. Comput. 3, 257–271 (2002).
Van Veldhuizen, D. A. Multiobjective evolutionary algorithms: classifications, analyses, and new innovations (Air Force Institute of Technology, 1999).
Okoye, K. & Hosseini, S. Wilcoxon statistics in R: signed-rank test and rank-sum test. In R Programming: Statistical Data Analysis in Research 279–303 (Springer, 2024).
Funding
Open access funding provided by The Science, Technology & Innovation Funding Authority (STDF) in cooperation with The Egyptian Knowledge Bank (EKB).
Author information
Contributions
D.E.: data curation, supervision, writing (original draft preparation), methodology, conceptualization, and experimental tests. M.I.: conceptualization, supervision, methodology, and writing (original draft preparation). I.Z.: writing (review and editing). All authors have read and agreed to the published version of the manuscript.
Ethics declarations
Competing interests
The authors declare no competing interests.
Additional information
Publisher’s note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Supplementary Information
Below is the link to the electronic supplementary material.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
About this article
Cite this article
El-Nagar, D., Zeidan, I. & Issa, M. A memorized multi-objective Sinh-Cosh optimizer for solving multi-objective engineering design problems. Sci Rep 16, 3039 (2026). https://doi.org/10.1038/s41598-025-33789-8