Abstract
The grasshopper optimization algorithm (GOA) is a meta-heuristic algorithm proposed in 2017 that mimics the biological behavior of grasshopper swarms seeking food sources in nature to solve optimization problems. Nonetheless, some shortcomings exist in the original GOA: its global search ability is somewhat insufficient, and its precision also needs further improvement. Although there are many different GOA variants in the literature, the problems of inefficiency and rough precision still emerge in these variants. Aiming at these deficiencies, this paper develops an improved version of GOA with a Levy flight mechanism, called LFGOA, to alleviate the shortcomings of the original GOA. The LFGOA algorithm achieves a more suitable balance between exploitation and exploration while searching for the most promising region. The performance of LFGOA is tested on 23 mathematical benchmark functions in comparison with eight well-known meta-heuristic algorithms and on seven real-world engineering problems. The statistical analysis and experimental results show the efficiency of LFGOA. According to the obtained results, the LFGOA algorithm can be a potential alternative for solving meta-heuristic optimization problems, as it has high exploration and exploitation capabilities.
Introduction
To date, researchers and practitioners have proposed and experimented with various nature-inspired metaheuristic algorithms to handle diverse search problems. O. N. Oyelade et al.1 (2022) proposed the appealing Ebola Optimization Search Algorithm (EOSA) and achieved attractive results, especially when EOSA was applied to the complex problem of selecting the best combination of convolutional neural network (CNN) hyperparameters in the image classification of digital mammography; however, the mathematical model of EOSA is somewhat complicated. Laith Abualigah et al.2 (2022) proposed the unique Reptile Search Algorithm (RSA) and achieved better results than other competitive optimization algorithms when applying RSA to seven real-world engineering problems; since its introduction, many RSA variants have been proposed, although the study would have benefited from statistical numerical results (such as mean and standard deviation) for RSA and the comparative algorithms on those problems. Abualigah, Laith Mohammad et al.3 (2021) proposed a novel, mathematically modelled Arithmetic Optimization Algorithm (AOA) that utilizes the main arithmetic operators: Multiplication (M), Division (D), Subtraction (S), and Addition (A). Although the performance of AOA is evaluated using twenty-nine benchmark functions and several real-world engineering design problems, the parameter Math Optimizer Accelerated (MOA), which is increased linearly from 0.2 to 0.9, still needs to be discussed more extensively. Hussien, A.G. et al.4 (2022) comprehensively reviewed the recent widespread applications and variants of the Harris hawks optimizer (HHO) in depth, and thoughtfully investigated several possible future directions and ideas for this well-established algorithm. As soon as the Snake Optimizer (SO) was proposed (2022) by Fatma A. Hashim et al.5, it attracted researchers and practitioners and was applied in many domains, as SO is simple and efficient; since its introduction, many SO variants have been proposed to tackle optimization problems. Zheng, Rong et al.6 (2022) proposed an improved wild horse optimizer (IWHO) integrating three improvements: a random running strategy (RRS), a dynamic inertia weight strategy (DIWS), and a competition-for-waterhole mechanism (CWHM). IWHO successfully overcomes the crucial drawbacks of the original WHO, which may become stuck in local optima or converge slowly; it was evaluated on classical benchmark functions and five real-world optimization problems and compared with nine well-known algorithms. Huangjing Yu et al.7 (2022) proposed an improved Aquila optimizer (mAO) whose highlight is a restart strategy that is simple but effective; mAO solved five engineering optimization problems but was not compared with other algorithms via numerical statistics such as mean and standard deviation. Feature selection is one of the main difficulties in the machine learning domain: finding the small number of informative features, among a huge feature space, that yields the maximum classification ratio. Hussien, A.G., Amin, M.8 (2022) proposed an improved version of HHO called IHHO, which not only solves 5 constrained engineering problems but has also been applied to feature selection problems using 7 UCI datasets.
Pengchuan Wang et al.9 (2020) comprehensively and extensively overviewed the recent widespread applications and variants of complex-valued encoding algorithms in depth; the authors tested eight complex-valued encoding algorithms on standard benchmark functions and solved five engineering optimization design problems, although the mathematical model of the complex-valued encoding algorithm is somewhat complicated. Chen et al.10 (2020) proposed an improved arithmetic optimization algorithm (IAOA) based on a population control strategy to solve numerical optimization problems, which successfully reduced the energy consumed during robotic arm movement.
Grasshopper optimisation algorithm, variants, and applications
According to the behavior of grasshopper swarms in nature, Shahrzad Saremi et al. in 2017 proposed a unique and novel swarm intelligence algorithm called the grasshopper optimization algorithm (GOA)11, which makes use of swarm intelligence to solve optimization problems. This algorithm has proven efficient at solving global unconstrained and constrained optimization problems. Since 2017, GOA has attracted increasing interest from academics, and many researchers and practitioners have successfully used the GOA algorithm to solve various complex, real-world problems in many different domains12,13,14. On the other hand, to further extend the performance of GOA, researchers and practitioners have constructed a variety of hybrid variants15 based on GOA and other metaheuristics, and have embedded different key parameters into GOA, to solve complex real-world problems in their fields. Arora, S. et al.16 (2019) introduced a chaotic method into GOA for global optimization. Zhao, S. et al.17 (2021) embedded trigonometric substitution into GOA to enhance Cauchy mutation. Ahmed A. et al.18 (2022) merged crossover operators with GOA for feature selection and for solving engineering problems. Yildiz et al.19 (2021) proposed using elite opposition-based learning to enhance GOA for solving real-world engineering problems. Yi Feng et al.20 (2020) introduced Dynamic Opposite Learning to assist GOA on the flexible job scheduling problem. Qin, P. et al.21 (2021) successfully applied improved GOA to optimise the parameters of a BP neural network for predicting the closing prices of the Shanghai Stock Exchange Index and the air quality index (AQI) of Taiyuan, Shanxi Province.
The nature-inspired meta-heuristic algorithm with levy flight
Wang, Shuang et al.22 (2022) proposed an improved version of ROA called Enhanced ROA (EROA) using three different techniques: adaptive dynamic probability, SFO with Levy flight, and a restart strategy, successfully overcoming the slow convergence and stagnation in local optima of the original ROA. As soon as the Levy flight trajectory-based WOA (LWOA) algorithm was proposed by Zhou, Y., Ling, Y. and Luo, Q.23 (2018), it attracted researchers and practitioners, who applied LWOA to many domains because of its effective adaptation, few control parameters, and simplicity of structure. Xuan Chen et al.24 (2021) employed opposition-based learning and a genetic algorithm with Levy flight to improve the Wolf Pack Algorithm, maintaining the diversity of the initial population during the global search. Their experimental results show that their proposed algorithm has better global and local search capability, especially in the presence of multi-peak and high-dimensional functions.
The above-mentioned cases are only a few typical models, but they show that whether a nature-inspired meta-heuristic algorithm reaches the best global value depends to a large extent on its integration with Levy flight. These studies affirm that Levy flight can considerably enhance the performance of meta-heuristic optimizers.
Our main contribution is to apply the grasshopper optimization algorithm with the Levy flight distribution strategy (LFGOA) to seven real-world problems, which cover hybrid (continuous, discrete, and integer variable) nonlinearly constrained optimization: Himmelblau's nonlinear optimization problem, cantilever beam design, car side impact design, gear train design, pressure vessel design, speed reducer design, and tubular column design.
Another contribution is that the Levy flight strategy is properly embedded in GOA to help explore the search space. The comprehensive effect of the Levy flight mechanism strengthens the exploration-exploitation balance during the search process.
The third contribution is that the performance of the LFGOA algorithm was validated on 23 mathematical benchmark functions in comparison with eight well-known meta-heuristic algorithms (AHA, AO, DA, DMOA, GBO, HGS, HHO, and MVO); the comprehensive performance of LFGOA is superior to these eight algorithms and to the original GOA algorithm.
The fourth contribution is an extensibility test at dimensions of 50, 100, 300, and 500, comparing LFGOA with the original GOA to assess the influence of dimensionality on problem consistency and optimization quality. The comparisons show that the proposed LFGOA algorithm still holds a simple and efficient structure while significantly improving the performance of the original GOA algorithm.
In the rest of this paper, Section 2 provides the key idea and structure of the grasshopper optimization algorithm (GOA). Section 3 presents the grasshopper optimization algorithm with Levy flight (LFGOA), its improvement steps in depth, and the LFGOA pseudo-code. Section 4 extensively introduces the experimental design and simulation results. Section 5 presents seven real applications of LFGOA to nonlinearly constrained engineering optimization problems. Finally, Section 6 concludes the paper and outlines future directions.
The grasshopper optimization algorithm (GOA)
The GOA algorithm is inspired by the foraging and swarming behavior of grasshoppers in nature for solving numerical optimization problems. The life cycle of the grasshopper includes two stages, called nymph and adulthood. The nymph stage is characterized by small steps and slow movements, while the adulthood stage is characterized by long-range and abrupt movements. The movements of nymphs and adults constitute the intensification and diversification phases of GOA. Intuitively speaking, the GOA search process splits into two stages, exploration and exploitation, as shown in Fig. 1.
In the exploration stage, we update all position values and compute the fitness of every grasshopper in the swarm (searching for food sources). In the exploitation stage, we find the best solution among all solutions (searching for better food sources).
Principle of the grasshopper optimization algorithm
In the GOA algorithm, each grasshopper represents a solution in the population. The behavior of grasshopper swarms is mathematically modelled and used to calculate the position \(X_i\) of each solution as follows:
where \(X_i\) indicates the ith grasshopper's position, \(S_i\) denotes the social interaction between the ith solution and the other grasshoppers in the swarm, \(G_i\) is the gravity force on the ith solution, and \(A_i\) represents the wind advection; these components can be represented by the equations below:
where N denotes the number of grasshoppers, \(d_{ij}= |x_j-x_i|\) defines the Euclidean distance between the ith and the jth grasshoppers, and \(\widehat{d_{ij}}=\frac{x_j-x_i}{d_{ij}}\) represents the unit vector from the ith to the jth grasshopper. In addition, s represents the strength of the two social forces (repulsion and attraction between grasshoppers), where l is the attractive length scale and f is the intensity of attraction.
When the distance between two grasshoppers is in the range [0, 2.079), repulsion occurs; when the distance is exactly 2.079, neither attraction nor repulsion occurs, which forms the comfort zone. When the distance exceeds 2.079, the attraction force increases until the distance reaches about 4, and then progressively decreases. The function s fails to apply forces between grasshoppers when the distance between them is larger than 10. To solve this problem, the distances between grasshoppers are mapped into the interval [1, 4].
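These thresholds can be checked numerically with a minimal Python sketch (the original implementation is in MATLAB; the function name is ours, and f = 0.5, l = 1.5 are the parameter values commonly used with GOA — treat them as assumptions):

```python
import math

def social_force(r, f=0.5, l=1.5):
    """Social force s(r) = f * exp(-r/l) - exp(-r) between two grasshoppers
    at distance r; f is the intensity of attraction, l the attractive
    length scale. Negative values mean repulsion, positive attraction."""
    return f * math.exp(-r / l) - math.exp(-r)

# Repulsion below ~2.079, comfort zone at ~2.079, attraction above it.
print(social_force(1.0))    # negative: repulsion
print(social_force(2.079))  # ~0: comfort zone
print(social_force(3.0))    # positive: attraction
```

With these defaults the zero crossing indeed lands near 2.079, matching the comfort-zone distance described above.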
The equation below shows how to calculate the force of gravity \(G_i\):
where g denotes the gravitational constant and \(\widehat{e_g}\) is the unit vector toward the center of the earth.
The equation below shows how to compute \(A_i\):
where u represents the drift constant and \(\widehat{e_w}\) is the unit vector in the wind direction.
After substituting the values of \(S_i\), \(G_i\), and \(A_i\) from Eqs. (2)–(5), Eq. (1) can be rewritten as follows:
However, the mathematical model of Eq. (6) cannot be used directly to solve optimization problems, mainly because the grasshoppers quickly reach their comfort zone and the swarm fails to converge to the target location or a specified point (the global optimum). To solve optimization problems and prevent the grasshoppers from quickly reaching their comfort zone, the equation actually applied to solve optimization problems is proposed by the authors as follows:
where \(UB_d\) and \(LB_d\) are the upper and lower bounds in the dth dimension respectively, and \(\widehat{T_d}\) denotes the best solution found so far in the dth dimension. In Eq. (7), the gravity force is not considered, that is, there is no \(G_i\) component, and the wind direction (the \(A_i\) component) is assumed to always point towards the target \(T_d\). The second term, \(\widehat{T_d}\), simulates the tendency of grasshoppers to move towards the food source.
The key parameter c in mathematical model
In the grasshopper swarm algorithm, the parameter c in Eq. (7) is very important for local and global search. The inner c in Eq. (7) reduces the repulsion, attraction, and comfort zone between grasshoppers in proportion to the number of iterations. The outer c in Eq. (7) reduces the grasshoppers' movements around the target (food), shrinking the search coverage around the target as the iterations increase. The coefficient c is proposed as follows:
where \(c_{max}\) and \(c_{min}\) are the maximum and minimum values of c respectively (set here to 1 and 0.00001), t is the current iteration, and \(t_{max}\) is the maximum iteration value. The position of a grasshopper is updated based on its current position, the global best position, and the positions of the other grasshoppers within the swarm.
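The linear decrease of c can be sketched as follows (a hypothetical Python helper, assuming the linear schedule of Eq. (8) with the stated defaults; the actual code is MATLAB):

```python
def coefficient_c(t, t_max, c_max=1.0, c_min=0.00001):
    """Linearly decreasing coefficient c = c_max - t * (c_max - c_min) / t_max.
    Starts at c_max (wide exploration) and shrinks towards c_min
    (fine exploitation) as the iteration counter t approaches t_max."""
    return c_max - t * (c_max - c_min) / t_max

print(coefficient_c(0, 100))    # 1.0 at the first iteration
print(coefficient_c(100, 100))  # ~0.00001 at the last iteration
```

Because both the inner and outer occurrences of c use this schedule, the comfort zone and the step around the target shrink together over the run.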
The grasshopper optimization algorithm with levy flight
Mantegna's algorithm for levy flight random walks
Studies show that the probability density function of the Levy flight step variation can be approximated as follows:
where s is the random step length of the Levy flight behavior, and \(\theta\) is a power-law index bounded in [0, 2] and set to 1.5, which controls the peak sharpness of the Levy distribution graph. Different values of the parameter \(\theta\) produce different distributions: smaller values yield longer jumps, whereas bigger values yield shorter jumps. A true Levy distribution is hard to implement in computer code, but an approximate form can be used. Mantegna's algorithm is a fast and accurate algorithm that generates a stochastic variable whose probability density is close to a Levy stable distribution. Mantegna's algorithm can be split into three steps. For random walks, it determines the step length S as follows:
where S is the random step length variable, while U and V are two normal stochastic variables with standard deviations \(\sigma _U\) and \(\sigma _V\); U and V should be drawn from the normal distributions:
The symbol \(\sim\) in Eq. (11) denotes the random variable obeys the distribution on the right-hand side; that is, samples should be drawn from the distribution. As the standard deviation \(\sigma _U\) and \(\sigma _V\) cannot be chosen independently for an arbitrary value of \(\theta\), for simplicity we usually set
After this setting, the standard deviation \(\sigma _U\) can be obtained by:
The step size of the Levy flight is obtained by Eqs. (9)–(13), which simulates a search of short walking distances with occasionally longer ones. The step size is then calculated by
where the factor \(f\) (\(f=0.01\)), derived from L/100 with L the characteristic length scale, determines the scale of the Levy walk and depends on the dimension of the desired problem; it prevents Levy flights from becoming too aggressive while still helping new solutions move away from their current region of the search space. The process of Levy flight is exhibited in Algorithm 1.
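Mantegna's steps above can be sketched in Python as follows (a hedged illustration with \(\theta = 1.5\) and the scaling factor f = 0.01; the function name is ours and the original implementation is in MATLAB):

```python
import math
import random

def levy_step(theta=1.5, factor=0.01):
    """One Levy-flight step via Mantegna's algorithm (Eqs. 9-14).

    sigma_U follows Eq. (13); sigma_V = 1; the raw step S = U / |V|^(1/theta)
    is scaled by the factor f = 0.01 described in the text."""
    sigma_u = (math.gamma(1 + theta) * math.sin(math.pi * theta / 2)
               / (math.gamma((1 + theta) / 2) * theta
                  * 2 ** ((theta - 1) / 2))) ** (1 / theta)
    u = random.gauss(0.0, sigma_u)   # U ~ N(0, sigma_U^2)
    v = random.gauss(0.0, 1.0)       # V ~ N(0, 1)
    return factor * u / abs(v) ** (1 / theta)

# Mostly small steps, with an occasional long jump (the heavy tail):
steps = [levy_step() for _ in range(10000)]
```

Sampling many steps shows the behavior the text describes: the bulk of the walk is short-range, but the heavy tail occasionally produces a long jump that can carry a grasshopper into a new region.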

The step size value is added into the update equations of the LFGOA algorithm for finding the best position. From a theoretical perspective, this random walk is based on a long-tailed distribution, which can help an algorithm escape from a local optimum25,26,27. In other words, the Levy flight distribution is an effective mathematical operator for producing varied solutions in the search space and increasing the exploration capability of the LFGOA algorithm.
From Algorithm 1, it is worth noting the formula:
\(New Position = current Position * LFGOA\_Levy(dim)';\)
Firstly, \(LFGOA\_Levy(dim)\) represents the Levy flight function, and dim is the dimension size of the problem. The Levy flight strategy is integrated into the GOA by the above formula. Levy flight has a relatively high probability of large strides during the random walk, which effectively improves the randomness of the GOA algorithm. This way, the risk that the algorithm gets stuck in a local optimum is drastically reduced, while sufficient local refinement is still possible. In other words, the algorithm presents a natural balance between exploration and exploitation.
Secondly, in the case of stagnation, Levy-triggered searching (hunting) patterns can help LFGOA jump out toward new, better positions. By this mechanism, the LFGOA algorithm can overcome the limited diversity of the original GOA algorithm and greatly increase the probability of reaching the best position (solution), which is the highlight and unique feature of the LFGOA algorithm.
Despite being a simple change, this new distribution induces drastic changes in the optimization process: Levy-based jumps can redistribute grasshoppers around the fitness landscape to prevent the population from losing diversity and to put more emphasis on the global searching tendency.
Enhancing the grasshopper optimization algorithm (GOA) with levy flight
How and where Levy flight is placed in the GOA algorithm directly produces entirely different results, in some cases even worse ones. Based on these facts, and through an in-depth comprehensive study and trial-and-error experiments, we successfully embedded Levy flight into the GOA algorithm via the following simple but effective mechanisms.
Firstly, except for the first grasshopper, which is initialized with random values (since the first iteration is dedicated to calculating the fitness of the grasshoppers), the other grasshoppers are assigned Levy flight distribution values rather than random values, which directly produces a better start for most of the grasshoppers, with wide diversity, at the initialization stage. Secondly, the target is updated by the Levy flight mechanism during each iteration, which overcomes the deficiencies of the original GOA: LFGOA can escape from a local optimum and restart in a different region of the search space. The flow chart of the Levy flight mechanism embedded in the GOA is shown in Fig. 2. The pseudo-code of the LFGOA algorithm is presented in Algorithm 2.

In sharp contrast, although existing methods have greatly improved GOA, there is still a large probability of falling into a local optimum because of premature convergence, and the true reason is that diversity is underdeveloped in the GOA algorithm. On the other hand, LFGOA initializes the positions of agents in the search space by Levy flight, as in the formula below:
In the above formula, \(LFGOA\_Levy (dim)\) represents the Levy flight function, and dim is the dimension size of the problem. This provides a large-scale deployment scheme for the LFGOA algorithm: at the initialization stage, all grasshoppers are assigned Levy flight values rather than random numbers drawn uniformly from [0, 1], which directly increases the diversity of the LFGOA population. Secondly, randomization is more efficient because the step length follows a heavy-tailed distribution and any large step is possible, which effectively improves LFGOA's global search ability and precision.
From Fig. 2, it is worth noting the following three formulas:
where Tp is assigned logical '0' when the value of a grasshopper's position is less than the upper boundary; otherwise, Tp is assigned logical '1'.
where Tm is assigned logical '0' when the value of a grasshopper's position is more than the lower boundary; otherwise, Tm is assigned logical '1'.
where \(( \sim (Tp+Tm))\) is assigned the value 1 when the grasshopper's position is within the boundaries; otherwise, it is assigned the value 0.
When grasshoppers go outside the search space, they are drawn back by the above formula. After that, the positions of the grasshoppers are directly replaced (similar to a restart)28 by the formula below:
Based on the above formula, the positions of all grasshoppers are randomly redistributed around the fitness landscape to prevent the population from losing diversity and to put more emphasis on the global searching tendency. The balance between exploration and exploitation is achieved through the Levy-flight-based jumps, which allow grasshoppers to escape from local minima and explore different search areas. However, this cannot ensure that the new position is better than the current one.
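The boundary logic described above can be sketched as a minimal Python illustration of the Tp/Tm masks (the function name is hypothetical; the original flow chart operates on MATLAB logical arrays):

```python
def clamp_to_bounds(pos, lb, ub):
    """Draw out-of-range grasshoppers back to the boundary (Fig. 2 logic).

    Tp flags an upper-bound violation, Tm a lower-bound violation;
    positions already inside the bounds are kept unchanged, mirroring
    pos * ~(Tp + Tm) + ub * Tp + lb * Tm."""
    clamped = []
    for x in pos:
        tp = x > ub          # logical '1' when above the upper bound
        tm = x < lb          # logical '1' when below the lower bound
        clamped.append(x * (not (tp or tm)) + ub * tp + lb * tm)
    return clamped

print(clamp_to_bounds([-2.0, 0.5, 3.0], lb=-1.0, ub=1.0))  # [-1.0, 0.5, 1.0]
```

After this clamp, the Levy-based replacement described in the text would then redistribute the clamped grasshoppers rather than leaving them piled up on the boundary.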
The proposed approach
As a newly proposed algorithm, GOA has achieved good results on some test functions. However, experimental results show that it still has the defects of insufficient global exploration and stagnation in local optima. The lack of global exploration capacity can be attributed to deficient searches in its two stages. Thus, in this work GOA is properly integrated with Levy flight to improve its global search ability. Meanwhile, a Levy-flight-based restart strategy is added to GOA to help the algorithm escape from local optima.
To the best of our knowledge, the main reason behind the effectiveness of LFGOA is that the Levy-flight-based jumps can effectively redistribute the search agents to enhance their diversity and to emphasize more explorative steps in case of premature convergence to local optima. LFGOA is a successful GOA variant combining GOA with Levy flight, and it obtains better results when applied to seven real-world engineering problems. The statistical analysis and experimental results show the efficiency of LFGOA.
In Section 4, strict experiments will show that LFGOA is superior to the GOA algorithm on most performance metrics, especially in correctly obtaining the best solutions with a quick convergence speed. In fact, LFGOA still holds the advantages of a simple structure and few parameter tunings, even with the extra Levy flight mechanism.
Experimental results and analysis
In this section, all experiments were carried out under 64-bit Windows 10 using MATLAB R2019a, and the hardware platform was configured with an Intel(R) Core(TM) i7-8700 CPU @ 3.20 GHz and 8 GB of RAM.
The performance of the suggested LFGOA is assessed in this section using five experiments. The first evaluates AHA, AO, DA, DMOA, GBO, HGS, HHO, LFGOA, and MVO in terms of the average value, the standard deviation, and the best value on the twenty-three mathematical benchmark functions presented in Table 1. These benchmark functions are categorized into three groups: unimodal, multi-modal, and composite.
Here, the LFGOA performance is tested using twenty-three benchmark functions: seven unimodal, six multimodal, and ten fixed-dimension multimodal functions. The mathematical description of each type is given in Table 1, where N denotes the number of grasshoppers, T refers to the maximum iteration value, dim refers to the number of dimensions, Range shows the interval of the search space, and \(F_{min}\) refers to the optimal value that the corresponding function can achieve.
The second experiment strictly tests the convergence performance of LFGOA against AHA, AO, DA, DMOA, GBO, HGS, HHO, and MVO. The third tests LFGOA with the non-parametric Wilcoxon, Friedman, and Nemenyi statistical tests. The fourth tests the scalability of LFGOA compared with GOA comprehensively and thoroughly at 50, 100, 300, and 500 dimensions. The fifth presents some quantitative metrics of LFGOA.
Comparing LFGOA with AHA, AO, DA, DMOA, GBO, HGS, HHO, and MVO
To compare and evaluate the performance of LFGOA on the well-known 23 mathematical benchmark functions, we select the following eight advanced, well-known, and recent meta-heuristic algorithms:
1) Artificial hummingbird algorithm (AHA)29,
2) Aquila Optimizer (AO)30,
3) Dragonfly algorithm (DA)31,
4) Dwarf Mongoose Optimization Algorithm (DMOA)32,
5) Gradient-based optimizer (GBO)33,
6) Hunger Games Search (HGS)34,
7) Harris hawks optimization (HHO)35,
8) Multi-Verse Optimizer (MVO)36.
In order to provide a fair comparison, all algorithms were run 30 times on each benchmark function, with the number of search agents and the maximum number of iterations both equal to 100. In the experiments, the key parameters of these nine algorithms are set up as shown in Table 2.
In the following tables, the best results are all marked in bold.
In addition, to check the differences and rankings between the nine algorithms, another non-parametric multiple comparison method, the Friedman test, is used to calculate the average ranking value. In the Friedman test, the best algorithm is the one that receives the lowest rank, while the worst algorithm receives the highest rank. To assess the statistical performance of LFGOA and each other method on the 23 test suites, the average (or mean) and standard deviation values of the rank of each method were taken into account. The average and Std rankings of LFGOA together with the other methods under the Friedman test are summarized in Tables 3 and 4, respectively.
In Table 3, 17 out of 23 average values obtained by the LFGOA algorithm are less than those obtained by the other eight algorithms. From Table 3, it can be seen that the average searching quality of LFGOA is better than that of the other methods.
From the statistical results of Table 3, it is clear that LFGOA with the complete improvement strategy performs best, with a Friedman test ranking value of 2.4783. All in all, LFGOA ranks first on 18 out of 23 functions by average value, more than any of the other eight optimization algorithms. However, LFGOA gives unsatisfactory results on F14, F15, F17, and F18, and achieves the third average ranking on F12. LFGOA performs the best among the nine algorithms, proving that the utilization of Levy flight can effectively enhance the performance of the GOA algorithm.
In Table 4, only on the composite functions F14–F18 are the standard deviation values obtained by the LFGOA algorithm (5.90E+01, 4.91E−03, 7.49E−03, 8.58E−01, and 6.66E+00) not less than those of the other eight algorithms. All in all, 18 out of 23 standard deviation values obtained by the LFGOA algorithm are less than those obtained by the other eight algorithms. The better standard deviation values prove that the LFGOA algorithm performs more stably than the other eight algorithms.
As shown in Table 4, we evaluate the performance of the algorithms using the Friedman test, ranking all algorithms according to the Std value. LFGOA ranks first on all unimodal functions (F1–F7) and all multi-modal functions (F8–F13) and achieves a Std ranking value of 2.2609. However, LFGOA gives unsatisfactory results on F14, F17, and F18; it achieves the third Std ranking on F16 and the fourth on F15. The statistical results show that LFGOA has the best performance compared to the eight algorithms mentioned above for solving the 23 classical test functions.
In Table 5, for the unimodal and multimodal functions, the best values obtained by the LFGOA algorithm are not as good as desired in comparison with the other eight algorithms. For the composite functions, only on F15 does the LFGOA algorithm obtain a merely near-accurate approximation value; for the other composite functions, F14 and F16–F23, the LFGOA algorithm obtains better approximation values.
To further analyze the differences between the algorithms, a post-hoc Nemenyi test was employed: if the null hypothesis is rejected, we can proceed with a post-hoc test. The Nemenyi test (Nemenyi, 1963) is similar to the Tukey test for ANOVA and is used when all classifiers are compared to each other. The performance of two classifiers is significantly different if the corresponding average ranks differ by at least the critical difference (CD):
where N is the number of datasets (23) and k is the number of algorithms being compared (9).
At \(\alpha =0.05\), the critical value (Table 6) \(q_\alpha\) for 9 classifiers (algorithms) is 3.102 and the corresponding CD is \(3.102 \times \sqrt{\frac{9 \times 10}{6 \times 23}} \approx 2.5051\).
At \(\alpha =0.10\), \(q_\alpha =2.855\), \(N=23\), \(k=9\); corresponding CD is \(2.855 \times \sqrt{\frac{9 \times 10}{6 \times 23}} \approx 2.3056\).
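The two CD values above can be reproduced with a short Python sketch of the Nemenyi formula (the helper name is ours):

```python
import math

def nemenyi_cd(q_alpha, k, n):
    """Critical difference for the Nemenyi post-hoc test:
    CD = q_alpha * sqrt(k * (k + 1) / (6 * N)),
    with k algorithms compared over N datasets."""
    return q_alpha * math.sqrt(k * (k + 1) / (6 * n))

# k = 9 algorithms, N = 23 benchmark functions
print(nemenyi_cd(3.102, 9, 23))  # ~2.5051 at alpha = 0.05
print(nemenyi_cd(2.855, 9, 23))  # ~2.3056 at alpha = 0.10
```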
To find differences among the nine algorithms, the critical difference (CD) based on the Nemenyi test was used. The critical value \(q_\alpha\) is 3.102, so the CD is 2.5051. The post-hoc test concludes that if the difference in Friedman ranking values between two algorithms is less than the CD value, there is no significant difference between them; conversely, there is a significant difference.
In Table 7, the “Diff with LFGOA” in the third column indicates the differences in average rank between LFGOA and the other eight algorithms, and the “Diff with LFGOA” in the fifth column indicates the differences in Std rank between LFGOA and the other eight algorithms, respectively.
The critical difference (CD) diagrams in Fig. 3 are simple and intuitive visualizations of the results of the Nemenyi post-hoc test, which is designed to check the statistical significance of the differences in average rank among the nine algorithms over the 23 benchmark test functions.
Fig. 3 visualizes the data from Table 7. In each line segment, we plot the average ranks of the nine algorithms with respect to the mean (left side of Fig. 3) and the Std (right side of Fig. 3). The length of a line segment indicates the CD value, and the circle mark at the center of each segment marks the average rank position of the corresponding algorithm, with respect to mean (left) and Std (right), across all 23 benchmark test functions. If the distance between the centers of two line segments exceeds the CD, the two intervals do not overlap, which indicates a statistically significant difference between the two algorithms.
As shown in Fig. 3, LFGOA ranks first in both Mean and Std (the best average ranks are on the left side). From Fig. 3 and Table 7, we can clearly see that LFGOA versus AO and LFGOA versus AHA show similar performance in terms of the average ranks of Mean and Std.
Comparing the performance of LFGOA with the AHA, AO, DA, DMOA, GBO, HGS, HHO, and MVO algorithms
The performance of the AHA, AO, DA, DMOA, GBO, HGS, HHO, LFGOA, and MVO algorithms is benchmarked in the following figures. In Figs. 4, 5, and 6, the first column shows a three-dimensional drawing of the cost function; the second column shows the convergence progress of each of the nine algorithms; the third column focuses on the convergence progress of the LFGOA algorithm on each of the F1–F23 benchmark functions; the fourth column shows the average fitness history of the LFGOA algorithm on each function; and the fifth column shows the best fitness history of the LFGOA algorithm on each function.
The unimodal test functions F1–F7
Since the F1–F7 unimodal benchmark functions have only one extreme point, they are suitable for assessing the convergence rate and benchmarking the exploitation behavior of an algorithm. In the second column of Fig. 4, the LFGOA algorithm shows the best results on 6 out of 7 unimodal functions, especially F1–F4, but its result on F5 is unsatisfactory. On the unimodal functions F6–F7, the GBO algorithm shows a better result that nearly reaches zero.
The multimodal test functions F8–F13
The F8–F13 multimodal benchmark functions are used to assess the exploration capability of the LFGOA algorithm to find global optima when the number of local optima increases exponentially with the problem dimension.
In the second column of Fig. 5, for F8, the GBO algorithm returns an erroneous positive value (see Table 5), whereas the AHA, AO, DA, DMOA, HGS, HHO, LFGOA, and MVO algorithms all obtain negative values; the figure therefore shows the best convergence progress of these eight algorithms without GBO, because values of opposite sign and very different magnitude cannot be plotted appropriately in the same figure. For F9–F13, the convergence progress of the LFGOA algorithm is satisfactory, especially on F9–F11, where its convergence rate is rapid. Since the multimodal functions have an exponential number of local solutions, these results show that the LFGOA algorithm can explore the search space extensively and find its promising regions.
In the third column of Fig. 5, the LFGOA algorithm exhibits an excellent convergence rate on each of the F8–F13 benchmark functions. It can also be seen there that the LFGOA algorithm does not show uniform convergence behavior across all the benchmark functions, which indicates that it adapts well to different problems.
The composite test functions F14–F23
In the second column of Fig. 6, for F14–F20, all of the algorithms reach a satisfactory convergence rate. For F21 and F23 (see Table 5), only the final convergence results of the GBO and LFGOA algorithms are satisfactory; the other seven algorithms are unsatisfactory. For F22 (see Table 5), only the final convergence results of the HHO and LFGOA algorithms are satisfactory; the other seven algorithms are unsatisfactory. All in all, on the composite benchmark functions F14–F23, the overall convergence of the LFGOA algorithm is superior to that of the other algorithms, which closely matches the situation in Table 5. In the third column of Fig. 6, the LFGOA algorithm shows a good convergence rate on each of the F14–F23 benchmark functions. In the fourth column of Fig. 6, the average fitness of all grasshoppers on F20–F23 fluctuates strongly during the exploration phase (roughly the early iterations) and changes little in the exploitation phase (the final iterations). This proves that the LFGOA algorithm is able to progressively improve the fitness of the initial random solutions for a given optimization problem. In the fifth column of Fig. 6, the best fitness of all grasshoppers on F14 and F20–F23 likewise fluctuates strongly during the exploration phase and changes little in the exploitation phase, which guarantees that the LFGOA algorithm explores extensively in the initial stage, exploits locally at the end of the optimization, and eventually converges to the optimum.
LFGOA vs the other eight optimization algorithms on the p-values of the Wilcoxon test
Due to the stochastic nature of the algorithms, averages and standard deviations only compare overall performance, while a statistical test considers each run’s results and establishes whether the differences are statistically significant. Derrac et al.37 recommended that statistical tests be conducted to evaluate the performance of algorithms. The non-parametric Wilcoxon statistical test is conducted, and p-values less than 0.05 can be considered strong evidence against the null hypothesis. To assess the overall performance of the LFGOA algorithm, and to confirm the significance and robustness of the results, we apply Wilcoxon’s statistical test with a 5% significance level to the obtained average accuracy results.
In Table 8, p-values greater than 0.05 appear in the following cases:
- LFGOA/AO on F2, F9, and F14 (third column of Table 8).
- LFGOA/DA on F4, F14, and F17 (fourth column of Table 8); as depicted above, both the DA and LFGOA algorithms embed the Levy-flight mechanism, so the two algorithms share some similar properties.
- LFGOA/DMOA on F6 and F11 (fifth column of Table 8).
- LFGOA/GBO on F2 (sixth column of Table 8).
- LFGOA/HGS on F2 and F9 (seventh column of Table 8).
- LFGOA/MVO on F4, F11, and F17; as depicted above, both the exploration and exploitation swarming behaviors of MVO are very similar to those of LFGOA.
The results of the p-values in Table 8 show that the superiority of the LFGOA algorithm is statistically significant.
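The Wilcoxon comparison above can be sketched in self-contained form. This is a normal-approximation version written for illustration only, not the paper’s code; for serious use an exact-distribution implementation (e.g. `scipy.stats.wilcoxon`) is preferable:

```python
import math

def wilcoxon_signed_rank_p(x, y):
    """Two-sided Wilcoxon signed-rank p-value via the normal approximation.

    x and y are paired result lists (e.g. per-run fitness of two algorithms).
    Zero differences are dropped, ties in |d| receive average ranks.
    """
    diffs = [a - b for a, b in zip(x, y) if a != b]
    n = len(diffs)
    # rank |d| ascending, assigning average ranks to tied blocks
    order = sorted(range(n), key=lambda i: abs(diffs[i]))
    ranks = [0.0] * n
    i = 0
    while i < n:
        j = i
        while j + 1 < n and abs(diffs[order[j + 1]]) == abs(diffs[order[i]]):
            j += 1
        avg = (i + j) / 2 + 1          # average of 1-based ranks i+1 .. j+1
        for t in range(i, j + 1):
            ranks[order[t]] = avg
        i = j + 1
    w_plus = sum(r for d, r in zip(diffs, ranks) if d > 0)
    mu = n * (n + 1) / 4
    sigma = math.sqrt(n * (n + 1) * (2 * n + 1) / 24)
    z = (w_plus - mu) / sigma
    # two-sided p-value from the standard normal CDF
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
```

A consistent shift between paired samples yields a small p-value, while symmetric differences yield a p-value near 1.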
The scalability test for LFGOA
To compare the properties of the LFGOA algorithm with those of the GOA algorithm comprehensively and thoroughly, we conducted a scalability test. As is well known, a scalability test helps us understand, to some extent, the impact of the problem dimension on solution quality and on the effectiveness of the LFGOA algorithm; it explores in depth how the behavior of the LFGOA and GOA algorithms changes as the dimension of the function grows. Therefore, four dimensions of the functions F1–F23 are used here: 50, 100, 300, and 500. All other settings remain unchanged; each algorithm uses 100 search agents and runs 30 times. The mean values, standard deviation values, and best optimal values obtained by the LFGOA and GOA algorithms under dimensions 50, 100, 300, and 500 are shown in the following tables.
In Table 9 (D = 50), for 15 of the 23 functions, the average values obtained by the LFGOA algorithm are smaller than those obtained by the GOA algorithm.
In Table 9 (D = 100), for 14 of the 23 functions, the average values obtained by the LFGOA algorithm are smaller than those obtained by the GOA algorithm. Table 9 also shows that the LFGOA algorithm consumes slightly more time than the GOA algorithm at dimensions 50 and 100.
In Table 10 (D = 300), for 15 of the 23 functions, the average values obtained by the LFGOA algorithm are smaller than those obtained by the GOA algorithm.
In Table 10 (D = 500), for 14 of the 23 functions, the average values obtained by the LFGOA algorithm are smaller than those obtained by the GOA algorithm. Table 10 also shows that the LFGOA algorithm consumes slightly more time than GOA at dimensions 300 and 500.
In Table 11 (D = 50), the Std values obtained by the LFGOA algorithm are smaller than those of the GOA algorithm only for the unimodal functions F1 and F2 (3.8798E-08 and 1.1745E-19), the multimodal function F12 (1.3597E-11), and the composite functions F17–F19 and F21–F22 (1.4536E-12, 6.8103E-12, 1.7853E-15, 1.8687E-11, and 2.5783E-11).
In Table 11 (D = 100), the Std values obtained by the LFGOA algorithm are smaller than those of the GOA algorithm for the unimodal functions F1, F2, and F7 (1.2465E-08, 1.9946E-28, and 2.8663E-01), the multimodal functions F12 and F13 (1.6469E-11 and 1.3907E-11), and the composite functions F16–F17 and F20–F23 (1.7963E-13, 2.7986E-13, 5.9709E-14, 2.0967E-12, 2.7518E-12, and 2.6440E-12).
In Table 12 (D = 300), the Std values obtained by the LFGOA algorithm are smaller than those of the GOA algorithm only for the unimodal functions F1 and F2 (1.3549E-10 and 1.1144E-64), the multimodal functions F11 and F13 (6.7423E-11 and 3.0608E-13), and the composite functions F14–F16 and F20–F22 (1.2862E-15, 3.4063E-14, 6.5956E-15, 8.1218E-15, 1.3714E-12, and 6.8134E-13).
In Table 12 (D = 500), the Std values obtained by the LFGOA algorithm are smaller than those of the GOA algorithm for the unimodal functions F1, F2, and F7 (3.0527E-11, 1.4144E-82, and 2.8175E-01), the multimodal functions F10, F12, and F13 (1.4097E-09, 7.3866E-14, and 2.6744E-13), and the composite functions F15, F17, and F19–F21 (1.7152E-14, 8.7437E-14, 2.0085E-15, 5.3160E-15, and 3.6452E-13).
In Table 13 (D = 50, D = 100), 17 of the 23 best values are obtained by the LFGOA algorithm, far more than by the GOA algorithm.
In Table 14 (D = 300, D = 500), 17 of the 23 best values are obtained by the LFGOA algorithm, far more than by the GOA algorithm.
Some quantitative metrics of the LFGOA algorithm
In the first, second, and third columns of Figs. 7, 8, and 9, two quantitative metrics are employed from the first to the last iteration: the dynamic change of grasshopper positions (search history) and the trajectories of eight grasshoppers. By tracking the positions of the grasshoppers during optimization, we can observe how the LFGOA algorithm explores and exploits the search space, and by monitoring the eight grasshopper trajectories we can follow the movements of these grasshoppers in detail.
From the first column of Fig. 7, we can see that the search history of the grasshoppers is mostly concentrated in one region, indicating that the LFGOA algorithm can quickly locate promising regions. To show the changes of the grasshoppers’ positions during the search, the trajectories of eight grasshoppers are plotted in the second and third columns of Figs. 7, 8, and 9 as well. In the fourth and fifth columns of Fig. 7, box plots are used to confirm the stability of the LFGOA algorithm: the fourth column depicts the fitness status in five groups (each covering 20 iterations) at every stage, and the fifth column depicts the position changes in five groups (each covering 20 iterations) at every stage.
The unimodal test functions F1–F7
In the first column of Fig. 7, for the unimodal test functions F1–F7, it can be clearly seen that, over the course of iterations, the agents tend to explore promising regions of the search space and exploit very accurately around the global optima, appearing as dozens of agents roughly clustered together.
In the second and third columns of Fig. 7, the trajectory graphs of eight grasshoppers (as representatives of all grasshoppers) are selected to show the grasshoppers’ dynamic position changes during optimization. From these columns we can see that the third grasshopper in F2 and F7, the fifth and seventh in F1, and the fifth in F5 all undergo slight fluctuations during the search. We can also see from the trajectory curves that the third grasshopper in F1, the fourth and sixth in F3, the third in F4, and the first, third, and eighth in F6 all exhibit abrupt, large fluctuations in the initial stages of optimization. Exploration of the search space takes place due to the high repulsion rate of the LFGOA algorithm. It is also seen that, as the optimization proceeds, these fluctuations gradually decrease over the course of iterations, owing to the attraction forces as well as the comfort zone between grasshoppers. According to Berg et al.38, this behaviour can guarantee that an algorithm eventually converges to a point and searches locally in the search space.
There are mild correlations and cross-links between the grasshopper trajectory graphs in the second and third columns of Fig. 7 and the search history in the first column: small fluctuations of the grasshoppers correspond to small, tightly clustered scatter plots, while large fluctuations correspond to large, spread-out clusters. To some extent, this supports the inference, drawn from the trajectory graphs and the search history, that the LFGOA algorithm converges effectively while avoiding most local optima.
To analyze the stochastic nature of LFGOA, box plots are used to show the differences in the fitness of the LFGOA algorithm. As the box contains 50% of the data, the height of the box directly reflects the fluctuation level of the fitness. The box plot for the unimodal benchmark function F5 in the fourth column of Fig. 7 is relatively short, reflecting a slight fluctuation of the fitness, which corresponds to the small promising region of the search space of F5 in the first column of Fig. 7. There are more or fewer outliers across the unimodal benchmark functions F1–F7, which correspond to the separate scattered clusters outside the main promising region of the search space. The box plot for the unimodal benchmark function F4 in the fifth column of Fig. 7 is relatively tall, reflecting large fluctuations of the position changes at every search stage, which corresponds to the abrupt, large fluctuations of the grasshoppers in the initial stage of optimization in the second and third columns of Fig. 7.
The multimodal test functions F8–F13
From the first column of Fig. 8, for the multimodal benchmark functions F8–F13, it can be clearly seen that, over the course of iterations, the agents tend to explore promising regions of the search space and exploit very accurately around the global optima, appearing as dozens of agents roughly clustered together.
From the second and third columns of Fig. 8, we can see that the grasshoppers in F9, the sixth and eighth in F10, and the third in F12 all undergo slight fluctuations during the search. From the same columns, the first, fifth, sixth, seventh, and eighth grasshoppers in F8; the first, third, fourth, sixth, and eighth in F11; the fifth, sixth, and eighth in F12; and the third in F13 all exhibit abrupt, large fluctuations in the initial stages of optimization.
There is no outlier in the box plot of the multimodal benchmark function F10 in the fourth column of Fig. 8, which reflects that the fluctuation of the fitness is not large and the grasshoppers cluster around a relatively small promising region of the search space.
There are more or fewer outliers across the multimodal benchmark functions F8–F13 in the fifth column of Fig. 8, which correspond to the separate scattered clusters outside the main promising region of the search space.
The composite test functions F14–F23
From the first column of Fig. 9, for the composite benchmark functions F14 and F15, it can be clearly seen that, over the course of iterations, the agents tend to explore promising regions of the search space and exploit very accurately around the global optima, appearing as dozens of agents roughly clustered together. For the composite benchmark functions F21, F22, and F23, from a search-history point of view, the agents tend to explore the promising regions of the search space extensively and exploit the best target, with the scatter taking a rough, thin-stripe shape.
From the second and third columns of Fig. 9, we can see that in F21, F22, and F23 the grasshoppers exhibit abrupt, large fluctuations from positive values toward zero in one direction in the initial stage of optimization, while searching extensively. There are more or fewer outliers among the composite benchmark functions in the fourth and fifth columns of Fig. 9, which correspond to the separate scattered clusters outside the main promising region of the search space.
Computational complexity of the LFGOA
In this section, the general computational complexity of the LFGOA is presented. The computational complexity of the LFGOA typically depends on three components: solution initialization, fitness evaluation, and solution updating. In the associated formulas, N denotes the number of individuals in the population (the number of solutions) and T the maximum number of iterations. During the initial stage, the computational complexity of fitness evaluation is O(N). The computational complexity of the solution-updating process is \(O(T \times N) + O(T \times N\times Dim)\), which consists of searching for the best positions and updating the positions of all solutions, where Dim is the dimension of the given problem. From the above analysis, the total computational complexity of the LFGOA is \(O(N \times (T \times Dim+1))\).
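The complexity analysis above can be illustrated by instrumenting a schematic loop. This is purely an operation-counting sketch of the loop structure, not the LFGOA implementation itself:

```python
def lfgoa_operation_counts(N: int, T: int, Dim: int) -> dict:
    """Count the dominant unit operations of the analyzed loop structure.

    Each counter increment stands in for one unit of work: one fitness
    evaluation at initialization, one per-agent bookkeeping step, and
    one per-coordinate position update.
    """
    counts = {"init_evals": 0, "agent_updates": 0, "coord_updates": 0}
    for _ in range(N):                     # initialization: O(N)
        counts["init_evals"] += 1
    for _ in range(T):                     # main loop
        for _ in range(N):                 # per-agent work: O(T*N)
            counts["agent_updates"] += 1
            for _ in range(Dim):           # per-coordinate update: O(T*N*Dim)
                counts["coord_updates"] += 1
    return counts
```

For N = 5, T = 10, Dim = 3 this yields 5 initialization evaluations, 50 agent updates, and 150 coordinate updates, matching the O(N) + O(T×N) + O(T×N×Dim) breakdown.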
Results and discussion
As we can see in Section 4, the LFGOA algorithm significantly outperforms the others in terms of numerical optimization. There are several reasons why the LFGOA algorithm performed well in most of the test cases. First, the Levy-flight strategy: Levy flight can increase the diversity of the population and help the algorithm jump out of local optima more effectively, which makes LFGOA faster and more robust than GOA. Second, in GOA, the fittest grasshopper (the one with the best objective value) found during optimization is assumed to be the target. This lets GOA save the most promising target in the search space in each iteration and requires the grasshoppers to move towards it, in the hope of finding a better and more accurate target as the best approximation of the true global optimum.
This approach therefore promotes the exploration of promising feasible regions and is the main reason for the superiority of the LFGOA algorithm. Third, the LFGOA algorithm has an explicit restart mechanism. These are the reasons why LFGOA performs better than the other algorithms in the results section. Another finding is that the performance of most of AHA, AO, DA, DMOA, GBO, HGS, HHO, and MVO is not good enough; they lack a restart mechanism for significant abrupt movements in the search space, which is likely the reason for their weaker performance. In summary, the discussion and findings of this work clearly demonstrate the quality of the exploration, exploitation, local-optima avoidance, and convergence rate of the LFGOA algorithm.
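The Levy-flight steps discussed above are commonly generated with Mantegna's algorithm; since this section does not give the exact step formula, the β = 1.5 choice and the Mantegna form below are our assumptions, shown as a minimal sketch:

```python
import math
import random

def levy_sigma(beta):
    """Mantegna scale parameter sigma_u for stability index beta."""
    num = math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
    den = math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2)
    return (num / den) ** (1 / beta)

def levy_step(beta=1.5, rng=random):
    """One Levy-distributed step: u / |v|**(1/beta), u ~ N(0, sigma_u^2), v ~ N(0, 1).

    Heavy-tailed steps occasionally throw an agent far across the search
    space, which is what helps the swarm escape local optima.
    """
    u = rng.gauss(0.0, levy_sigma(beta))
    v = rng.gauss(0.0, 1.0)
    return u / abs(v) ** (1 / beta)
```

In a position update, such a step is typically scaled and added to an agent's current coordinates, so most moves are small (local exploitation) while rare long jumps provide exploration.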
Real application of LFGOA in constrained engineering problems
Engineering constrained optimization problems are complex; sometimes even the optimal solutions of interest do not exist39. Such problems have been used by many researchers to evaluate the performance of different algorithms40. Although the results discussed above prove and verify the high performance of the LFGOA algorithm, its performance on real-life engineering constrained optimization problems must also be confidently confirmed. In this section, the effectiveness of the LFGOA algorithm is verified in terms of its ability to solve constrained engineering optimization problems in practical applications; seven well-studied constrained engineering design examples are selected to verify the proposed LFGOA algorithm: Himmelblau’s nonlinear optimization problem, cantilever beam design, car side impact design, gear train design, pressure vessel design, speed reducer design, and tubular column design.
However, different real-world problems often have different constraints, so a suitable approach is required to handle them41. The main idea is to transform the actual optimization problem into a mathematical model and then use the LFGOA algorithm to find the optimal solution. Normally, f(x) is the fitness function, x represents a point in the search space, \(x_1,x_2,\ldots ,x_n\) represent its different dimensions, and there are several equality and inequality constraints in engineering constrained optimization problems. To suit these constrained problems, the search agents of the proposed LFGOA algorithm do not rely solely on the fitness function to update their locations, so the simplest method of dealing with constraints, penalty functions, can be used effectively42. That is, if a search agent violates any constraint, it is assigned a large objective function value, and in this way it is automatically replaced by a new search agent in the next iteration. We therefore use penalty functions that assign a heavily penalized value whenever one of these constraints is violated.
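The penalty approach described above can be sketched as follows; the helper name and the penalty coefficient 1e6 are our illustrative choices, not values from the paper:

```python
def penalized_fitness(f, constraints, x, penalty=1e6):
    """Static-penalty constraint handling for minimization.

    f: objective function; constraints: list of g_i with g_i(x) <= 0 when
    feasible. Each violated constraint adds a large cost proportional to
    its violation, so infeasible agents lose out in the next iteration.
    """
    violation = sum(max(0.0, g(x)) for g in constraints)
    return f(x) + penalty * violation
```

For example, minimizing x² subject to x ≥ 1: a feasible point keeps its raw objective value, while an infeasible one receives a value large enough that any feasible agent outranks it.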
Himmelblau’s nonlinear optimization problem
Before solving the engineering constrained problems, the LFGOA was benchmarked on a well-known problem, namely Himmelblau’s problem, a relatively complex constrained minimization problem with five positive design variables, six nonlinear inequality constraints, and ten boundary conditions. The problem was originally proposed by Himmelblau43 and has been widely used as a benchmark nonlinear constrained optimization problem in many fields. The problem can be outlined as follows:
Consider:
Minimize:
Subject to:
Where:
Table 15 presents the comparison of the best solutions and the corresponding design variables among the different optimizers, while the statistical results for each considered strategy are detailed in Table 16. The results obtained by the LFGOA algorithm are compared with five state-of-the-art algorithms from the literature: the Artificial Bee Colony algorithm44, the sparrow search algorithm45, the Cuckoo search algorithm46, the harmony search algorithm47, and the Differential gradient evolution plus algorithm48. It can be clearly seen that the LFGOA algorithm performed better without any constraint violation and is feasible on this problem. The convergence curve in Fig. 10 shows the function value versus the iteration number for this constrained problem.
Cantilever beam design
Cantilever beam design is a concrete engineering problem. It aims to minimize the total weight of a cantilever beam by optimizing the parameters of its hollow square cross-sections. There are five blocks, of which the first is fixed and the fifth carries a vertical load.
For this well-known case, Fig. 11 shows the shape of the cantilever beam: the beam is rigidly supported at the rightmost block, a vertical force acts on the free node at the left side, and the other blocks are left free. The widths and heights of the five beam blocks are the design parameters of the optimization. The beam consists of five hollow square blocks with constant thickness, whose heights (or widths) are the decision variables. The cantilever weight optimization is formulated in the following equations:
Consider:
Mathematically speaking, it is possible to write most optimization problems in the generic form:
Minimize:
Subject to:
Variable range:
To evaluate the performance of the proposed LFGOA in solving this problem, the algorithms chosen for comparison are the Artificial hummingbird algorithm29 and the Gradient-Based Optimizer33 from the literature. The results obtained by LFGOA and their comparison with these state-of-the-art metaheuristics are reported in Table 17, while the statistical results for each considered strategy are detailed in Table 18. From Tables 17 and 18, it can be seen that LFGOA achieves a high-quality solution for this case. The results of the LFGOA algorithm for this problem are consistent with those for the other real problems: the LFGOA algorithm outperforms the other two algorithms, is the most efficient approach, and shows very competitive results. The comparative results show that our method can effectively solve this case and produce a better design.
It is evident from Tables 17 and 18 that the proposed LFGOA algorithm performed better without any constraint violation. The convergence curve showing the function value versus the iteration number for this constrained problem is given in Fig. 12.
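This benchmark is commonly stated in the literature with objective 0.0624·(x₁+…+x₅) and a single displacement constraint with coefficients 61, 37, 19, 7, 1; assuming that standard formulation (it is not reproduced in this chunk), a quick feasibility check of a near-optimal design frequently reported in the literature looks like:

```python
def cantilever_weight(x):
    """Objective of the standard five-block cantilever beam benchmark."""
    return 0.0624 * sum(x)

def cantilever_constraint(x):
    """Single displacement constraint in g(x) <= 0 form."""
    coeffs = [61, 37, 19, 7, 1]
    return sum(c / xi ** 3 for c, xi in zip(coeffs, x)) - 1

# A near-optimal design commonly reported in the literature (weight ~ 1.34)
design = [6.016, 5.309, 4.494, 3.502, 2.153]
```

Evaluating the constraint at this design gives a value at or just below zero, i.e. the displacement limit is active at the optimum, as expected for this problem.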
Car side impact design
Based on the procedures of the European Enhanced Vehicle-Safety Committee (EEVC), a car is exposed to a side impact, and the aim of this benchmark problem is to minimize the weight of the door. There are eleven influential parameters in this problem, described as follows:
- the thicknesses of B-pillar inner \((x_1)\),
- the B-pillar reinforcement \((x_2)\),
- the floor side inner \((x_3)\),
- the cross members \((x_4)\),
- the door beam \((x_5)\),
- the door beltline reinforcement \((x_6)\),
- the roof rail \((x_7)\),
- the materials of B-pillar inner \((x_8)\),
- the floor side inner \((x_9)\),
- the barrier height \((x_{10})\),
- the hitting position \((x_{11}).\)
Consider:
Structural weight and impact response can be approximated using global response surface methodology in order to simplify the analytical formulation of the optimization problem and speed up computation. Mathematically speaking, the simplified model can be written in the generic form:
Minimize:
Ten constraints are imposed on the design problem.
Subject to:
The simple bounds of this problem are:
To evaluate the performance of the proposed LFGOA algorithm in solving this problem, the algorithms chosen for comparison are the Social Network Search49, the Enhanced grasshopper optimization algorithm19, and the Firefly Algorithm50 from the literature.
The results obtained by LFGOA and their comparison with the aforementioned state-of-the-art metaheuristics are reported in Table 19, while the statistical results for each considered strategy are detailed in Table 20.
It is evident from Tables 19 and 20 that the proposed LFGOA algorithm performed better without any constraint violation. The convergence curve showing the function value versus the iteration number for this constrained problem is given in Fig. 13.
Discrete engineering problem: gear train design
The drive-wheel transmission system of a high-speed train mostly adopts a gear transmission structure. Due to the limited size of the structure, the pinion gear and the motor drive shaft are connected by an interference fit, and vibration caused by an unreasonable design can lead to system failure. The objective of the gear train design problem is to minimize the cost of the “Gear ratio” of the gear train, a classic mechanical engineering problem. The “Gear ratio” is defined as the ratio of the angular velocity of the output shaft to the angular velocity of the input shaft and is calculated as follows:
The parameters of this problem are discrete with an increment size of 1, since they define the numbers of teeth of the gears \((T_a,T_b,T_c,T_d).\) The constraints only limit the variable ranges. The design of a gear train is a mixed problem involving various types of design variables, such as continuous, discrete, and integer variables. Simply stated, the problem is: given a fixed input drive and a number of fixed output drive spindles, how can the spindles be driven from the input using the minimum number of connecting gears in the train? To handle the discrete parameters, each search agent is rounded to the nearest integer before the fitness evaluation.
The numbers of teeth of the gears \(T_a(=x_1)\), \(T_b(=x_2)\), \(T_c(=x_3)\), and \(T_d(=x_4)\) are considered as the design variables, as illustrated in Fig. 14.
Consider:
The mathematical formulation is provided as follows:
Minimize:
The design constraint specifies that the number of teeth on any gear should lie in the range [12, 60]; in other words, the constraints only limit the variable ranges: \(12 \le {\ x}_1,x_2,x_3,x_4\ \le 60\)
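Assuming the common formulation of this benchmark (target ratio 1/6.931, squared-error cost; the pairing of driver/driven teeth below follows the usual statement in the literature), the objective with rounding-based discrete handling can be sketched as:

```python
def gear_train_cost(x):
    """Squared error between the achieved gear ratio and the target 1/6.931.

    x = (Ta, Tb, Tc, Td) are teeth counts; continuous candidates are rounded
    to the nearest integer before evaluation, as described in the text.
    """
    ta, tb, tc, td = (round(v) for v in x)
    return (1 / 6.931 - (tb * td) / (ta * tc)) ** 2
```

The design (Ta, Tb, Tc, Td) = (43, 16, 49, 19), frequently reported for this benchmark, yields a cost on the order of 1e-12, and continuous inputs near those values round to the same teeth counts.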
This section uses the proposed LFGOA algorithm to solve the gear train design problem and compares the results with other optimization algorithms from the literature, including the Social Network Search49, an enhanced hybrid arithmetic optimization algorithm51, the Ant Lion Optimizer52, and the Multi-Verse Optimizer36. Table 21 compares the minimum cost and design variables obtained by the LFGOA algorithm and the other optimization algorithms, while the statistical results for each considered strategy are detailed in Table 22.
However, the optimal values obtained for the variables differ. It is worth pointing out that any feasible solution is an optimal solution; the values in Table 21 obtained by the five algorithms only roughly agree with one another. Therefore, this design can be considered a new design with a similar optimal “Gear ratio”. Table 21 shows that the LFGOA algorithm gives competitive results for the number of function evaluations and is suitable for solving discrete constrained problems. Once more, these results prove that the proposed LFGOA algorithm can solve discrete real problems efficiently. As shown in Fig. 15, the convergence curve is fast, and solutions satisfying all constraints were obtained almost instantly.
Pressure vessel design
The pressure vessel design optimization task has also been popular among researchers and has been optimized in various studies. Pressure vessel design is a mixed discrete-continuous constrained optimization problem. Using rolled steel plate, the shell is made in two halves that are joined by two longitudinal welds to form a cylinder. The objective of this problem is to minimize the total cost of material, forming, and welding of a cylindrical vessel, as in Fig. 16. Both ends of the vessel are capped, and the head has a hemispherical shape. There are four variables in this problem:
- Thickness of the shell \((T_s)\),
- Thickness of the head \((T_h)\),
- Inner radius (R),
- Length of the cylindrical section without considering the head (L).
In the pressure vessel, the thickness of the shell \((T_s)\) and head \((T_h)\), the inner radius (R), and the length of the cylindrical section excluding the head (L) are the variables to be optimized. This problem is subject to four constraints: \(T_s\) and \(T_h\) must be available thicknesses of rolled steel plates, which are integer multiples of 0.0625 inch, while R and L are continuous variables. Many meta-heuristic methods have been applied to this problem, including Social Network Search49, composite differential evolution with a modified oracle penalty method53, the artificial hummingbird algorithm29, manta ray foraging optimization54, a hybrid co-evolutionary particle swarm optimization algorithm55, the automatic dynamic penalisation (ADP) method for handling constraints with genetic algorithms56, and a hybrid generalized reduced gradient-based particle swarm optimizer57.
These constraints and the problem are formulated as follows:
Consider:
Minimize:
Subject to:
Variable range:
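The equations above are rendered as images in the source; the widely used formulation of this benchmark (an assumed sketch from the cited literature, with the standard cost coefficients and \(x=(T_s,T_h,R,L)\) in inches) can be written as:

```python
import math

def vessel_cost(x):
    """Total cost of material, forming, and welding
    (standard benchmark coefficients)."""
    Ts, Th, R, L = x
    return (0.6224 * Ts * R * L + 1.7781 * Th * R**2
            + 3.1661 * Ts**2 * L + 19.84 * Ts**2 * R)

def vessel_constraints(x):
    """Each g_i(x) <= 0 for a feasible design."""
    Ts, Th, R, L = x
    return (
        -Ts + 0.0193 * R,                                          # g1: shell thickness
        -Th + 0.00954 * R,                                         # g2: head thickness
        -math.pi * R**2 * L - (4/3) * math.pi * R**3 + 1_296_000,  # g3: minimum volume
        L - 240.0,                                                 # g4: length limit
    )

def snap_thickness(t):
    """Ts and Th must be integer multiples of 0.0625 inch."""
    return round(t / 0.0625) * 0.0625
```

Snapping the two thickness variables to the 0.0625-inch grid while leaving R and L continuous is one simple way to realize the mixed discrete-continuous search the text describes.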
Tables 23 and 24 show that LFGOA obtains the best solution among the compared approaches. The statistical results in Table 24 further demonstrate that the proposed LFGOA method can solve this constrained optimization problem with discrete-continuous variables effectively and provide competitive statistical results. It should be noted that, given the reported numerical accuracy, the results do not necessarily indicate that LFGOA finds strictly better solutions.
As shown in Fig. 17, the convergence curve quickly approaches the global optimum, and a solution satisfying all constraints was obtained almost instantly.
Speed reducer design
In mechanical systems, the speed reducer is one of the essential parts of the gearbox; its design is a challenging benchmark engineering problem that can be employed for several applications. In this optimization problem, the weight of the speed reducer is to be minimized subject to 11 constraints, as shown in Fig. 18. The goal of the speed reducer design problem is to minimize the total weight of the reducer by optimizing seven variables, described as follows:
- the width of the gear surface (cm) \((x_1=b)\),
- the module of teeth (cm) \((x_2=m)\),
- the number of teeth in the pinion \((x_3=p)\),
- the length of the first shaft between bearings (cm) \((x_4=l_1)\),
- the length of the second shaft between bearings (cm) \((x_5=l_2)\),
- the diameter of the first shaft (cm) \((x_6=d_1)\),
- the diameter of the second shaft (cm) \((x_7=d_2).\)
The mathematical model of the speed reducer design problem is:
Consider variable:
Minimize:
Subject to:
Variable range:
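Since the objective and the 11 constraints above are rendered as images, the standard formulation of this benchmark from the literature (an assumed sketch, including the usual torque and stress constants in \(g_5\) and \(g_6\)) is:

```python
import math

def reducer_weight(x):
    """Total weight of the speed reducer; x = (b, m, p, l1, l2, d1, d2)."""
    b, m, p, l1, l2, d1, d2 = x
    return (0.7854 * b * m**2 * (3.3333 * p**2 + 14.9334 * p - 43.0934)
            - 1.508 * b * (d1**2 + d2**2)
            + 7.4777 * (d1**3 + d2**3)
            + 0.7854 * (l1 * d1**2 + l2 * d2**2))

def reducer_constraints(x):
    """The 11 standard constraints, each g_i(x) <= 0."""
    b, m, p, l1, l2, d1, d2 = x
    return (
        27.0 / (b * m**2 * p) - 1,              # g1: bending stress of gear teeth
        397.5 / (b * m**2 * p**2) - 1,          # g2: surface stress
        1.93 * l1**3 / (m * p * d1**4) - 1,     # g3: transverse deflection, shaft 1
        1.93 * l2**3 / (m * p * d2**4) - 1,     # g4: transverse deflection, shaft 2
        math.sqrt((745 * l1 / (m * p))**2 + 16.9e6) / (110 * d1**3) - 1,   # g5: stress, shaft 1
        math.sqrt((745 * l2 / (m * p))**2 + 157.5e6) / (85 * d2**3) - 1,   # g6: stress, shaft 2
        m * p / 40.0 - 1,                       # g7
        5.0 * m / b - 1,                        # g8
        b / (12.0 * m) - 1,                     # g9
        (1.5 * d1 + 1.9) / l1 - 1,              # g10
        (1.1 * d2 + 1.9) / l2 - 1,              # g11
    )
```

At the near-optimal design commonly reported in the literature, \((3.5, 0.7, 17, 7.3, 7.715, 3.3502, 5.2867)\), the weight evaluates to roughly 2994.5 with constraints \(g_5\), \(g_6\), and \(g_8\) active.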
This case was previously tackled by many scholars using various heuristic methods, including Social Network Search49, an information-decision searching algorithm58, an enhanced hybrid arithmetic optimization algorithm51, the artificial hummingbird algorithm29, manta ray foraging optimization54, the sparrow search algorithm45, a simplified non-equidistant grey prediction evolution algorithm59, the gradient-based optimizer33, and the Snake Optimizer5.
The statistical results of LFGOA and nine optimization methods are compared in Tables 25 and 26. Among the compared optimization algorithms, LFGOA ranks first, outperforming the other approaches in optimizing the reducer design; our method finds better geometric variables for this case. Hence, the result is feasible and verifies the effectiveness of the proposed LFGOA algorithm. The results demonstrate that the proposed LFGOA can provide reliable and very promising solutions compared with the other algorithms.
As shown in Fig. 19, the convergence curve quickly approaches the global optimum, and a solution satisfying all constraints was obtained almost instantly.
Tubular column design
Tubular column design is the problem of designing a uniform column of tubular section to carry a compressive load at minimum cost, as described in Fig. 20. There are two design variables in this problem, described as follows:
- the mean diameter of the column \(d (=x_1)\) (cm),
- the thickness of the tube \(t (=x_2)\) (cm).
The five characteristic parameters of the problem are set as:
- P is the compressive load \((=2500\ kgf)\),
- \(\sigma _y\) represents the yield stress \((=500\ kgf/cm^2)\),
- E is the modulus of elasticity \((=0.85\times {10}^6\ kgf/cm^2)\),
- \(\rho\) is the density \((=0.0025\ kgf/cm^3)\),
- L denotes the length of the designed column \((=250\ cm)\).
The optimization model of this problem is given as follows:
Consider: \(x=\left[ x_1,x_2\right] =[d,\ \ t],\)
Minimize:
Subject to:
Variable range:
The stress induced in the column should be less than the buckling stress (constraint \(g_1\)) and the yield stress (constraint \(g_2\)). The mean diameter of the column is restricted between 2 and 14 cm (constraints \(g_3\) and \(g_4\)), and columns with thickness outside the range \(0.2{-}0.8\ cm\) are not commercially available (constraints \(g_5\) and \(g_6\)). The mean diameter \(d (x_1)\) and the thickness \(t (x_2)\) vary in the ranges [2, 14] and [0.2, 0.8], respectively.
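With the parameter values listed above, the standard cost and constraint functions for this benchmark (an assumed sketch of the literature formulation, since the paper's equations are rendered as images) are:

```python
import math

# Load, material, and geometry parameters from the text (kgf, cm units)
P, SIGMA_Y, E, RHO, L = 2500.0, 500.0, 0.85e6, 0.0025, 250.0

def column_cost(x):
    """Combined material + construction cost, 9.8*d*t + 2*d
    (standard benchmark formulation)."""
    d, t = x
    return 9.8 * d * t + 2.0 * d

def column_constraints(x):
    """g1: induced stress vs. yield stress; g2: induced stress vs.
    buckling stress; g3-g6: the variable bounds described in the text."""
    d, t = x
    g1 = P / (math.pi * d * t * SIGMA_Y) - 1.0
    g2 = 8.0 * P * L**2 / (math.pi**3 * E * d * t * (d**2 + t**2)) - 1.0
    g3, g4 = 2.0 / d - 1.0, d / 14.0 - 1.0     # 2 <= d <= 14
    g5, g6 = 0.2 / t - 1.0, t / 0.8 - 1.0      # 0.2 <= t <= 0.8
    return (g1, g2, g3, g4, g5, g6)
```

At the near-optimal design commonly reported, \(d \approx 5.4512\), \(t \approx 0.2920\), the cost is about 26.50, with the stress constraints \(g_1\) and \(g_2\) both active.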
This case was previously tackled by many scholars using various heuristic methods, including Social Network Search49, the cuckoo search algorithm46, the krill herd algorithm60, the cooperation search algorithm61, and a hybrid generalized reduced gradient-based particle swarm optimizer57.
The statistical results of LFGOA and other optimization methods are compared in Tables 27 and 28. Among the compared optimization algorithms, LFGOA ranks first, outperforming the other approaches in optimizing the tubular column design; our method finds better geometric variables for this case. Hence, the result is feasible and verifies the effectiveness of the LFGOA algorithm. The results demonstrate that the LFGOA algorithm can provide reliable and very promising solutions compared with the other algorithms.
As shown in Fig. 21, the convergence curve quickly approaches the global optimum, and a solution satisfying all constraints was obtained almost instantly.
Results and discussion
As we can see in Section 5, seven real-world constrained engineering design examples, including Himmelblau's nonlinear optimization problem, cantilever beam design, car side impact design, gear train design, pressure vessel design, speed reducer design, and tubular column design, were selected to verify the proposed LFGOA algorithm. LFGOA has been demonstrated to perform better than, or be highly competitive with, the other algorithms in the literature on these seven constrained engineering optimization problems, and it can solve a variety of real-world constrained engineering optimization problems. The advantages of LFGOA are its simplicity and the small number of parameters to regulate. The results here show LFGOA to be robust, powerful, and effective compared with the other algorithms in the literature. Constrained engineering optimization is a good way to test the performance of metaheuristic algorithms, but it also has limitations. For example, different tuning parameter values might lead to significant differences in performance, and such tests may reach entirely different conclusions if the termination criterion changes: altering the population size or the number of iterations might yield a different conclusion.
Conclusion
This paper presented a novel Grasshopper Optimization Algorithm enhanced with a Lévy flight mechanism, called LFGOA. Five metrics (i.e., search history, average fitness, best fitness history, trajectory of the first dimension, and convergence curve) were implemented to investigate LFGOA qualitatively. Next, 23 benchmark test functions were used to investigate the exploration, exploitation, local-optima escape, and convergence performance of LFGOA. The results demonstrated the effectiveness of LFGOA in achieving globally optimal solutions with more reliable convergence than eight other well-known optimization algorithms published in the literature. The Friedman ranking test was applied to evaluate the efficacy of LFGOA statistically. The statistical results demonstrated that LFGOA can guarantee effective exploration while producing excellent exploitation, thus maintaining an equilibrium between exploitation and exploration strategies, which reveals the superior performance of LFGOA in a statistical sense against the comparative algorithms. Moreover, seven real-world engineering problems were used to investigate the effectiveness of LFGOA further. The results of the engineering design problems showed that LFGOA achieved markedly better results than the other well-known optimization algorithms and can handle various constrained problems.
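As a concrete illustration of the Lévy flight mechanism that LFGOA builds on, Mantegna's algorithm is a common way in the literature to draw Lévy-stable step lengths; this is a sketch of that general scheme, and the paper's exact update rule may differ:

```python
import math
import random

def levy_step(beta=1.5):
    """Draw one Lévy-distributed step length via Mantegna's algorithm.
    beta is the stability index (1 < beta <= 2)."""
    sigma_u = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
               / (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))
               ) ** (1 / beta)
    u = random.gauss(0.0, sigma_u)   # heavy-tailed numerator
    v = random.gauss(0.0, 1.0)
    return u / abs(v) ** (1 / beta)
```

The occasional very large steps from the heavy tail let search agents jump out of local optima, while the many small steps refine the current region — the exploration/exploitation balance described above.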
There are still many applications of the LFGOA algorithm worthy of further study, given its considerable potential. The LFGOA algorithm can be used to solve constrained optimization problems in industry, engineering, and other application domains. Several possible future directions and ideas are worth investigating regarding new variants of the LFGOA algorithm and its widespread applications; for example, feature selection, job scheduling, and parameter optimization remain open and are suggested as future work.
Data availability
The datasets generated during or analysed during the current study are available from the corresponding author on reasonable request.
References
Oyelade, O. N., Ezugwu, A.E.-S., Mohamed, T. I. & Abualigah, L. Ebola optimization search algorithm: A new nature-inspired metaheuristic optimization algorithm. IEEE Access 10, 16150–16177 (2022).
Abualigah, L., Abd Elaziz, M., Sumari, P., Geem, Z. W. & Gandomi, A. H. Reptile search algorithm (rsa): A nature-inspired meta-heuristic optimizer. Expert Syst. Appl. 191, 116158 (2022).
Abualigah, L., Diabat, A., Mirjalili, S., Abd Elaziz, M. & Gandomi, A. H. The arithmetic optimization algorithm. Comput. Methods Appl. Mech. Eng. 376, 113609 (2021).
Hussien, A. G. et al. Recent advances in harris hawks optimization: A comparative study and applications. Electronics 11, 1919 (2022).
Hashim, F. A. & Hussien, A. G. Snake optimizer: A novel meta-heuristic optimization algorithm. Knowl.-Based Syst. 242, 108320 (2022).
Zheng, R. et al. An improved wild horse optimizer for solving optimization problems. Mathematics 10, 1311 (2022).
Yu, H., Jia, H., Zhou, J. & Hussien, A. Enhanced aquila optimizer algorithm for global optimization and constrained engineering problems. Math. Biosci. Eng. 19, 14173–14211 (2022).
Hussien, A. G. & Amin, M. A self-adaptive harris hawks optimization algorithm with opposition-based learning and chaotic local search strategy for global optimization and feature selection. Int. J. Mach. Learn. Cybern. 13, 309–336 (2022).
Wang, P. et al. Complex-valued encoding metaheuristic optimization algorithm: A comprehensive survey. Neurocomputing 407, 313–342 (2020).
Chen, M., Zhou, Y. & Luo, Q. An improved arithmetic optimization algorithm for numerical optimization problems. Mathematics 10, 2152 (2022).
Saremi, S., Mirjalili, S. & Lewis, A. Grasshopper optimisation algorithm: Theory and application. Adv. Eng. Softw. 105, 30–47 (2017).
Abualigah, L. & Diabat, A. A comprehensive survey of the grasshopper optimization algorithm: Results, variants, and applications. Neural Comput. Appl. 32, 15533–15556 (2020).
Razmjooy, N., Estrela, V. V., Loschi, H. J. & Fanfan, W. A comprehensive survey of new meta-heuristic algorithms. In Recent Advances in Hybrid Metaheuristics for Data Clustering (Wiley, 2019).
El-Henawy, I. & Abdelmegeed, N. A. Meta-heuristics algorithms: A survey. Int. J. Comput. Appl. 179, 45–54 (2018).
Meraihi, Y., Gabis, A. B., Mirjalili, S. & Ramdane-Cherif, A. Grasshopper optimization algorithm: Theory, variants, and applications. IEEE Access 9, 50001–50024 (2021).
Arora, S. & Anand, P. Chaotic grasshopper optimization algorithm for global optimization. Neural Comput. Appl. 31, 4385–4405 (2019).
Zhao, S. et al. An enhanced cauchy mutation grasshopper optimization with trigonometric substitution: Engineering design and feature selection. Eng. Comput. 38, 1–34 (2021).
Ewees, A. A., Gaheen, M. A., Yaseen, Z. M. & Ghoniem, R. M. Grasshopper optimization algorithm with crossover operators for feature selection and solving engineering problems. IEEE Access 10, 23304–23320 (2022).
Yildiz, B. S., Pholdee, N., Bureerat, S., Yildiz, A. R. & Sait, S. M. Enhanced grasshopper optimization algorithm using elite opposition-based learning for solving real-world engineering problems. Eng. Comput. 38, 1–13 (2021).
Feng, Y., Liu, M., Zhang, Y. & Wang, J. A dynamic opposite learning assisted grasshopper optimization algorithm for the flexible job scheduling problem. Complexity 2020 (2020).
Qin, P., Hu, H. & Yang, Z. The improved grasshopper optimization algorithm and its applications. Sci. Rep. 11, 1–14 (2021).
Wang, S., Hussien, A. G., Jia, H., Abualigah, L. & Zheng, R. Enhanced remora optimization algorithm for solving constrained engineering optimization problems. Mathematics 10, 1696 (2022).
Zhou, Y., Ling, Y. & Luo, Q. Lévy flight trajectory-based whale optimization algorithm for engineering optimization. Eng. Comput. 35, 2406–2428 (2018).
Chen, X., Cheng, F., Liu, C., Cheng, L. & Mao, Y. An improved wolf pack algorithm for optimization problems: Design and evaluation. PLoS ONE 16, e0254239 (2021).
Tran, T., Nguyen, T. T. & Nguyen, H. L. Global optimization using Lévy flights. arXiv preprint (2014).
Houssein, E. H., Saad, M. R., Hashim, F. A., Shaban, H. & Hassaballah, M. Lévy flight distribution: A new metaheuristic algorithm for solving engineering optimization problems. Eng. Appl. Artif. Intell. 94, 103731 (2020).
Gutowski, M. Lévy flights as an underlying mechanism for global optimization algorithms. arXiv preprint math-ph/0106003 (2001).
Zhang, H. et al. Ensemble mutation-driven salp swarm algorithm with restart mechanism: Framework and fundamental analysis. Expert Syst. Appl. 165, 113897 (2021).
Zhao, W., Wang, L. & Mirjalili, S. Artificial hummingbird algorithm: A new bio-inspired optimizer with its engineering applications. Comput. Methods Appl. Mech. Eng. 388, 114194 (2022).
Abualigah, L. et al. Aquila optimizer: A novel meta-heuristic optimization algorithm. Comput. Ind. Eng. 157, 107250 (2021).
Mirjalili, S. Dragonfly algorithm: A new meta-heuristic optimization technique for solving single-objective, discrete, and multi-objective problems. Neural Comput. Appl. 27, 1053–1073 (2016).
Agushaka, J. O., Ezugwu, A. E. & Abualigah, L. Dwarf mongoose optimization algorithm. Comput. Methods Appl. Mech. Eng. 391, 114570 (2022).
Ahmadianfar, I., Bozorg-Haddad, O. & Chu, X. Gradient-based optimizer: A new metaheuristic optimization algorithm. Inf. Sci. 540, 131–159 (2020).
Yang, Y., Chen, H., Heidari, A. A. & Gandomi, A. H. Hunger games search: Visions, conception, implementation, deep analysis, perspectives, and towards performance shifts. Expert Syst. Appl. 177, 114864 (2021).
Heidari, A. A. et al. Harris hawks optimization: Algorithm and applications. Futur. Gener. Comput. Syst. 97, 849–872 (2019).
Mirjalili, S., Mirjalili, S. M. & Hatamlou, A. Multi-verse optimizer: A nature-inspired algorithm for global optimization. Neural Comput. Appl. 27, 495–513 (2016).
Derrac, J., García, S., Hui, S., Suganthan, P. N. & Herrera, F. Analyzing convergence performance of evolutionary algorithms: A statistical approach. Inf. Sci. 289, 41–58 (2014).
Van den Bergh, F. & Engelbrecht, A. P. A study of particle swarm optimization particle trajectories. Inf. Sci. 176, 937–971 (2006).
Thong, N. H. A new search via probability algorithm for single-objective optimization problems. Tap chi Khoa hoc 63 (2013).
Çimen, M. E., Garip, Z. & Boz, A. F. Comparison of metaheuristic optimization algorithms with a new modified Deb feasibility constraint handling technique. Turk. J. Electr. Eng. Comput. Sci. 29, 3270–3289 (2021).
Arora, S. & Singh, S. Butterfly optimization algorithm: A novel approach for global optimization. Soft. Comput. 23, 715–734 (2019).
Deb, K. An efficient constraint handling method for genetic algorithms. Comput. Methods Appl. Mech. Eng. 186, 311–338 (2000).
Himmelblau, D. M. Applied Nonlinear Programming (McGraw-Hill, New York, 1972).
Garg, H. Solving structural engineering design optimization problems using an artificial bee colony algorithm. J. Ind. Manag. Optim. 10, 777 (2014).
Xue, J. & Shen, B. A novel swarm intelligence optimization approach: Sparrow search algorithm. Syst. Sci. Control Eng. 8, 22–34 (2020).
Gandomi, A. H., Yang, X.-S. & Alavi, A. H. Cuckoo search algorithm: A metaheuristic approach to solve structural optimization problems. Eng. Comput. 29, 17–35 (2013).
Lee, K. S. & Geem, Z. W. A new meta-heuristic algorithm for continuous engineering optimization: Harmony search theory and practice. Comput. Methods Appl. Mech. Eng. 194, 3902–3933 (2005).
Tabassum, M. F. et al. Differential gradient evolution plus algorithm for constraint optimization problems: A hybrid approach. Int. J. Optim. Control. Theor. Appl. (IJOCTA) 11, 158–177 (2021).
Talatahari, S., Bayzidi, H. & Saraee, M. Social network search for global optimization. IEEE Access 9, 92815–92863 (2021).
Gandomi, A. H., Yang, X.-S. & Alavi, A. H. Mixed variable structural optimization using firefly algorithm. Comput. Struct. 89, 2325–2336 (2011).
Hu, G., Zhong, J., Du, B. & Wei, G. An enhanced hybrid arithmetic optimization algorithm for engineering applications. Comput. Methods Appl. Mech. Eng. 394, 114901 (2022).
Mirjalili, S. The ant lion optimizer. Adv. Eng. Softw. 83, 80–98 (2015).
Dong, M., Wang, N., Cheng, X. & Jiang, C. Composite differential evolution with modified oracle penalty method for constrained optimization problems. Math. Probl. Eng. 2014 (2014).
Zhao, W., Zhang, Z. & Wang, L. Manta ray foraging optimization: An effective bio-inspired optimizer for engineering applications. Eng. Appl. Artif. Intell. 87, 103300 (2020).
Sun, Y., Zhang, L. & Gu, X. A hybrid co-evolutionary cultural algorithm based on particle swarm optimization for solving global optimization problems. Neurocomputing 98, 76–89 (2012).
Montemurro, M., Vincenti, A. & Vannucci, P. The automatic dynamic penalisation method (adp) for handling constraints with genetic algorithms. Comput. Methods Appl. Mech. Eng. 256, 70–87 (2013).
Varaee, H., Safaeian Hamzehkolaei, N. & Safari, M. A hybrid generalized reduced gradient-based particle swarm optimizer for constrained engineering optimization problems. J. Soft Comput. Civil Eng. 5, 86–119 (2021).
Wang, K., Guo, M., Dai, C. & Li, Z. Information-decision searching algorithm: Theory and applications for solving engineering optimization problems. Inf. Sci. (2022).
Xiang, X., Su, Q., Huang, G. & Hu, Z. A simplified non-equidistant grey prediction evolution algorithm for global optimization. Appl. Soft Comput. 109081 (2022).
Gandomi, A. H. & Alavi, A. H. An introduction of krill herd algorithm for engineering optimization. J. Civ. Eng. Manag. 22, 302–310 (2016).
Feng, Z.-K., Niu, W.-J. & Liu, S. Cooperation search algorithm: A novel metaheuristic evolutionary intelligence algorithm for numerical optimization and engineering optimization problems. Appl. Soft Comput. 98, 106734 (2021).
Acknowledgements
This research was supported by the Beijing Municipal Government Fund Projects 18JYB015299 and BJSZ2021ZC65.
Author information
Authors and Affiliations
Contributions
L.W. wrote the main manuscript text, J.W. prepared the real-world engineering cases, and T.W. reviewed and edited the manuscript. All authors have read and agreed to the published version of the manuscript.
Corresponding author
Ethics declarations
Competing interests
The authors declare no competing interests.
Additional information
Publisher's note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
About this article
Cite this article
Wu, L., Wu, J. & Wang, T. Enhancing grasshopper optimization algorithm (GOA) with levy flight for engineering applications. Sci Rep 13, 124 (2023). https://doi.org/10.1038/s41598-022-27144-4