Abstract
By combining opposition-based learning techniques with conventional African Vulture Optimization (AVO), this study offers a notable improvement in the handling of optimization problems. AVO has limitations: on extremely rough search spaces, it requires more iterations or function evaluations. To overcome this limitation, this paper proposes an enhanced opposition-based learning (EOBL) technique that speeds up convergence and, at the same time, assists the algorithm in escaping local optima. Combining this new technique with AVO yields the proposed Enhanced Opposition-based African Vulture Optimizer (EOBAVO). The performance of the suggested EOBAVO was evaluated through experiments using the CEC2005 and CEC2022 benchmark functions in addition to seven engineering challenges. Furthermore, statistical analyses, including the t-test and the Wilcoxon rank-sum test, were conducted and demonstrated that the proposed EOBAVO surpasses several of the leading algorithms currently in use. The results indicate that the proposed approach can be regarded as a competent and efficient solution for complex optimization challenges.
Introduction
Metaheuristic Algorithms (MA) are sophisticated optimization strategies that draw inspiration from natural processes and are intended to address challenging issues that conventional approaches find difficult to handle. To find the best answers, these algorithms, including simulated annealing, particle swarm optimization, and genetic algorithms, explore enormous search spaces. They are suitable for various domains, including engineering and finance, owing to their versatility and adaptability. These methods are widely acknowledged as global search algorithms that are stochastic, flexible, straightforward, and derivative-free1. Their versatility and ease of use have made it possible to employ them to address a wide range of challenging business and science issues. Metaheuristic algorithms can be broadly categorized into two classes: those based entirely on natural phenomena and those inspired by biological processes seen in nature. The latter class is also known as swarm intelligence. The term “swarm intelligence” was used by2 to refer to cellular robotic systems. Every swarm member controls its own behavior and is separated from the others; each agent thus represents a different approach to the problem at hand. Intelligence manifests not in any single member but in the swarm as a whole. The agents (candidate solutions) evolve stochastically as the program runs3. Many metaheuristics have attracted the interest of multiple scholars and have received numerous citations over the last two decades. Table 1 lists some of the well-known algorithms.
Although most of the previously stated algorithms have been applied to many optimization problems, experience with them shows that slow convergence or falling into local optima remains a common issue. For instance, being trapped in local optima and premature convergence are two problems associated with the PSO method4. The ABC algorithm5 features a poor balance between exploitation and exploration and a slow rate of convergence. The Differential Evolution (DE) methodology exhibits specific shortcomings, including a protracted convergence rate and population stagnation6. The Cuckoo Search7 algorithm has numerous drawbacks, including inadequate search velocity and diminished convergence precision. The Grey Wolf Optimizer (GWO) has drawbacks, including a tendency to become trapped in local optima and a slow rate of convergence during the final stages of the search process8. Some of the shortcomings of the WOA9 include poor accuracy, sluggish convergence, and susceptibility to local optima. The convergence rate of TLBO is its primary drawback, and it becomes significantly more problematic when handling high-dimensional problems10. The Firefly Algorithm (FFA) presents several shortcomings, including a tendency for slow convergence and an uneven balance between intensification (exploitation) and diversification (exploration) efforts11. The Salp Swarm Algorithm (SSA) also has challenges related to population variety and entrapment in locally optimal solutions12. Some of the drawbacks of GSA include high computational requirements for complex objective functions, intricate procedures, a large number of control parameters, and poor convergence13. Among the shortcomings of the Sea Horse Optimizer (SHO) are its sluggish rate of convergence and propensity to become stuck in local optima14. Low diversity and the unequal use of exploitation and exploration are problems with Golden Jackal Optimization (GJO)15. Similarly,16 presented the drawbacks of the African Vulture Optimization Algorithm (AVO): the addition of exploitation capability to the exploration phase to speed up convergence, and the random strategy that determines the changeover between the exploration and exploitation phases, which impacts balance. These design choices cause trapping in local optima and affect the global search of the region. To overcome the aforementioned restrictions, new optimization techniques are required. Furthermore,17 proposed the no-free-lunch (NFL) theorem, which states that no algorithm can be successful in every optimization problem. Owing to the above limitations, there is always room to create new metaheuristic algorithms or modify existing ones to resolve challenging optimization problems in a variety of domains. Table 1 lists the types of metaheuristic algorithms.
In their quest to enhance existing algorithms,45 introduced an improved SCMSSA aimed at enhancing both optimization precision and efficiency, and assessed SCMSSA against alternative algorithms on six distinct test functions. The SVR-SCMSSA model demonstrated 95% accuracy in predicting CO2 emissions, providing valuable insights into the primary factors contributing to CO2 emissions. In another study,46 presented a Modified EDO (MEDO) that integrates EDO with SSA and QI. An adaptive p-best mutation technique was employed to avoid local optimum pitfalls, a phasor operator was utilized to improve diversity within the algorithm, and the exploitation phase was based on an exponential distribution model. 47 also introduced the Salp Navigation and Competitive based Parrot Optimizer (SNCPO), a hybrid algorithm combining Competitive Swarm Optimization and the Salp Swarm Algorithm. Their findings indicate that SNCPO consistently surpasses current leading algorithms, attaining enhanced convergence rates, solution quality, and robustness while successfully evading local optima. Importantly, SNCPO shows significant adaptability to various optimization environments, underscoring its applicability in practical engineering and machine learning scenarios.
The following is a summary of the main contributions of this research.
-
i.
To enhance the exploration and exploitation of conventional AVO, a recently developed method called Enhanced Opposition-based Learning (EOBL) is suggested and integrated into AVO to create an Enhanced Opposition-based African Vulture Optimizer (EOBAVO) algorithm.
-
ii.
Statistical tests validated the performance of the proposed methodology and compared it with eight of the best algorithms. Moreover, EOBAVO was assessed based on engineering challenges to show that it can solve real-life engineering problems.
-
iii.
The efficacy and toughness of EOBAVO were confirmed using the CEC2005 benchmark’s 23 test functions, which had both low and high dimensions.
-
iv.
Exploration–Exploitation and Diversity Analyses show that EOBAVO effectively transitions from exploration to exploitation, converges well on most functions, and achieves enhanced convergence speed by reducing population diversity and promoting exploitation across benchmarks.
-
v.
Additionally, EOBAVO was tested on some engineering challenges to show that real-life engineering complications can be resolved.
The rest of the paper is structured as follows: Section “Preliminaries” offers a literature review on AVO, a brief justification and mathematical modeling of the traditional AVO technique, and an outline of the fundamentals of ROBL and OBL techniques. Section “The Proposed EOBAVO” describes the EOBAVO algorithm and the fundamental idea of EOBL. Section “Numerical experiments and results analysis” presents numerical tests and analysis of the results. Section “Performance of EOBAVO on practical engineering problems” shows how the EOBAVO algorithm can be used to solve actual engineering challenges, and Section “Conclusion” summarizes the research and suggests areas that require further investigation.
Preliminaries
Literature review on AVO
The African Vulture Optimization (AVO), a novel MA created by Abdollahzadeh et al. in 2022, is an intriguing alternative for global optimization48. Inspired by the hunting habits of African vultures, AVO consists of two primary steps: finding prey and attacking it16. AVO is more effective on several benchmark functions when compared with the sophisticated metaheuristic algorithms previously discussed. AVO is currently employed to resolve a range of challenges in the field of engineering optimization, as well as in numerous other areas of study. For example, to identify the ideal parameters of a solid oxide fuel cell (SOFC) steady-state model,49 used various swarm intelligence methods for SOFC parameter estimation; the findings demonstrate that AVO is capable of producing precise voltage-current characteristic curves. Using eight population-based intelligent optimization algorithms to solve five mechanical part design problems,50 showed that AVO had the quickest solution time while ensuring reasonable performance. Analyzing the merits and limitations of the established MAs, coupled with efforts to enhance their functionality through new or modified mechanisms, presents an emerging research challenge. A development in machine learning termed opposition-based learning (OBL) is perceived as an effective strategy for boosting the efficiency of these algorithms. Motivated by the opposite relationship between entities,51 first presented the concept of opposition in 2005. Over the past ten years, scholars have paid considerable attention to this topic. The concept of OBL has been used to enhance the functionality of a range of soft computing methods, such as artificial neural networks, fuzzy systems, optimization techniques, and reinforcement learning. Integrating the OBL methodology with other bio-inspired optimization techniques yields shorter estimated distances to the global optimum. For instance, OBL is used in the DE optimization algorithm to generate new offspring as the population evolves52. In addition, with the introduction of OBL to the Grasshopper Optimization Algorithm53, it was possible to reach an optimal point swiftly while fully exploring the search region.
Nature-inspired optimization techniques such as AVO have been created based on the behaviors and feeding habits of African vultures. Vultures are well recognized for their ability to locate carrion across great distances and for their remarkable scavenging skills (sharp vision and communication). Figure 1 shows the hunting behavior of the vultures.
They can also collaborate, identifying the best answers by drawing on the insights of others (social cooperation), and, depending on their surroundings, adapt their roles dynamically, alternating between exploration (looking for food) and exploitation (eating)54. To address challenging optimization problems, the AVO algorithm attempts to imitate these characteristics. AVO has shown notable applicability in several domains such as data analysis, power optimization, and control systems. It works well for complicated optimization problems because of its simplicity and minimal processing requirements55.
The nature-inspired metaheuristic algorithm known as AVO was created by16 and was inspired by the way vultures, which are scavenging birds, search for food. Vultures in their natural habitat behave differently from other birds in the scavenging process, continuously traveling great distances in a revolving flight style. When searching for food, vultures also look out for other members of their kind that have found food, and vultures from several species occasionally gather around the same food sources. It is very rare to find weaker vultures around stronger ones, since the stronger ones fight them for food.
The AVO principle, as described in54, models a population of vultures. Based on fitness value, the initial population of N vultures in the AVO algorithm is divided into three groups: the population’s best solution forms the first category, the second-best vultures form the second group, and the rest fall into the third group. Each vulture group plays a unique role in the food hunt. The most dominant vultures, that is, the finest solutions, are considered the best in AVO, whereas the worst solutions correspond to the hungriest and feeblest vultures in the population. To discover the optimal alternative, vultures in AVO try to move closer to strong vultures and stay away from weak ones. Based on this core concept of vultures and the theoretical frameworks established for modeling artificial vulture populations, the AVO algorithm was developed in the following phases.
Phase 1: categorizing the population according to the best vulture
To determine the leading solutions, the fitness value of each solution is calculated after the initial population is created. The best vulture is selected as the solution of the first category, and the second-best vulture as the solution of the second category. Equation (1) determines the probability of successfully guiding the vultures towards the optimal solution within the two categories. This probability is subsequently used to move the remaining solutions towards the best solutions identified in the first and second categories.
In the first group, the best vulture is designated BestVulture1, and the second-best vulture in the second category is designated BestVulture2. L1 and L2 are random numbers in [0, 1] whose sum equals 1. pi can be calculated using Eq. (2) and the roulette-wheel method.
where Fi represents the fitness value and n represents the total number of vultures in the two categories. In conclusion, Fig. 2 illustrates the relationships between the vultures, where \(\alpha\) represents the first group, \(\beta\) the second group, and \(\gamma\) the third group of vultures. The target vulture is acquired using the pertinent characteristics.
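As an illustration only (the equation bodies appear in the cited paper), the leader selection of Eqs. (1)–(2) can be sketched in Python as follows; the function name and the minimization setting are ours:

```python
import numpy as np

def select_leader(best1, best2, f1, f2):
    """Hedged sketch of Eqs. (1)-(2): roulette-wheel choice of the vulture
    R(t) that guides the current agent. f1 and f2 are the fitness values of
    the two leading vultures; per Eq. (2), each leader's selection
    probability is proportional to its fitness."""
    F = np.array([f1, f2], dtype=float)
    p = F / F.sum()            # Eq. (2): p_i = F_i / sum_j F_j (assumed form)
    # Eq. (1): route the agent toward BestVulture1 or BestVulture2.
    return best1 if np.random.rand() < p[0] else best2
```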
Phase 2: vultures going without food (starvation)
Because they are powerful animals, vultures can fly great distances to find food when they are not famished. On the other hand, when they are hungry, they become hostile and unable to fly very far, forcing them to approach stronger vultures in their quest for food. Equation (3) was used to mathematically model this phenomenon:
The Hunger rate Eq. (4) is used to determine the vultures’ appetite to determine when the exploration phase ends and the exploitation phase begins.
where h, z, and rand1 are random numbers drawn from the intervals [− 2, 3], [− 1, 1], and [0, 1], respectively; F represents the degree of hunger; and itrmax and itr(i) indicate the maximum and current iterations, respectively. A larger value of k increases the probability of entering the exploration phase during the final stages of optimization, whereas a smaller value lowers this likelihood; keeping k fixed thus balances the exploration and exploitation stages of the optimization process. The formula for the rate F reveals that the satiation of the vultures decreases as the number of iterations increases. In particular, if F is greater than 1, vultures continue to explore and seek food in diverse areas; if F is less than or equal to 1, the vultures enter the exploitation phase, where they search for food closer to the existing solution.
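Since the bodies of Eqs. (3)–(4) are given in the cited paper, the following is only a hedged Python sketch of the satiation model using the ranges stated above; the trigonometric disturbance term and the exponent w (which plays the role the text calls k) follow the original AVO formulation and are assumptions here:

```python
import numpy as np

def satiation_rate(itr, itr_max, w=2.5):
    """Hedged sketch of the hunger/satiation rate F of Eqs. (3)-(4)."""
    h = np.random.uniform(-2, 3)   # per the text: h drawn from [-2, 3]
    z = np.random.uniform(-1, 1)   # per the text: z drawn from [-1, 1]
    rand1 = np.random.rand()       # per the text: rand1 drawn from [0, 1]
    s = itr / itr_max              # fraction of the run completed
    # Assumed Eq. (3): decaying oscillatory disturbance term.
    t = h * (np.sin(np.pi / 2 * s) ** w + np.cos(np.pi / 2 * s) - 1)
    # Assumed Eq. (4): F shrinks as the iterations advance.
    return (2 * rand1 + 1) * z * (1 - s) + t
```

In line with the text, F > 1 keeps the vultures in exploration, while F ≤ 1 switches them to exploitation.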
Phase 3: exploration
Vultures can easily discover food and carcasses in the wild with the aid of their excellent vision. Vultures take time to assess their surroundings before starting a protracted food battle. Vultures in AVO can investigate several random sites using one of two methods; the parameter pi is used to select the strategy. Which of the two strategies will be employed is determined by this parameter value, which is predefined in the range between 0 and 1 before the search procedure begins. Equation (5) can be used to depict the exploration phase of the vultures.
where \(X_{t + 1}\) represents the vulture’s position in the next iteration; \(R_t\), one of the best vultures, is determined by Eq. (1); \(rand_{p1}^{t}\), \(rand_{p2}^{t}\), and \(rand_{p3}^{t}\) are random values between 0 and 1; \(F_t\) is determined by Eq. (4); the lower and upper bounds are represented by lb and ub, respectively; and finally, \(D_t\) denotes the vulture’s distance from the current optimal vulture, which is determined by Eq. (6).
where C represents a random number distributed uniformly between 0 and 2, and \(X_t\) represents the vulture position at the current iteration.
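A hedged Python sketch of the exploration update of Eqs. (5)–(6), under the assumption that the first strategy moves the agent around the leader and the second relocates it randomly within the bounds (the exact equation bodies appear in the cited paper):

```python
import numpy as np

def explore(X, R, F, p1, lb, ub):
    """Exploration phase, Eqs. (5)-(6) (hedged sketch).
    X: current position; R: guiding vulture from Eq. (1); F: rate from
    Eq. (4); p1: predefined strategy parameter in [0, 1]; lb, ub: bounds."""
    if np.random.rand() < p1:         # strategy 1: search around the leader
        C = 2 * np.random.rand()      # per the text: C uniform on [0, 2]
        D = np.abs(C * R - X)         # Eq. (6): distance to the best vulture
        return R - D * F
    # strategy 2: random relocation inside the search bounds
    rand2, rand3 = np.random.rand(), np.random.rand()
    return R - F + rand2 * ((ub - lb) * rand3 + lb)
```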
Phase 4: exploitation’s first stage
The exploitation phase in AVO is grouped into two main stages. The first stage starts when F has a value ranging between 0.5 and 1. Two distinct tactics are employed during this stage: siege fighting and rotating flight. The predetermined random value p2\(\in\)[0, 1] is utilized to choose which approach to use. The siege-fight (food competition) approach is used if randp2, a random number generated in [0, 1], is greater than p2; the rotating flight tactic is used otherwise56.
-
Competing for food When F falls between 0.5 and 1, the vultures are deemed to be active and full. As a result, the weak vultures gather and attempt to attack the strong ones to get food, while the stronger vultures are unwilling to share their food. Equations (7) and (8) model this behavior to update the location of the vulture; a hedged sketch of this stage follows Eq. (11).
-
Rotating flight strategy Vultures that are motivated and full hover at high altitudes and compete for food. AVO mimics this behavior with a spiral model. The rotating flight strategy is described by Eqs. (9)–(11); the positional update pertinent to the rotational flight dynamics of the vultures is given by Eq. (9).
$$X_{t + 1} = R_{t} - \left( S_{1}^{t} + S_{2}^{t} \right) \tag{9}$$
$$S_{1}^{t} = R_{t} \times \left( \frac{rand_{5} \times X_{t}}{2\pi} \right) \times \cos \left( X_{t} \right) \tag{10}$$
$$S_{2}^{t} = R_{t} \times \left( \frac{rand_{6} \times X_{t}}{2\pi} \right) \times \sin \left( X_{t} \right) \tag{11}$$
All rand values in this phase are uniformly distributed between 0 and 1.
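As referenced above, a hedged Python sketch of this first exploitation stage is given below. The siege-fight update follows the form commonly reported for Eqs. (7)–(8) and is an assumption here; the rotating flight implements Eqs. (9)–(11):

```python
import numpy as np

def exploit_stage1(X, R, F, p2):
    """First exploitation stage of AVO (hedged sketch, 0.5 <= F < 1)."""
    if np.random.rand() > p2:            # siege fight (food competition)
        C = 2 * np.random.rand()
        D = np.abs(C * R - X)            # distance term, as in Eq. (6)
        d = R - X                        # assumed gap to the leader, Eq. (8)
        return D * (F + np.random.rand()) - d   # assumed form of Eq. (7)
    # rotating flight, Eqs. (9)-(11)
    rand5, rand6 = np.random.rand(), np.random.rand()
    S1 = R * (rand5 * X / (2 * np.pi)) * np.cos(X)   # Eq. (10)
    S2 = R * (rand6 * X / (2 * np.pi)) * np.sin(X)   # Eq. (11)
    return R - (S1 + S2)                             # Eq. (9)
```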
Phase 5: the final stage of exploitation
The second and final stage of exploitation occurs when the movements of the vultures gather several vulture species over the food supply. During this stage, the siege method or aggressive conflict (struggling for food) is carried out. The algorithm advances to this stage if the value of F falls below 0.5. To begin this stage, a random number labelled randP3 is generated within [0, 1]. If randP3 ≥ P3, the vultures display the aggregation behavior; otherwise, they adopt the attack behavior.
-
Aggregation behavior When AVO reaches its last stage, the vultures have digested a lot of food. They gather in large numbers and act competitively wherever food is available. In this case, Eq. (12) gives the formula for updating the vultures’ position.
-
Attack behavior Similarly, when AVO is almost finished, the vultures approach the best vulture to seize the leftover food. Mathematically, the expression for updating the position of the vultures is given by Eqs. (15), (16), and (17).
$$X_{t + 1} = R_{t} - \left| d_{t} \right| \times F_{t} \times Levy\left( dim \right) \tag{15}$$
$$Levy\left( dim \right) = 0.01 \times \frac{r_{1} \times \sigma}{\left| r_{2} \right|^{1/\delta}} \tag{16}$$
$$\sigma = \left( \frac{\Gamma \left( 1 + \delta \right) \times \sin \left( \frac{\pi \delta}{2} \right)}{\Gamma \left( \frac{1 + \delta}{2} \right) \times \delta \times 2^{\left( \frac{\delta - 1}{2} \right)}} \right)^{1/\delta} \tag{17}$$
where δ = 1.5 is a constant, r1 and r2 are random values uniformly distributed within [0, 1], and dim represents the problem dimension.
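A hedged Python sketch of the attack behavior of Eqs. (15)–(17); treating \(d_t\) as the gap between the leader and the current vulture is our assumption:

```python
import numpy as np
from math import gamma, pi, sin

def levy(dim, delta=1.5):
    """Levy step of Eqs. (16)-(17) with delta = 1.5."""
    sigma = (gamma(1 + delta) * sin(pi * delta / 2)
             / (gamma((1 + delta) / 2) * delta
                * 2 ** ((delta - 1) / 2))) ** (1 / delta)
    r1 = np.random.rand(dim)   # per the text: r1, r2 uniform in [0, 1]
    r2 = np.random.rand(dim)
    # small eps guards against division by zero when r2 is ~0
    return 0.01 * r1 * sigma / (np.abs(r2) ** (1 / delta) + 1e-12)

def attack(X, R, F):
    """Eq. (15): aggressive siege toward the best vulture R."""
    d = R - X                  # assumption: d_t is the gap to the leader
    return R - np.abs(d) * F * levy(X.size)
```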
Figure 3 depicts the African vulture optimization algorithm’s solution procedure.
The opposition-based learning (OBL)
To enhance the efficiency of MAs in solving complex optimization challenges, Tizhoosh introduced the idea of Opposition-based Learning (OBL) in 200551. The erratic, radical changes brought about by societal revolutions serve as the main inspiration behind the OBL approach. MAs use OBL to address two major challenges in optimization, namely, premature convergence and slow exploration of the search space51. In other words, OBL serves as an effective strategy to enhance the performance of MAs by evaluating candidate solutions together with their opposites52. This mechanism significantly improves search space exploration, increases the probability of locating global optima, and mitigates premature convergence. Within the AVO algorithm, the integration of OBL strengthens the balance between exploration and exploitation, accelerates convergence rates, and reduces the risk of stagnation in local optima when addressing complex, multimodal optimization problems8,57. The incorporation of OBL enables MAs to enhance population diversity while simultaneously sustaining a dynamic equilibrium between exploration (global search) and exploitation (local search refinement) during the optimization process. The improved exploration ability diminishes the chances of getting trapped in local minima and hastens the process of reaching global optima, especially in complex high-dimensional and multimodal environments52. Additionally, OBL promotes a more resilient search mechanism by methodically addressing stagnation, enabling the algorithm to break free from plateaus or misleading areas within the search domain58. The AVO algorithm significantly benefits from the integration of the OBL approach, which enhances its search efficiency. This allows the agents, or vultures, to effectively navigate promising regions while preserving diversity to prevent premature convergence8,58. As a result, this leads to improved convergence rates, superior solution quality, and increased stability when addressing intricate, nonlinear, and multimodal optimization challenges.
Therefore, the integration of OBL into MAs like AVO represents a significant advancement in the design of optimization strategies, offering a practical and theoretically grounded solution to the challenges of exploration–exploitation balance and robustness in complex search spaces. The OBL concept has been used in AVO successfully as the Opposition-based learning African Vulture Optimization Algorithm (OBAVO) to improve the exploitation potential of the original AVO’s search mechanism59. The next subsections explain the opposite number idea.
The opposite number
For any random variable \(V \in \left[ {a, b} \right]\), the opposite number \(\hat{V}\) can be found using the formula in Eq. (18) below.
where a and b represent the search space’s lower and upper boundaries, respectively. The population’s initial position is denoted by V. Equation (19) below generalizes Eq. (18) into n-dimensional space:
where the real vector \(V \in R^{n}\) has the opposite \(\hat{V} \in R^{n}\). Throughout any optimization process, both V and \(\hat{V}\) are evaluated. By comparing the objective function values, the better of the two outcomes is maintained and the worse is removed: V is saved if F(V) is less than \(F\left( {\hat{V}} \right)\); otherwise, \(\hat{V}\) is saved.
V and its opposite \(\hat{V}\) are shown in one, two, and three dimensions in Figs. 4, 5, and 6, respectively.
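As a minimal sketch, assuming Eq. (18) takes the standard form \(\hat{V} = a + b - V\) implied by the where-clause above, the OBL step can be written as:

```python
import numpy as np

def opposite(V, a, b):
    """Eqs. (18)-(19) (assumed standard form): component-wise opposite
    of V within the bounds [a, b]."""
    return a + b - V

def keep_better(V, a, b, f):
    """OBL selection step: evaluate V and its opposite and keep the
    fitter one (minimization assumed)."""
    V_hat = opposite(V, a, b)
    return V if f(V) < f(V_hat) else V_hat
```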
ROBL
To help evade becoming trapped in local optima and to increase diversity, Random Opposition-based Learning (ROBL), a novel approach to OBL, was created by60 in 2019. ROBL has been incorporated into the African Vulture Optimizer (AVO) as the Random Opposition-Based Learning African Vulture Optimizer (ROBAVO). In contrast to Eq. (19), the opposite solution \(\hat{V}_{ij}\), represented by Eq. (20), is randomized for exploration.
where rand is between 0 and 1, and \({a}_{ij}, {b}_{ij}\) represent the lower and upper limits of the ith particle, respectively. Equation (20) can therefore effectively increase the population’s diversity and assist in avoiding local optima. V and its opposite \(\widehat{V}\) are shown in one, two, and three dimensions in Figs. 7, 8, and 9, respectively.
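A one-line sketch of Eq. (20), assuming the random multiplier is applied to the current solution as in the standard ROBL formulation:

```python
import numpy as np

def random_opposite(V, a, b):
    """Eq. (20) (assumed form): opposite point with a fresh rand in [0, 1]
    per component, which injects diversity compared with plain OBL."""
    return a + b - np.random.rand(*np.shape(V)) * V
```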
The proposed EOBAVO
The enhanced opposition-based learning (EOBL)
The Enhanced Opposition-based Learning (EOBL) is a learning strategy that assesses a candidate solution by concurrently evaluating the current solution alongside its enhanced opposite counterpart. This technique can accelerate the convergence of the optimization algorithm by selecting, between each current solution and its enhanced-opposite counterpart, the one better suited for refinement in later iterations.
In this way, the starting point is the better of the two guesses: either the original estimate or its enhanced-opposite approximation, whichever is closer to the solution of the problem. The same process is applied consistently to every other solution in the current population.
Mathematically, the candidate solution of the optimization problem is represented in coordinate form in m-dimensional space as shown in Eq. (21):
where \(N=\frac{{\varvec{a}}+{\varvec{b}}}{2}\), a denotes the lower limit and b the upper limit of the search interval, and \({rand}^{2}\) represents a small random number in [0, 1], which aids in exploiting the search space’s promising areas. Additionally, the technique predetermines when to use EOBL in this algorithm by defining a jumping probability or jumping parameter (Jr). In conjunction with Eq. (15), new functions are proposed that alter the existing rules to avoid poor diversity and to promote convergence while avoiding local optima. Figures 10, 11, and 12, respectively, depict V and \(\widehat{\text{V}}\) in all three dimensional spaces.
Integrating EOBL with AVO
This segment describes the basis of the suggested EOBAVO technique, which seeks to increase the efficiency of the AVO technique. The AVO technique is improved by merging it with the EOBL technique, which improves its ability to swiftly find the ideal value while thoroughly exploring the search space. The AVO approach has several drawbacks, such as the addition of exploitation capability to the exploration phase to speed up convergence, and the random strategy that determines the changeover from the exploration phase to the exploitation phase, which also impacts balance. These choices cause trapping in local optima and affect the region’s global search, which EOBAVO enhances. To avoid these situations, the suggested approach considers both the calculated value and its opposite, so that two potential places are taken into account while exploring the full search area. This alteration raises the likelihood that the best answers will be discovered faster and more effectively. Every mathematical change must be tested, though, as the NFL theorem states that no optimization technique can successfully handle every issue17. The integration of EOBL with AVO has two stages: the population is initialized using EOBL in the first phase, and new vultures are developed using the data at hand in the second phase. These phases are explained in depth in the following subsections.
Initialize population by EOBL
In the first phase, we set the population \(V_{i} = \left\{ {v_{i1} , v_{i2} , \ldots ,v_{ij} , \ldots v_{iD} } \right\}\quad \left( {i = 1, 2, 3 \ldots MP;j = 1,2,3 \ldots D} \right)\) randomly in the search space, where MP represents the population size in dimension D. The EOBL technique is then used to determine the enhanced-opposite value associated with every solution in the population. The original population \({V}_{i}\) and its corresponding opposite population \({\widehat{V}}_{i}\) are combined to form a single group, and from \(\left\{{V}_{i} , {\widehat{V}}_{i}\right\}\) the best MP solutions are selected. These best MP solutions form the new initial population.
Update the new group of vultures
In the second phase, each solution is first updated using the standard AVO rules in agreement with Eqs. (1)–(16), and the fitness value and optimal solution are retained. The EOBL is then applied to create new vultures with a particular probability (Jr): a random number in [0, 1] is generated, and this jumping parameter (Jr) improves our proposed technique’s capacity for exploration. If the random value is less than Jr, we apply EOBL to create new vultures based on the existing population. Next, we combine the present vultures with their opposite counterparts and select the MP fittest vultures. Since EOBL can be thought of as a mutation operator, a probability of Jr = 0.1 brings the exploitation and exploration capacities into equilibrium. A minimal sketch of this update stage is given below.
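A minimal sketch of this stage, assuming minimization; `enhanced_opposite` is a stand-in for the EOBL operator of Eq. (21), and the names are ours:

```python
import numpy as np

def eobl_update(pop, fitness, f, a, b, enhanced_opposite, Jr=0.1):
    """With probability Jr, create enhanced-opposite vultures and keep the
    MP fittest individuals of the merged group (hedged sketch).
    pop: (MP, D) array; fitness: (MP,) array; f: objective (minimized);
    a, b: search bounds; enhanced_opposite: stand-in for Eq. (21)."""
    MP = pop.shape[0]
    if np.random.rand() < Jr:                  # jumping parameter check
        opp = enhanced_opposite(pop, a, b)     # enhanced-opposite vultures
        opp_fit = np.apply_along_axis(f, 1, opp)
        merged = np.vstack([pop, opp])
        merged_fit = np.concatenate([fitness, opp_fit])
        keep = np.argsort(merged_fit)[:MP]     # the MP fittest survive
        pop, fitness = merged[keep], merged_fit[keep]
    return pop, fitness
```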
Time and space complexity
The computational complexity of any algorithm is a crucial component in assessing its performance. The time and space complexity of the proposed method is as follows:
-
i.
The vulture population is initialized in O(n × D) time, where n is the population size and D is the problem dimension. Calculating each vulture’s fitness value requires O(n).
-
ii.
Estimating each vulture’s fitness value over the run takes O(Max_iter × n) time, where Max_iter is the maximum number of iterations.
Table 2 shows the average runtime per run over 30 independent runs for a representative set of benchmark functions (F3, F8 from CEC2005 and CEC09 from CEC2022) with a maximum of 15,000 function evaluations.
From the results, EOBAVO uses approximately 8–12% extra runtime compared to standard AVO and about 5–8% compared to OBAVO/ROBAVO. This overhead stems from the extra evaluation of opposition-based solutions during initialization and occasional EOBL-based updating during the search iterations. However, the slight increase in computational cost is justified by the significant performance improvements in solution quality and convergence speed. EOBAVO consequently maintains a reasonable trade-off between computational efficiency and optimization effectiveness.
The overall time complexity is given as O(Max_iter × n × D). Across the iterations, offspring generation takes up the most space; therefore, O(n × D) is the space complexity of EOBAVO. Figure 13 shows the graphical representation of the average runtime per run for AVO, OBAVO, ROBAVO, and EOBAVO.
The proposed EOBAVO technique’s pseudo-code
The proposed EOBAVO pseudo-code is given in Algorithm 1 below. We set the vulture population \({V}_{0}\) in the search space to a random initial value.
Numerical experiments and results analysis
The proposed EOBAVO method is evaluated against AVO16 and other metaheuristics such as PSO4, GSA28, SSA12, TSA61, MVO35, and WVO62, as well as newer algorithms such as MGO30, HGS29, and HHO63, and top-performing algorithms such as INFO64. Furthermore, the application of the EOBAVO technique to real-life engineering problems demonstrates how the traditional AVO has been improved.
Parameter/benchmark functions settings
To assess the effectiveness of the proposed method, the 23 IEEE CEC2005 test functions65, comprising seven uni-modal, six multi-modal, and ten fixed-dimensional multi-modal functions, were chosen. The IEEE CEC2005 test functions are denoted by “F” followed by their respective numbers: F1, F2, F3, …, F23. The unimodal test functions F1–F7 have a single global optimum solution and assess an algorithm’s potential for exploitation, whereas the multi-modal test functions F8–F13 have several local optima and are thought to assess an algorithm’s exploration. The remaining fixed-dimensional multi-modal functions, F14–F23, are thought to probe the exploration and exploitation of metaheuristic algorithms concurrently in global and local searches because they have fewer dimensions and more local extrema than the multi-modal functions. In addition, the 12 hard test functions of the IEEE CEC202266 benchmark are applied to determine the usefulness and capabilities of the suggested technique. These functions have varying and expandable dimensions. IEEE CEC2022 test functions are denoted by “CEC” followed by their respective numbers: CEC01, CEC02, up to CEC12. While the ranges of all the other functions are [− 100, 100], the ranges of the functions CEC01 to CEC03 differ. The IEEE CEC202267 benchmark functions are far more complicated than the IEEE CEC200565 test functions employed in this investigation. The optimization outcomes of EOBAVO were compared with AVO68 and other prominent MAs, including SCA33, PSO4, SSA12, and WOA9. The proposed technique is also tested on a variety of real-world engineering design optimization problems. Additionally, statistical tests, such as the Wilcoxon rank-sum test69 and the t-test70, are used to quantify the algorithms’ statistical significance.
Each algorithm used thirty search agents to explore the search space, and the average outcomes are used for comparison. Each function is run 30 times, constrained to a maximum of 500 iterations and a total of 15,000 function evaluations (FEs). The experiments were carried out in MATLAB R2025a on Windows 10 (Intel Core i7-1165G7 CPU @ 2.80 GHz, 8 GB RAM). Parameter configurations for the aforementioned algorithms are listed in Table 3 below:
Performance metrics
-
i.
Average (Avg)
The average is the mean of an algorithm’s best results over several runs and can be determined as shown in Eq. (22):
$$avg = \frac{1}{R}\sum\limits_{i = 1}^{R} Best_{i} \tag{22}$$
where \(Best_{i}\) denotes the best result reached in the ith run, and the number of independent runs is denoted by R.
-
ii.
Standard deviation (std)
An algorithm’s repeatability and capacity to yield the same ideal outcome after multiple runs are evaluated using the standard deviation given in Eq. (23):
$$std = \sqrt{\frac{1}{R - 1}\sum\limits_{i = 1}^{R} \left( Best_{i} - avg \right)^{2}} \tag{23}$$
-
iii.
The t-test
Equation (24) shows the t-test used to assess whether a proposed method differs significantly from existing MA.
$$t = \frac{avg_{1} - avg_{2}}{\sqrt{\frac{std_{1}^{2} + std_{2}^{2}}{R}}} \tag{24}$$
where \({avg}_{1}\) and \({avg}_{2}\) are the averages and \({std_{1}}^{2}\) and \({std_{2}}^{2}\) are the variances of the two algorithms over R runs.
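As a minimal illustration (assuming minimization and an equal number of runs R per algorithm), Eqs. (22)–(24) can be computed as follows; the function name is ours:

```python
import numpy as np

def compare_runs(best_runs_1, best_runs_2):
    """Compute Eqs. (22)-(24) from the per-run best results of two
    algorithms over the same number of independent runs R."""
    R = len(best_runs_1)
    avg1, avg2 = np.mean(best_runs_1), np.mean(best_runs_2)   # Eq. (22)
    std1 = np.std(best_runs_1, ddof=1)                        # Eq. (23)
    std2 = np.std(best_runs_2, ddof=1)
    t = (avg1 - avg2) / np.sqrt((std1**2 + std2**2) / R)      # Eq. (24)
    return avg1, std1, t
```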
Comparison of EOBAVO with AVO, OBAVO and ROBAVO
To identify the top solution for the IEEE CEC200565 and IEEE CEC202267 benchmark functions, the performance of the suggested technique, EOBAVO, and the traditional AVO, OBAVO, and ROBAVO algorithms are evaluated in this section. Tables 4 and 5 present the comparison results, respectively, which show clearly that for most functions, the recommended EOBAVO approach performs better than AVO, OBAVO, and ROBAVO.
Table 4 shows that the EOBAVO produced a better result than the rest, i.e., AVO, OBAVO, and ROBAVO, for functions F1–F4, F9–F15, F17–F19, and F21–F23. Functions F1, F3, F9, and F11 show successful exact global optimal solutions. The standard deviation measure for the proposed EOBAVO technique is higher than that of the classical AVO, OBAVO, and ROBAVO.
The average and standard deviation in Table 5 indicate that the EOBAVO technique performs better than the other three algorithms, except for the functions CEC01–CEC03, CEC09, and CEC10.
Evaluation of EOBAVO with other innovative and successful algorithms
This section compares the effectiveness of the EOBAVO technique with four traditional, well-known metaheuristic techniques: SCA, PSO, SSA, and WOA. Table 3 lists these algorithms’ parameter configurations. Table 6 shows that, in terms of average fitness values, the EOBAVO approach fared better than the other algorithms on the IEEE CEC200565 functions.
Nonetheless, the EOBAVO approach produced better, exact global optimal solutions for the functions F16, F19, and F20. Except for F6, F8, F13, F14, and F16–F19, the suggested algorithm fared better than most on standard deviation. Table 7 shows that, compared to alternative methods, the suggested EOBAVO methodology is superior at solving the IEEE CEC2022 test functions. For example, Table 7 shows that the suggested technique outperformed the others for the functions F1, F3, F5, F8, and F10 in terms of average fitness value. Additionally, when the standard deviations of the methods are compared, EOBAVO shows better outcomes on functions F1–F2, F5, and F7–F8.
Analysis of sensitivity
Sensitivity analysis for MAs examines how the values of several independent variables impact a specific outcome; it is also a crucial part of parameter tuning. The proposed EOBAVO technique is used to analyze the sensitivity of Jr (the jumping parameter) and rand2. Below is a discussion of five different scenarios.
Analysis of Jr
The EOBAVO technique is run with different Jr values, while the other parameters are kept constant, to analyze the importance of this parameter. Jr takes the following values: 0.01, 0.05, 0.07, 0.09, and 0.1. These closely spaced values permit a fine-grained sensitivity assessment and prevent undue disturbances while balancing exploration and exploitation (controlled randomness). The functions F3, F8, and F15 form a subgroup of test functions selected from each category. With a maximum of 15,000 FEs, each function is executed 30 times. Table 8 and Fig. 14 show the outcomes recorded for each case. The results show that the EOBAVO approach gives the best results when Jr is 0.1.
Analysis of rand 2
The EOBAVO technique is run under several different scenarios, namely rand3, rand4, rand5, and rand6, with all other factors kept constant, to analyze the relevance of the rand2 parameter. F4, F11, and F20 form a subset of test functions selected from each category. With a maximum of 15,000 FEs, each function is executed 30 times. The statistical results for each case are shown in Table 9 and Fig. 15. The experimental results make clear that rand2 solves the problems better than the alternatives.
Statistical analysis
Here, the effectiveness of the suggested EOBAVO technique is estimated using the Wilcoxon rank-sum test71 and the t-test69. In calculating the t-value for a function, the two algorithms are considered simultaneously. Tables 10 and 11 present the t-test results at α = 0.05 for the 12 test functions from IEEE CEC202266 and the 23 test functions from IEEE CEC200565. EOBAVO outperforms the other methods where the relevant t-value is bold-faced. Additionally, the last row of Tables 10 and 11 shows EOBAVO’s win, tie, and loss counts, labelled w = t = l. The t-values make it clear that, in many situations, EOBAVO’s performance is greatly enhanced.
A non-parametric, pairwise Wilcoxon rank-sum test can be used to identify significant differences in the behavior of two algorithms. Tables 12 and 13 present the p values at the α = 0.05 significance level. For the difference between two methods to be considered statistically significant, the p value must be less than 0.05. The p values and H results are displayed in Tables 12 and 13, where the symbols ‘‘−’’ and ‘‘+’’ stand for rejection and acceptance, respectively.
It is clear from the aforementioned tables that the proposed EOBAVO outperforms the other MAs because most of the p values are less than 0.05. It must also be noted that NA represents statistical equivalence between the algorithms compared.
Convergence analysis
The convergence curve illustrates the relationship between the number of iterations and the value of the fitness function. In the initial stages of optimization, the search agents move widely across the designated search area. The primary objective of this convergence analysis is to examine the optimization behavior of the proposed EOBAVO technique graphically. Figures 16 and 17 illustrate the convergence graphs of all the compared algorithms on the IEEE CEC200565 and IEEE CEC202266 test functions, respectively. Figure 16 shows that EOBAVO converges more quickly for all functions except F1 and F14, and the suggested approach performs best for the multimodal functions F9, F10, and F11. The proposed method exhibits comparable convergence for functions F14, F15, F17, and F18, which fall under the fixed-dimensional multi-modal category. Furthermore, compared to the AVO, OBAVO, and ROBAVO methods, the EOBAVO approach better balances convergence and divergence.
Additionally, it is clear from Fig. 17 that the EOBAVO technique outperforms all other algorithms on CEC01, CEC02, CEC03, CEC05, CEC08, and CEC10, and it reaches convergence more successfully than AVO, OBAVO, and ROBAVO. Due to these improvements, the EOBAVO technique outperforms all the other compared algorithms in both convergence and search rate.
EOBAVO’s comparison with the newest and most effective algorithms
This section compares the effectiveness of the proposed EOBAVO technique with four of the most recent and effective optimization algorithms: the Mountain Gazelle Optimization (MGO)30, Harris Hawks Optimization (HHO)63, the Hunger Games Search (HGS)29, and the weighted mean of vectors algorithm (INFO)64. The IEEE CEC2005 and IEEE CEC2022 benchmark functions are used to evaluate the suggested method together with these algorithms.
Table 14 presents the statistical results on the IEEE CEC2005 test functions. According to these results, the suggested EOBAVO technique performed better than all the compared algorithms and produced optimal solutions. For functions F1, F4, F6, and F15, EOBAVO produces better results; however, for functions F10, F20, and F23, the average fitness values and standard deviations are roughly the same.
Table 15 shows the optimization results and significance tests for the IEEE CEC2022 benchmark functions, comparing EOBAVO with the latest and most effective methodologies. As Table 15 demonstrates, the suggested method outperformed the others for functions F1, F3, F8, and F10, and thus outperforms the recent algorithms MGO, HHO, HGS, and INFO. The convergence curves of these algorithms are shown in Fig. 18; their analysis indicates that the EOBAVO method demonstrates superior performance across the majority of functions. Almost all of the p values in Table 15 fall below the 5% significance level, indicating that the recommended method performs better than the other algorithms.
Therefore, the statistical findings of the IEEE CEC2005 and IEEE CEC2022 experiments show that EOBAVO outperforms the most recent and effective techniques.
Figure 18 shows the convergence graphs of the EOBAVO technique in relation to the most recent and effective algorithms. The statistical results demonstrate that the proposed EOBAVO technique performed similarly to MGO and HGS but better than HHO and INFO.
The convergence curves of these algorithms are shown in Fig. 19, which clearly shows that the EOBAVO method works better for most functions.
The EOBAVO method performs better than the other methods, as shown by the majority of the p values in Tables 16 and 17 falling below the 5% significance level. Therefore, the statistical findings of the IEEE CEC2005 and IEEE CEC2022 examinations show that EOBAVO outperforms the most recent and effective techniques.
Exploration and exploitation analysis
To assess the exploration and exploitation capabilities of EOBAVO, the average Euclidean distance from each individual to the best-known solution was monitored throughout the iterations. Higher average distances are indicative of exploration, while lower distances indicate exploitation46,47. Examining the balance between the exploration and exploitation phases of EOBAVO can provide significant insights for tackling practical optimization issues. Figure 20 shows a graphical representation of the exploration and exploitation trajectories in the search space while addressing the CEC2005 and CEC2022 functions. It is essential to highlight that functions F2, F7, and F8 represent the functions within CEC2022, while F15, F17, and F18 denote the benchmarks within CEC2005. The graphs show that during the early stages of the optimization, EOBAVO maintains a consistent pattern with high exploration, which rapidly decreases within the first few hundred iterations. Simultaneously, exploitation increases sharply and dominates the search process for the remainder of the optimization run. This effective convergence dynamic shows that EOBAVO is prompt in recognizing promising regions. In contrast, F8 exhibits unstable exploration–exploitation dynamics, with persistent fluctuations and multiple crossovers throughout the run. This indicates a complex, multimodal landscape where the optimizer struggles to converge, suggesting the need for enhanced diversity or adaptive control mechanisms. F15 also shows unstable fluctuations between exploration and exploitation throughout the iterations, indicating inconsistent search behavior. This suggests that the algorithm struggles to maintain focus, likely due to a highly multimodal or deceptive landscape. Exploration remains dominant for most of the iterations in F17, while exploitation stays low and erratic. This prolonged exploration phase reflects difficulty in converging and suggests potential improvements in the algorithm’s exploitation strategy. The algorithm quickly transitions from exploration to exploitation in F18, within the first 1000 iterations. This smooth shift indicates a well-balanced search process and efficient convergence toward optimal solutions. We can conclude that EOBAVO effectively transitions from exploration to exploitation and converges well on most functions, but struggles to maintain stability and achieve convergence on complex, multimodal landscapes such as F8, F15, and F17. These challenges highlight the need to enhance diversity management and introduce adaptive control mechanisms for improved performance. The distance metric used here can be computed as sketched below.
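A short Python sketch of the monitored quantity (the names are ours):

```python
import numpy as np

def mean_distance_to_best(pop, best):
    """Average Euclidean distance from every vulture to the best-known
    solution at the current iteration; high values read as exploration,
    low values as exploitation."""
    return float(np.mean(np.linalg.norm(pop - best, axis=1)))
```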
Diversity analysis
In MA optimization, diversity analysis evaluates the distribution and spread of candidate solutions within the search space. It plays a critical role in understanding an algorithm’s exploration-exploitation balance. High diversity typically supports better global exploration, helping to avoid premature convergence to local optima, whereas low diversity intensifies the search near promising regions to aid convergence47. The diversity trajectory has nonlinear features: it represents the average distance travelled by each individual during the iterative process72. Lower population diversity indicates greater aggregation within the population throughout the optimization process, which raises the possibility of convergence towards a sub-optimal solution within the problem space. Too much diversity can hinder convergence, while too little may lead to stagnation. The population radius is the maximum distance between any two individuals within the population, represented mathematically by Eq. (25).
where \(\left| S \right|\) and Dim represent the population size and the problem dimension, respectively, and \(x_{it}\) denotes the position of the ith individual in the given dimension. Finally, the population diversity derived from Eq. (25) is given by Eq. (26) as follows:
where \({\overline{x} }_{t}\) is the center of the population. The diversity analysis of selected benchmark functions from CEC2005 and CEC2022 is represented in Fig. 21. We selected F1, F15, and F19 to represent CEC2005, while CEC2022 is represented by F3, F7, and F12. From the graphs, EOBAVO consistently maintained lower diversity levels on the CEC2005 functions, particularly in F15 and F19. This means that, with reduced diversity, EOBAVO exploited the search space more aggressively, enabling faster convergence but at the potential risk of reduced exploration. In F1, both algorithms showed a similar decline in diversity, though AVO retained higher diversity for a longer period, suggesting greater exploration capacity in simpler unimodal landscapes. In contrast, the diversity gap between AVO and EOBAVO widened in F15 and F19, highlighting EOBAVO’s stronger exploitation tendencies on more complex multimodal functions. On the CEC2022 functions, EOBAVO consistently exhibited lower diversity than AVO, with the largest difference in F7. This indicates that EOBAVO’s enhanced OBL intensifies exploitation and accelerates convergence, especially in complex landscapes. However, the reduced diversity may limit exploration, while AVO maintains broader search capability at the cost of slower convergence. Overall, EOBAVO enhanced convergence speed by reducing population diversity and promoting exploitation across benchmarks.
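As a hedged sketch in the spirit of Eqs. (25)–(26) (the exact normalization, e.g., by the population radius, follows the equations above), the diversity at iteration t can be computed as:

```python
import numpy as np

def population_diversity(pop):
    """Mean Euclidean distance of the individuals from the population
    center x_bar_t (hedged sketch of the Eq. (26) diversity measure)."""
    center = pop.mean(axis=0)      # x_bar_t, the center of the population
    return float(np.mean(np.linalg.norm(pop - center, axis=1)))
```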
Performance of EOBAVO on practical engineering problems
In this section, the proposed EOBAVO technique is examined for its performance and effectiveness by applying it to various real-world engineering design issues73. These issues encompass the pressure vessel, the speed reducer, the welded beam, the tension/compression spring, the gear train, and the three-bar truss problems.
The EOBAVO method is employed to address each issue, and the outcomes are compared with those obtained from other advanced algorithms such as AVO, OBAVO, ROBAVO, PSO, WOA, SCA, and SSA.
Pressure vessel problems
The main aim of this engineering challenge is to reduce the total cost, which includes the material, welding, and forming, of a cylindrical vessel. Figure 22 shows the vessel’s schematic representation. The shell thickness (Ts), the head thickness (Th), the inner radius (R), and the length of the cylindrical section excluding the head (L) are the four decision parameters in this pressure vessel problem.
The design can be stated mathematically as
Subject to
The following are the decision variables’ ranges:
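Since the symbolic formulation and variable ranges above appear as display equations in the original, the following Python sketch uses the formulation of this benchmark that is standard in the literature; the coefficients are an assumption here, not extracted from this paper’s equations:

```python
import numpy as np

def pressure_vessel_cost(x):
    """Hedged sketch of the standard pressure vessel benchmark.
    x = [Ts, Th, R, L]; constraint violations are added as a quadratic
    penalty so the objective can be fed directly to EOBAVO."""
    Ts, Th, R, L = x
    cost = (0.6224 * Ts * R * L + 1.7781 * Th * R**2
            + 3.1661 * Ts**2 * L + 19.84 * Ts**2 * R)
    g = [-Ts + 0.0193 * R,                    # shell thickness constraint
         -Th + 0.00954 * R,                   # head thickness constraint
         -np.pi * R**2 * L - (4.0 / 3.0) * np.pi * R**3 + 1_296_000,  # volume
         L - 240.0]                           # length limit
    penalty = sum(max(0.0, gi) ** 2 for gi in g)
    return cost + 1e6 * penalty
```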
The records of the aforementioned MAs are compared with the results of the EOBAVO approach. The best results from each algorithm are compared in Table 18, showing that the EOBAVO method minimizes the costs of the cylindrical pressure vessel with superior performance.
Problems involving welded beam design
The purpose of this problem is to create a beam that is economically viable in terms of production costs. Figure 23 shows the features and structure of the welded beam. The beam’s shear stress (\(\tau\)), bending stress (\(\sigma\)), buckling load (Pb), and end deflection (\(\delta\)) are the optimization constraints in this issue. Additionally, there are four consecutive decision variables: weld thickness (h), bar height (t), bar thickness (b), and clamped bar length (l). Figure 23 and Table 19 show the welded beam and the outcome of the welded beam design in EOBAVO, respectively.
A Welded beam73.
This problem can be represented numerically as follows:
The following are the decision variables’ ranges:
where
Tension/compression spring design problems
This engineering design problem’s primary objective is to reduce the weight of the spring, subject to three nonlinear inequality constraints and one linear constraint. The wire diameter (d), the count of active coils (K), and the mean coil diameter (D) are the three decision-making criteria in this engineering design. Figure 24 shows the spring’s schematic representation. The design’s numerical expression is shown as follows.
The results of the aforementioned MAs that have been successful in resolving this issue are compared to the results of the recently introduced EOBAVO approach. Table 20 presents the comparative findings, where, in terms of effectiveness, EOBAVO performs better than the other algorithms.
Gear train design problems
The key focus of the gear train design, a mechanical problem, is the reduction of the gear ratio. Figure 25 shows the schematic representation of a gear train. The four successive decision variables in this engineering design are: \({n}_{A}\left({g}_{1}\right)\), \({n}_{B}\left({g}_{2}\right)\), \({n}_{C}\left({g}_{3}\right)\), and \({n}_{D}\left({g}_{4}\right)\).
Gear train design74.
This design’s mathematical expression has been provided in Eq. (30).
Let \(\vec{x} = \left[ {x_{1} , x_{2} , x_{3} , x_{4} } \right]\) = \(n_{A} n_{B} n_{C} n_{D}\)
The following are the design variables’ ranges:
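As a hedged illustration (the design’s Eq. (30) appears as a display equation in the original), the gear train objective in its standard benchmark form assumes \(x = [n_A, n_B, n_C, n_D]\) with integer tooth counts, each commonly bounded in [12, 60]:

```python
def gear_train_error(x):
    """Hedged sketch of the standard gear train objective: squared
    deviation of the realized gear ratio from the target 1/6.931."""
    nA, nB, nC, nD = x
    return (1.0 / 6.931 - (nB * nC) / (nA * nD)) ** 2
```

A rounded (integer-valued) version of this objective can then be fed directly to EOBAVO.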
The EOBAVO technique’s best results are compared to those of other MAs, and the results that are best from each algorithm are compared in Table 21, showing that the EOBAVO method fared better than the others.
Speed reducer design problems
Designing a gear box speed reducer with the least weight is the primary objective of this problem. The speed reducer weight is subject to eleven inequality constraints; seven of these constraints are nonlinear, while the remaining four are linear. In this engineering problem, there are seven variables to consider: the face width b(s1), the module of teeth m(s2), the teeth count in the pinion z(s3), the first shaft’s length between bearings l1(s4), the second shaft’s length between bearings l2(s5), the first shaft’s diameter d1(s6), and the second shaft’s diameter d2(s7). Figure 26 shows a graphic illustration of this design.
Speed reducer design problems73.
Mathematically, we can represent the design in Eq. (31):
\({\text{Suppose}}\;\vec{s} = \left[ {s_{1} ,s_{2} ,s_{3} ,s_{4} ,s_{5} ,s_{6} ,s_{7} } \right] = \left[ {bmxl_{1} l_{2} d_{1} d_{2} } \right]\)
Subject to
The decision variables’ ranges are as follows:
The suggested EOBAVO approach outperformed the selected MAs in determining the ideal cost for the speed reducer design problem, according to the simulation results shown in Table 22.
The Three-bar truss design problems
The objective of this engineering design is to ascertain the ideal values of two variables, A1 and A2, that minimize the truss’s weight, subject to three optimization constraints, namely deflection, stress, and buckling (a third value, A3 = A1, is observed). Figure 27 shows the three-bar truss design’s structural outline.
Mathematically, the design’s formula is shown in Eq. (32) as follows:
Consider \(\vec{z} = \left[ {z_{1} z_{2} } \right] = \left[ {A_{1} A_{2} } \right]\)
Subject to
The decision variables’ ranges are as follows: \(0 \le z_{1} , z_{2} \le 1\).
According to the results shown in Table 23, the recommended EOBAVO technique performs better than other MAs in determining the least cost design value.
Statistical analysis for engineering problems
Statistical analysis is crucial in optimization engineering for objectively evaluating algorithm performance, comparing solutions, and ensuring that results are reliable and significant across multiple runs. It also supports parameter tuning, convergence assessment, and robustness evaluation, enhancing the validity and applicability of optimization solutions75. Table 24 highlights the comparative performance of EOBAVO, its base version (AVO), and other state-of-the-art MAs across six standard engineering design problems, namely Pressure Vessel Design (PVD), Welded Beam Design (WBD), Tension/Compression Spring Design (TCSD), Gear Train Design (GTD), Speed Reducer Design (SRD), and Three-Bar Truss Design (TBTD). In general, EOBAVO achieved the best average solution in four of the six design problems and tied for the best in one, demonstrating its superior global search capability. EOBAVO consistently yields the lowest standard deviation in five of the six problems, reflecting remarkable robustness and stability across independent runs. The only exception is the TCSD problem, where the Whale Optimization Algorithm (WOA) shows superior consistency despite matching EOBAVO’s average solution. Overall, these results affirm the strength of the opposition-based learning enhancements in improving the exploration–exploitation balance of AVO, enabling it to outperform both its baseline and competing metaheuristic algorithms in complex constrained optimization contexts.
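As an illustration of how such a comparison can be carried out, the snippet below applies the Wilcoxon rank-sum test to two hypothetical sets of best-cost values; the numbers are invented for demonstration and are not the paper’s data.

```python
import numpy as np
from scipy.stats import ranksums

rng = np.random.default_rng(1)
# Hypothetical best-cost values from 30 independent runs of two algorithms
# on the same design problem (illustrative numbers only, not the paper's data):
eobavo_runs = rng.normal(loc=6059.7, scale=0.5, size=30)
rival_runs = rng.normal(loc=6090.5, scale=25.0, size=30)

stat, p_value = ranksums(eobavo_runs, rival_runs)
print(f"rank-sum statistic = {stat:.3f}, p-value = {p_value:.3e}")
# p < 0.05 indicates a statistically significant difference at the 5% level.
```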
Figure 28 presents the convergence curves of six algorithms (PSO, SSA, SCA, WOA, AVO, and EOBAVO) across six engineering design problems (PVD, WBD, TCSD, GTD, SRD, and TBTD). EOBAVO consistently achieves superior fitness values across all problems, converging more rapidly and attaining lower objective function values than the competing algorithms. Notably, in PVD and TBTD, EOBAVO demonstrates faster convergence toward the global optimum while maintaining stability, outperforming AVO and the other baseline algorithms. In contrast, SSA exhibits poor convergence across most problems and yields suboptimal fitness values, particularly evident in PVD and TCSD. PSO and SCA display moderate convergence but plateau prematurely without significant improvement over iterations. WOA shows competitive performance in some cases but fails to match EOBAVO’s convergence precision. Overall, the results validate that integrating opposition-based learning into AVO (i.e., EOBAVO) enhances both convergence speed and solution accuracy, confirming its robustness and effectiveness in solving these benchmark optimization problems.
Utilizing the EOBAVO algorithm to address the issue of wind turbine installation
Wind energy is the electrical energy generated when windmills or turbines convert wind into mechanical energy. It is one of the most significant renewable energy sources because of its widespread availability and abundance, and its effective utilization has the potential to produce a remarkable amount of electricity. Wind energy has recently become more popular owing to the rising demand for electricity. Strategically positioning the turbines in a wind farm can increase total electricity production: the wake effect can cause wind farms to produce less electricity, and haphazard placement of the turbines can prevent them from reaching full capacity. Effective management of the wake effect, combined with strategic positioning of the turbines, therefore increases the energy yield of wind farms, and optimizing the output of these facilities is vital for lowering energy expenditures. In this section, the recommended EOBAVO technique is employed to determine the most advantageous sites for wind turbines, ensuring that the highest power output is achieved at the lowest cost per kilowatt.
The research encompasses two separate case studies: the first examines Constant Wind Speed (CWS) in conjunction with Variable Wind Direction (VWD), while the second investigates Variable Wind Speed (VWS) paired with Variable Wind Direction (VWD). The statistical results are compared against LSHADE32, GWO8, GA76, GA77, RSA78, SBO79, and BPSO TVAC80.
In wind turbine fields, nearby turbines affect the wind speed. A wake forms downstream of a wind turbine when it extracts power from the wind; if a turbine operates within the wake of a nearby turbine, its power output decreases compared with a turbine running in free wind flow. A model developed by N. O. Jensen, known as the wake effect model, has been used in the majority of wind turbine layout optimization studies44,81. This model is shown in Fig. 29.
At a downstream distance of x from the wind turbine, the wind speed v in the wake is represented using Eqs. (33)–(36) as follows:
where μ is the entrainment constant, v0 is the free-stream velocity, and a is the axial induction factor. The radius of the downstream rotor is represented by rdr, the wind turbine’s thrust coefficient by Ct, the rotor radius by rr, the hub height by h, and the surface roughness by h0.
The downstream wake radius (rdw) at any distance x is estimated using Eq. (37).
where the wind turbine radius is indicated by R.
When one turbine lies in the wakes of multiple upstream turbines, the kinetic energy of the mixed wake is obtained from the sum of the individual kinetic-energy deficits. Thus, the downstream velocity at the N-th turbine is calculated using Eq. (38).
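Because Eqs. (33)–(38) appear as images in the original, the sketch below implements the standard Jensen relations that the text describes; the exact parameter values used in this study are assumptions here.

```python
import numpy as np

def axial_induction(ct):
    """Axial induction factor a from the thrust coefficient Ct (Ct = 4a(1 - a))."""
    return 0.5 * (1.0 - np.sqrt(1.0 - ct))

def wake_velocity(v0, x, rr, ct, h, h0):
    """Wind speed inside a single wake at downstream distance x (Jensen model)."""
    a = axial_induction(ct)
    mu = 0.5 / np.log(h / h0)                        # entrainment constant
    rdr = rr * np.sqrt((1.0 - a) / (1.0 - 2.0 * a))  # downstream rotor radius
    return v0 * (1.0 - 2.0 * a / (1.0 + mu * x / rdr) ** 2)

def wake_radius(x, R, mu):
    """Downstream wake radius at distance x (linear wake expansion)."""
    return mu * x + R

def mixed_wake_velocity(v0, single_wake_speeds):
    """Combine several overlapping wakes via the sum of kinetic-energy deficits."""
    deficits = 1.0 - np.asarray(single_wake_speeds) / v0
    return v0 * (1.0 - np.sqrt(np.sum(deficits ** 2)))
```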
Every wind turbine produces power, and the total is determined by Eq. (39):
The cost model created by76 is based on the number of wind turbines in the farm. In this model, the cost of installing a single wind turbine is normalized to 1; installing many turbines can reduce costs by up to one-third, with the remaining two-thirds incurred per turbine. The wind farm cost is computed using the expression shown in Eq. (40).
The objective of the cost function is to reduce the investment cost associated with each unit of energy generated to the minimum, while simultaneously enhancing the power output of a wind farm. The objective function is then shown in Eq. (41) as follows:
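Eqs. (40) and (41) are likewise image-based; the sketch below encodes the cost model of76 and the cost-per-power objective in their commonly cited forms. The per-turbine power curve P = 0.3u³ kW is the one typically paired with this model and is an assumption here.

```python
import numpy as np

def farm_cost(n_turbines):
    """Normalized cost model: one-third of the per-turbine cost
    can be saved as the number of turbines grows."""
    n = n_turbines
    return n * (2.0 / 3.0 + (1.0 / 3.0) * np.exp(-0.00174 * n ** 2))

def total_power(effective_speeds):
    """Total farm output with the commonly used P = 0.3 * u^3 (kW) per turbine."""
    return np.sum(0.3 * np.asarray(effective_speeds) ** 3)

def objective(effective_speeds):
    """Cost per unit of energy produced: minimize cost / total power."""
    return farm_cost(len(effective_speeds)) / total_power(effective_speeds)
```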
To find the optimal positions of the wind turbines in the field, the 2000 × 2000 m2 farm area is partitioned into a 10 × 10 square grid. This arrangement permits the construction of up to 100 turbines in the wind farm: each grid cell either holds a wind turbine (1) or is empty (0). As shown in Fig. 30, the width of a cell is five rotor diameters, i.e., 200 m in both the horizontal and vertical directions for a rotor diameter of 40 m.
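A small illustrative decoder for this binary-grid encoding might look like the following; cell-centre placement is an assumption consistent with Fig. 30.

```python
import numpy as np

CELL = 200.0  # cell width in metres (five rotor diameters of 40 m)

def decode_layout(bits):
    """Map a 100-bit vector onto turbine (x, y) coordinates at the cell
    centres of the 10 x 10 grid; bits[i] == 1 places a turbine in cell i."""
    grid = np.asarray(bits, dtype=int).reshape(10, 10)
    rows, cols = np.nonzero(grid)
    return np.column_stack(((cols + 0.5) * CELL, (rows + 0.5) * CELL))

# Example: turbines in the first and last cells.
layout = np.zeros(100, dtype=int)
layout[[0, 99]] = 1
print(decode_layout(layout))  # [[ 100.  100.] [1900. 1900.]]
```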
Table 25 shows the characteristics of the wind turbines. The findings are evaluated against other cutting-edge algorithms, namely LSHADE32, GWO8, GA76, GA77, RSA78, SBO79, and BPSO TVAC80. The suggested EOBAVO approach is applied to both cases: CWS with VWD, and VWS with VWD. It should be mentioned that each algorithm uses a maximum of 100 iterations and a population size of 200. Additionally, the lower and upper bounds are 0 and 1, and the problem dimension is 100.
Two case studies are examined below to assess the efficacy of the recommended strategy for the optimal placement of wind turbines:
VWD combined with CWS
In this case, the wind blows around the wind farm from directions spaced at equal intervals of 10° between 0° and 360°, at a fixed speed of 12 m/s. The statistical results for the different MAs and the suggested EOBAVO approach are given in Table 26. The table makes it clear that the EOBAVO method performs better than the other MAs. Additionally, the optimal wind turbine placement in the wind farm, as established by the EOBAVO approach, is shown in Fig. 31. The EOBAVO technique uses 40 turbines to produce 18,120 kW at an efficiency of 88.31% and a cost of 0.001734/kW.
VWD combined with VWS
In this instance, the wind flows around the wind farm at three different speeds of 8, 12, and 17 m/s, with directions again spaced at 10° intervals from 0° to 360°. The statistical results of the various MAs and the proposed EOBAVO approach are shown in Table 27. The table makes it clear that the EOBAVO approach fared better than the other metaheuristic algorithms. Additionally, the optimal wind turbine placement in the wind farm, as established by the EOBAVO approach, is shown in Fig. 32. The EOBAVO uses 39 turbines to produce 30,830 kW at an efficiency of 84.49% and a cost of 0.000807/kW.
The results clearly show that the EOBAVO technique performed better than the other algorithms in positioning the wind turbines in both case studies. Overall, the suggested EOBAVO has outperformed the other selected algorithms on a range of practical engineering design and application problems.
Conclusion
OBL, which has gained increased attention recently, is used to improve the efficacy of metaheuristic algorithms. To achieve varying densities across all vulture species and to foster population diversity, this research refines the recently developed nature-inspired African Vulture Optimizer (AVO) by incorporating an opposition-based learning mechanism, resulting in the Opposition-Based African Vulture Optimizer Algorithm (OBAVO). Nevertheless, this approach elevates the computational demands of discovering superior solutions and is susceptible to premature convergence. To resolve these issues while steering clear of local optima and facilitating faster convergence, a novel adaptation of opposition-based learning, called Enhanced Opposition-based Learning (EOBL), has been proposed and incorporated into a refined version of the algorithm, the Enhanced Opposition-based African Vulture Optimizer Algorithm (EOBAVO). The investigation’s findings demonstrate that, for a significant number of the functions studied, both the accuracy of the solutions and the convergence rates were notably superior when assessed against other optimization techniques. Evaluations on engineering problems show that the newly created EOBAVO performs better than its contemporaries, and its disadvantage relative to other metaheuristics in solving multi-modal functions is negligible. While the proposed EOBAVO algorithm demonstrates strong performance across a wide range of optimization tasks, future work will focus on addressing its limitations on highly ill-conditioned and rotated problems (observed in CEC2022 functions CEC01–CEC03). Incorporating hybrid local search techniques and developing rotation-invariant opposition strategies are promising directions to further enhance EOBAVO’s adaptability and performance in complex, non-separable optimization landscapes. Future initiatives designed to minimize the frequency of function evaluations are expected to find utility in several domains, such as image processing, feature selection, and data mining. We also aim to extend the proposed EOBAVO framework to handle discrete and mixed-integer optimization problems, which are prevalent in engineering design; this adaptation will involve redefining OBL mechanisms for discrete spaces, hybrid encoding strategies for mixed variables, and specialized constraint-handling techniques that maintain feasibility while preserving diversity and convergence speed.
Data availability
The datasets used and/or analysed during the current study are available from the corresponding author on reasonable request.
References
Abdel-Basset, M., Abdel-Fatah, L. & Sangaiah, A. K. Metaheuristic algorithms: A comprehensive review. Comput. Intell. Multimed. Big Data Cloud Eng. Appl. https://doi.org/10.1016/B978-0-12-813314-9.00010-4 (2018).
Beni, G. & Wang, J. Swarm intelligence in cellular robotic systems. Robot. Biol. Syst. Towar. A New Bionics https://doi.org/10.1007/978-3-642-58069-7_38 (1993).
Brezočnik, L., Fister, I. & Podgorelec, V. Swarm intelligence algorithms for feature selection: A review. Appl. Sci. https://doi.org/10.3390/app8091521 (2018).
Kennedy, J. & Eberhart, R. Particle swarm optimization. In Proceedings of ICNN’95—International Conference on Neural Networks, IEEE, Ed. 1942–1948 (IEEE, 1995).
Karaboga, D. & Basturk, B. A powerful and efficient algorithm for numerical function optimization: Artificial bee colony (ABC) algorithm. J. Glob. Optim. 39(3), 459–471. https://doi.org/10.1007/s10898-007-9149-x (2007).
Storn, R. & Price, K. Differential evolution—A simple and efficient heuristic for global optimization over continuous spaces. J. Glob. Optim. 11(4), 341–359. https://doi.org/10.1023/A:1008202821328 (1997).
Yang, X.-S. & Deb, S. Cuckoo search via Lévy flights. In 2009 World Congress on Nature & Biologically Inspired Computing (NaBIC) 210–214 (IEEE, 2009). https://doi.org/10.1109/NABIC.2009.5393690.
Mirjalili, S., Mirjalili, S. M. & Lewis, A. Grey wolf optimizer. Adv. Eng. Softw. 61, 46–61 (2014).
Mirjalili, S. & Lewis, A. The whale optimization algorithm. Adv. Eng. Softw. 95, 51–67. https://doi.org/10.1016/j.advengsoft.2016.01.008 (2016).
Rao, R. V., Savsani, V. J. & Vakharia, D. P. Teaching–learning-based optimization: A novel method for constrained mechanical design optimization problems. Comput. Aided Des. 43(3), 303–315. https://doi.org/10.1016/j.cad.2010.12.015 (2011).
Shayanfar, H. & Gharehchopogh, F. S. Farmland fertility: A new metaheuristic algorithm for solving continuous optimization problems. Appl. Soft Comput. 71, 728–746. https://doi.org/10.1016/j.asoc.2018.07.033 (2018).
Mirjalili, S., Gandomi, A. H., Mirjalili, S. Z., Saremi, S., Faris, H. & Mirjalili, S. M. Salp swarm algorithm: A bio-inspired optimizer for engineering design problems. Adv. Eng. Softw. 114, 163–191. https://doi.org/10.1016/j.advengsoft.2017.07.002 (2017).
Rashedi, E., Nezamabadi-pour, H. & Saryazdi, S. GSA: A gravitational search algorithm. Inf. Sci. 179(13), 2232–2248. https://doi.org/10.1016/j.ins.2009.03.004 (2009).
Zhao, S., Zhang, T., Ma, S. & Wang, M. Sea-horse optimizer: A novel nature-inspired meta-heuristic for global optimization problems. Appl. Intell. 53(10), 11833–11860. https://doi.org/10.1007/s10489-022-03994-3 (2023).
Chopra, N. & Ansari, M. M. Golden jackal optimization: A novel nature-inspired optimizer for engineering applications. Expert Syst. Appl. 198, 116924 (2022).
Abdollahzadeh, B., Gharehchopogh, F. S. & Mirjalili, S. African vultures optimization algorithm: A new nature-inspired metaheuristic algorithm for global optimization problems. Comput. Ind. Eng. 158, 107408. https://doi.org/10.1016/j.cie.2021.107408 (2021).
Wolpert, D. H. & Macready, W. G. No free lunch theorems for optimization. IEEE Trans. Evol. Comput. 1(1), 67–82 (1997).
Kirkpatrick, S., Gelatt, C. D. Jr. & Vecchi, M. P. Optimization by Simulated Annealing. Science 220(4598), 671–680. https://doi.org/10.1126/science.220.4598.671 (1983).
Sumida, B. H., Houston, A. I., McNamara, J. M. & Hamilton, W. D. Genetic algorithms and evolution. J. Theor. Biol. 147(1), 59–84 (1990).
Glover, F. Tabu search—Part I. ORSA J. Comput. 1(3), 190–206. https://doi.org/10.1287/ijoc.1.3.190 (1989).
Geem, Z. W., Kim, J. H. & Loganathan, G. V. A new heuristic optimization algorithm: Harmony search. SIMULATION 76, 2. https://doi.org/10.1177/003754970107600201 (2001).
Yao, X. & Liu, Y. Evolution strategies. Lect. Notes Comput. Sci. 1213, 151–161. https://doi.org/10.1007/bfb0014808 (1997).
Hansen, N., Müller, S. D. & Koumoutsakos, P. Reducing the time complexity of the derandomized evolution strategy with covariance matrix adaptation (CMA-ES). Evol. Comput. 11(1), 1–18 (2003).
Cao, Y. J. & Wu, Q. H. Evolutionary programming. In Proceedings of 1997 IEEE International Conference on Evolutionary Computation (ICEC ’97) (1997). https://doi.org/10.1109/ICEC.1997.592352
Erol, O. K. & Eksin, I. A new optimization method: Big Bang–Big Crunch. Adv. Eng. Softw. 37(2), 106–111. https://doi.org/10.1016/j.advengsoft.2005.04.005 (2006).
Dorigo, M. & Di Caro, G. Ant colony optimization: A new meta-heuristic. In Proc. 1999 Congr. Evol. Comput. (Cat. No. 99TH8406) (1999). https://doi.org/10.1109/CEC.1999.782657.
Simon, D. Biogeography-based optimization. IEEE Trans. Evol. Comput. 12(6), 702–713 (2008).
Rashedi, E., Nezamabadi-pour, H. & Saryazdi, S. GSA: A gravitational search algorithm. Inf. Sci. 179(13), 2232–2248 (2009).
Yang, Y., Chen, H., Heidari, A. A. & Gandomi, A. H. Hunger games search: Visions, conception, implementation, deep analysis, perspectives, and towards performance shifts. Expert Syst. Appl. https://doi.org/10.1016/j.eswa.2021.114864 (2021).
Abdollahzadeh, B., Gharehchopogh, F. S., Khodadadi, N. & Mirjalili, S. Mountain gazelle optimizer: A new nature-inspired metaheuristic algorithm for global optimization problems. Adv. Eng. Softw. 174, 103282. https://doi.org/10.1016/j.advengsoft.2022.103282 (2022).
Hatamlou, A. Black hole: A new heuristic optimization approach for data clustering. Inf. Sci. 222, 175–184. https://doi.org/10.1016/j.ins.2012.08.023 (2013).
Tanabe, R. & Fukunaga, A. S. Improving the search performance of SHADE using linear population size reduction. In 2014 IEEE Congress on Evolutionary Computation (2014).
Mirjalili, S. SCA: A sine cosine algorithm for solving optimization problems. Knowl. Based Syst. 96, 120–133. https://doi.org/10.1016/j.knosys.2015.12.022 (2016).
Koza, J. R. Human-competitive results produced by genetic programming. Genet. Program. Evolv. Mach. 11(3–4), 251–284. https://doi.org/10.1007/s10710-010-9112-3 (2010).
Mirjalili, S., Mirjalili, S. M. & Hatamlou, A. Multi-verse optimizer: A nature-inspired algorithm for global optimization. Neural Comput. Appl. 27(2), 495–513. https://doi.org/10.1007/s00521-015-1870-7 (2016).
Mirjalili, S. Moth-flame optimization algorithm: A novel nature-inspired heuristic paradigm. Knowl. Based Syst. 89, 228–249. https://doi.org/10.1016/j.knosys.2015.07.006 (2015).
Faramarzi, A., Heidarinejad, M., Stephens, B. & Mirjalili, S. Equilibrium optimizer: A novel optimization algorithm. Knowl. Based Syst. 191, 105190. https://doi.org/10.1016/j.knosys.2019.105190 (2020).
Mirjalili, S. The ant lion optimizer. Adv. Eng. Softw. 83, 80–98 (2015).
Abdel-Basset, M., El-Shahat, D., Jameel, M. & Abouhawwash, M. Young’s double-slit experiment optimizer: A novel metaheuristic optimization algorithm for global and constraint optimization problems. Comput. Methods Appl. Mech. Eng. 403, 115652. https://doi.org/10.1016/j.cma.2022.115652 (2023).
Sarangi, P. & Mohapatra, P. Chaotic-based mountain gazelle optimizer for solving optimization problems. Int. J. Comput. Intell. Syst. https://doi.org/10.1007/s44196-024-00444-5 (2024).
Abualigah, L. et al. Aquila optimizer: A novel meta-heuristic optimization algorithm. Comput. Ind. Eng. 157, 107250. https://doi.org/10.1016/j.cie.2021.107250 (2021).
Gopi, S. & Mohapatra, P. A modified whale optimisation algorithm to solve global optimisation problems. In Proceedings of 7th International Conference on Harmony Search, Soft Computing and Applications 465–477 (2022).
Braik, M., Hammouri, A., Atwan, J., Al-Betar, M. A. & Awadallah, M. A. White shark optimizer: A novel bio-inspired meta-heuristic algorithm for global optimization problems. Knowl.-Based Syst. 243, 108457. https://doi.org/10.1016/j.knosys.2022.108457 (2022).
Mohapatra, S. & Mohapatra, P. American Zebra Optimization Algorithm for Global Optimization Problems vol. 13, no. 1 (Nature Publishing Group UK, 2023). https://doi.org/10.1038/s41598-023-31876-2.
Adegboye, O. R., Feda, A. K., Agyekum, E. B., Mbasso, W. F. & Kamel, S. Towards greener futures: SVR-based CO2 prediction model boosted by SCMSSA algorithm. Heliyon 10(11), 31766. https://doi.org/10.1016/j.heliyon.2024.e31766 (2024).
Adegboye, O. R. & Feda, A. K. Improved Exponential Distribution Optimizer: Enhancing Global Numerical Optimization Problem Solving and Optimizing Machine Learning Parameters, vol. 28, no. 2 (Springer US, 2025). https://doi.org/10.1007/s10586-024-04753-4.
Adegboye, O. R., Feda, A. K. & Tejani, G. G. Salp Navigation and Competitive Based Parrot Optimizer (SNCPO) for Efficient Extreme Learning Machine Training and Global Numerical Optimization 1–29 (2025).
Sasmal, B., Das, A., Dhal, K. G. & Saha, R. A Comprehensive Survey on African Vulture Optimization Algorithm Vol. 31, no. 3 (Springer Netherlands, 2024). https://doi.org/10.1007/s11831-023-10034-x.
Yakout, A. H., Kotb, H., AboRas, K. M. & Hasanien, H. M. Comparison among different recent metaheuristic algorithms for parameters estimation of solid oxide fuel cell: Steady-state and dynamic models. Alex. Eng. J. 61(11), 8507–8523. https://doi.org/10.1016/j.aej.2022.02.009 (2022).
Alkan, B. & Kaniappan-Chinnathai, M. Performance comparison of recent population-based metaheuristic optimisation algorithms in mechanical design problems of machinery components. Machines 9(12), 341. https://doi.org/10.3390/machines9120341 (2021).
Tizhoosh, H. Opposition-based learning: A new scheme for machine intelligence. In International Conference on Computational Intelligence for Modelling, Control and Automation and International Conference on Intelligent Agents, Web Technologies and Internet Commerce Vol. 1 695–701 (2005).
Rahnamayan, S., Tizhoosh, H. R. & Salama, M. M. A. Opposition-based differential evolution. IEEE Trans. Evol. Comput. 12(1), 64–79. https://doi.org/10.1109/TEVC.2007.894200 (2008).
Ewees, A. A., Elaziz, M. A. & Houssein, E. H. Improved grasshopper optimization algorithm using opposition-based learning. Expert Syst. Appl. 112, 156–172. https://doi.org/10.1016/j.eswa.2018.06.023 (2018).
Mostafa, R. R., Hashim, F. A., El-Attar, N. E. & Khedr, A. M. Empowering African Vultures Optimizer Using Archimedes Optimization Algorithm for Maximum Efficiency for Global Optimization and Feature Selection Vol. 15, no. 5 (Springer Berlin Heidelberg, 2024). https://doi.org/10.1007/s12530-024-09585-6.
Liu, R. et al. Improved African vulture optimization algorithm based on quasi-oppositional differential evolution operator. IEEE Access 10, 95197–95218. https://doi.org/10.1109/ACCESS.2022.3203813 (2022).
Fan, J., Li, Y. & Wang, T. An improved African vultures optimization algorithm based on tent chaotic mapping and time-varying mechanism. PLoS ONE https://doi.org/10.1371/journal.pone.0260725 (2021).
Oliva, D., Houssein, E. H. & Hinojosa, S. Metaheuristics in Machine Learning: Theory and Applications (Springer, 2021).
Askr, H., Farag, M. A., Hassanien, A. E., Snasel, V. & Farrag, T. A. Many-objective African vulture optimization algorithm (2023).
Kuang, X., Hou, J., Liu, X., Lin, C., Wang, Z. & Wang, T. Improved African Vulture Optimization Algorithm Based on Random Opposition-Based Learning Strategy 1–24 (2024).
Long, W., Jiao, J., Liang, X., Cai, S. & Xu, M. A random opposition-based learning grey wolf optimizer. IEEE Access 7, 113810–113825. https://doi.org/10.1109/ACCESS.2019.2934994 (2019).
Kiran, M. S. TSA: Tree-seed algorithm for continuous optimization. Expert Syst. Appl. 42(19), 6686–6698. https://doi.org/10.1016/j.eswa.2015.04.055 (2015).
Dolatabadi, S. Weighted vertices optimizer (WVO): A novel metaheuristic optimization algorithm. Numer. Algebr. Control Optim. 8(4), 461–479. https://doi.org/10.3934/naco.2018029 (2018).
Heidari, A. A. et al. Harris hawks optimization: Algorithm and applications. Futur. Gener. Comput. Syst. 97, 849–872. https://doi.org/10.1016/j.future.2019.02.028 (2019).
Ahmadianfar, I., Heidari, A. A., Noshadian, S., Chen, H. & Gandomi, A. H. INFO: An efficient optimization algorithm based on weighted mean of vectors. Expert Syst. Appl. https://doi.org/10.1016/j.eswa.2022.116516 (2022).
Suganthan, P. N. et al. Problem definitions and evaluation criteria for the CEC 2005 special session on real-parameter optimization. Tech. Report, Nanyang Technol. Univ., Singapore and KanGAL Rep. 2005005, IIT Kanpur, India, 1–50 (2005).
Ahrari, A., Elsayed, S., Sarker, R., Essam, D. & Coello, C. A. Problem definition and evaluation criteria for the CEC’2022 competition on dynamic multimodal optimization. In Proceedings of the IEEE World Congress on Computational Intelligence (IEEE WCCI 2022), Padua, Italy (2022). https://doi.org/10.13140/RG.2.2.32347.85284.
Biedrzycki, R. Revisiting CEC 2022 ranking: A new ranking method and influence of parameter tuning. Swarm Evol. Comput. https://doi.org/10.1016/j.swevo.2024.101623 (2024).
Elnaghi, B. E., Dessouki, M. E., Mohamed, S. W., Ismaiel, A. M. & Abdel-Wahab, M. N. African vulture optimizer algorithm for fuzzy logic speed controller of fuel cell electric vehicle. Int. J. Power Electron. Drive Syst. 15(3), 1348–1357. https://doi.org/10.11591/ijpeds.v15.i3.pp1348-1357 (2024).
Nikolić-Đorić, E., Čobanović, K. & Lozanov-Crvenković, Z. Statistical graphics and experimental data 1–4 (2006).
Mohapatra, P., Das, K. N. & Roy, S. A modified competitive swarm optimizer for large scale optimization problems. Appl. Soft Comput. 59, 340–362. https://doi.org/10.1016/j.asoc.2017.05.060 (2017).
Wu, G., Mallipeddi, R. & Suganthan, P. N. Problem Definitions and Evaluation Criteria for the CEC 2017 Special Session and Competition on Real-Parameter Optimization 1–34, Technical Report (Nanyang Technological University, 2016).
Hu, G., Zhong, J., Zhao, C., Wei, G. & Chang, C. LCAHA: A hybrid artificial hummingbird algorithm with multi-strategy for engineering applications. Comput. Methods Appl. Mech. Eng. 415, 116238. https://doi.org/10.1016/j.cma.2023.116238 (2023).
Chandran, V. & Mohapatra, P. Enhanced opposition-based grey wolf optimizer for global optimization and engineering design problems. Alex. Eng. J. 76, 429–467. https://doi.org/10.1016/j.aej.2023.06.048 (2023).
Bidar, M., Mouhoub, M., Sadaoui, S. & Kanan, H. R. A novel nature−inspired technique based on mushroom reproduction for constraint solving and optimization. Int. J. Comput. Intell. Appl. https://doi.org/10.1142/S1469026820500108 (2020).
Talbi, E.-G. Metaheuristics: From design to implementation (Wiley, 2009).
Mosetti, G., Poloni, C. & Diviacco, B. Optimization of wind turbine positioning in large windfarms by means of a genetic algorithm. J. Wind Eng. Ind. Aerodyn. 51(1), 105–116. https://doi.org/10.1016/0167-6105(94)90080-8 (1994).
Grady, S. A., Hussaini, M. Y. & Abdullah, M. M. Placement of wind turbines using genetic algorithms. Renew. Energy 30(2), 259–270 (2005).
Feng, J. & Shen, W. Z. Solving the wind farm layout optimization problem using random search algorithm. Renew. Energy 78, 182–192 (2015).
Moosavi, S. H. S. & Bardsiri, V. K. Satin bowerbird optimizer: A new optimization algorithm to optimize ANFIS for software development effort estimation. Eng. Appl. Artif. Intell. 60, 1–15 (2017).
Pookpunt, S. & Ongsakul, W. Optimal placement of wind turbines within wind farm using binary particle swarm optimization with time−varying acceleration coefficients. Renew. Energy 55, 266–276 (2013).
Jensen, N. O. A note on wind generator interaction. Risø Natl. Lab. Roskilde, Denmark, vol. 2411 (1983).
Funding
Open access funding provided by Vellore Institute of Technology.
Author information
Authors and Affiliations
Contributions
Henry Blankson: Conceptualization, Methodology, Writing—original draft. Vanisree Chandran: Conceptualization, Methodology. Himadri Lala: Conceptualization, Methodology, Supervision, Writing—review & editing. Prabhujit Mohapatra: Conceptualization, Methodology, Supervision, Writing—review & editing.
Corresponding author
Ethics declarations
Competing interests
The authors declare no competing interests.
Additional information
Publisher’s note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
About this article
Cite this article
Blankson, H., Chandran, V., Lala, H. et al. An enhanced opposition-based African vulture optimizer for solving engineering design problems and global optimization. Sci Rep 15, 33078 (2025). https://doi.org/10.1038/s41598-025-16630-0