Introduction

Metaheuristic Algorithms (MA) are sophisticated optimization strategies that draw inspiration from natural processes and are intended to address challenging problems that conventional approaches find difficult to handle. To find the best solutions, these algorithms, including simulated annealing, particle swarm optimization, and genetic algorithms, explore enormous search spaces. They are suitable for various domains, including engineering and finance, owing to their versatility and adaptability. These methods are widely acknowledged as global search algorithms that are stochastic, flexible, straightforward, and derivative free1. Their versatility and ease of use have made it possible to employ them to address a wide range of challenging business and science problems. Metaheuristic algorithms can be broadly categorized into two classes: those that are based purely on natural phenomena and those that are inspired by biological processes seen in nature. The latter class is also referred to as swarm intelligence. The term “swarm intelligence” was used by2 to refer to cellular robotic systems. Every swarm member controls its own behavior and is separate from the others; each agent applies its own approach to the problem at hand. Rather than residing in any single member, intelligence emerges in the swarm as a whole. The agents (candidate solutions) change stochastically as the algorithm runs3. Many metaheuristics have attracted the interest of multiple scholars and have received numerous citations over the last two decades. Table 1 lists some of the well-known algorithms.

Table 1 Types of metaheuristic algorithms.

Although most of the previously stated algorithms have been applied to many optimization problems, slow convergence or falling into local optima remains a common issue. For instance, being trapped in local optima and experiencing premature convergence are two problems associated with the PSO method4. The ABC algorithm5 features a poor balance between exploitation and exploration and a slow rate of convergence. The Differential Evolution (DE) methodology exhibits specific shortcomings, including a protracted convergence rate and population stagnation6. The Cuckoo Search7 algorithm has numerous drawbacks, including inadequate search velocity and diminished convergence precision. The Grey Wolf Optimizer (GWO) has drawbacks, including a tendency to become trapped in local optima and a slow rate of convergence during the final stages of the search process8. Some of the shortcomings of the WOA9 include poor accuracy, sluggish convergence, and susceptibility to local optima. The convergence rate of TLBO is its primary drawback, and it becomes significantly more problematic when handling high-dimensional problems10. The Firefly Algorithm (FFA) presents several shortcomings, including a tendency for slow convergence and an uneven balance between intensification (exploitation) and diversification (exploration) efforts11. The Salp Swarm Algorithm (SSA) also has challenges related to limited population variety and being trapped in locally optimal solutions12. Some of the drawbacks of GSA include complex objective functions, which may have high computational requirements, intricate procedures, a large number of control parameters, and poor convergence13. Among the shortcomings of the Sea Horse Optimizer (SHO) are its sluggish rate of convergence and propensity to become stuck in local optima14. Low diversity and an unequal use of exploitation and exploration are problems with Golden Jackal Optimization (GJO)15. Similarly, the drawbacks of the African Vulture Optimization Algorithm (AVO) presented in16 are the addition of exploitation capability to the exploration phase to speed up convergence, and the random strategy that determines the changeover between the exploration and exploitation phases, which impacts balance. These issues cause local-optimum trapping and affect the global search of the region. To overcome the aforementioned restrictions, new optimization techniques are required. Furthermore, the no-free-lunch (NFL) theorem17 states that no algorithm can be successful in every optimization problem. Owing to the above limitations, there is always room to create new metaheuristic algorithms or modify existing ones to resolve challenging optimization problems in a variety of domains. Table 1 lists the types of metaheuristic algorithms.

In their quest to enhance existing algorithms, the authors of45 introduced an improved SCMSSA aimed at enhancing both optimization precision and efficiency, and assessed SCMSSA against alternative algorithms on six distinct test functions. The SVR-SCMSSA model demonstrated a 95% accuracy in predicting CO2 emissions, providing valuable insights into the primary factors contributing to CO2 emissions. In another study46, a Modified EDO (MEDO) was presented that integrates EDO with SSA and QI. An adaptive p-best mutation technique was employed to avoid local optimum pitfalls, and a phasor operator was utilized to improve diversity within the algorithm. The authors of47 introduced the Salp Navigation and Competitive based Parrot Optimizer (SNCPO), a hybrid algorithm combining Competitive Swarm Optimization and the Salp Swarm Algorithm. Their findings indicate that SNCPO consistently surpasses current leading algorithms, attaining enhanced convergence rates, solution quality, and robustness while successfully evading local optima. Importantly, SNCPO shows significant adaptability to various optimization environments, underscoring its applicability in practical engineering and machine learning scenarios. Its exploitation phase was based on an exponential distribution model.

The following is a summary of the main contributions of this research.

  i.

    To enhance the exploration and exploitation of conventional AVO, a recently developed method called Enhanced Opposition-based Learning (EOBL) is suggested and integrated into AVO to create an Enhanced Opposition-based African Vulture Optimizer (EOBAVO) algorithm.

  ii.

    Statistical tests validated the performance of the proposed methodology and compared it with eight of the best algorithms. Moreover, EOBAVO was assessed based on engineering challenges to show that it can solve real-life engineering problems.

  iii.

    The efficacy and robustness of EOBAVO were confirmed using the 23 test functions of the CEC2005 benchmark, which have both low and high dimensions.

  iv.

    Exploration–exploitation and diversity analyses show that EOBAVO effectively transitions from exploration to exploitation and converges well on most functions, and that it enhances convergence speed by reducing population diversity and promoting exploitation across the benchmarks.

  v.

    Additionally, EOBAVO was tested on several engineering challenges to show that it can resolve real-life engineering problems.

The rest of the paper is structured as follows: Section “Preliminaries” offers a literature review on AVO, a brief justification and mathematical modeling of the traditional AVO technique, and an outline of the fundamentals of the OBL and ROBL techniques. Section “The Proposed EOBAVO” describes the EOBAVO algorithm and the fundamental idea of EOBL. Section “Numerical experiments and results analysis” presents numerical tests and analysis of the results. Section “Performance of EOBAVO on practical engineering problems” shows how the EOBAVO algorithm can be used to solve actual engineering challenges, and Section “Conclusion” summarizes the research and suggests areas that require further investigation.

Preliminaries

Literature review on AVO

The African Vulture Optimization (AVO), a novel MA created by Abdollahzadeh et al. in 2022, is an intriguing substitute for global optimization48. Inspired by the hunting habits of African vultures, AVO consists of two primary steps: finding prey and attacking it16. AVO is more effective on a few benchmark functions when compared to the sophisticated metaheuristic algorithms previously discussed. AVO is currently employed to resolve a range of challenges in the field of engineering optimization, as well as in numerous other areas of study. For example, to identify the ideal parameters of a solid oxide fuel cell (SOFC) steady-state model, the authors of49 used various swarm intelligence methods for SOFC parameter estimation; their findings demonstrate that AVO is capable of producing precise characteristic curves for voltage and current. Using eight population-based intelligent optimization algorithms to solve five mechanical part design problems, the authors of50 showed that AVO had the quickest solution time and ensured reasonable performance. The analysis of the merits and limitations of the established MAs, coupled with efforts to enhance their functionality through the introduction of new or modified mechanisms, presents an emerging research challenge. A recent development in machine learning, termed opposition-based learning (OBL), is perceived as an effective strategy for boosting the efficiency of these algorithms. Motivated by the opposite relationship between entities, the concept of opposition was first presented in 200551. Over the past ten years, scholars have paid considerable attention to this topic. The concept of OBL has been used to enhance the functionality of a range of soft computing algorithms, such as artificial neural networks, fuzzy systems, optimization techniques, and reinforcement learning. The integration of the OBL methodology with additional bio-inspired optimization techniques yields shorter estimated distances to the global optimum. For instance, OBL is used in the DE optimization algorithm to generate new offspring when the species changes52. In addition, with the introduction of OBL to the Grasshopper Optimization Algorithm53, it was possible to swiftly reach an optimal point, and the exploration region was fully explored.

Nature-inspired optimization techniques, such as AVO, have been created based on the behaviors and feeding habits of African vultures. Vultures are well recognized for their ability to locate carrion across great distances and for their remarkable scavenging skills (sharp vision and communication). Figure 1 shows the hunting behavior of the vultures.

Fig. 1
figure 1

Hunting behaviour of vultures.

They can also collaborate to identify the best solutions by drawing on the insights of others (social cooperation). Depending on their surroundings and through dynamic role adaptation, vultures alternate between exploration (looking for food) and exploitation (eating)54. To address challenging optimization problems, the AVO algorithm attempts to imitate these characteristics. AVO has shown notable applicability in several domains such as data analysis, power optimization, and control systems. It works well for complicated optimization problems because of its simplicity and minimal processing requirements55.

The nature-inspired metaheuristic algorithm known as AVO was created by16 and was inspired by the way vultures, which are scavenging birds, search for food. Vultures in their natural habitat behave differently from other birds in the scavenging process, continuously traveling great distances in a revolving flight style. When searching for food, vultures also look for other members of their kind that have found food, and vultures from several species occasionally gather around similar food sources. It is very rare to find weaker vultures around stronger ones, since the stronger ones fight them for food.

The AVO principle, as described in54, imitates a population of vultures. Based on fitness value, the initial population of N vultures in the AVO algorithm is divided into three groups. The population’s best solution is placed in the first group, the second-best vultures form the second group, and the rest fall into the third group. Each vulture group plays a unique role in the food hunt. The most prevalent and dominant vultures, that is, the finest solutions, are regarded as the best in AVO, whereas the worst solutions correspond to the hungriest and feeblest vultures in the population. To discover the optimal alternative, vultures in AVO try to get closer to strong vultures and stay away from weak vultures. Based on this core concept and the theoretical frameworks established for modeling artificial vulture populations, the AVO algorithm is developed in five phases.

Phase 1: categorizing the population according to the best vulture

To determine the leading solutions, the fitness value of each solution is calculated following the creation of the initial population. The best vulture is assigned to the first category, and the second-best vulture to the second category. Equation (1) determines the probability of successfully guiding a vulture towards the optimal solution within the two categories; this probability is subsequently used to move the remaining solutions towards the best solutions identified in the first and second categories.

$$R\left( i \right) = \begin{cases} BestVulture_{1} & \text{if } p_{i} = L_{1} \\ BestVulture_{2} & \text{if } p_{i} = L_{2} \end{cases}$$
(1)

In the first group, the best vulture is designated as BestVulture1, and the second-best vulture in the second category is designated as BestVulture2. L1 and L2 are random numbers in the range [0, 1] whose sum equals 1. The probability pi can be calculated using Eq. (2) and the roulette-wheel method.

$$p_{i} = \frac{F_{i}}{\sum_{i = 1}^{n} F_{i}}$$
(2)

where Fi represents the fitness value of the ith vulture and n is the total number of vultures in the two categories. Figure 2 illustrates the relationships between the vultures, where \(\alpha\) represents the first group, \(\beta\) the second group, and \(\gamma\) the third group of vultures. The target vulture is acquired using the pertinent characteristics.

Fig. 2
figure 2

The connection between the AVO.

Phase 2: vultures going without food (starvation)

Because they are powerful animals, vultures can fly great distances to find food when they are not famished. On the other hand, when they are hungry, they become hostile and unable to fly very far, forcing them to approach stronger vultures in their quest for food. Equation (3) was used to mathematically model this phenomenon:

$$t = h \times \left( \sin^{k}\left( \frac{\pi}{2} \times \frac{itr\left( i \right)}{itr_{max}} \right) + \cos\left( \frac{\pi}{2} \times \frac{itr\left( i \right)}{itr_{max}} \right) - 1 \right)$$
(3)

The hunger rate in Eq. (4) models the vultures’ appetite and determines when the exploration phase ends and the exploitation phase begins.

$$F = \left( 2 \times rand_{1} + 1 \right) \times Z \times \left( 1 - \frac{itr\left( i \right)}{itr_{max}} \right) + t$$
(4)

where h, Z, and rand1 are random numbers drawn from the [− 2, 3], [− 1, 1], and [0, 1] intervals, respectively; F represents the degree of hunger; and itrmax and itr(i) indicate the maximum and current iterations, respectively. The probability of initiating the exploration process during the final stages of optimization is positively correlated with an increase in the value of k, whereas a reduction in this parameter leads to a lower likelihood of entering the exploration phase; a fixed value of k therefore determines how the optimization process trades off the exploration and exploitation stages. The formula for the rate F reveals that the operational rate of the vultures decreases as the number of iterations increases. In particular, if F is greater than 1, vultures continue to explore and seek food in diverse areas; if F is less than or equal to 1, the vultures enter the exploitation phase, where they search for food closer to the existing solution.
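As a minimal illustration (not the authors’ reference implementation), the satiation model of Eqs. (3)–(4) can be sketched in Python as follows; the sampling ranges for h, Z, and rand1 follow the description above, while the exponent k is left as a caller-supplied parameter because its value is not fixed here.

```python
import numpy as np

def hunger_rate(itr, itr_max, k):
    """Satiation/hunger rate of AVO, Eqs. (3)-(4)."""
    ratio = itr / itr_max
    h = np.random.uniform(-2, 3)          # h sampled from [-2, 3] as stated above
    # t models the periodic loss of energy over the run, Eq. (3)
    t = h * (np.sin(np.pi / 2 * ratio) ** k + np.cos(np.pi / 2 * ratio) - 1)
    z = np.random.uniform(-1, 1)          # Z sampled from [-1, 1]
    rand1 = np.random.rand()              # rand1 sampled from [0, 1]
    # F decreases on average as the iterations progress, Eq. (4)
    return (2 * rand1 + 1) * z * (1 - ratio) + t

# F > 1 -> keep exploring; F <= 1 -> switch to exploitation (per the text above)
```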

Phase 3: exploration

Vultures can easily discover food and carcasses in the wild with the aid of their excellent vision. Vultures take time to assess their surroundings before starting a protracted food battle. Vultures in AVO can investigate several random sites using one of two methods; the parameter p1 is used to select the strategy. Which of the two strategies is employed is determined by this parameter value, which is predefined in the range between 0 and 1 before the search procedure begins. Equation (5) depicts the exploration phase of the vultures.

$$X_{t + 1} = \begin{cases} R_{t} - D_{t} \times F_{t}, & p_{1} \ge rand_{p1}^{t} \\ R_{t} - F_{t} + rand_{i2}^{t} \times \left( \left( ub - lb \right) \times rand_{i3}^{t} \right) + lb, & p_{1} < rand_{p1}^{t} \end{cases}$$
(5)

where \(X_{t + 1}\) represents the vulture position in the next iteration; Rt, one of the best vultures, is determined by Eq. (1); and \(rand_{p1}^{t}\), \(rand_{i2}^{t}\), and \(rand_{i3}^{t}\) are random values between 0 and 1. Ft is determined by Eq. (4); lb and ub represent the lower and upper bounds, respectively; and Dt denotes the vulture’s distance from the current optimal vulture, as determined by Eq. (6).

$$D_{t} = \left| C \times R_{t} - X_{t} \right|$$
(6)

where C represents a random number distributed uniformly between 0 and 2, and Xt represents the vulture position at the current iteration.
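A minimal sketch of the exploration step of Eqs. (5)–(6), assuming the position vectors are NumPy arrays, R is the guiding vulture selected via Eq. (1), and F comes from Eq. (4):

```python
import numpy as np

def explore(X, R, F, lb, ub, p1):
    """One exploration move of a vulture, Eqs. (5)-(6)."""
    if p1 >= np.random.rand():
        # move relative to the selected best vulture R using the distance of Eq. (6)
        C = np.random.uniform(0, 2)
        D = np.abs(C * R - X)
        return R - D * F
    # otherwise jump to a random position around R within the bounds
    return R - F + np.random.rand() * ((ub - lb) * np.random.rand()) + lb
```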

Phase 4: exploitation’s first stage

The exploitation phase in AVO is grouped into two main stages. The first stage starts when F has a value ranging between 0.5 and 1. Two distinct tactics are employed during this stage: siege fighting and rotating combat. A random number randp2 is generated in the interval [0, 1]; if it is greater than the predetermined parameter p2\(\in\)[0, 1], the siege-fight (food competition) approach is used, and the rotating-combat tactic is used otherwise56. A small sketch of both strategies follows the list below.

  • Competing for food When F falls between 0.5 and 1, the vultures are deemed to be active and relatively full. As a result, the weak vultures gather and attempt to attack the vital ones to obtain food, while the stronger vultures are unwilling to share their food. Equations (7) and (8) show the modeled behavior used to update the location of the vulture.

$$X_{t + 1} = D_{t} \times \left( F_{t} + rand_{4}^{t} \right) - d_{t}$$
(7)
$$d_{t} = R_{t} - X_{t}$$
(8)
  • Rotating flight strategy Vultures that are motivated and satiated hover at high altitudes and compete for food. AVO uses a spiral model to mimic this behavior. The rotating flight strategy is described by Eqs. (9)–(11); the positional update equation pertinent to the rotational flight dynamics of vultures is given by Eq. (9).

    $$X_{t + 1} = R_{t} - \left( S_{1}^{t} + S_{2}^{t} \right)$$
    (9)
    $$S_{1}^{t} = R_{t} \times \left( \frac{rand_{5} \times X_{t}}{2\pi} \right) \times \cos\left( X_{t} \right)$$
    (10)
    $$S_{2}^{t} = R_{t} \times \left( \frac{rand_{5} \times X_{t}}{2\pi} \right) \times \sin\left( X_{t} \right)$$
    (11)

All rand values in this phase are equally distributed and fall between 0 and 1.
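The sketch below illustrates both first-stage strategies (Eqs. (7)–(11)) under the same assumptions as the earlier sketches; the choice between siege fight and rotating flight follows the randp2/p2 rule described before the list.

```python
import numpy as np

def exploit_stage1(X, R, F, p2):
    """First exploitation stage of AVO (F between 0.5 and 1)."""
    if np.random.rand() > p2:
        # siege fight / food competition, Eqs. (7)-(8)
        d = R - X
        D = np.abs(np.random.uniform(0, 2) * R - X)
        return D * (F + np.random.rand()) - d
    # rotating (spiral) flight around the guiding vulture, Eqs. (9)-(11)
    s1 = R * (np.random.rand() * X / (2 * np.pi)) * np.cos(X)
    s2 = R * (np.random.rand() * X / (2 * np.pi)) * np.sin(X)
    return R - (s1 + s2)
```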

Phase 5: the final stage of exploitation

The second and final stage of exploitation occurs when the movements of the vultures gather several vulture species over the food supply. During this stage, the siege or aggressive-conflict methods, struggling for food, are carried out. If the value of F is below 0.5, the algorithm advances to this stage. To begin this stage, a random number labelled randP3 is generated within [0, 1]. If randP3 ≥ P3, the vultures display the aggregation behavior; otherwise, the vultures adopt the attack behavior.

  • Aggregation behavior Vultures digest a lot of food when AVO reaches its last stage. Vultures gather in large numbers and act competitively wherever food is available. In this case, Eq. (12) is used to update the vultures’ position.

$$X_{t + 1} = \frac{A_{1}^{t} + A_{2}^{t}}{2}$$
(12)
$$A_{1}^{t} = BestVulture_{1}^{t} - \frac{BestVulture_{1}^{t} \times X_{t}}{BestVulture_{1}^{t} - \left( X_{t} \right)^{2}} \times F_{t}$$
(13)
$$A_{2}^{t} = BestVulture_{2}^{t} - \frac{BestVulture_{2}^{t} \times X_{t}}{BestVulture_{2}^{t} - \left( X_{t} \right)^{2}} \times F_{t}$$
(14)
  • Attack behavior Similarly, when AVO is almost finished, the vultures approach the best vulture to take the leftover food. Mathematically, the expression for updating the position of the vultures is given by Eqs. (15), (16), and (17).

    $$X_{t + 1} = R_{t} - \left| d_{t} \right| \times F_{t} \times Levy\left( dim \right)$$
    (15)
    $$Levy\left( dim \right) = 0.01 \times \frac{r_{1} \times \sigma}{\left| r_{2} \right|^{\frac{1}{\delta}}}$$
    (16)
    $$\sigma = \left( \frac{\Gamma\left( 1 + \delta \right) \times \sin\left( \frac{\pi\delta}{2} \right)}{\Gamma\left( \frac{1 + \delta}{2} \right) \times \delta \times 2^{\left( \frac{\delta - 1}{2} \right)}} \right)^{\frac{1}{\delta}}$$
    (17)

    where δ = 1.5 is a constant, r1 and r2 are random values uniformly distributed within [0, 1], and dim represents the problem dimension. A small sketch of this Lévy-flight attack step is given below.
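A minimal sketch of the Lévy-flight attack of Eqs. (15)–(17), assuming d is the distance from Eq. (8) and the positions are NumPy arrays:

```python
import numpy as np
from math import gamma, pi, sin

def levy(dim, delta=1.5):
    """Levy step of Eqs. (16)-(17) with delta = 1.5."""
    sigma = ((gamma(1 + delta) * sin(pi * delta / 2))
             / (gamma((1 + delta) / 2) * delta * 2 ** ((delta - 1) / 2))) ** (1 / delta)
    r1, r2 = np.random.rand(dim), np.random.rand(dim)
    return 0.01 * r1 * sigma / np.abs(r2) ** (1 / delta)

def attack(X, R, F, d):
    """Attack behaviour of Eq. (15): move towards the best vulture with a Levy step."""
    return R - np.abs(d) * F * levy(X.size)
```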

Figure 3 depicts the African vulture optimization algorithm’s solution procedure.

Fig. 3
figure 3

Flowchart of the African vulture optimization algorithm.

The opposition-based learning (OBL)

To enhance the efficiency of MAs in solving complex optimization challenges, Tizhoosh introduced the idea of opposition-based learning (OBL) in 200551. The main motivation behind OBL is that simultaneously considering an estimate and its opposite increases the chance of approaching the optimum more quickly than relying on random guesses alone. MAs use OBL to address two major challenges in optimization, namely, premature convergence and slow exploration of the search space51. In other words, OBL serves as an effective strategy to enhance the performance of MAs by evaluating candidate solutions together with their opposites52. This mechanism significantly improves search-space exploration, increases the probability of locating global optima, and mitigates premature convergence. Within the AVO algorithm, the integration of OBL strengthens the balance between exploration and exploitation, accelerates convergence rates, and reduces the risk of stagnation in local optima when addressing complex, multimodal optimization problems8,57. The incorporation of OBL enables MAs to enhance population diversity while simultaneously sustaining a dynamic equilibrium between exploration (global search) and exploitation (local search refinement) during the optimization process. The improved exploration ability diminishes the chances of getting trapped in local minima and hastens the process of reaching global optima, especially in complex high-dimensional and multimodal environments52. Additionally, OBL promotes a more resilient search mechanism by methodically addressing stagnation, enabling the algorithm to break free from plateaus or misleading areas within the search domain58. The AVO algorithm significantly benefits from the integration of the OBL approach, which enhances its search efficiency. This allows the agents, or vultures, to effectively navigate promising regions while preserving diversity to prevent premature convergence8,58. As a result, this leads to improved convergence rates, superior solution quality, and increased stability when addressing intricate, nonlinear, and multimodal optimization challenges.

Therefore, the integration of OBL into MAs like AVO represents a significant advancement in the design of optimization strategies, offering a practical and theoretically grounded solution to the challenges of exploration–exploitation balance and robustness in complex search spaces. The OBL concept has been used successfully in AVO as the Opposition-Based Learning African Vulture Optimization Algorithm (OBAVO) to improve the exploitation potential of the original AVO’s search mechanism59. The next subsections explain the opposite-number idea.

The opposite number

For any random variable \(V \in \left[ {a, b} \right]\), the opposite number \(\hat{V}\) can be found using the formula in Eq. (18) below.

$$\hat{V} = a + b - V$$
(18)

where a and b represent the search space’s lower and upper boundaries, respectively, and V denotes the initial position in the population. Equation (19) below generalizes Eq. (18) to n-dimensional space:

$$\hat{V}_{ij} = a_{ij} + b_{ij} - V_{ij}, \quad i = 1, 2, \ldots, N$$
(19)

where the real vector \(V \in R^{n}\) has an opposite \(\hat{V} \in R^{n}\). Throughout the optimization process, both V and \(\hat{V}\) are evaluated; by comparing their objective functions, the better of the two is maintained and the worse is removed. For example, V is saved if F(V) is less than \(F\left( {\hat{V}} \right)\); otherwise, if F(V) is greater than \(F\left( {\hat{V}} \right)\), \(\hat{V}\) is saved.
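A small sketch of the opposite-number rule in Eqs. (18)–(19) and the keep-the-better selection described above; the sphere objective used at the end is only an illustrative placeholder.

```python
import numpy as np

def obl_opposite(V, a, b):
    """Opposite point of Eqs. (18)-(19): element-wise a + b - V."""
    return a + b - V

def keep_better(V, V_hat, f):
    """Retain whichever of V and its opposite has the lower objective value."""
    return V if f(V) < f(V_hat) else V_hat

# illustrative usage on a 3-dimensional sphere function over [-5, 5]
V = np.array([1.0, -2.0, 0.5])
V_hat = obl_opposite(V, -5.0, 5.0)
best = keep_better(V, V_hat, lambda x: float(np.sum(x ** 2)))
```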

V and its opposite \(\hat{V}\) are shown in one, two, and three dimensions in Figs. 4, 5, and 6, respectively.

Fig. 4
figure 4

One-dimensional OBL mechanism space.

Fig. 5
figure 5

Two-dimensional OBL mechanism space.

Fig. 6
figure 6

Three-dimensional OBL mechanism space.

ROBL

To help evade becoming trapped in local optima and to increase diversity, Random Opposition-Based Learning (ROBL), a novel variant of OBL, was proposed by60 in 2019. When ROBL is incorporated into the African Vulture Optimizer (AVO), the result is the Random Opposition-Based Learning African Vulture Optimizer (ROBAVO). In contrast to Eq. (19), the opposite solution \(\hat{V}_{ij}\) given by Eq. (20) is randomized for further investigation of the search space.

$$\hat{V}_{ij} = a_{ij} + b_{ij} - rand \times V_{ij}, \quad i = 1, 2, 3, \ldots, N$$
(20)

where rand is a random number between 0 and 1, and \({a}_{ij}\) and \({b}_{ij}\) represent the lower and upper limits of the ith particle, respectively. Equation (20) can therefore effectively increase the population’s diversity and assist in avoiding local optima. V and its opposite \(\widehat{V}\) are shown in one, two, and three dimensions in Figs. 7, 8, and 9, respectively.
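For comparison with the plain OBL rule above, a one-line sketch of the ROBL opposite of Eq. (20):

```python
import numpy as np

def robl_opposite(V, a, b):
    """Random opposite point of Eq. (20): a + b - rand * V, with rand ~ U[0, 1]."""
    return a + b - np.random.rand(*np.shape(V)) * V
```

Because the reflection is scaled by a fresh random factor each time, repeated applications land in different places, which is what gives ROBL its extra diversity relative to the deterministic rule of Eq. (19).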

Fig. 7
figure 7

One-dimensional ROBL mechanism space.

Fig. 8
figure 8

Two-dimensional ROBL mechanism space.

Fig. 9
figure 9

Three-dimensional ROBL mechanism space.

The proposed EOBAVO

The enhanced opposition-based learning (EOBL)

The Enhanced Opposition-based Learning (EOBL) is a novel learning strategy designed to aid the assessment of a candidate solution by concurrently evaluating the current solution alongside its constructed counter-solution. This technique can facilitate faster convergence of the optimization algorithm by selecting, between each current solution and its opposite-based counterpart, the more appropriate of the two; the selected solutions are then refined in later iterations.

This makes it possible for the initial candidate to be the better-fitting solution, whether that is the original guess or its enhanced-opposite approximation; the process then starts from the closer of the two guesses. Every other solution in the present population is processed consistently using the same procedure.

Mathematically, the optimization problem’s candidate solution is represented as shown in Eq. (21), in coordinate form as an m-dimensional space, and it is of the form:

$$\hat{V}_{ij} = \begin{cases} N + \left( rand^{2} \times \frac{V_{ij}}{2} \right), & norm\left( V_{ij} \right) \le norm\left( N \right) \\ N - \left( rand^{2} \times \frac{V_{ij}}{2} \right), & norm\left( V_{ij} \right) > norm\left( N \right) \end{cases}$$
(21)

where \(N=\frac{{\varvec{a}}+{\varvec{b}}}{2}\), a denotes the lower limit and b the upper limit of the search interval, and \({rand}^{2}\) represents a small arbitrary number in [0, 1], which aids in exploiting the search space’s promising areas. Additionally, this technique predetermines when to use EOBL in the algorithm by defining a jumping probability, or jumping parameter (Jr). Linking to Eq. (15), the new functions are proposed so that the existing rules are altered to avoid poor diversity and to promote convergence while avoiding local optima. Figures 10, 11, and 12, respectively, depict V and \(\widehat{\text{V}}\) in one-, two-, and three-dimensional spaces.
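A minimal sketch of Eq. (21); here norm(·) is interpreted element-wise as the absolute value and rand² as the square of a uniform random number in [0, 1], which are our assumptions rather than details fixed by the text above.

```python
import numpy as np

def eobl_opposite(V, a, b):
    """Enhanced opposite point of Eq. (21), reflected about the interval centre N."""
    V = np.asarray(V, dtype=float)
    N = (a + b) / 2.0                           # centre of the search interval
    r2 = np.random.rand(*V.shape) ** 2          # small squared random factor
    shift = r2 * V / 2.0
    # add the shift on one side of the centre and subtract it on the other
    return np.where(np.abs(V) <= np.abs(N), N + shift, N - shift)
```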

Fig. 10
figure 10

One-dimensional EOBL mechanism space.

Fig. 11
figure 11

Two-dimensional EOBL mechanism space.

Fig. 12
figure 12

Three-dimensional EOBL mechanism space.

Integrating EOBL with AVO

The basis of the suggested EOBAVO technique, which seeks to increase the AVO technique’s efficiency, is described in this segment. The AVO technique is improved by merging it with the EOBL technique, which improves its ability to swiftly find the ideal value while thoroughly exploring the search space. The AVO approach has several drawbacks, such as the addition of exploitation capability to the exploration phase to speed up convergence and the random strategy that determines the changeover from the exploration phase to the exploitation phase, which also impacts balance. These issues cause local-optimum trapping and weaken the global search of the region; EOBAVO is designed to address them. To avoid these situations, the suggested approach considers both the calculated value and its opposite, so that two potential positions are examined while exploring the full search area. This alteration raises the likelihood that the best solutions will be discovered faster and more effectively. Every mathematical change must be tested, though, as the NFL theorem states that no optimization technique can successfully handle every problem17. The integration of EOBL with AVO has two stages: the population is initialized using EOBL in the first phase, and new vultures are developed using the data at hand in the second phase. These phases are explained in depth in the following subsections.

Initialize population by EOBL

In the first phase, we initialize the population \(V_{i} = \left\{ {v_{i1} , v_{i2} , \ldots ,v_{ij} , \ldots v_{iD} } \right\}\quad \left( {i = 1, 2, 3 \ldots MP;j = 1,2,3 \ldots D} \right)\) randomly in the search space, where MP represents the population size and D the dimension. The EOBL technique is then applied to compute the opposite of every solution in the population. The original population \({V}_{i}\) and its opposite population \({\widehat{V}}_{i}\) are combined into a single group, and from \(\left\{{V}_{i} , {\widehat{V}}_{i}\right\}\) the MP fittest solutions are selected. The resulting population thus contains the best MP solutions.
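A rough sketch of this first phase, reusing the eobl_opposite helper from the previous subsection and assuming a minimization objective obj that accepts a 1-D NumPy array:

```python
import numpy as np

def eobl_initialize(obj, lb, ub, MP, D):
    """Phase 1: merge the random population with its EOBL opposites and keep
    the MP fittest individuals (eobl_opposite is the Eq. (21) helper above)."""
    V = lb + np.random.rand(MP, D) * (ub - lb)        # random initial vultures
    V_hat = eobl_opposite(V, lb, ub)                  # opposite vultures, Eq. (21)
    pool = np.vstack([V, V_hat])
    fitness = np.apply_along_axis(obj, 1, pool)
    keep = np.argsort(fitness)[:MP]                   # best MP of the 2*MP candidates
    return pool[keep], fitness[keep]
```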

Update the new group of vultures

In the second phase, each AVO solution is first updated using the original update rules of Eqs. (1)–(16), and the fitness value and optimal solution are retained. The EOBL is then applied to create new vultures with a particular probability (Jr): an arbitrary number between 0 and 1 is generated, and if it is less than Jr, EOBL is applied to create new vultures from the existing population. This jumping parameter (Jr) improves the exploration capacity of the proposed technique. Next, the present vultures are combined with their opposite vultures, and the MP fittest vultures are selected. With Jr = 0.1, the approach can balance the exploitation and exploration capacities, since EOBL can be thought of as a mutation operator.
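A sketch of the Jr-controlled update (again reusing eobl_opposite); the greedy merge-and-truncate selection shown here is one plausible reading of the description above, not a definitive implementation.

```python
import numpy as np

def eobl_jump(population, fitness, obj, lb, ub, Jr=0.1):
    """Phase 2: with probability Jr, create opposite vultures via EOBL and keep
    the MP fittest individuals from the merged pool."""
    if np.random.rand() >= Jr:
        return population, fitness                    # no EOBL jump this iteration
    opposites = eobl_opposite(population, lb, ub)     # Eq. (21) opposites
    opp_fitness = np.apply_along_axis(obj, 1, opposites)
    pool = np.vstack([population, opposites])
    pool_fit = np.concatenate([fitness, opp_fitness])
    keep = np.argsort(pool_fit)[:population.shape[0]]
    return pool[keep], pool_fit[keep]
```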

Time and space complexity

The computational complexity of any algorithm is a crucial component in assessing its performance. The time and space complexity of the proposed method is as follows:

  1. i.

    The vulture population is initialized in O(n × D) time, where n is the population size and D is the variable dimension. Calculating each vulture’s fitness value requires O(n).

  2. ii.

    Evaluating each vulture’s fitness over the whole run takes O(Max_iter × n) time, where Max_iter is the maximum number of iterations.

Table 2 shows the average runtime per run over 30 independent runs for a representative set of benchmark functions (F3, F8 from CEC2005 and CEC09 from CEC2022) with a maximum of 15,000 function evaluations.

Table 2 Average runtime per run (in seconds).

From the results, EOBAVO uses approximately 8–12% extra runtime compared to standard AVO and about 5–8% compared to OBAVO/ROBAVO. This overhead stems from the extra evaluation of opposition-based solutions during initialization and occasional EOBL-based updating during the search iterations. However, the slight increase in computational cost is justified by the significant performance improvements in solution quality and convergence speed. EOBAVO consequently maintains a reasonable trade-off between computational efficiency and optimization effectiveness.

The overall time complexity is therefore O((Max_iter × n) × D). Across the iterations, offspring generation takes up the most space, so the space complexity of EOBAVO is O(n × D). Figure 13 shows the graphical representation of the average runtime per run for AVO, OBAVO, ROBAVO, and EOBAVO.

Fig. 13
figure 13

Average runtime per run.

The proposed EOBAVO technique’s pseudo-code

The proposed EOBAVO pseudo-code is given in Algorithm 1 below. We set the vulture population \({V}_{0}\) in the search space to a random initial value.

Algorithm 1
figure a

Pseudo-code of the proposed EOBAVO.

Numerical experiments and results analysis

The proposed EOBAVO method is evaluated against AVO16 and other metaheuristics such as PSO4, GSA28, SSA12, TSA61, MVO35, and WVO62, as well as against newer algorithms such as MGO30, HGS29, and HHO63 and the top-performing algorithm INFO64. Furthermore, the application of the EOBAVO technique to real-life engineering problems demonstrates how the traditional AVO has been improved.

Parameter/benchmark functions settings

To assess the effectiveness of the proposed method, 23 IEEE CEC2005 test functions65, comprising seven unimodal, six multimodal, and ten fixed-dimensional multimodal functions, were chosen. The IEEE CEC2005 test functions are denoted by ‘‘F’’ and their respective numbers: F1, F2, F3, …, F23. The unimodal test functions F1–F7 have a single global optimum solution and assess an algorithm’s potential for exploitation, whereas the multimodal test functions F8–F13 have several local optima and are thought to assess an algorithm’s potential for exploration. The remaining fixed-dimensional multimodal functions, F14–F23, are thought to concurrently investigate the exploration and exploitation of metaheuristic algorithms in global and local searches because they have fewer dimensions and more local extrema than the multimodal functions. In addition, 12 hard test functions of the IEEE CEC202266 benchmark are applied to determine the usefulness and capabilities of the suggested technique. These functions have varying and expandable dimensions. The IEEE CEC2022 test functions are denoted by ‘‘CEC’’ followed by their respective numbers: CEC01, CEC02, up to CEC12. While the ranges of all the other functions are [− 100, 100], the ranges of the functions from CEC01 to CEC03 differ. The IEEE CEC202267 benchmark functions are far more complicated than the IEEE CEC200565 test functions employed in this investigation. The AVO68 and other prominent MAs, including SCA33, PSO4, SSA12, and WOA9, were compared with the optimization outcomes of the EOBAVO. The proposed technique is also tested on a variety of real-world engineering design optimization problems. Additionally, statistical tests, such as the Wilcoxon rank-sum test69 and the t-test70, are used to quantify the algorithms’ statistical significance.

Each algorithm used thirty search agents to explore the search space, and the average outcomes are used for comparison. Each function is run 30 times, constrained to a maximum of 500 iterations and a total of 15,000 function evaluations (FEs). The experiments were carried out using MATLAB R2025a on Windows 10 with an Intel Core i7-1165G7 CPU @ 2.80 GHz and 8 GB RAM. Parameter configurations for the aforementioned algorithms are listed in Table 3 below:

Table 3 MAs’ parameter settings.

Performance metrics

  1. i.

    Average (Avg)

    The average is the mean of an algorithm’s best results over several runs and can be determined as shown in Eq. (22):

    $$avg = \frac{1}{R}\sum_{i = 1}^{R} Best_{i}$$
    (22)

    where \(Best_{i}\) denotes the best result reached from the ith run, and the number of independent runs is denoted by R.

  2. ii.

    Standard deviation (std)

    An algorithm’s repeatability and capacity to yield the same ideal outcome after multiple runs are evaluated using the standard deviation given in Eq. (23):

    $$std = \sqrt{\frac{1}{R - 1}\sum_{i = 1}^{R} \left( Best_{i} - avg \right)^{2}}$$
    (23)
  3. iii.

    The t-test

    Equation (24) shows the t-test used to assess whether a proposed method differs significantly from an existing MA.

    $$t = \frac{avg_{1} - avg_{2}}{\sqrt{\frac{std_{1}^{2} + std_{2}^{2}}{R}}}$$
    (24)

    where \({avg}_{1}\) and \({avg}_{2}\) are the averages and \({std}_{1}\) and \({std}_{2}\) the standard deviations of any two algorithms over R runs. A small sketch of these three metrics follows this list.
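A minimal sketch of the three metrics, assuming each argument is the vector of best-per-run results of one algorithm over R independent runs:

```python
import numpy as np
from math import sqrt

def avg_std_t(best_a, best_b):
    """Average and standard deviation (Eqs. (22)-(23)) of algorithm A, and the
    two-sample t statistic of Eq. (24) comparing algorithms A and B."""
    best_a = np.asarray(best_a, dtype=float)
    best_b = np.asarray(best_b, dtype=float)
    R = best_a.size                                   # number of independent runs
    avg_a, avg_b = best_a.mean(), best_b.mean()
    std_a, std_b = best_a.std(ddof=1), best_b.std(ddof=1)
    t = (avg_a - avg_b) / sqrt((std_a ** 2 + std_b ** 2) / R)
    return avg_a, std_a, t
```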

Comparison of EOBAVO with AVO, OBAVO and ROBAVO

To identify the top solution for the IEEE CEC200565 and IEEE CEC202267 benchmark functions, the performance of the suggested EOBAVO technique and of the traditional AVO, OBAVO, and ROBAVO algorithms is evaluated in this section. Tables 4 and 5 present the comparison results, which show clearly that for most functions the recommended EOBAVO approach performs better than AVO, OBAVO, and ROBAVO.

Table 4 The IEEE CEC2005 benchmark function results for AVO, OBAVO, ROBAVO, and EOBAVO (the top results have been highlighted).
Table 5 The IEEE CEC2022 benchmark function results for AVO, OBAVO, ROBAVO, and EOBAVO (the top results have been highlighted).

Table 4 shows that the EOBAVO produced a better result than the rest, i.e., AVO, OBAVO, and ROBAVO, for functions F1–F4, F9–F15, F17–F19, and F21–F23. Functions F1, F3, F9, and F11 show successful exact global optimal solutions. The standard deviation measure for the proposed EOBAVO technique is higher than that of the classical AVO, OBAVO, and ROBAVO.

The average and standard deviation in Table 5 indicate that the EOBAVO technique performs better than the other three algorithms, except for the functions CEC01–CEC03, CEC09, and CEC10.

Evaluation of EOBAVO with other innovative and successful algorithms

This section compares the effectiveness of the EOBAVO technique with four traditional, well-known metaheuristic techniques: SCA, PSO, SSA, and WOA. Table 3 lists these algorithms’ parameter configurations. Table 6 shows that, in terms of average fitness values, the EOBAVO approach fared better than the other algorithms on the IEEE CEC200565 functions.

Table 6 IEEE CEC2005 benchmark function results for SCA, PSO, SSA, WOA, and EOBAVO (the top results are in Bold).

Nonetheless, the EOBAVO approach produced better, exact global optimal solutions for the functions F16, F19, and F20. Except for F6, F8, F13, F14, and F16–F19, the suggested algorithm fared better on most functions in terms of the standard deviation. Table 7 shows that, compared to alternative methods, the suggested EOBAVO methodology is superior at solving the IEEE CEC202281 test functions. For example, it outperformed the others for the functions F1, F3, F5, F8, and F10 in terms of average fitness value. Additionally, when the standard deviations of the methods are compared, EOBAVO showed better outcomes for functions F1–F2, F5, and F7–F8.

Table 7 The IEEE CEC2022 benchmark function results for SCA, PSO, SSA, WOA, and EOBAVO (the top results have been highlighted).

Analysis of sensitivity

Sensitivity analysis for MAs examines how the values of several independent variables impact a specific outcome, and it is a crucial part of parameter tuning. The proposed EOBAVO technique is used to analyze the sensitivity of Jr (the jumping parameter) and rand2. Five different scenarios are discussed below for each parameter.

Analysis of Jr

The EOBAVO technique is used with different Jr values, while we keep other parameters constant, to analyze the importance of the parameter. Jr will take the following values: 0.01; 0.05; 0.07; 0.09; and 0.1. The reasons for these close values are that small, closely spaced values permit detailed performance assessment to guarantee fine-grained sensitivity analysis. It also prevents undue disturbances while harmonizing exploration and exploitation (controlled randomness). The functions F3, F8, and F15 are a subgroup of test functions selected from each category. With a maximum of 15,000 FEs, each function executes 30 times. Table 8 and Fig. 14 show the outcomes that were recorded for each case. Results show that when Jr is 0.1, the EOBAVO approach gave the best results.

Table 8 Sensitivity study of EOBAVO with varying parameter Jr values.
Fig. 14
figure 14

Sensitivity analysis of the EOBAVO algorithm for jumping parameter.

Analysis of rand2

To analyze the relevance of the rand2 parameter, the EOBAVO technique is run under several alternative scenarios (rand3, rand4, rand5, and rand6) while keeping all other factors constant. F4, F11, and F20 are a subset of test functions selected from each category. With a maximum of 15,000 FEs, each function executes 30 times. The statistical results for each case are shown in Table 9 and Fig. 15. It is clear from the experimental results that rand2 solves the problems better than the alternatives.

Table 9 Sensitivity study of EOBAVO with varying parameter rand2 values.
Fig. 15
figure 15

Sensitivity analysis of the proposed EOBAVO algorithm for rand2.

Statistical analysis

Here, the effectiveness of the suggested EOBAVO technique is estimated using the Wilcoxon rank-sum test71 and the t-test69. In calculating the t-value for a function, the two algorithms are considered simultaneously. Tables 10 and 11 present the t-test results at α = 0.05 for the 12 test functions from IEEE CEC202266 and the 23 test functions from IEEE CEC200565. EOBAVO outperforms the other methods wherever the relevant t-value is bold-faced. Additionally, the last row of Tables 10 and 11 shows EOBAVO’s win, tie, and loss counts, labelled as w/t/l. The t-values make it clear that, in many situations, EOBAVO’s performance is greatly enhanced.

Table 10 Findings from the IEEE CEC2005 benchmark functions for unimodal, multimodal, and fixed-dimensional multimodal (the top solutions have been highlighted).
Table 11 Findings from the IEEE CEC2022 benchmark functions (the top answers are indicated).

A non-parametric, pairwise Wilcoxon rank-sum test can be used to identify significant differences in the behavior of two algorithms. Tables 12 and 13 present the p values at the α = 0.05 significance level. For the difference between two methods to be considered statistically significant, the p value must be less than 0.05. The p values and H results are displayed in Tables 12 and 13, where the symbols ‘‘−’’ and ‘‘+’’ stand for rejection and acceptance, respectively.

Table 12 The IEEE CEC2005 test functions’ p values at the 5% significance level using the Wilcoxon rank-sum test.
Table 13 The IEEE CEC2022 test functions’ p values at the 5% significance level using the Wilcoxon rank-sum test.

It is clear from the aforementioned tables that the proposed EOBAVO outperforms the other MAs because most of the p values are less than 0.05. Note that NA represents statistical equivalence between the algorithms concerned.

Convergence analysis

The convergence curve illustrates the relationship between the number of iterations and the value of the fitness function. In the initial stages of optimization, the search agents move abruptly across the designated search area. The primary objective of this convergence analysis is to examine the optimization behavior of the proposed EOBAVO technique graphically. Figures 16 and 17 illustrate the convergence graphs of all the compared algorithms on the IEEE CEC200565 and IEEE CEC202266 test functions, respectively. Figure 16 shows that EOBAVO converges more quickly for all unimodal functions except F1 and F14, whereas the suggested approach performs best for functions F9, F10, and F11 among the multimodal functions. The proposed method exhibits comparable convergence for functions F14, F15, F17, and F18, which fall under the fixed-dimensional multimodal category. Furthermore, when compared to the AVO, OBAVO, and ROBAVO methods, the EOBAVO approach has a greater impact on balancing convergence and divergence.

Fig. 16
figure 16figure 16

Convergence curves of EOBAVO and some cutting-edge algorithms used to solve the IEEE CEC2005 test functions.

Fig. 17
figure 17figure 17

Convergence curves of EOBAVO and some cutting-edge algorithms used to solve the IEEE CEC2022 test functions.

Additionally, it is clear from Fig. 17 that the EOBAVO technique outperforms all other algorithms on CEC01, CEC02, CEC03, CEC05, CEC08, and CEC10, and it is more successful at reaching convergence than AVO, OBAVO, and ROBAVO. Owing to these improvements, the EOBAVO technique outperforms all the other compared algorithms in both convergence and search rate.

EOBAVO’s comparison with the newest and most effective algorithms

This section compares the effectiveness of the proposed EOBAVO technique with four of the most recent and effective optimization algorithms: the Mountain Gazelle Optimization (MGO)30, Harris Hawks Optimization (HHO)63, the Hunger Games Search (HGS)29, and the weighted mean of vectors algorithm (INFO)64. The IEEE CEC2005 and IEEE CEC2022 benchmark functions are used to evaluate the suggested method together with these most recent and effective algorithms.

Table 14 presents the statistical results for the IEEE CEC2005 test functions. The suggested EOBAVO technique performed better than all the compared algorithms and produced optimal solutions. According to the results in Table 14, EOBAVO produces better results for functions F1, F4, F6, and F15; however, for functions F10, F20, and F23, the average fitness values and standard deviations are roughly the same.

Table 14 The IEEE CEC2005 benchmark results use the newest and most effective methods.

Table 15 shows the results of optimization and significance testing for the IEEE CEC2022 benchmark functions, using EOBAVO and the latest and most effective methodologies. The suggested method outperformed the others for functions F1, F3, F8, and F10, as Table 15 demonstrates. As a result, the suggested method outperforms the most recent algorithms MGO, HHO, HGS, and INFO. The convergence curves of these algorithms are shown in Fig. 18; their analysis indicates that the EOBAVO method demonstrates superior performance across the majority of functions. Almost all of the p values in Table 15 fall below the 5% significance level, indicating that the recommended method performs better than the other algorithms.

Table 15 The IEEE CEC2022 benchmark function’s results using the newest and most effective methods.
Fig. 18
figure 18figure 18figure 18

Convergence curves of the EOBAVO algorithm relative to the newest and most effective algorithms for solving the IEEE CEC2005 benchmark functions.

Therefore, the statistical findings on the IEEE CEC2005 and IEEE CEC2022 benchmarks show that EOBAVO outperforms the most recent and effective techniques.

Figure 18 shows the convergence graphs of the EOBAVO technique in relation to the most recent and effective algorithms. The statistical results demonstrate that the proposed EOBAVO technique performed similarly to MGO and HGS but better than HHO and INFO.

The convergence curves of these algorithms are shown in Fig. 19, which clearly shows that the EOBAVO method works better for most functions.

Fig. 19
figure 19figure 19

Convergence curves of the EOBAVO algorithm with the newest and most effective methods for solving the IEEE CEC2022 benchmark functions.

The EOBAVO method performs better than the other methods, as shown by the majority of the p values in Tables 16 and 17 being below the 5% significance level. Therefore, the statistical findings of the IEEE CEC2005 and IEEE CEC2022 examinations show that EOBAVO outperforms the most recent and effective techniques.

Table 16 Results of the Wilcoxon rank-sum test for the IEEE CEC2005 test functions, at a significance level of 5%.
Table 17 Results of the Wilcoxon rank-sum test for the IEEE CEC2022 test functions, at a significance level of 5%.

Exploration and exploitation analysis

To assess the exploration and exploitation capabilities of EOBAVO, the average Euclidean distance from each individual to the best-known solution was monitored throughout the iterations. Higher average distances are indicative of exploration, while lower distances indicate exploitation46,47. Examining the balance between the exploration and exploitation phases of EOBAVO can provide significant insights for tackling practical optimization issues. Figure 20 shows a graphical representation of the exploration and exploitation trajectories in the search space while addressing the CEC2005 and CEC2022 functions. It is essential to highlight that functions F2, F7, and F8 represent functions within CEC2022, while F15, F17, and F18 denote benchmarks within CEC2005. The graphs show that during the early stages of optimization, EOBAVO maintains a consistent pattern with high exploration, which rapidly decreases within the first few hundred iterations. Simultaneously, exploitation increases sharply and dominates the search process for the remainder of the optimization run. This shows that, with an effective convergence dynamic, EOBAVO recognizes promising regions quickly. In contrast, F8 exhibits unstable exploration–exploitation dynamics, with persistent fluctuations and multiple crossovers throughout the run. This indicates a complex, multimodal landscape where the optimizer struggles to converge, suggesting the need for enhanced diversity or an adaptive control mechanism. F15 also shows unstable fluctuations between exploration and exploitation throughout the iterations, indicating inconsistent search behavior. This suggests that the algorithm struggles to maintain focus, likely owing to a highly multimodal or deceptive landscape. Exploration remains dominant for most of the iterations in F17, while exploitation stays low and erratic. This prolonged exploration phase reflects difficulty in converging and suggests potential improvements in the algorithm’s exploitation strategy. The algorithm quickly transitions from exploration to exploitation in F18, within the first 1000 iterations. This smooth shift indicates a well-balanced search process and efficient convergence toward optimal solutions. We can conclude that EOBAVO effectively transitions from exploration to exploitation and converges well on most functions, but struggles to maintain stability and achieve convergence on complex, multimodal landscapes such as F8, F15, and F17. These challenges highlight the need to enhance diversity management and introduce adaptive control mechanisms for improved performance.

Fig. 20
figure 20

Exploration/exploitation analysis of EOBAVO.

Diversity analysis

In MA optimization, diversity analysis evaluates the distribution and spread of candidate solutions within the search space. It plays a critical role in understanding the exploration–exploitation balance of an algorithm. High diversity typically supports better global exploration, helping to avoid premature convergence to local optima, whereas low diversity aids in intensifying the search near promising regions for convergence47. The diversity trajectory exhibits nonlinear features; it represents the average distance travelled by each individual during the iterative process72. Lower population diversity indicates greater aggregation within the population throughout the optimization process, and this raises the possibility of convergence towards a sub-optimal solution within the problem space. Too much diversity can hinder convergence, while too little may lead to stagnation. The population radius is the maximum distance between any two individuals within the population and is represented mathematically by Eq. (25).

$$\left| D \right| = \max_{\left( i \ne j \right) \in \left[ 1, \left| S \right| \right]} \left( \sqrt{\sum_{t = 1}^{Dim} \left( x_{it} - x_{jt} \right)^{2}} \right)$$
(25)

where \(\left| S \right|\) and Dim represent the population size and the dimension of the problem, respectively, and \(x_{it}\) denotes the position of the ith individual in the tth dimension. Finally, the population diversity derived from Eq. (25) is given by Eq. (26) as follows:

$$D^{N} = \frac{1}{\left| S \right|\left| D \right|}\sum_{i = 1}^{\left| S \right|} \sqrt{\sum_{t = 1}^{Dim} \left( x_{it} - \overline{x}_{t} \right)^{2}}$$
(26)

where \({\overline{x} }_{t}\) is the centroid of the population. The diversity analysis of selected benchmark functions from CEC2005 and CEC2022 is represented in Fig. 21. We selected F1, F15, and F19 to represent CEC2005, while CEC2022 is represented by F3, F7, and F12. From the graphs, EOBAVO consistently maintained lower diversity levels for the CEC2005 functions, particularly F15 and F19. This means that, with reduced diversity, EOBAVO exploited the search space more aggressively, enabling faster convergence but at the potential risk of reduced exploration. In F1, both algorithms showed a similar decline in diversity, though AVO retained higher diversity for a longer period, suggesting greater exploration capacity in simpler unimodal landscapes. In contrast, the diversity gap between AVO and EOBAVO widened in F15 and F19, highlighting EOBAVO’s stronger exploitation tendencies on more complex multimodal functions. On the CEC2022 functions, EOBAVO consistently exhibited lower diversity than AVO, with the largest difference in F7. This indicates that EOBAVO’s enhanced OBL intensifies exploitation and accelerates convergence, especially in complex landscapes. However, the reduced diversity may limit exploration, while AVO maintains broader search capability at the cost of slower convergence. Overall, EOBAVO enhanced convergence speed by reducing population diversity and promoting exploitation across the benchmarks.
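A compact NumPy sketch of the normalized diversity measure in Eqs. (25)–(26) for a population stored as an (|S| × Dim) array:

```python
import numpy as np

def normalized_diversity(pop):
    """Normalised population diversity D^N of Eqs. (25)-(26)."""
    S = pop.shape[0]
    # population radius |D|: largest pairwise Euclidean distance, Eq. (25)
    diffs = pop[:, None, :] - pop[None, :, :]
    radius = np.sqrt((diffs ** 2).sum(axis=-1)).max()
    # sum of distances to the population centroid, normalised by |S| * |D|, Eq. (26)
    centre = pop.mean(axis=0)
    spread = np.linalg.norm(pop - centre, axis=1).sum()
    return spread / (S * radius)
```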

Fig. 21

Diversity graphs of AVO and EOBAVO.
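As a reference for reproducing this analysis, the short sketch below computes the population radius of Eq. (25) and the normalized diversity of Eq. (26) for a single population snapshot; the NumPy array layout is assumed for illustration.

```python
import numpy as np

def population_diversity(pop):
    """Normalized population diversity D^N per Eqs. (25)-(26).

    pop : (|S|, Dim) array of candidate positions at one iteration.
    """
    # Population radius |D|: maximum pairwise Euclidean distance (Eq. 25)
    diff = pop[:, None, :] - pop[None, :, :]
    radius = np.sqrt((diff ** 2).sum(axis=2)).max()
    # Sum of distances of each individual to the population centroid, scaled by |S||D| (Eq. 26)
    centre = pop.mean(axis=0)
    spread = np.sqrt(((pop - centre) ** 2).sum(axis=1)).sum()
    return spread / (len(pop) * radius)
```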

Performance of EOBAVO on practical engineering problems

In this section, the performance and effectiveness of the proposed EOBAVO technique are examined by applying it to various real-world engineering design problems73. These problems encompass the pressure vessel, speed reducer, welded beam, tension/compression spring, gear train, and three-bar truss designs.

The EOBAVO method is employed to address each issue, and the outcomes are compared with those obtained from other advanced algorithms such as AVO, OBAVO, ROBAVO, PSO, WOA, SCA, and SSA.

Pressure vessel problems

The main aim of this engineering challenge is to reduce the total cost, which includes the material, welding, and forming, of a cylindrical vessel. Figure 22 shows the vessel’s schematic representation. The shell thickness (Ts), the head thickness (Th), the inner radius (R), and the length of the cylindrical section excluding the head (L) are the four decision variables in this pressure vessel problem.

Fig. 22

Pressure vessel problems.

The design can be stated mathematically as

$$\vec{p} = \left[ {p_{11} , p_{22} , p_{33} , p_{44} } \right] = \left[ {T_{s} , T_{h} , R, L} \right]$$
$$\begin{array}{*{20}c} {Minimize f\left( {\vec{p}} \right) = 0.6224 p_{11} p_{33} p_{44} + 1.7781p_{22} p_{33}^{2} + 3.1661p_{11}^{2} p_{44} + 19.84p_{11}^{2} p_{33} } \\ \end{array}$$
(27)

Subject to

$$\begin{aligned} g_{1} \left( {\vec{p}} \right) & = - p_{11} + 0.0193p_{33} \le 0, \\ g_{2} \left( {\vec{p}} \right) & = - p_{22} + 0.00954p_{33} \le 0, \\ g_{3} \left( {\vec{p}} \right) & = - \pi p_{33}^{2} p_{44} - \frac{4}{3}\pi p_{33}^{3} + 1296000 \le 0, \\ g_{4} \left( {\vec{p}} \right) & = p_{44} - 240 \le 0, \\ \end{aligned}$$

The following are the decision variables’ ranges:

$$0 \le p_{11} \le 99,0 \le p_{22} \le 99,10 \le p_{33} \le 200,10 \le p_{44} \le 200$$
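One common way to hand such a constrained design to a metaheuristic like EOBAVO is a static penalty formulation, as sketched below for the objective of Eq. (27) and constraints g1–g4; the penalty factor is an illustrative choice rather than the setting used in the reported experiments.

```python
import numpy as np

def pressure_vessel_cost(p, penalty=1e6):
    """Penalized cost of the pressure vessel design, Eq. (27) with g1-g4.

    p = [Ts, Th, R, L]; infeasible designs are penalized so that the
    problem can be treated as an unconstrained minimization.
    """
    Ts, Th, R, L = p
    cost = (0.6224 * Ts * R * L + 1.7781 * Th * R**2
            + 3.1661 * Ts**2 * L + 19.84 * Ts**2 * R)
    g = [
        -Ts + 0.0193 * R,
        -Th + 0.00954 * R,
        -np.pi * R**2 * L - (4.0 / 3.0) * np.pi * R**3 + 1296000.0,
        L - 240.0,
    ]
    violation = sum(max(0.0, gi) for gi in g)   # only violated constraints contribute
    return cost + penalty * violation
```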

The results of the aforementioned MAs are compared with those of the EOBAVO approach. The best results from each algorithm are compared in Table 18, showing that the EOBAVO method minimizes the cost of the cylindrical pressure vessel with superior performance.

Table 18 Outcomes of the pressure vessel problem in EOBAVO.

Problems involving welded beam design

The purpose of this problem is to design a beam that is economically viable in terms of production costs. The beam’s shear stress (\(\tau\)), bending stress (\(\sigma\)), buckling load (Pb), and end deflection (\(\delta\)) are the optimization constraints in this problem. Additionally, there are four continuous decision variables: weld thickness (h), bar height (t), bar thickness (b), and clamped bar length (l). Figure 23 and Table 19 show the welded beam structure and the outcome of the welded beam design in EOBAVO, respectively.

Fig. 23

A Welded beam73.

Table 19 Outcomes of the welded beam design in EOBAVO.

This problem can be represented numerically as follows:

$$\begin{array}{*{20}c} \begin{aligned} & \vec{w} = \left[ {w_{1} , w_{2} , w_{3} , w_{4} } \right] = \left[ {h, l, t, b} \right] \\ & Minimize\;f\left( {\vec{w}} \right) = 1.10471w_{1}^{2} w_{2} + 0.04811w_{3} w_{4} \left( {14.0 + w_{2} } \right) \\ \end{aligned} \\ \end{array}$$
(28)
$$\begin{aligned} {\text{Subject to}} & \\ & g_{1} \left( {\vec{w}} \right) = \tau \left( {\vec{w}} \right) - \tau _{{max}} \le 0, \\ & g_{2} \left( {\vec{w}} \right) = \sigma \left( {\vec{w}} \right) - \sigma _{{max}} \le 0 \\ & g_{3} \left( {\vec{w}} \right) = \delta \left( {\vec{w}} \right) - \delta _{{max}} \le 0 \\ & g_{4} \left( {\vec{w}} \right) = w_{1} - w_{4} \le 0 \\ & g_{5} \left( {\vec{w}} \right) = P - P_{c} \left( {\vec{w}} \right) \le 0, \\ & g_{6} \left( {\vec{w}} \right) = 0.125 - w_{1} \le 0, \\ & g_{7} \left( {\vec{w}} \right) = 1.10471w_{1}^{2} + 0.04811w_{3} w_{4} \left( {14 + w_{2} } \right) - 5 \le 0 \\ \end{aligned}$$

The following are the decision variables’ ranges:

$$0.1 \le w_{1} \le 2,0.1 \le w_{2} \le 10,0.1 \le w_{3} \le 10,0.1 \le w_{4} \le 2$$

where

$$\tau \left( {\vec{w}} \right) = \sqrt {\left( {\tau ^{\prime } } \right)^{2} + 2\tau ^{\prime } \tau ^{\prime \prime } \frac{{w_{2} }}{2R} + \left( {\tau ^{\prime \prime } } \right)^{2} } ,\quad \tau ^{\prime } = \frac{P}{{\sqrt 2 w_{1} w_{2} }},\quad \tau ^{\prime \prime } = \frac{MR}{J},$$
$$M = P\left( {L + \frac{{w_{2} }}{2}} \right), R = \sqrt {\frac{{w_{2}^{2} }}{4} + \left( {\frac{{w_{1} + w_{3} }}{2}} \right)^{2} }$$
$$J = 2\left\{ {\sqrt {2w_{1} w_{2} } \left[ {R^{2} } \right]} \right\},$$
$$\sigma \left( {\vec{w}} \right) = \frac{6PL}{{w_{4} w_{3}^{2} }}, \delta \left( {\vec{w}} \right) = \frac{{6PL^{3} }}{{Ew_{3}^{2} w_{4} }}$$
$$P_{c} \left( {\vec{w}} \right) = \frac{{4.013E\sqrt {\frac{{w_{3}^{2} w_{4}^{6} }}{36}} }}{{L^{2} }} \left( {1 - \frac{{w_{3} }}{2L} \sqrt{\frac{E}{4G}} } \right)$$
$$P = 6000lb, L = 14in., E = 30*10^{6} psi., G = 12*10^{6} psi$$
$$\tau_{max} = 13,600psi., \sigma_{max} = 30,000psi., \delta_{max} = 0.25in.$$
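For completeness, the sketch below evaluates the welded beam stresses and constraints g1–g7 directly from the expressions listed above; it is a minimal illustration of the constraint evaluation, not the exact implementation used in this study.

```python
import numpy as np

def welded_beam_constraints(w, P=6000.0, L=14.0, E=30e6, G=12e6,
                            tau_max=13600.0, sigma_max=30000.0, delta_max=0.25):
    """Constraint values g1-g7 for a candidate welded beam w = [h, l, t, b]."""
    w1, w2, w3, w4 = w
    tau_p = P / (np.sqrt(2.0) * w1 * w2)                         # tau'
    M = P * (L + w2 / 2.0)
    R = np.sqrt(w2**2 / 4.0 + ((w1 + w3) / 2.0) ** 2)
    J = 2.0 * (np.sqrt(2.0) * w1 * w2 * R**2)
    tau_pp = M * R / J                                           # tau''
    tau = np.sqrt(tau_p**2 + 2.0 * tau_p * tau_pp * w2 / (2.0 * R) + tau_pp**2)
    sigma = 6.0 * P * L / (w4 * w3**2)
    delta = 6.0 * P * L**3 / (E * w3**2 * w4)
    Pc = (4.013 * E * np.sqrt(w3**2 * w4**6 / 36.0) / L**2) \
         * (1.0 - (w3 / (2.0 * L)) * np.sqrt(E / (4.0 * G)))
    g = [tau - tau_max,
         sigma - sigma_max,
         delta - delta_max,
         w1 - w4,
         P - Pc,
         0.125 - w1,
         1.10471 * w1**2 + 0.04811 * w3 * w4 * (14.0 + w2) - 5.0]
    return g   # each gi <= 0 indicates a satisfied constraint
```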

Tension/compression spring design problems

The primary objective of this engineering design problem is to minimize the weight of the spring, subject to three nonlinear inequality constraints and one linear constraint. The wire diameter (d), the mean coil diameter (D), and the number of active coils (K) are the three decision variables in this design. Figure 24 shows the spring’s schematic representation. The design’s numerical expression is as follows.

$$X = \left[ {x_{1} ,x_{2} ,x_{3} } \right] = \left[ {d,D,K} \right]$$
$$\begin{array}{*{20}c} {Minimize f\left( x \right) = \left( {x_{3} + 2} \right)x_{2} x_{1}^{2} } \\ \end{array}$$
(29)
$$\begin{aligned} & g_{1} \left( x \right) = 1 - \frac{{x_{2}^{3} x_{3} }}{{71.785x_{1}^{4} }} \le 0,\quad g_{2} \left( x \right) = \frac{{4x_{2}^{3} - x_{1} x_{2} }}{{12.566\left( {x_{2} x_{1}^{3} } \right)}} + \frac{1}{{5108x_{1}^{2} }} - 1 \le 0 \\ & g_{3} \left( x \right) = 1 - \frac{{140.45 x_{1} }}{{x_{2}^{2} x_{3} }} \le 0,\quad g_{4} \left( x \right) = \frac{{x_{1} + x_{2} }}{1.5} - 1 \le 0 \\ & 0.05 \le x_{1} \le 2,\quad 0.25 \le x_{2} \le 1.3\quad and\quad 2 \le x_{3} \le 15, \\ \end{aligned}$$
Fig. 24

Compression/tension spring.

The results of the aforementioned MAs that have been successful in resolving this issue are compared to the results of the recently introduced EOBAVO approach. Table 20 presents the comparative findings, where, in terms of effectiveness, EOBAVO performs better than the other algorithms.

Table 20 EOBAVO’s findings for the compression or tension spring problem.

Gear train design problems

The key focus of the gear train design is the minimization of the gear ratio cost, i.e., the squared deviation of the obtained gear ratio from the required value of 1/6.931, which constitutes a mechanical problem. Figure 25 shows the schematic representation of the gear train. The four decision variables in this engineering design are the tooth counts \({n}_{A}\left({g}_{1}\right)\), \({n}_{B}\left({g}_{2}\right)\), \({n}_{C}\left({g}_{3}\right)\) and \({n}_{D}\left({g}_{4}\right)\).

Fig. 25

Gear train design74.

This design’s mathematical expression has been provided in Eq. (30).

Let \(\vec{x} = \left[ {x_{1} , x_{2} , x_{3} , x_{4} } \right] = \left[ {n_{A} , n_{B} , n_{C} , n_{D} } \right]\)

$$\begin{array}{*{20}c} {Minimize f\left( {\vec{x}} \right) = \left( {\frac{1}{6.931} - \frac{{x_{3} x_{2} }}{{x_{1} x_{4} }}} \right)^{2} } \\ \end{array}$$
(30)

The following are the design variables’ ranges:

$$x_{1} , x_{2} , x_{3} , x_{4} \in \left\{ {12, 13, 14, \ldots , 60} \right\}$$
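Since the tooth counts are restricted to the integers {12, …, 60}, a continuous optimizer such as EOBAVO needs a mapping from real-valued positions to feasible discrete designs. Rounding and clipping, as in the sketch below, is one simple option; the paper does not prescribe a specific mapping, so this is illustrative only.

```python
import random

def gear_train_error(x):
    """Squared gear-ratio error, Eq. (30); x = [nA, nB, nC, nD] are integer tooth counts."""
    x1, x2, x3, x4 = x
    return (1.0 / 6.931 - (x3 * x2) / (x1 * x4)) ** 2

def round_to_feasible(x, low=12, high=60):
    """Map a continuous candidate onto the discrete range {12, ..., 60}."""
    return [min(high, max(low, int(round(v)))) for v in x]

# Example: evaluate a randomly generated candidate
candidate = round_to_feasible([random.uniform(12, 60) for _ in range(4)])
print(candidate, gear_train_error(candidate))
```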

The best results obtained by the EOBAVO technique are compared with those of the other MAs in Table 21, showing that the EOBAVO method fared better than the others.

Table 21 EOBAVO gear train design outcomes.

Speed reducer design problems

The primary objective of this problem is to design a gearbox speed reducer with the least possible weight. The speed reducer weight is subject to eleven inequality constraints; seven of these constraints are nonlinear, while the remaining four are linear. In this engineering problem, there are seven decision variables: the face width b(s1), the module of the teeth m(s2), the number of teeth in the pinion z(s3), the length of the first shaft between bearings l1(s4), the length of the second shaft between bearings l2(s5), the diameter of the first shaft d1(s6), and the diameter of the second shaft d2(s7). Figure 26 shows a graphic illustration of this design.

Fig. 26

Speed reducer design problems73.

Mathematically, we can represent the design in Eq. (31):

\({\text{Suppose}}\;\vec{s} = \left[ {s_{1} ,s_{2} ,s_{3} ,s_{4} ,s_{5} ,s_{6} ,s_{7} } \right] = \left[ {b, m, z, l_{1} , l_{2} , d_{1} , d_{2} } \right]\)

$$\begin{array}{*{20}c} {Minimize f\left( {\vec{s}} \right) = 0.7854s_{1} s_{2}^{2} \left( {3.3333s_{3}^{2} + 14.9334s_{3} - 43.0934} \right) - 1.508s_{1} \left( {s_{6}^{2} + s_{7}^{2} } \right)} \\ { + 7.4777\left( {s_{6}^{3} + s_{7}^{3} } \right) + 0.7854\left( {s_{4} s_{6}^{2} + s_{5} s_{7}^{2} } \right)} \\ \end{array}$$
(31)

Subject to

$$\begin{aligned} & G_{1} = \frac{27}{{s_{1} s_{2}^{2} s_{3} }} - 1 \le 0;\;G_{2} = \frac{397.5}{{s_{1} s_{2}^{2} s_{3}^{2} }} - 1 \le 0;\;G_{3} = \frac{{1.93s_{4}^{3} }}{{s_{2} s_{6}^{4} s_{3} }} - 1 \le 0;\;G_{4} = \frac{{1.93s_{5}^{3} }}{{s_{2} s_{7}^{4} s_{3} }} - 1 \le 0 \\ & G_{5} = \frac{{\sqrt {\left( {\frac{{745s_{4} }}{{s_{2} s_{3} }}} \right)^{2} + 16.9 \times 10^{6} } }}{{110s_{6}^{3} }} - 1 \le 0;\quad G_{6} = \frac{{\sqrt {\left( {\frac{{745s_{5} }}{{s_{2} s_{3} }}} \right)^{2} + 157.5 \times 10^{6} } }}{{85s_{7}^{3} }} - 1 \le 0 \\ & G_{7} = \frac{{s_{2} s_{3} }}{40} - 1 \le 0;\;G_{8} = \frac{{5s_{2} }}{{s_{1} }} - 1 \le 0;\;G_{9} = \frac{{s_{1} }}{{12s_{2} }} - 1 \le 0 \\ & G_{10} = \frac{{1.5s_{6} + 1.9}}{{s_{4} }} - 1 \le 0;\;G_{11} = \frac{{1.1s_{7} + 1.9}}{{s_{5} }} - 1 \le 0 \\ \end{aligned}$$

The decision variables’ ranges are as follows:

$$\begin{aligned} & 2.6 \le s_{1} \le 3.6; \\ & 0.7 \le s_{2} \le 0.8; \\ & 17 \le s_{3} \le 28; \\ & 7.8 \le s_{4} \le 8.3; \\ & 7.8 \le s_{5} \le 8.3; \\ & 2.9 \le s_{6} \le 3.9; \\ & 5 \le s_{7} \le 5.5 \\ \end{aligned}$$

The suggested EOBAVO approach outperformed the selected MAs in determining the ideal cost for the speed reducer design problem, according to the simulation results shown in Table 22.

Table 22 EOBAVO’s speed reducer results.

The Three-bar truss design problems

The objective of this engineering design is to ascertain the ideal values of two variables, A1 and A2, that minimize the truss’s weight, subject to three optimization constraints, namely deflection, stress, and buckling (a third cross-sectional area, A3, is set equal to A1). Figure 27 shows the structural outline of the three-bar truss design.

Fig. 27

Three bar truss design problems.

Mathematically, the design’s formula is shown in Eq. (32) as follows:

Consider \(\vec{z} = \left[ {z_{1} z_{2} } \right] = \left[ {A_{1} A_{2} } \right]\)

$$\begin{array}{*{20}c} {Minimize f\left( {\vec{z}} \right) = \left( {2\sqrt 2 z_{1} + z_{2} } \right) \times l} \\ \end{array}$$
(32)

Subject to

$$g_{1} \left( {\vec{z}} \right) = \frac{{\sqrt 2 z_{1} + z_{2} }}{{\sqrt 2 z_{1}^{2} + 2z_{1} z_{2} }}P - \sigma \le 0;\quad g_{2} \left( {\vec{z}} \right) = \frac{{ z_{2} }}{{\sqrt 2 z_{1}^{2} + 2z_{1} z_{2} }}P - \sigma \le 0;$$
$$g_{3} \left( {\vec{z}} \right) = \frac{1}{{\sqrt 2 z_{2} + z_{1} }}P - \sigma \le 0;\quad l = 100\;{\text{cm}},\;P = 2\;\frac{kN}{{cm^{2} }},\;\sigma = 2\;\frac{kN}{{cm^{2} }};$$

The decision variables’ ranges are as follows: \(0 \le z_{1} , z_{2} \le 1\).

According to the results shown in Table 23, the recommended EOBAVO technique performs better than other MAs in determining the least cost design value.

Table 23 Results of the three-bar truss problem in EOBAVO.

Statistical analysis for engineering problems

Statistical analysis is crucial in optimization engineering for objectively evaluating algorithm performance, comparing solutions, and ensuring that results are reliable and significant across multiple runs. It also supports parameter tuning, convergence assessment, and robustness evaluation, enhancing the validity and applicability of optimization solutions75. Table 24 highlights the comparative performance of EOBAVO, its base version (AVO), and other state-of-the-art MAs across six standard engineering design problems, namely Pressure Vessel Design (PVD), Welded Beam Design (WBD), Tension/Compression Spring Design (TCSD), Gear Train Design (GTD), Speed Reducer Design (SRD), and Three-Bar Truss Design (TBTD). In general, EOBAVO achieved the best average solution in four out of six design problems and ties for the best in one, demonstrating its superior global search capability. EOBAVO consistently yields the lowest standard deviation in five out of six problems, reflecting remarkable robustness and stability across independent runs. The only exception is the Tension/Compression Spring Design (TCSD) problem, where the Whale Optimization Algorithm (WOA) shows superior consistency despite matching EOBAVO’s average solution. Overall, these results affirm the strength of the opposition-based learning enhancements in improving the exploration–exploitation balance of AVO, enabling it to outperform both its baseline and competing metaheuristic algorithms in complex constrained optimization contexts.

Table 24 Comparative performance of EOBAVO and optimizers.

Figure 28 presents the convergence curves of six algorithms (PSO, SSA, SCA, WOA, AVO, and EOBAVO) across six engineering design problems (PVD, WBD, TCSD, GTD, SRD, and TBTD). EOBAVO consistently achieves superior fitness values across all test problems, converging more rapidly and attaining lower objective function values than the competing algorithms. Notably, in PVD and TBTD, EOBAVO demonstrates faster convergence toward the global optimum while maintaining stability, outperforming AVO and the other baseline algorithms. In contrast, SSA exhibits poor convergence across most problems and yields suboptimal fitness values, particularly in PVD and TCSD. PSO and SCA display moderate convergence but plateau prematurely without significant improvement over iterations. WOA shows competitive performance in some cases but fails to match EOBAVO’s convergence precision. Overall, the results validate that integrating opposition-based learning into AVO (i.e., EOBAVO) enhances both convergence speed and solution accuracy, confirming its robustness and effectiveness in solving these benchmark optimization problems.

Fig. 28

Convergence curves of EOBAVO and other optimizers on engineering design problems.

Utilizing the EOBAVO algorithm to address the issue of wind turbine installation

Wind energy is the electrical energy generated when the kinetic energy of the wind is converted into mechanical energy by windmills or turbines. It is one of the most significant renewable energy sources because of its widespread availability and abundance. Effective utilization of this energy has the potential to produce a remarkable amount of electricity. Wind energy has recently become more popular owing to the rise in demand for electricity. Strategically positioning the wind turbines in a wind farm can increase the total electricity production. The wake effect may cause most wind farms to produce less electricity, and haphazard placement of the turbines within a wind farm can prevent turbines from reaching their full capacity. Effective management of the wake effect, combined with strategic positioning of the wind turbines, can increase the energy yield of wind farms. Optimizing the output of these facilities is vital for lowering energy expenditures. In this section, the recommended EOBAVO technique is employed to ascertain the most advantageous sites for wind turbines, ensuring that the highest power output is achieved at the lowest cost per kilowatt.

The research encompasses two separate case studies: the first examines Constant Wind Speed (CWS) in conjunction with Variable Wind Direction (VWD), while the second investigates Variable Wind Speed (VWS) paired with Variable Wind Direction (VWD). The statistical results are compared with those of LSHADE32, GWO8, GA76, GA77, RSA78, SBO79, and BPSO TVAC80.

The wind speed at a turbine in a wind farm is affected by the nearby turbines. A wake forms downstream of a wind turbine when it extracts power from the wind. If a turbine operates within the wake of a nearby turbine, its power output decreases compared with a turbine running in free wind flow. A model developed by N.O. Jensen, known as the wake effect model, was used in the majority of wind turbine layout optimization studies44,81. This model is shown in Fig. 29.

Fig. 29

Jensen Wake effect model (Schematic).

At a downstream distance of x from the wind turbine, the wind speed v in the wake is represented using Eqs. (33)–(36) as follows:

$$\begin{array}{*{20}c} {v = v_{0} \left[ {1 - \frac{2a}{{\left( {1 + \mu \left( {\frac{x}{{r_{dr} }}} \right)} \right)^{2} }}} \right]} \\ \end{array}$$
(33)
$$\begin{array}{*{20}c} {r_{dr} = r_{r} \sqrt {\frac{1 - a}{{1 - 2a}}} } \\ \end{array}$$
(34)
$$\begin{array}{*{20}c} {C_{t} = 4a\left( {1 - a} \right)} \\ \end{array}$$
(35)
$$\begin{array}{*{20}c} {\mu = \frac{0.5}{{ln\left( {\frac{h}{{h_{0} }}} \right)}}} \\ \end{array}$$
(36)

where μ is the entrainment constant, v0 is the free-stream wind velocity, and a is the axial induction factor. The radius of the downstream rotor is represented by rdr, the wind turbine’s thrust coefficient by Ct, the rotor radius by rr, the hub height by h, and the surface roughness by h0.
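A direct transcription of Eqs. (33)–(36) is given below; the default rotor radius, hub height, and surface roughness are placeholders, and the values actually used in the study are those listed in Table 25.

```python
import math

def wake_velocity(v0, x, a=1.0 / 3.0, rr=20.0, h=60.0, h0=0.3):
    """Wind speed in the wake at downstream distance x (Eqs. 33-36)."""
    mu = 0.5 / math.log(h / h0)                           # entrainment constant, Eq. (36)
    rdr = rr * math.sqrt((1.0 - a) / (1.0 - 2.0 * a))     # downstream rotor radius, Eq. (34)
    return v0 * (1.0 - 2.0 * a / (1.0 + mu * x / rdr) ** 2)   # Eq. (33)
```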

Using Eq. (37), at any distance x, the downstream wake radius (rdw) is estimated.

$$\begin{array}{*{20}c} {r_{dw} = R + \mu x} \\ \end{array}$$
(37)

where the wind turbine radius is indicated by R.

When a turbine lies within the wakes of multiple upstream turbines, the kinetic energy deficit of the mixed wake is the sum of the individual kinetic energy deficits. Thus, the downstream velocity at a turbine affected by N wakes is calculated using Eq. (38).

$$\begin{array}{*{20}c} {\left( {1 - \frac{v}{{v_{0} }}} \right)^{2} = \mathop \sum \limits_{i = 1}^{N} \left( {1 - \frac{{v_{i} }}{{v_{0} }}} \right)^{2} } \\ \end{array}$$
(38)

The total power produced by all of the wind turbines is determined by Eq. (39):

$$\begin{array}{*{20}c} {P_{wt} = \mathop \sum \limits_{i = 1}^{N} 0.3 \times v_{i}^{3} } \\ \end{array}$$
(39)

The wind farm’s wind turbines serve as the basis for the cost model created by76. According to this model expression, establishing a single wind turbine cost is 1, whereas adding many wind turbines can save costs by one-third. The remaining expenses require two thirds. The wind farm costs are computed using the expression shown in Eq. (40)

$$\begin{array}{*{20}c} {Cost = N\left[ {\frac{2}{3} + \frac{1}{3}e^{{ - 0.00174N^{2} }} } \right]} \\ \end{array}$$
(40)

The objective of the cost function is to reduce the investment cost associated with each unit of energy generated to the minimum, while simultaneously enhancing the power output of a wind farm. The objective function is then shown in Eq. (41) as follows:

$$\begin{array}{*{20}c} {Objective\;function = \frac{Cost}{{P_{wt} }}} \\ \end{array}$$
(41)

To find the optimal positions of the wind turbines in the field, the 2000 m × 2000 m farm area is partitioned into 10 × 10 square grid cells. This arrangement permits the placement of up to 100 turbines in the wind farm. Each grid cell of the wind farm can either hold a wind turbine (1) or remain empty (0). As shown in Fig. 30, the width of a cell is five rotor diameters, i.e., 200 m for a rotor diameter of 40 m.

Fig. 30

Wind farm area subdivision.
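Putting Eqs. (33)–(41) together, the sketch below evaluates one candidate layout, encoded as a 100-bit vector over the 10 × 10 grid, for a single wind speed and direction. The lateral wake check uses the wake radius of Eq. (37) with the downstream turbine treated as a point, the parameter defaults are placeholders for the values in Table 25, and in the full study the objective is averaged over all considered wind directions (and speeds).

```python
import numpy as np

CELL = 200.0   # cell width: five rotor diameters (rotor diameter 40 m)
GRID = 10      # 10 x 10 grid

def layout_objective(bits, v0=12.0, wind_dir_deg=0.0,
                     a=1.0 / 3.0, rr=20.0, h=60.0, h0=0.3):
    """Cost per unit power (Eq. 41) of one layout, for one wind speed and direction."""
    idx = np.flatnonzero(np.asarray(bits))          # occupied grid cells
    if idx.size == 0:
        return np.inf
    # turbine coordinates at the cell centres
    xy = np.stack([(idx % GRID) * CELL + CELL / 2.0,
                   (idx // GRID) * CELL + CELL / 2.0], axis=1)
    # rotate coordinates so the considered wind direction blows along the +x axis
    th = np.deg2rad(wind_dir_deg)
    rot = np.array([[np.cos(th), np.sin(th)], [-np.sin(th), np.cos(th)]])
    xy = xy @ rot.T
    mu = 0.5 / np.log(h / h0)                                    # Eq. (36)
    rdr = rr * np.sqrt((1.0 - a) / (1.0 - 2.0 * a))              # Eq. (34)
    v = np.empty(idx.size)
    for i in range(idx.size):
        deficit = 0.0
        for j in range(idx.size):
            dx = xy[i, 0] - xy[j, 0]          # downstream separation
            dy = abs(xy[i, 1] - xy[j, 1])     # lateral separation
            if j != i and dx > 0.0 and dy < rr + mu * dx:        # inside wake cone, Eq. (37)
                vj = v0 * (1.0 - 2.0 * a / (1.0 + mu * dx / rdr) ** 2)   # Eq. (33)
                deficit += (1.0 - vj / v0) ** 2                  # Eq. (38)
        v[i] = v0 * (1.0 - np.sqrt(deficit))
    power = np.sum(0.3 * v ** 3)                                 # Eq. (39)
    n = idx.size
    cost = n * (2.0 / 3.0 + (1.0 / 3.0) * np.exp(-0.00174 * n ** 2))   # Eq. (40)
    return cost / power                                          # Eq. (41)
```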

Table 25 shows the characteristics of the wind turbines. The findings are evaluated against other cutting-edge algorithms, namely LSHADE32, GWO8, GA76, GA77, RSA78, SBO79, and BPSO TVAC80. The suggested EOBAVO approach is applied to both the CWS with VWD and the VWS with VWD cases. It should be mentioned that each algorithm runs for a maximum of 100 iterations with a population size of 200. Additionally, the lower and upper bounds are 0 and 1, and the problem size is 100.

Table 25 Specifications of wind turbine.

Two case studies are examined below to assess the efficacy of the recommended strategy for the optimal placement of a wind turbine:

VWD combined with CWS

In this case, the wind blows around the wind farm from directions that vary in equal intervals of 10° between 0° and 360°, at a fixed speed of 12 m/s. The statistical results for the different MAs and the suggested EOBAVO approach are given in Table 26. The table makes it clear that the EOBAVO method performs better than the other MAs. Additionally, the optimal wind turbine placement in the wind farm, as established by the EOBAVO approach, is shown in Fig. 31. The EOBAVO technique uses 40 turbines to produce 18,120 kW at an efficiency of 88.31% and a cost of 0.001734/kW.

Table 26 Results of comparing VWD with CWS.
Fig. 31

EOBAVO’s wind farm setup for a 10 × 10 square grid for CWS with VWD.

VWD combined with VWS

In this instance, the wind flows around the wind farm at three different speeds of 8, 12, and 17 m/s, with directions varying in equal intervals of 10° between 0° and 360°. The statistical results of the various MAs and the proposed EOBAVO approach are shown in Table 27. The table makes it clear that the EOBAVO approach fared better than the other metaheuristic algorithms. Additionally, the optimal wind turbine placement in the wind farm, as established by the EOBAVO approach, is shown in Fig. 32. EOBAVO uses 39 turbines to produce 30,830 kW at an efficiency of 84.49% and a cost of 0.000807/kW.

Table 27 Results of comparing VWD with VWS.
Fig. 32

EOBAVO’s wind farm setup for a 10 × 10 square grid for VWS with VWD.

The results clearly show that the EOBAVO technique performed better than the other algorithms in placing the wind turbines in both case studies. Overall, it can be concluded that the suggested EOBAVO has outperformed the other selected algorithms on a range of practical engineering design problems and applications.

Conclusion

OBL, which has gained increased attention recently, is used to improve the efficacy of metaheuristic algorithms. To achieve varying densities across all vulture species and to foster population diversity, this research set out to refine the recently developed nature-inspired algorithm known as the African Vulture Optimizer (AVO) by incorporating an opposition-based learning mechanism, resulting in the Opposition-Based African Vulture Optimizer Algorithm (OBAVO). Nevertheless, this approach elevates the computational demands necessary for discovering superior solutions and is susceptible to premature convergence. To resolve these issues while steering clear of local optima and facilitating faster convergence, a novel adaptation of opposition-based learning, called Evolved Opposition-based Learning (EOBL), has been proposed. This adaptation is incorporated into a refined version of the African Vulture Optimizer Algorithm, known as the Enhanced Opposition-based African Vulture Optimizer Algorithm (EOBAVO). The investigation’s findings demonstrate that, for a significant number of the functions studied, both the accuracy of the solutions and the convergence rates were notably superior when assessed against other optimization techniques. Evaluations on engineering problems show that the newly created EOBAVO performs better than its contemporaries, and, in contrast to other metaheuristics, its shortcomings in solving multimodal functions are negligible. While the proposed EOBAVO algorithm demonstrates strong performance across a wide range of optimization tasks, future work will focus on addressing its limitations on highly ill-conditioned and rotated problems (observed in CEC2022 functions CEC01–CEC03). Incorporating hybrid local search techniques and developing rotation-invariant opposition strategies are promising directions to further enhance EOBAVO’s adaptability and performance in complex, non-separable optimization landscapes. Future initiatives designed to minimize the frequency of function calls are projected to find utility in several domains, such as image processing, feature selection, and data mining. We also aim to extend the proposed EOBAVO framework to handle discrete and mixed-integer optimization problems, which are prevalent in engineering design, in our future work. This adaptation will involve redefining OBL mechanisms for discrete spaces, hybrid encoding strategies for mixed variables, and specialized constraint-handling techniques to maintain feasibility while preserving diversity and convergence speed.