Abstract
This paper introduces the Reindeer Cyclone Optimization Algorithm (RCOA), a novel metaheuristic optimization technique inspired by the survival behavior of reindeer, which form cyclone-like defensive formations during predator attacks and cyclonic storms. RCOA imitates the defense-centric cooperative behavior of reindeer, where individuals cluster together to withstand external threats. This behavior is analogous to the optimization process, where exploration (global search of new areas) and exploitation (local refinement, in which agents learn from neighbors through cyclonic movement) are carefully balanced. The algorithm has been extensively evaluated on 14 unimodal and multimodal benchmark functions and 4 real-world complex optimization problems. RCOA demonstrates a moderate improvement of around 5–12% over other algorithms such as PSO, DE, COA, and GSA on unimodal functions. On multimodal functions, RCOA shows more competitive performance, especially in terms of stability, with an improvement of around 10–15% in accuracy and consistency compared to WDO and PSO. The algorithm is also evaluated on the CEC’17 benchmark suite with 50 dimensions and compared against several well-established optimization algorithms, including WOA, PSO, GSA, and DE. Experimental results demonstrate that RCOA outperforms existing methods on multiple test functions by achieving superior convergence speed and solution accuracy. The Wilcoxon Signed-Rank test confirms the statistical significance of RCOA’s performance, indicating its robustness and reliability in handling diverse optimization landscapes. The findings suggest that RCOA is a competitive optimization method suitable for a wide range of real-world applications.
Introduction
Metaheuristic algorithms have become essential tools for tackling complex optimization problems, especially those that are nonlinear, multimodal, and constrained1. Their flexibility, adaptability, and ease of implementation have made them popular across various scientific and engineering fields. Nature-inspired algorithms, such as Genetic Algorithms (GA)2, Particle Swarm Optimization (PSO)3, Ant Colony Optimization (ACO)4, and more recently the Firefly Algorithm (FA)5 and the Whale Optimization Algorithm (WOA)6, have been successfully applied to a wide range of optimization challenges. However, these methods often struggle to maintain a consistent balance between exploration (the broad search of the solution space) and exploitation (the focused search around promising regions), particularly in high-dimensional or highly multimodal search spaces7. This imbalance can lead to premature convergence or inefficient search behavior.
To address these limitations, we introduce the Reindeer Cyclone Optimization Algorithm (RCOA), a novel metaheuristic technique inspired by the survival behavior of reindeer during predator attacks and cyclonic storms. In nature, reindeer herds form tight clusters in response to external threats, leveraging a cooperative defense strategy that parallels the exploration-exploitation dilemma in optimization. This behavior allows the herd to dynamically balance between evading threats (exploration) and maintaining cohesion for survival (exploitation). RCOA adopts this adaptive clustering strategy to achieve a more efficient and robust search process, enhancing both the global search capabilities and the local refinement of solutions.
The mathematical foundation of RCOA is presented in this paper, along with its implementation on 14 unimodal and multimodal benchmark functions and four real-world complex optimization problems. Rigorous comparisons with established algorithms such as PSO, WOA, Differential Evolution (DE)8, Gravitational Search Algorithm (GSA)9, and other state-of-the-art methods such as the Grey Wolf Optimizer (GWO)10 show that RCOA consistently delivers competitive performance. Specifically, RCOA demonstrates superior accuracy, convergence stability, and solution quality, particularly in complex, multimodal search spaces.
The results of this study highlight RCOA’s ability to efficiently navigate challenging optimization landscapes, offering a promising alternative for solving real-world engineering problems. By leveraging the cooperative behavior of reindeer, RCOA achieves a dynamic balance between exploration and exploitation, outperforming several traditional and contemporary metaheuristics in both precision and consistency. This positions RCOA as a robust and scalable solution for addressing a wide variety of optimization challenges.
Contributions of the paper
The novelty of RCOA lies in its biologically inspired search mechanism, which differs from existing metaheuristics in the following ways:
-
RCOA is modeled after the natural defense strategy of reindeer herds, where individuals form a swirling motion to protect weaker members, leading to a unique balance between exploration and exploitation.
-
Unlike traditional algorithms with fixed exploration-exploitation settings, RCOA adjusts its search dynamics throughout the iterations, allowing for more efficient navigation of complex landscapes.
-
Incorporating Lévy flight helps RCOA escape local optima by introducing random, large jumps, improving global search capability.
-
The algorithm simulates the swirling movement of reindeer, which refines promising solutions by systematically adjusting positions in a controlled manner.
-
Agents update their positions based on interactions with strong candidates rather than just the single best solution, fostering diverse learning and reducing stagnation.
-
Instead of blindly following the best candidate, each agent moves in proportion to its distance from high-performing solutions, ensuring a smooth balance between global and local search.
The rest of the paper is structured as follows. Section “Related work” reviews related studies. Section “Proposed algorithm: reindeer cyclone optimization algorithm (RCOA)” introduces the proposed algorithm and its inspiration. Section “Mathematical model of RCOA” presents the mathematical model of RCOA. Section “Results and discussion” presents the results and discussion. The final section concludes the main findings and suggests directions for future research.
Related work
Metaheuristic algorithms have gained widespread attention in the optimization community because of their ability to solve complex, non-linear, and multimodal problems in various domains. These algorithms are particularly advantageous because they do not rely on gradient information, making them highly effective in escaping local optima, a significant limitation in traditional optimization methods. This section discusses several well-established nature-inspired algorithms and their advancements, providing a foundation for the development of the Reindeer Cyclone Optimization Algorithm (RCOA).
The earliest and most widely used category of metaheuristic algorithms is evolution-based methods. Genetic Algorithms (GA), introduced by Holland2,11, simulate the process of natural selection. GA evolves a population of candidate solutions through selection, crossover, and mutation operators to find the optimal solution. Other evolutionary methods include Evolution Strategy (ES)12 and Genetic Programming (GP)13, which have demonstrated efficacy in various optimization problems by employing biological evolution principles.
Recently, Differential Evolution (DE)14 has gained significant attention due to its simplicity and efficiency. DE is known for its ability to handle continuous optimization problems by relying on differential operators to enhance population diversity. Despite their successes, evolution-based methods often suffer from slow convergence and are prone to premature convergence in complex, multimodal search spaces.
Physics-based algorithms mimic natural physical phenomena and processes to drive the optimization search. Simulated Annealing (SA)15 was one of the earliest algorithms in this category, inspired by the annealing process in metallurgy. SA is effective in avoiding local optima by probabilistically accepting worse solutions during the search, which improves the exploration phase.
Another prominent physics-based approach is the Gravitational Search Algorithm (GSA)16, where candidate solutions are considered as masses attracting each other based on Newtonian gravity. Over time, the search converges toward more massive objects, corresponding to better solutions. Similarly, the Charged System Search (CSS)17 and Central Force Optimization (CFO)18 rely on other physical forces to guide optimization.
While these algorithms have provided robust solutions in various applications, they often struggle with balancing exploration and exploitation, especially in high-dimensional search spaces.
Swarm-based algorithms, inspired by the collective intelligence of animals, have become some of the most popular and successful metaheuristics for optimization. Particle Swarm Optimization (PSO)19, proposed by Kennedy and Eberhart3, mimics the social behavior of bird flocking or fish schooling. PSO excels in exploration by maintaining individual and global best solutions, although it can sometimes converge prematurely in multimodal problems.
Ant Colony Optimization (ACO)20, introduced by Dorigo et al., mimics the foraging behavior of ants by using a pheromone-based communication system. ACO has been widely applied to combinatorial optimization problems, especially in routing and scheduling tasks.
Recent swarm-based methods have shown improvements in performance. For example, the Whale Optimization Algorithm (WOA)21 simulates the hunting behavior of humpback whales and has been particularly successful in solving multimodal problems. Likewise, the Grey Wolf Optimizer (GWO)10 imitates the hierarchical leadership and hunting strategies of wolves. Firefly Algorithm (FA)22, inspired by the bioluminescent communication of fireflies, has also demonstrated strong convergence properties in multimodal optimization.
Several novel algorithms have emerged in recent years, pushing the boundaries of metaheuristic performance. Teaching-Learning-Based Optimization (TLBO) and its variants23,24 draw on the concept of knowledge transfer between teachers and students to optimize complex problems. Similarly, the Imperialist Competitive Algorithm (ICA)25 mimics socio-political imperialism to find optimal solutions. Both of these methods have shown competitive performance on benchmark problems, particularly in their ability to balance exploration and exploitation. Several other recent algorithms in the literature are also useful for optimization problems26,27,28,29,30.
Another noteworthy recent method is the Moth-Flame Optimization (MFO)31, inspired by the navigation method used by moths called transverse orientation. The algorithm has shown superior performance in high-dimensional problems. Similarly, the Salp Swarm Algorithm (SSA)32 and Seagull Optimization Algorithm (SOA)33 introduce novel swarm behaviors that have demonstrated improvements in convergence speed and accuracy.
The literature also includes several machine learning and deep learning methods for optimization, in addition to metaheuristic approaches34. Nature-inspired algorithms have proven remarkably efficient at solving complex optimization problems, particularly in dynamic and uncertain environments. Recent studies have explored various bio-inspired strategies, such as gecko-inspired locomotion for robotic coordination35, cockroach bio-robot navigation for autonomous path planning36, and multimaterial soft robotic hand control37, showcasing the effectiveness of nature-inspired solutions in engineering applications. Similarly, UAV communications and pursuit-evasion models utilizing swarm intelligence and deep reinforcement learning (DRL)38,39 highlight the significance of optimizing collective movements for enhanced performance. In the realm of network optimization, heuristic-based techniques have been used for identifying influential nodes in social networks40, mining spatial co-location patterns41, and designing hybrid optimization-based adversarial attack strategies42, reinforcing the role of intelligent search mechanisms in high-dimensional spaces.
Despite advances in metaheuristics43, maintaining a robust balance between exploration and exploitation remains a central challenge, particularly in highly multimodal and complex search spaces. Most of the discussed methods exhibit weaknesses in convergence stability, often leading to suboptimal solutions. Reindeer Cyclone Optimization Algorithm (RCOA) is introduced to address these limitations by leveraging the adaptive survival behavior of reindeer herds during cyclonic storms, which naturally balance global exploration and local exploitation. The cooperative cyclone formation in reindeer herds not only enables effective defense mechanisms but also mirrors the optimization process of dynamically searching through and refining promising regions in the search space.
In comparison to the aforementioned algorithms, RCOA offers a more dynamic and adaptive balance, ensuring both precision and consistency across various optimization challenges. By combining the strengths of swarm-based search and dynamic adaptability, RCOA is poised to outperform many existing algorithms in both accuracy and robustness.
This study positions RCOA as a competitive and promising addition to the growing family of nature-inspired metaheuristic algorithms, with particular strength in solving real-world and multimodal optimization problems.
Proposed algorithm: reindeer cyclone optimization algorithm (RCOA)
Inspiration
The Reindeer Cyclone Optimization Algorithm (RCOA) is inspired by the unique defense mechanism of reindeer herds during a threat. In nature, reindeer form a cyclone-like structure where the younger and older members of the herd are kept in the center for protection, while the stronger reindeer form an outer circle to defend against predators, as shown in Fig. 1. This behavior is analogous to the optimization process, where exploration (global search) and exploitation (local refinement) are carefully balanced. The center of the herd corresponds to the optimum region, and the outer reindeer represent the search agents dynamically exploring and converging towards this optimum. This collective motion enhances global search (exploration), while their synchronized movements during foraging improve local search (exploitation). The algorithm models reindeer survival strategies such as exploration, exploitation, and random walk behaviors based on Lévy flight.
Mathematical model of RCOA
The optimization process involves the following steps:
Position update equation
This step reflects the initial, scattered positions of reindeer before a threat arises. The random positions ensure that the algorithm begins with agents distributed across the search space, allowing a thorough exploration of the solution space. This ensures that the algorithm starts with a wide diversity of possible solutions (like reindeer spread out in an open field), which is crucial for avoiding local optima. The position of each agent (reindeer) is updated based on a combination of exploration, exploitation, and occasionally a random walk. Let \(X_i^{(t)}\) be the position of the \(i\)-th agent at iteration \(t\) in a \(D\)-dimensional search space. The new position at iteration \(t+1\) is updated as:
where:
-
\(X_i^{(t)}\)—Current position of the \(i\)-th agent.
-
\(X^*\)—Best-known position in the current iteration.
-
\(X_{\text {neighbor}}\)—Position of a randomly selected neighboring agent.
-
\(\alpha\) and \(\beta\)—Learning rates for exploration and exploitation.
-
\(r_1, r_2 \sim U(0,1)\)—Random vectors.
-
\(D = \Vert X_i - X^*\Vert\)—Euclidean distance to the best agent.
-
\(b\)—Spiral constant; \(t \sim U(0,1)\)—Random parameter.
-
\(\gamma\)—Controls the influence of spiral movement.
-
\({\mathcal {L}}(\beta )\)—Lévy flight random step.
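Since the update equation itself is not reproduced in this text, the sketch below shows one plausible reading of how the terms listed above could combine, assuming they are summed; the function name, the additive structure, and the parameter values are illustrative assumptions, not the paper's exact formula.

```python
import numpy as np

def update_position(x, x_best, x_neighbor, alpha, beta, gamma, b, rng):
    """One candidate reading of the RCOA position update (assumed additive)."""
    dim = x.shape[0]
    r1 = rng.random(dim)                      # r1 ~ U(0,1)
    r2 = rng.random(dim)                      # r2 ~ U(0,1)
    explore = alpha * r1 * (x_best - x)       # pull toward the best-known position
    exploit = beta * r2 * (x_neighbor - x)    # learn from a random neighbor
    d = np.linalg.norm(x - x_best)            # D: Euclidean distance to best
    t = rng.random()                          # t ~ U(0,1)
    spiral = gamma * d * np.exp(b * t) * np.cos(2 * np.pi * t)  # swirling motion
    return x + explore + exploit + spiral

rng = np.random.default_rng(0)
x_new = update_position(np.zeros(5), np.ones(5), np.full(5, 0.5),
                        alpha=0.9, beta=0.1, gamma=0.5, b=1.0, rng=rng)
```

With a small probability, the Lévy-flight random walk described later would replace or augment this deterministic-plus-stochastic move.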
Exploration term
The reindeer in the outer ring constantly move and adjust their positions to protect the herd. In the algorithm, exploration represents the broader search for new potential solutions, mimicking the outer reindeer’s task of scanning the environment for threats. The exploration equation allows agents to explore new areas by pulling the search agents toward the current best solution. As the iteration progresses, the learning rate (\(\alpha\)) decreases, making exploration more focused on the most promising areas of the search space.
In the exploration phase, each agent moves towards the best-known solution with a learning rate that decreases over time. The exploration term is defined as:
where \(\alpha\) is a constant influencing the exploration step.
The exploration learning rate \(\alpha\) is given by:
Where:
-
\(\text {lr}_{\min }\) and \(\text {lr}_{\max }\) are the minimum and maximum learning rates.
-
\(k\) is the current iteration.
-
\(K_{\text {max}}\) is the maximum number of iterations.
-
\(r_1\) is a random vector of size D uniformly distributed in [0, 1].
This equation ensures that the exploration phase dominates at the beginning of the search and gradually decreases over time.
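As a sketch, a schedule satisfying this description (dominant early, decaying to \(\text {lr}_{\min }\) at the final iteration) could look like the following; the linear form and default rates are assumptions, since the paper's exact equation is not reproduced here.

```python
def exploration_rate(k, k_max, lr_min=0.1, lr_max=0.9):
    # Decays linearly from lr_max at k = 0 to lr_min at k = k_max.
    return lr_max - (lr_max - lr_min) * k / k_max
```

Multiplying this rate by \(r_1 (X^* - X_i^{(t)})\) then yields the exploration step for each agent.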
Exploitation term (neighbor-based)
This is analogous to the inner ring of reindeer clustering more tightly to defend vulnerable members. Here, exploitation is the process of refining the search around promising solutions, focusing on local optimization by intensifying the search within a smaller region. The exploitation strategy is based on neighboring agents. It encourages the search agents to “copy” or learn from their neighbors, leading to convergence around good solutions. As iterations progress, \(\beta\) increases, indicating a shift from broad search to intensified local refinement.
In the exploitation phase, agents adjust their positions based on the location of a randomly selected neighboring agent. The exploitation term is defined as:
The exploitation learning rate \(\beta\) increases over time and is defined as:
-
\(r_2\) is a random vector of size D uniformly distributed in [0, 1].
-
\({X}_{\text {neighbor}}\) is the position of a randomly selected neighboring agent.
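Mirroring the exploration schedule, the growing rate and the neighbor-based step can be sketched as follows; the linear form and default rates are again illustrative assumptions.

```python
import numpy as np

def exploitation_rate(k, k_max, lr_min=0.1, lr_max=0.9):
    # Grows linearly from lr_min at k = 0 to lr_max at k = k_max.
    return lr_min + (lr_max - lr_min) * k / k_max

def exploitation_term(x, x_neighbor, beta, rng):
    r2 = rng.random(x.shape[0])            # r2 ~ U(0,1)
    return beta * r2 * (x_neighbor - x)    # pull toward the selected neighbor

step = exploitation_term(np.zeros(3), np.ones(3), 0.5,
                         np.random.default_rng(0))
```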
Lévy flight (random walk)
To balance exploration and exploitation and avoid local optima, RCOA introduces a random walk mechanism based on Lévy flight. With a small probability \(p = 0.1\), the agent updates its position as:
where Lévy flight is modeled as:
with:
-
\(\beta = 1.3\) is the Lévy flight exponent (this \(\beta\) is specific to the Lévy step and is distinct from the exploitation learning rate \(\beta\)).
-
\(u \sim N(0, \sigma _u^2)\), where \(\sigma _u\) is computed as:
$$\begin{aligned} \sigma _u = \left( \frac{\Gamma (1 + \beta ) \cdot \sin \left( \frac{\pi \beta }{2}\right) }{\Gamma \left( \frac{1 + \beta }{2}\right) \cdot \beta \cdot 2^{\frac{\beta - 1}{2}}}\right) ^{\frac{1}{\beta }}. \end{aligned}$$
(6)
-
\(v \sim N(0, 1)\) is a normally distributed random variable.
This random walk adds randomness to the search process, encouraging further exploration in cases where agents may be trapped in local optima.
In nature, there are always random elements or disturbances that can push a reindeer off course. Similarly, in the algorithm, a random walk (via Lévy flight) introduces sudden, unpredictable movements to ensure that the search agents do not get stuck in local optima. Lévy flight adds a long-tailed distribution for random steps, which is useful for escaping local minima and encouraging exploration of distant regions of the search space. The 10% probability of a random walk ensures that the algorithm occasionally jumps to new regions, preventing premature convergence.
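The \(\sigma_u\) expression in Eq. (6) is exactly the scale factor of Mantegna's algorithm for generating Lévy-stable steps. Assuming the standard construction \({\mathcal {L}} = u / |v|^{1/\beta }\) that is usually paired with this \(\sigma_u\), the step can be computed as:

```python
import math
import numpy as np

def levy_step(dim, beta=1.3, rng=None):
    """Heavy-tailed Lévy flight step via Mantegna's method."""
    rng = rng or np.random.default_rng()
    sigma_u = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
               / (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))
               ) ** (1 / beta)          # scale factor from Eq. (6)
    u = rng.normal(0.0, sigma_u, dim)   # u ~ N(0, sigma_u^2)
    v = rng.normal(0.0, 1.0, dim)       # v ~ N(0, 1)
    return u / np.abs(v) ** (1 / beta)  # occasional very large jumps

steps = levy_step(1000, rng=np.random.default_rng(1))
```

Most steps are small, but the heavy tail occasionally produces a very long jump, which is what lets trapped agents escape local optima.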
Boundary constraints
To ensure the agents remain within the problem’s search space, their positions are clipped to the boundaries:
Where \({\textbf{X}}_{\min }\) and \({\textbf{X}}_{\max }\) are the lower and upper bounds of the search space, respectively.
In nature, reindeer have a limited range within which they can move. Similarly, search agents are constrained within defined boundaries of the search space. The clipping ensures that the search agents do not go outside the permissible bounds of the solution space, maintaining the feasibility of the solutions.
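In NumPy terms, the clipping is a one-liner; the bounds below are illustrative.

```python
import numpy as np

x_min, x_max = np.array([-5.0, -5.0]), np.array([5.0, 5.0])  # illustrative box
x = np.array([-7.2, 3.1])                # agent that wandered out of range
x_clipped = np.clip(x, x_min, x_max)     # component-wise clamp to the box
```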
Objective function
The objective function f(x) evaluates the quality of each solution. The best solution found over all iterations is stored as:
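Putting the pieces together, a minimal end-to-end sketch of an RCOA-style loop on the Sphere function is shown below; the simplified per-agent update, the greedy acceptance rule, and all parameter values are assumptions for illustration, not the paper's exact procedure.

```python
import numpy as np

def sphere(x):
    return float(np.sum(x ** 2))

def rcoa_minimize(f, dim=5, n_agents=20, k_max=200, bounds=(-5.0, 5.0), seed=0):
    """RCOA-style loop sketch: simplified stand-in for the full update rule."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    X = rng.uniform(lo, hi, (n_agents, dim))       # scattered initial herd
    fit = np.array([f(x) for x in X])
    best, best_f = X[fit.argmin()].copy(), fit.min()
    for k in range(k_max):
        alpha = 0.9 - 0.8 * k / k_max              # exploration decays
        beta = 0.1 + 0.8 * k / k_max               # exploitation grows
        for i in range(n_agents):
            j = rng.integers(n_agents)             # random neighbor
            x_new = (X[i]
                     + alpha * rng.random(dim) * (best - X[i])
                     + beta * rng.random(dim) * (X[j] - X[i]))
            if rng.random() < 0.1:                 # occasional random walk
                x_new += 0.01 * rng.standard_cauchy(dim)  # heavy-tailed jump
            x_new = np.clip(x_new, lo, hi)         # boundary constraint
            f_new = f(x_new)
            if f_new < fit[i]:                     # greedy acceptance (assumed)
                X[i], fit[i] = x_new, f_new
            if f_new < best_f:                     # keep the best found so far
                best, best_f = x_new.copy(), f_new
    return best, best_f

best, best_f = rcoa_minimize(sphere)
```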
Results and discussion
The performance of the Reindeer Cyclone Optimization Algorithm (RCOA) is compared with WOA, WDO, PSO, DE, and GSA on three unimodal benchmark functions (Sphere, Schwefel 2.22, and Rosenbrock) as shown in Table 1, six multimodal benchmark functions as shown in Table 2, and five fixed-dimension multimodal benchmark functions as shown in Table 3. Each algorithm was run 30 times for each benchmark function, and the mean and standard deviation were recorded. For all the algorithms, a population size of 30 and a maximum of 500 iterations were used. Note that \(V_{no}\) indicates the number of design variables.
The average results (ave) and standard deviations (std) of each algorithm’s performance for unimodal benchmark functions are summarized in Table 4. For the Sphere function, RCOA achieved a competitive performance with an average value of 1.66e+02 and a standard deviation of 7.87e+02. Although WOA shows lower average values in this case, RCOA outperforms other algorithms like PSO, DE, and GSA, which have much higher average values (e.g., PSO at 1.44e+04 and DE at 4.20e+03). RCOA provides a balance between exploration and exploitation, avoiding premature convergence. Its relatively low average error reflects this behavior, suggesting that it finds near-optimal solutions. However, it can improve further in stabilizing the results as indicated by the standard deviation.
The results demonstrate that RCOA consistently achieves competitive performance across the three unimodal benchmark functions. RCOA’s dynamic learning rates and neighbor-based exploitation strategies allow it to effectively navigate the search space and avoid getting trapped in local optima. This behavior is particularly important for high-dimensional and multimodal problems. RCOA performs comparably to WOA on simpler functions like F1 and F2 and shows promise even on more complex problems like Rosenbrock (F3), where PSO and DE struggle significantly. Although RCOA delivers good average performance, its standard deviations indicate room for improvement in terms of convergence stability, particularly on harder functions such as F3.
RCOA achieves satisfactory results and demonstrates competitiveness with state-of-the-art algorithms, particularly in avoiding premature convergence and balancing exploration and exploitation. Further tuning of the algorithm, particularly in reducing variability across runs, can make it an even more robust optimization tool.
The comparison of various algorithms, including RCOA, WOA, WDO, PSO, DE, and GSA on multimodal benchmark functions (F4–F9) as shown in Table 5 reveals that RCOA is highly competitive across these challenging optimization problems. Specifically, RCOA consistently provides satisfactory results, demonstrating low average error and standard deviation values in functions like Rastrigin (F5) and Ackley (F6), where precision is crucial. Its performance on Schwefel (F4) and Griewank (F7) functions further solidifies its effectiveness, where it shows comparable results to other leading algorithms like WOA and GSA. Particularly in the penalized functions (F8, F9), RCOA’s robust performance showcases its ability to handle complex search spaces and constraints, positioning it as a versatile and reliable optimization technique in diverse scenarios. Overall, RCOA achieves an excellent balance of exploration and exploitation, proving its competitiveness against state-of-the-art algorithms.
Table 6 presents a comprehensive evaluation of fixed-dimension multimodal benchmark functions, comparing the performance of several optimization algorithms, including RCOA, WOA, WDO, PSO, DE, and GSA. These benchmark functions, like Shekel’s Foxholes, Kowalik, Six-Hump Camelback, Branin RCOS, and Goldstein-Price, are well-known for their complexity and multimodal nature, challenging the ability of algorithms to find global minima as shown in Table 3.
RCOA achieves competitive results across all functions, indicating its capability to solve complex optimization problems effectively. For example, on the Shekel’s Foxholes function (F10), RCOA achieves an average of 2.50e−01 with a standard deviation of 8.02e−01, which, although not as precise as some other algorithms like WOA or WDO, shows satisfactory performance given the complexity of the problem. Similarly, for Kowalik (F11), RCOA closely approximates the global minimum, with an average of 7.37e−03 and low variability. On the Six-Hump Camelback function (F12), RCOA performs exceptionally well, matching the optimal value of -1.03e+00 with almost negligible error. Across the board, RCOA is competitive in terms of both accuracy (ave) and stability (std), showcasing its robustness against complex multimodal landscapes.
CEC 2017 test suite analysis
The comparative evaluation of the Reindeer Cyclone Optimization Algorithm (RCOA) against prominent metaheuristic algorithms, including Differential Evolution (DE), Particle Swarm Optimization (PSO), Whale Optimization Algorithm (WOA), Grey Wolf Optimizer (GWO), Gravitational Search Algorithm (GSA), Water-Drop Optimization (WDO), Firefly Optimization (FFO), and Coati Optimization Algorithm (COA), demonstrates the robustness and efficiency of RCOA across a wide spectrum of the CEC2017 benchmark functions. The analysis is based on the mean fitness values and standard deviations across 29 test functions, enabling a deep understanding of solution quality and algorithmic stability.
RCOA achieves the best results (lowest mean fitness) in several functions, including \(f_1, f_4, f_5, f_6, f_8, f_{12}, f_{14}, f_{16}, f_{18}, f_{20}, f_{22}, f_{24}, f_{28}\). This establishes its superiority in solving unimodal, multimodal, and hybrid optimization problems as shown in Table 7. The consistently low mean values across these functions indicate RCOA’s strong exploitation ability in reaching optimal solutions. Moreover, its standard deviation remains significantly low for most of these functions, showcasing its stability and repeatability, which are essential for real-world optimization problems where consistency is critical.
The convergence performance of RCOA in comparison to other algorithms is illustrated in Fig. 2 for the CEC’17 test suite with 50 dimensions. The results highlight that RCOA consistently demonstrates a faster convergence rate than its counterparts in \(f_3, f_6, f_8, f_{11}, f_{12}, f_{13}, f_{14}, f_{17}, f_{22}, f_{27},\) and \(f_{28}\). This advantage can be attributed to its effective balance between exploration and exploitation, allowing it to efficiently navigate complex search landscapes. In particular, for several functions, RCOA not only converges more quickly but also achieves superior optimal solutions compared to the eight competing algorithms. These findings emphasize the robustness and efficiency of RCOA in solving diverse optimization problems.
Statistical results
Table 8 presents the Wilcoxon Signed-Rank test results, highlighting the statistical significance of RCOA’s performance compared to other algorithms. The results indicate that RCOA outperforms traditional optimization methods such as GWO, DE, GSA, and COA across most test functions, demonstrating its robustness. WOA emerges as a strong competitor, particularly in functions \(f_2\), \(f_7\), \(f_9\), \(f_{11}\), and \(f_{17}\), where performance differences are marginal. While PSO and WDO show competitiveness in select cases, RCOA maintains its superiority with lower p-values, confirming statistical significance. Despite WOA and PSO exhibiting comparable performance on certain functions, RCOA’s balanced exploration-exploitation tradeoff ensures robust search capability, and against WDO and FFO it maintains a significant advantage except for a few functions where results are comparable. These findings establish RCOA as one of the most reliable and efficient metaheuristic optimizers for complex optimization problems.
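For reference, a paired Wilcoxon Signed-Rank test of this kind can be reproduced with SciPy; the per-function values below are illustrative placeholders, not the paper's data.

```python
from scipy.stats import wilcoxon

# Illustrative paired mean-fitness values (NOT the paper's results): one
# entry per benchmark function for RCOA and one competitor.
rcoa  = [1.2, 0.8, 3.4, 2.1, 0.5, 1.9, 2.7, 0.9]
other = [1.9, 1.1, 3.9, 2.8, 0.9, 2.5, 3.1, 1.4]

stat, p = wilcoxon(rcoa, other)  # paired, two-sided by default
# A p-value below 0.05 would indicate a statistically significant difference.
```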
Classical engineering problems
In this section, RCOA is tested on four constrained engineering design problems: the tension/compression spring, welded beam, pressure vessel, and speed reducer designs.
Tension/compression spring design problem
The tension/compression spring design is a benchmark problem where the objective is to minimize the spring weight while satisfying certain constraints. The objective function is:
Subject to the constraints:
Where:
-
\(x_1\) = wire diameter, d,
-
\(x_2\) = mean coil diameter, D,
-
\(x_3\) = number of active coils, N.
Bounds:
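The objective, constraints, and bounds above are rendered as images in the original; the code below uses the standard formulation of this benchmark from the literature (bounds typically \(0.05 \le d \le 2\), \(0.25 \le D \le 1.3\), \(2 \le N \le 15\)), which is assumed to match the paper's.

```python
import numpy as np

def spring_weight(x):
    d, D, N = x                      # wire diameter, coil diameter, active coils
    return (N + 2) * D * d ** 2      # weight to be minimized

def spring_constraints(x):
    d, D, N = x
    g1 = 1 - (D ** 3 * N) / (71785 * d ** 4)                     # deflection
    g2 = ((4 * D ** 2 - d * D) / (12566 * (D * d ** 3 - d ** 4))
          + 1 / (5108 * d ** 2) - 1)                             # shear stress
    g3 = 1 - 140.45 * d / (D ** 2 * N)                           # surge frequency
    g4 = (D + d) / 1.5 - 1                                       # outer diameter
    return np.array([g1, g2, g3, g4])  # feasible when all entries <= 0

x_rcoa = np.array([0.050000, 0.317425, 14.027810])  # RCOA optimum, Table 10
w = spring_weight(x_rcoa)  # close to the reported optimum weight of 0.012756
```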
Table 9 presents a statistical comparison between RCOA, WOA, PSO, and GSA based on three metrics: the average optimized result, the standard deviation, and the number of function evaluations (i.e., how many times the function was computed).
RCOA attains a low average result (0.012734) and a very small standard deviation (0.000051), indicating that it provides consistent, near-optimal results with minimal variation. WOA has a slightly lower average (0.012711), but its standard deviation (0.000312) is higher than RCOA’s, indicating more variability in its performance. PSO shows a higher average (0.013193) but maintains a low standard deviation, showing reasonable consistency. GSA has the highest average (0.013628) and the largest standard deviation (0.003691), showing greater variability and less reliable results. In terms of function evaluations, WOA requires the fewest evaluations (4810), indicating efficiency, while RCOA is slightly higher (5110).
Table 10 focuses on the optimum variables and optimum weight achieved by each algorithm for the tension/compression spring design problem. The problem involves finding the optimal values for three design variables: d: wire diameter, D: spring diameter, and N: number of active coils. The objective is to minimize the spring’s weight while adhering to constraints related to the spring’s mechanical properties.
RCOA achieves a near-optimal result with an optimum weight of 0.012756, with corresponding values for d = 0.050000, D = 0.317425, and N = 14.027810. WOA provides a slightly better optimum weight of 0.0126763 with different variables d = 0.051207, D = 0.345215, and N = 12.004032. PSO and GSA also show competitive results, with PSO achieving a weight of 0.0126747 and GSA achieving 0.0127022. Overall, these tables show that while each algorithm performs well, RCOA demonstrates high accuracy with consistent and low variability results, proving its reliability for solving this complex optimization problem. The schematic design and performance can be seen in Fig 3.
Welded beam design problem
The objective function is:
Subject to the constraints:
Where:
-
\(x_1\) = thickness of the beam, h,
-
\(x_2\) = width of the beam, l,
-
\(x_3\) = length of the beam, t,
-
\(x_4\) = weld thickness, b.
Bounds:
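The cost function and constraints above are rendered as images in the original; the sketch below shows the widely used fabrication-cost objective from the literature (Coello's formulation, in which h is the weld thickness and l the weld length). This may differ from the paper's variant, since the paper reports an optimum of 1.4961 rather than the roughly 1.72 typical of this formulation.

```python
def welded_beam_cost(x):
    h, l, t, b = x  # weld thickness, weld length, bar height, bar thickness
    # Standard fabrication-cost objective (assumed; the constraints on shear
    # stress, bending stress, buckling load, and deflection are omitted here).
    return 1.10471 * h ** 2 * l + 0.04811 * t * b * (14.0 + l)

# Well-known near-optimal point for this formulation, for illustration:
cost = welded_beam_cost((0.2057, 3.4705, 9.0366, 0.2057))  # approx. 1.7246
```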
The RCOA achieves the best average objective function value (1.4961) with a competitive standard deviation (0.0242), indicating its consistency across different runs. It reaches these results with fewer function evaluations (10,870) than PSO (13,770), although WOA requires fewer still (9900). GSA shows much higher variability, with an average result of 3.5761 and a significantly higher standard deviation (1.2874), indicating less reliable solutions, as shown in Table 11.
In the optimization of the welded beam problem as shown in Table 12, RCOA achieves the lowest optimum weight (1.4961), which directly corresponds to a more optimal design in terms of material usage and cost-effectiveness. The optimal variables (h, l, t, b) provided by RCOA, such as beam thickness (h) and length (l), are finely tuned compared to other algorithms. The WOA and PSO algorithms also produce competitive results but fall short of RCOA in terms of minimizing weight. GSA produces a heavier beam design and shows more fluctuation in solution quality. The schematic design and convergence per run can be seen in Fig. 4.
Overall, the results indicate that RCOA is highly effective in solving the welded beam design problem, offering a good balance between exploration and exploitation in the search space, leading to both higher accuracy and efficiency in achieving the minimum weight solution while satisfying the necessary constraints.
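The welded beam cost function is widely cited in the benchmark literature; since the paper's own equation is not reproduced in the text, the sketch below uses that common formulation (an assumption), evaluated at the classic literature design point rather than the paper's tabulated solution:

```python
def welded_beam_cost(h, l, t, b):
    """Common welded beam objective: weld material cost plus bar
    material cost (the standard literature formulation, assumed here)."""
    return 1.10471 * h ** 2 * l + 0.04811 * t * b * (14.0 + l)

# Classic literature design point (not the paper's Table 12 optimum,
# which at 1.4961 suggests a variant of the formulation)
cost = welded_beam_cost(h=0.2057, l=3.4705, t=9.0366, b=0.2057)
```

Evaluating the classic design point gives a cost near the well-known literature optimum of about 1.7249.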
Pressure vessel design
In this problem, the goal is to minimize the total cost (material, forming, and welding) of a cylindrical pressure vessel as shown in Fig. 5. Both ends of the vessel are capped while the head has a hemispherical shape. There are four optimization variables: the thickness of the shell (\(T_s\)), the thickness of the head (\(T_h\)), the inner radius (R), and the length of the cylindrical section without considering the head (L). The problem includes four optimization constraints and is formulated as follows:
Subject to:
Variable range:
The optimization results for the pressure vessel design problem using various metaheuristic algorithms indicate that RCOA, WOA, and PSO achieve the most cost-effective designs, with objective function values of 6059.95, 6059.75, and 6061.08, respectively, as shown in Table 13. These algorithms converge towards nearly identical optimal solutions, demonstrating their efficiency in structural optimization. The design variables, including shell thickness (\(x_1\)), head thickness (\(x_2\)), inner radius (\(x_3\)), and vessel length (\(x_4\)), remain consistent across these three methods, highlighting their robustness. Conversely, COA results in a higher cost of 6390.00 due to increased material usage, while GSA performs the worst, producing a significantly higher cost of 8538.84 through suboptimal parameter selection. These results affirm that RCOA and WOA are the most effective approaches, with PSO closely following, making them well suited to engineering optimization tasks in pressure vessel design.
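The pressure vessel cost model is a standard benchmark; the paper's formulation is not reproduced in the text, so the sketch below uses the common literature version (an assumption) with a commonly cited near-optimal design, consistent with the ~6059.7 objective values in Table 13:

```python
def vessel_cost(Ts, Th, R, L):
    """Standard pressure vessel objective: material, forming, and
    welding cost terms (the common literature formulation, assumed here).
    Ts = shell thickness, Th = head thickness, R = inner radius,
    L = cylindrical section length."""
    return (0.6224 * Ts * R * L
            + 1.7781 * Th * R ** 2
            + 3.1661 * Ts ** 2 * L
            + 19.84 * Ts ** 2 * R)

# A commonly cited near-optimal design from the literature
cost = vessel_cost(Ts=0.8125, Th=0.4375, R=42.0984, L=176.6366)
```

This design point reproduces the ~6059.7 cost that RCOA, WOA, and PSO all converge to in the table above.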
Speed reducer design problem
The objective of the speed reducer design problem is to minimize the overall weight of the mechanical system while satisfying multiple constraints related to gear teeth bending stress, surface stress, shaft stresses, and transverse deflections as shown in Fig. 6. The weight of the system is formulated based on seven design variables:
- Face width: b or \(x_1\)
- Module of teeth: m or \(x_2\)
- Number of pinion teeth: z or \(x_3\)
- First shaft length between bearings: \(l_1\) or \(x_4\)
- Second shaft length between bearings: \(l_2\) or \(x_5\)
- First shaft diameter: \(d_1\) or \(x_6\)
- Second shaft diameter: \(d_2\) or \(x_7\)
Objective function
The weight of the speed reducer is given by:
Constraints
Several constraints ensure the mechanical integrity of the system:
Design variable bounds
The decision variables are subject to the following constraints:
This formulation ensures the optimization of the speed reducer weight while maintaining the required mechanical performance.
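The speed reducer weight model is another standard benchmark; the paper's equation is not reproduced in the text, so the sketch below uses the common literature formulation (an assumption), evaluated at a commonly cited literature solution rather than the paper's Table 14 entry:

```python
def reducer_weight(b, m, z, l1, l2, d1, d2):
    """Standard speed reducer weight in terms of the seven design
    variables listed above (the common literature formulation, assumed
    here): gear volume, shaft corrections, and shaft/bearing terms."""
    return (0.7854 * b * m ** 2 * (3.3333 * z ** 2 + 14.9334 * z - 43.0934)
            - 1.508 * b * (d1 ** 2 + d2 ** 2)
            + 7.4777 * (d1 ** 3 + d2 ** 3)
            + 0.7854 * (l1 * d1 ** 2 + l2 * d2 ** 2))

# A commonly cited literature solution (not the paper's Table 14 result)
w = reducer_weight(b=3.5, m=0.7, z=17, l1=7.3, l2=7.7153,
                   d1=3.3505, d2=5.2867)
```

Under this formulation the commonly cited design point yields a weight near 2994.5.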
Table 14 compares the performance of various optimization algorithms for the speed reducer design problem. RCOA achieved the best cost of 3103.01, followed closely by WOA (3113.03), demonstrating their efficiency in identifying optimal design parameters. COA and GSA performed moderately, while PSO resulted in the highest cost (5598.64), indicating its lower effectiveness for this problem. The results confirm the superiority of RCOA and WOA in minimizing cost while maintaining feasible design constraints.
From the results, it is evident that RCOA and WOA provided the best design solutions in terms of minimizing cost while maintaining feasible values for all design variables. These findings reinforce the effectiveness of reindeer-inspired optimization techniques in engineering applications, particularly in complex design problems such as the speed reducer design.
Conclusion
This paper introduces the Reindeer Cyclone Optimization Algorithm (RCOA), which combines swarm intelligence with the natural defensive and migratory behaviors of reindeer herds. RCOA maintains a flexible balance between exploration and exploitation, leveraging Lévy flight for enhanced global search and neighbor-based exploitation for local refinement. Compared to WOA, RCOA’s explicit separation of these phases and its novel random walk mechanism make it well suited to multimodal optimization problems and to escaping local optima.
Across the experiments, RCOA proved to be a highly competitive and robust optimizer, delivering near-optimal solutions with minimal variance on a diverse range of complex engineering and mathematical problems. In the tension/compression spring design, RCOA outperformed WOA, PSO, and GSA with minimal variation in results. For the welded beam design, RCOA achieved the lowest optimum weight (1.4961) compared to WOA (1.7305) and GSA (3.5761), with fewer function evaluations than PSO. On the CEC’17 benchmark functions, RCOA was compared with eight popular optimization algorithms and achieved superior convergence speed and solution accuracy, outperforming many existing methods. The convergence analysis showed that RCOA effectively balances exploration and exploitation, allowing it to avoid local optima and converge efficiently to high-quality solutions.
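The conclusion credits Lévy flight for RCOA’s global search. A common way to implement Lévy-distributed steps is Mantegna’s algorithm, sketched below; this is a generic illustration, not the authors’ exact update rule:

```python
import numpy as np
from math import gamma, sin, pi

def levy_step(dim, beta=1.5, rng=None):
    """Draw one Levy-flight step vector via Mantegna's algorithm:
    step = u / |v|^(1/beta) with u, v Gaussian, giving the occasional
    long jumps that drive broad exploration of the search space."""
    if rng is None:
        rng = np.random.default_rng(0)
    sigma = (gamma(1 + beta) * sin(pi * beta / 2)
             / (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.normal(0.0, sigma, dim)
    v = rng.normal(0.0, 1.0, dim)
    return u / np.abs(v) ** (1 / beta)

step = levy_step(10)  # heavy-tailed step vector for global exploration
```

A candidate position would then be perturbed as `x_new = x + step_scale * step`, with mostly small moves and rare large jumps out of local basins.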
The statistical validation through the Wilcoxon Signed-Rank test confirmed the significance of RCOA’s performance improvements over traditional optimization methods. Future work will focus on incorporating adaptive parameter control mechanisms, including adaptive control of the cyclone factor for dynamic environments, and on extending RCOA to multi-objective and dynamic optimization problems.
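The Wilcoxon Signed-Rank validation described above can be run on paired per-run results with SciPy; the numbers below are illustrative placeholders, not the paper’s data:

```python
from scipy.stats import wilcoxon

# Hypothetical per-run best objective values for two algorithms on the
# same 10 random seeds (illustrative values, not the paper's results)
rcoa = [1.497, 1.501, 1.496, 1.499, 1.498, 1.502, 1.497, 1.500, 1.496, 1.499]
pso  = [1.533, 1.540, 1.529, 1.538, 1.531, 1.545, 1.536, 1.542, 1.530, 1.537]

# Paired non-parametric test: are the per-run differences symmetric
# around zero, or does one algorithm systematically do better?
stat, p_value = wilcoxon(rcoa, pso)
significant = p_value < 0.05  # reject the null at the 5% level
```

Because the test is paired and rank-based, it needs no normality assumption, which is why it is the standard choice for comparing stochastic optimizers run on common seeds.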
Data availability
No datasets were generated or analysed during the current study.
References
Dokeroglu, T., Sevinc, E., Kucukyilmaz, T. & Cosar, A. A survey on new generation metaheuristic algorithms. Comput. Ind. Eng. 137, 106040 (2019).
Holland, J. H. Genetic algorithms. Sci. Am. 267(1), 66–73 (1992).
Kennedy, J. & Eberhart, R. Particle swarm optimization. In Proceedings of ICNN’95-International Conference on Neural Networks, vol. 4, 1942–1948. (IEEE, 1995).
Dorigo, M., Birattari, M. & Stutzle, T. Ant colony optimization. IEEE Comput. Intell. Mag. 1(4), 28–39 (2006).
Fister, I., Iztok, F., Yang, X.-S. & Brest, J. A comprehensive review of firefly algorithms. Swarm Evol. Comput. 13, 34–46 (2013).
Mirjalili, S. & Lewis, A. The whale optimization algorithm. Adv. Eng. Softw. 95, 51–67 (2016).
Kerschke, P. et al. Search dynamics on multimodal multiobjective problems. Evol. Comput. 27(4), 577–609 (2019).
Pant, M., Zaheer, H., Garcia-Hernandez, L. & Abraham, A. Differential Evolution: A review of more than two decades of research. Eng. Appl. Artif. Intell. 90, 103479 (2020).
Sabri, N. M., Puteh, M. & Mahmood, M. R. An overview of Gravitational Search Algorithm utilization in optimization problems. In 2013 IEEE 3rd International Conference on System Engineering and Technology, 61–66. (IEEE, 2013).
Mirjalili, S., Mirjalili, S. M. & Lewis, A. Grey wolf optimizer. Adv. Eng. Softw. 69, 46–61 (2014).
Tang, K.-S., Man, K.-F., Kwong, S. & He, Q. Genetic algorithms and their applications. IEEE Signal Process. Mag. 13(6), 22–37 (1996).
Schwefel, H. P. Evolution and Optimum Seeking (Wiley, 1995).
Koza, J. R. Genetic Programming: On the Programming of Computers by Means of Natural Selection (MIT Press, 1992).
Storn, R. & Price, K. Differential evolution-a simple and efficient heuristic for global optimization over continuous spaces. J. Glob. Optim. 11, 341–359 (1997).
Kirkpatrick, S., Daniel Gelatt, C. & Vecchi, M. P. Optimization by simulated annealing. Science 220(4598), 671–680 (1983).
Rashedi, E., Nezamabadi-Pour, H. & Saryazdi, S. GSA: A gravitational search algorithm. Inf. Sci. 179(13), 2232–2248 (2009).
Kaveh, A. & Talatahari, S. Charged system search for optimum grillage system design using the LRFD-AISC code. J. Constr. Steel Res. 66(6), 767–771 (2010).
Ding, D. et al. Convergence analysis and performance of an extended central force optimization algorithm. Appl. Math. Comput. 219(4), 2246–2259 (2012).
Shi, Y. Particle swarm optimization. IEEE Connect. 2(1), 8–13 (2004).
Dorigo, M. & Gambardella, L. M. Ant colony system: A cooperative learning approach to the traveling salesman problem. IEEE Trans. Evol. Comput. 1(1), 53–66 (1997).
Rana, N., Abd Latiff, M. S., Abdulhamid, S. M. & Chiroma, H. Whale optimization algorithm: A systematic review of contemporary applications, modifications and developments. Neural Comput. Appl. 32, 16245–16277 (2020).
Yang, X.-S. Firefly algorithms for multimodal optimization. In International Symposium on Stochastic Algorithms, 169–178. (Springer, 2009).
Tejani, G. G., Savsani, V. J., Patel, V. K. & Bureerat, S. Topology, shape, and size optimization of truss structures using modified teaching-learning based optimization. Adv. Comput. Des. 2(4), 313–331 (2017).
Rao, R. V. & Venkata Rao, R. Teaching-Learning-Based Optimization Algorithm (Springer, 2016).
Atashpaz-Gargari, E., & Lucas, C. Imperialist competitive algorithm: an algorithm for optimization inspired by imperialistic competition. In 2007 IEEE Congress on Evolutionary Computation, 4661–4667 (IEEE, 2007).
Hamadneh, T. et al. On the application of potter optimization algorithm for solving supply chain management application. Int. J. Intell. Eng. Syst. 17(5), 1–10 (2024).
Alomari, S. et al. Carpet weaver optimization: A novel simple and effective human-inspired metaheuristic algorithm. Int. J. Intell. Eng. Syst. 17(4), 1–10 (2024).
Hamadneh, T. et al. Fossa optimization algorithm: A new bio-inspired metaheuristic algorithm for engineering applications. Int. J. Intell. Eng. Syst 17(5), 1038–1047 (2024).
Hamadneh, T. et al. Addax optimization algorithm: A novel nature-inspired optimizer for solving engineering applications. Int. J. Intell. Eng. Syst. 17(3), 1–10 (2024).
Kaabneh, K. et al. Dollmaker optimization algorithm: A novel human-inspired optimizer for solving optimization problems. Int. J. Intell. Eng. Syst. 17(3), 1–10 (2024).
Mirjalili, S. Moth-flame optimization algorithm: A novel nature-inspired heuristic paradigm. Knowl.-Based Syst. 89, 228–249 (2015).
Mirjalili, S. et al. Salp Swarm Algorithm: A bio-inspired optimizer for engineering design problems. Adv. Eng. Softw. 114, 163–191 (2017).
Dhiman, G. & Kumar, V. Seagull optimization algorithm: Theory and its applications for large-scale industrial engineering problems. Knowl.-Based Syst. 165, 169–196 (2019).
Nonut, A. et al. A small fixed-wing UAV system identification using metaheuristics. Cogent Eng. 9(1), 2114196 (2022).
Wang, B. et al. A neural coordination strategy for attachment and detachment of a climbing robot inspired by gecko locomotion. Cyborg Bionic Syst. 4, 0008 (2023).
Ma, S. et al. The autonomous pipeline navigation of a cockroach bio-robot with enhanced walking stimuli. Cyborg Bionic Syst. 4, 0067 (2023).
Alves, S. et al. Integrated design fabrication and control of a bioinspired multimaterial soft robotic hand. Cyborg Bionic Syst. 4, 0051 (2023).
Gao, N. et al. Energy model for UAV communications: Experimental validation and model generalization. Chin. Commun. 18(7), 253–264 (2021).
Jin, W. et al. Enhanced UAV pursuit-evasion using boids modelling: A synergistic integration of bird swarm intelligence and DRL. Comput. Mater. Contin. 80(3), 1–10 (2024).
Zhu, E., Wang, H., Zhang, Y., Zhang, K. & Liu, C. PHEE: Identifying influential nodes in social networks with a phased evaluation-enhanced search. Neurocomputing 572, 127195 (2024).
Zhou, G., Wang, Z. & Li, Q. Spatial negative co-location pattern directional mining algorithm with join-based prevalence. Remote Sens. 14(9), 2103 (2022).
Liu, Z. et al. HyGloadAttack: Hard-label black-box textual adversarial attacks via hybrid optimization. Neural Netw. 1, 106461 (2024).
Aye, C. M. et al. Airfoil shape optimisation using a multi-fidelity surrogate-assisted metaheuristic with a new multi-objective infill sampling technique. CMES-Comput. Model. Eng. Sci. 137(3), 1–10 (2023).
Author information
Contributions
Bharat Rawal and Gopal Chaudhary proposed and conceptualized the manuscript. Gopal Chaudhary simulated and wrote the main manuscript text. All authors reviewed the manuscript.
Ethics declarations
Competing interests
The authors declare no competing interests.
Additional information
Publisher’s note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
About this article
Cite this article
Chaudhary, G., Rawal, B.S. A novel reindeer cyclone optimization algorithm (RCOA). Sci Rep 15, 12506 (2025). https://doi.org/10.1038/s41598-025-97069-1