Introduction

Optimization is a branch of applied mathematics used across scientific disciplines because many problems can be cast as optimization problems. With the present rate of progress in all scientific fields, new real-world problems have become so complex that conventional mathematical methods, such as exact optimizers, cannot solve them efficiently. In particular, exact optimizers are inefficient when dealing with many non-continuous, non-differentiable, large-scale, and multimodal real-world problems1.

Early studies in the field of nature-inspired computation demonstrated that numerical methods developed from the behavior of natural creatures can solve real-world problems more effectively than exact methods2. Metaheuristic methods are numerical techniques that combine heuristic rules drawn from natural phenomena with a randomization process. Over the past few decades, many researchers have concluded that developing and enhancing metaheuristic algorithms is a practically effective and computationally efficient approach to tackling complex, previously unsolved real-world optimization problems3,4,5,6,7,8. A key advantage of metaheuristic methods is that they are problem-independent algorithms that provide acceptable solutions to complex and highly nonlinear problems in a reasonable time. Furthermore, they generally do not require implementers to modify the algorithm structure; it is only necessary to formulate the problem according to the requirements of the chosen metaheuristic. The core operation of metaheuristic approaches is gradient-free, so there is no need for cumbersome computations such as derivatives and their multivariable generalizations. Moreover, randomization enables metaheuristic algorithms to perform better than conventional methods in general: their stochastic nature allows them to escape from local optima and move toward the global optimum in the search space of large-scale and challenging optimization problems.

Conventionally, two general criteria are used to classify metaheuristic methods: (1) the number of agents, and (2) the origin of inspiration. Based on the first criterion, metaheuristic algorithms can be divided into two groups: (1) single-solution-based algorithms, and (2) population-based algorithms. According to the source of inspiration, metaheuristic algorithms fall into two main categories, namely Evolutionary Algorithms (EAs) and Swarm Intelligence (SI) algorithms. Single-solution-based methods modify one solution (agent) during the search process, as in the Simulated Annealing (SA) algorithm9; in contrast, population-based algorithms use a population of solutions to find the optimum, as in the Particle Swarm Optimization (PSO) algorithm10.

In EAs, the process of genetic evolution is the main source of inspiration. Evolutionary Programming (EP)2, Evolutionary Strategy (ES)11, Genetic Algorithm (GA)12, and Differential Evolution (DE) are among the most famous methods in this domain. Besides, Simon13 proposed the Biogeography-Based Optimization (BBO) algorithm, whose migration operator is closely related to global recombination and uniform crossover. SI algorithms, in turn, are based on simulating the collective behavior of creatures and can be classified into three categories. The first category is associated with the behavioral models of animals, such as PSO10, Ant Colony Optimization (ACO)14, Artificial Bee Colony (ABC)15, Firefly Algorithm (FA)16, Cuckoo Search (CS)17, Bat Algorithm (BA)18, Eagle Strategy (ES)19, Krill Herd (KH)20, Flower Pollination Algorithm (FPA)21, Grey Wolf Optimizer (GWO)22, Ant Lion Optimizer (ALO)23, Grasshopper Optimization Algorithm (GOA)24, Symbiotic Organisms Search (SOS)25,26, Moth Flame Optimizer (MFO)27, Dragonfly Algorithm (DA)28, Salp Swarm Algorithm (SSA)29, Crow Search Algorithm (CSA)30, Whale Optimization Algorithm (WOA)31,32, Developed Swarm Optimizer (DSO)33, Spotted Hyena Optimizer (SHO)34, Farmland Fertility Algorithm (FFA)35,36, African Vultures Optimization (AVO)37, Bald Eagle Search Algorithm (BES)38,39, Tree Seed Algorithm (TSA)40,41, and Artificial Gorilla Troops Optimizer (GTO)42. The second category concerns algorithms based on physical and mathematical laws, such as Simulated Annealing (SA)9, Big Bang–Big Crunch optimization (BB–BC)43, Charged System Search (CSS)44,45, Chaos Game Optimization (CGO)46,47, Gravitational Search Algorithm (GSA)48, Sine Cosine Algorithm (SCA)49, Multi-Verse Optimizer (MVO)50, Atom Search Optimization (ASO)51, Crystal Structure Algorithm (CryStAl)52,53,54,55, and Electromagnetic Field Optimization (EFO)56. The third category includes algorithms that mimic various optimal behaviors of humans, for example, the Imperialist Competitive Algorithm (ICA)57, Teaching Learning Based Optimization (TLBO)58, Interior Search Algorithm (ISA)59, and Stochastic Paint Optimizer (SPO)60.

Although a wide range of metaheuristic methods has been developed over the past few decades, they solve problems with different accuracies and time efficiencies; one algorithm may not solve a specific problem with the desired accuracy or within a reasonable time, whereas another algorithm may achieve this goal. Computational time and accuracy are therefore two essential considerations in developing novel metaheuristic methods: new robust methods are developed to search problem spaces more efficiently and to find more accurate solutions to complex, large-scale problems in less time than previous ones. Hence, there is an ongoing ambition in the optimization community to develop novel high-performance optimizers that can solve challenging problems more efficiently. Each algorithm has particular advantages and disadvantages, which are listed in Table 1 for the abovementioned algorithms.

Table 1 Advantages and disadvantages of various metaheuristic algorithms.

The contribution of this paper is the development of a new physics-based metaheuristic called the Fusion–Fission Optimization (FuFiO) algorithm. The proposed algorithm simulates the tendency of nuclei to increase their binding energy and achieve higher levels of stability. In the FuFiO algorithm, the nuclei are divided into two groups, namely stable and unstable, based on their fitness. Each nucleus can interact with other nuclei through three types of nuclear reactions, namely fusion, fission, and \(\beta\)-decay. These reactions establish the stabilization process through which unstable nuclei gradually turn into stable ones.

The performance of the FuFiO algorithm is examined in two steps. In the first step, FuFiO and seven other metaheuristic algorithms are used to solve a complete set of 120 benchmark mathematical test functions (60 fixed-dimensional and 60 N-dimensional test functions). Then, to make a valid judgment about the performance of the FuFiO algorithm, the statistical results obtained by FuFiO and the other algorithms are used as a dataset for non-parametric statistical analyses. In the second step, to compare the proposed algorithm with state-of-the-art algorithms, the single-objective real-parameter numerical optimization problems of the recent Competitions on Evolutionary Computation (CEC 2017), comprising sets of 10-, 30-, 50-, and 100-dimensional benchmark test functions, are considered. The main novelty of this work is two-fold. First, the source of inspiration is provided by fundamental aspects of nuclear physics. Second, and more importantly, the theory of nuclear binding energy governing the formation of stable nuclei is used for the first time to derive the equations of a metaheuristic method. In this model, the tendency of nuclei to increase their binding energy and achieve higher levels of stability through nuclear reactions, including fusion, fission, and β-decay, is the central principle underlying the three main steps of the new algorithm.

The rest of this paper is organized as follows: the "Fusion–fission optimization (FuFiO) algorithm" section describes the background, inspiration, mathematical model, and implementation of the proposed algorithm. The "FuFiO validation" section explains the comparative metaheuristics, mathematical functions, comparative results, and statistical analyses. The "Analyses based on competitions on evolutionary computation (CEC)" section compares the performance of the FuFiO algorithm with state-of-the-art algorithms on the CEC-2017 and CEC-2019 benchmark suites. Finally, conclusions are given in the "Conclusions and future work" section.

Fusion–fission optimization (FuFiO) algorithm

In the following sub-sections, the general principles of nuclear reactions, nuclear binding energy, and nuclear stability are discussed as an inspirational basis for the development of the Fusion–Fission Optimization (FuFiO) algorithm.

Inspiration

In nuclear physics, the minimum energy needed to dismantle the nucleus of an atom into its constituent nucleons, i.e., its protons (Z) and neutrons (N), is called the nuclear binding energy. The strong nuclear force that attracts the nucleons to each other gives rise to this binding energy; therefore, a nucleus with more binding energy is more stable93. Importantly, the Coulomb repulsion between protons counteracts the nuclear attraction and decreases the binding energy; consequently, the stability of the nucleus decreases further as neutrons are replaced with protons. Also, in the nucleus, paired protons lie close to each other, so their mutual repulsion weakens the net effect of the strong nuclear force and contributes to instability.

The concept of average nuclear binding energy, denoted by \({B}_{Avg}\), is generally used to evaluate the stability of nuclei. \({B}_{Avg}\) is the binding energy per nucleon, i.e., the average amount of energy required to remove a single nucleon from the nucleus. As \({B}_{Avg}\) increases, removing a nucleon from the nucleus becomes progressively more difficult; in other words, the most stable nucleus corresponds to the highest \({B}_{Avg}\). The experimental curve of \({B}_{Avg}\) as a function of mass number \(A\) is shown in Fig. 1. According to this curve, the binding energy per nucleon reaches its peak at \(A=56\) (\({}^{56}\mathrm{Fe}\)); for \(A>56\), the rate of energy reduction is low, so the curve is relatively flat due to saturation. The \({}^{56}\mathrm{Fe}\) nucleus divides the curve into two parts, namely the fusion and fission regions. The nuclei in the fusion region tend to participate in fusion reactions, whereas each nucleus in the fission region tends to participate in fission reactions.

Figure 1
figure 1

Experimental binding energy \({B}_{Avg}(A, Z)\) with respect to mass number A49.

Fusion is a nuclear reaction that occurs when two highly energetic stable nuclei slam together to form a heavier stable nucleus. In the sun, this reaction releases a large amount of energy through the fusion of two hydrogen nuclei to form one helium nucleus. On the other hand, fission is a nuclear reaction in which a larger unstable nucleus is split into two smaller (stable or unstable) nuclei after being hit by a smaller stable or unstable one. This type of reaction is used to produce large amounts of energy in nuclear power reactors through the fission of uranium and plutonium nuclei by neutrons. The procedures of nuclear fusion and fission are illustrated in Fig. 2a,b, respectively.

Figure 2
figure 2

Nuclear reactions: (a) fusion, and (b) fission.

In nuclear processes, in addition to fusion and fission, there is another process called \(\beta\)-decay. The two types of \(\beta\)-decay are known as \({\beta }^{-}\) and \({\beta }^{+}\). In \({\beta }^{-}\)-decay, a neutron is converted into a proton, and the process creates an electron and an electron antineutrino (\(\overline{v }\)), while in \({\beta }^{+}\)-decay, a proton is converted into a neutron, and the process creates a positron and an electron neutrino (\(v\))94. Neutrino and antineutrino particles play no essential role in these reactions because their masses are considerably smaller than those of the other particles. Therefore, protons and neutrons are the main factors in \({\beta }^{\pm }\)-decays. Schematic representations of \({\beta }^{-}\)- and \({\beta }^{+}\)-decay are presented in Fig. 3.

Figure 3
figure 3

Processes of \(\beta\)-decay: (a) \({\beta }^{-}\)-decay, and (b) \({\beta }^{+}\)-decay.

Mathematical model

In this section, we describe the mathematical model of the FuFiO algorithm, which is developed from the tendency of nuclei to increase their binding energy and attain a higher level of stability through nuclear reactions, namely fusion, fission, and \(\beta\)-decay. Since a nucleus with a higher binding energy is considered a better solution, the FuFiO algorithm moves in a direction that increases the binding energy of the nuclei. FuFiO is designed as a population-based metaheuristic method in which a set of nuclei is considered as the agents of the population. Each agent of the population has a specific position whose dimension (d) is determined by the number of problem variables. Therefore, the nuclei move in a d-dimensional space and are represented in matrix form as follows:

$$X=\begin{bmatrix}{X}_{1}\\ \vdots \\ {X}_{i}\\ \vdots \\ {X}_{n}\end{bmatrix}=\begin{bmatrix}{x}_{1}^{1} & {x}_{1}^{2} & \cdots & {x}_{1}^{j} & \cdots & {x}_{1}^{d}\\ \vdots & \vdots & & \vdots & & \vdots \\ {x}_{i}^{1} & {x}_{i}^{2} & \cdots & {x}_{i}^{j} & \cdots & {x}_{i}^{d}\\ \vdots & \vdots & & \vdots & & \vdots \\ {x}_{n}^{1} & {x}_{n}^{2} & \cdots & {x}_{n}^{j} & \cdots & {x}_{n}^{d}\end{bmatrix}$$
(1)

where \(i (i = 1, 2, 3, \dots , n)\) is the index of a nucleus and \(j (j=1, 2, 3, \dots , d)\) is the index of a design variable; \(n\) is the population size; \(X\) is the matrix of positions of all nuclei, updated in each iteration of the algorithm; \({X}_{i}\) is the position of the i-th nucleus; and \({x}_{i}^{j}\) is the j-th design variable of the i-th nucleus, whose initial value is determined randomly as follows:

$${x}_{i}^{j}\left(0\right)={lb}^{j}+r({ub}^{j}- {lb}^{j})$$
(2)

where \({x}_{i}^{j}\left(0\right)\) represents the initial value of the j-th design variable of the i-th nucleus; \({ub}^{j}\) and \({lb}^{j}\) are, respectively, the maximum and minimum possible values of the j-th design variable; and \(r\) is a random number in the interval [0,1]. The set of initial values \({x}_{i}^{j}\left(0\right)\) forms \({X}^{0}\), the initial position matrix of the nuclei. Furthermore, in the FuFiO method, the nuclei are divided into two groups, namely stable and unstable nuclei, based on their level of binding energy. Depending on the types of reacting nuclei, the nuclear reactions (i.e., fusion, fission, and \(\beta\)-decay) are formulated differently. In other words, as illustrated in Fig. 4, three different types of reaction can be considered in each group for nuclei to update their positions.
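To make the initialization concrete, the following minimal Python sketch implements Eqs. (1) and (2); the function name init_population and the use of NumPy arrays for the bounds are illustrative assumptions, not part of the original formulation.

```python
import numpy as np

def init_population(n, d, lb, ub, rng=None):
    """Return an (n, d) matrix X of nuclei drawn uniformly in [lb, ub], Eq. (2)."""
    rng = rng if rng is not None else np.random.default_rng()
    lb = np.asarray(lb, dtype=float)   # lb^j, lower bound of variable j
    ub = np.asarray(ub, dtype=float)   # ub^j, upper bound of variable j
    r = rng.random((n, d))             # r ~ U[0, 1] for every x_i^j
    return lb + r * (ub - lb)          # one row per nucleus, one column per variable

# Example: 50 nuclei in a 30-dimensional space bounded by [-100, 100]^30
X0 = init_population(50, 30, [-100.0] * 30, [100.0] * 30)
```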

Figure 4
figure 4

Graphical representation of different reactions in each group of nuclei.

The mathematical formulation of each reaction in each group is modeled as follows:

Group 1: Stable nucleus

If the i-th nucleus is stable (\({X}_{i}^{stable}\)), one of the following three reactions is selected randomly:

Reaction 1: In this reaction, the i-th nucleus slams with another stable nucleus. The new position is determined as follows:

$${X}_{i}^{new}=r{X}_{i}^{stable}+\left(1-r\right){X}_{j}^{stable}$$
(3)

where r is a random vector in [0,1] and \({X}_{j}^{stable}\) is a stable nucleus selected randomly from the other stable nuclei. This reaction simulates fusion, where two stable nuclei slam together to produce a new nucleus. Figure 5 shows a schematic view of this reaction; the new solution is a random point generated in the reaction space using \(r\) and \(1-r\).

Figure 5
figure 5

Schematic representation of a fusion reaction.

Reaction 2: If the i-th nucleus interacts with an unstable nucleus, this collision produces a new solution expressed as:

$${X}_{i}^{new}={X}_{i}^{stable}+r\left({X}_{i}^{stable}-{X}_{j}^{unstable}\right)$$
(4)

where \({X}_{j}^{unstable}\) is an unstable nucleus selected randomly from other unstable nuclei. The process of this reaction, shown in Fig. 6, simulates the rule of fission, where a stable nucleus is hit by an unstable one.

Figure 6
figure 6

Schematic representation of a fission reaction.

Reaction 3: If the i-th nucleus decays, the new solution will be generated as follows:

$${X}_{i\,new}^{k}=\begin{cases}{X}_{i}^{k} & k\notin p\\ {R}^{k} & k\in p\end{cases},\quad p\subseteq d$$
$$R=LB+r(UB-LB)$$
(5)

where \(p\) denotes a random subset of the problem variables; \(d\) is the set of all variables; \(k\) is the variable index; \(R\) is a random nucleus; and \(UB\) and \(LB\) are the vectors of the upper and lower bounds of the variables, respectively. This reaction models the process of \(\beta\)-decay in a stable nucleus, as presented in Fig. 7.

Figure 7
figure 7

Procedure of \(\beta\)-decay in a stable nucleus.
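The three stable-group reactions can be summarized in the hedged Python sketch below, which mirrors Eqs. (3)–(5). The function name update_stable, the rule used to draw the subset \(p\), and the assumptions that lb and ub are NumPy arrays and that each group contains at least two nuclei are ours, not the authors'.

```python
import numpy as np

def update_stable(i, X, stable_idx, unstable_idx, lb, ub, rng):
    """One randomly chosen reaction for the stable nucleus X[i], Eqs. (3)-(5)."""
    d = X.shape[1]
    reaction = rng.integers(3)                      # pick one of the three reactions at random
    if reaction == 0:                               # fusion with another stable nucleus, Eq. (3)
        j = rng.choice([k for k in stable_idx if k != i])
        r = rng.random(d)
        return r * X[i] + (1.0 - r) * X[j]
    elif reaction == 1:                             # fission with an unstable nucleus, Eq. (4)
        j = rng.choice(unstable_idx)
        r = rng.random(d)
        return X[i] + r * (X[i] - X[j])
    else:                                           # beta-decay, Eq. (5): a random subset p of the
        new = X[i].copy()                           # variables is re-sampled inside [lb, ub]
        p = rng.random(d) < rng.random()            # assumed subset rule; the paper does not specify it
        new[p] = lb[p] + rng.random(d)[p] * (ub[p] - lb[p])
        return new
```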

Group 2: Unstable nucleus

In the second group, if the i-th nucleus is unstable (\({X}_{i}^{unstable}\)), one of the following three reactions is selected randomly to update it:

Reaction 1: If the unstable nucleus slams with another unstable nucleus, the new position is obtained as follows:

$${X}_{i}^{new}=r{X}_{i}^{unstable}+(1-r)({X}_{j}^{unstable}-{X}_{i}^{unstable})$$
(6)

where \(r\) is a random vector in the interval [0,1] and \({X}_{j}^{unstable}\) is an unstable nucleus selected randomly from the other unstable nuclei. As illustrated in Fig. 8, this reaction simulates fission, where one unstable nucleus is hit by another.

Figure 8
figure 8

Fission of two unstable nuclei.

Reaction 2: If the unstable nucleus, \({X}_{i}^{unstable}\), interacts with a stable nucleus, the new position is as follows:

$${X}_{i}^{new}={X}_{i}^{unstable}+r({X}_{i}^{unstable}-{X}_{j}^{stable})$$
(7)

where \({X}_{j}^{stable}\) is a stable nucleus selected randomly from the stable group. The process of this reaction, which establishes a fission model between stable and unstable nuclei, is shown in Fig. 9.

Figure 9
figure 9

Fission of stable and unstable nuclei.

Reaction 3: If the i-th unstable nucleus decays, the new position is defined as follows:

$${X}_{i\,new}^{k}=\begin{cases}{X}_{i}^{k} & k\notin p\\ {X}_{j}^{k} & k\in p\end{cases},\quad p\subseteq d$$
(8)

where \(p\) denotes a random subset of variables; \(d\) is the set of all variables; \(k\) is the variable index; and \({X}_{j}^{stable}\) is a nucleus selected randomly from the stable group. As presented in Fig. 10, this reaction models the \(\beta\)-decay process of an unstable nucleus.

Figure 10
figure 10

Procedure of β-decay in an unstable nucleus.
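A companion sketch for the unstable-group reactions, Eqs. (6)–(8), is given below under the same assumptions (illustrative names, an assumed subset-selection rule for \(p\), and at least two nuclei per group).

```python
import numpy as np

def update_unstable(i, X, stable_idx, unstable_idx, rng):
    """One randomly chosen reaction for the unstable nucleus X[i], Eqs. (6)-(8)."""
    d = X.shape[1]
    reaction = rng.integers(3)
    if reaction == 0:                               # fission with another unstable nucleus, Eq. (6)
        j = rng.choice([k for k in unstable_idx if k != i])
        r = rng.random(d)
        return r * X[i] + (1.0 - r) * (X[j] - X[i])
    elif reaction == 1:                             # fission with a stable nucleus, Eq. (7)
        j = rng.choice(stable_idx)
        r = rng.random(d)
        return X[i] + r * (X[i] - X[j])
    else:                                           # beta-decay, Eq. (8): a random subset p of the
        j = rng.choice(stable_idx)                  # variables is copied from a random stable nucleus
        new = X[i].copy()
        p = rng.random(d) < rng.random()            # assumed subset rule, as in the stable group
        new[p] = X[j][p]
        return new
```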

Both third reactions, in the stable and the unstable group, represent the \({\beta }^{\pm }\)-decays. In the former, a random subset of decision variables takes new random values between the corresponding allowable lower and upper bounds, whereas in the latter, a random subset of decision variables takes its new values from the corresponding variables of a randomly chosen stable solution. Importantly, the \({\beta }^{\pm }\)-decays act as mutation operators that help the algorithm escape from local optima.

Stable and unstable nuclei

The level of binding energy of a nucleus determines whether it is stable or unstable; in the FuFiO algorithm, the objective function value, \(F(X)\), is used to specify the group of each agent. In other words, a nucleus with a better \(F(X)\) is considered more stable. Moreover, as can be seen in Fig. 1, the \({}^{56}Fe\) nucleus marks the boundary between the stable and unstable groups. An analogous boundary is used in the FuFiO algorithm to distinguish stable nuclei from unstable ones. To this end, the nuclei are evaluated in each iteration, and the set of better ones is taken as the set of stable nuclei. The number of stable nuclei is determined as follows:

$${S}_{z}=fix\left[n\times \left({L}_{s}+\frac{Iter\times \left({U}_{s}-{L}_{s}\right)}{MaxIter}\right)\right]$$
(9)

where \({S}_{z}\) is the number of stable nuclei at each iteration; \(fix\) is a function that rounds its argument to the nearest integer; \(n\) is the population size; \({L}_{s}\) and \({U}_{s}\) are the minimum and maximum percentages of stable nuclei at the start and the end of the run, respectively; \(Iter\) is the iteration counter; and \(MaxIter\) is the maximum number of iterations. In Eq. (9), the number of stable nuclei is determined dynamically as the algorithm progresses. The two parameters \({L}_{s}\) and \({U}_{s}\) should be fine-tuned; here they are set to 10% and 70%, respectively, so that the proportion of stable nuclei increases from 10% at the start to 70% at the end of the run. The value of \({U}_{s}\) is adopted from nature, where the ratio of stable nuclei to all nuclei is assumed to be around 70%.
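A minimal sketch of Eq. (9) follows; because the text describes fix as integer rounding, Python's round() stands in for it here, and the default values 0.10 and 0.70 are the settings reported above.

```python
def stable_size(n, iter_, max_iter, ls=0.10, us=0.70):
    """Number of stable nuclei at a given iteration, Eq. (9)."""
    return int(round(n * (ls + iter_ * (us - ls) / max_iter)))

# With n = 50 nuclei, the stable set grows from 5 nuclei at the first
# iteration to 35 nuclei at the last one.
```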

Boundary handling

In solving an optimization problem with \(d\) variables, optimizers search a d-dimensional space. Each dimension has its own upper and lower bounds, and the variables of candidate solutions must lie within these bounds. Since some variables may violate the bounds while the nuclei move, the FuFiO algorithm uses the following equations, which replace each violated variable with the bound it violated, to return solutions to the feasible interval:

$${x}_{i new}^{j}=\mathit{min}\left({x}_{i}^{j}, {ub}^{j}\right)\, \mathrm{and}\, {x}_{i new}^{j}=max({x}_{i}^{j}, {lb}^{j})$$
(10)

where \({x}_{i new}^{j}\) is the j-th design variable of the i-th new solution \({X}_{i}^{new}\), and min and max are operators that return the smaller of \({(x}_{i}^{j}, {ub}^{j})\) and the larger of \({(x}_{i}^{j}, {lb}^{j})\), respectively.
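Eq. (10) amounts to clamping each design variable to its interval, as in the short sketch below (the function name clamp is illustrative).

```python
import numpy as np

def clamp(X_new, lb, ub):
    """Clip every design variable of X_new to [lb^j, ub^j], Eq. (10)."""
    return np.minimum(np.maximum(X_new, lb), ub)   # max against lb, then min against ub
```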

Replacement strategy

In each reaction, a new position \({X}_{i}^{new}\) is generated to replace the current position of the i-th nucleus, \({X}_{i}\). The replacement takes place only if the new solution has a better level of binding energy than the current one. This procedure is formulated as follows:

$${X}_{i}=\begin{cases}{X}_{i} & f\left({X}_{i}\right)\ \text{is better than}\ f\left({X}_{i}^{new}\right)\\ {X}_{i}^{new} & f\left({X}_{i}^{new}\right)\ \text{is better than}\ f\left({X}_{i}\right)\end{cases}$$
(11)
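For a minimization problem, Eq. (11) reduces to the greedy rule sketched below; the assumption that "better" means a lower objective value is ours.

```python
def replace_if_better(x_old, f_old, x_new, f_new):
    """Keep the new nucleus only if it improves the objective value, Eq. (11)."""
    if f_new < f_old:
        return x_new, f_new
    return x_old, f_old
```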

Selection of reactions

In the FuFiO algorithm, nuclei are categorized into two groups, and three different reactions are defined for each group, one of which is randomly selected to generate a new solution. It should be noted that the groups and reactions do not represent different phases of the algorithm. In other words, the FuFiO algorithm has a single phase: for each nucleus in each iteration, one of the reactions is selected randomly according to the group of the nucleus to generate the new solution, as shown in Fig. 11.

Figure 11
figure 11

Flowchart of the process of determining groups and reactions in each iteration for each agent.

Terminating criterion

In metaheuristics, the search process ends once a terminating criterion is satisfied, after which the best result found is reported. Some of the most common stopping criteria are listed below; a small sketch of such checks follows the list:

  • The best result reaches the minimum value specified for the objective function.

  • A fixed number of iterations is completed.

  • The value of the objective function does not improve over a specified period.

  • The optimization run time reaches a predetermined limit.
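The sketch below illustrates how such checks might be combined in code; the thresholds and counters (target_value, stall_limit, time_limit, and so on) are assumptions for illustration, not settings used in this paper.

```python
import time

def should_stop(best_f, target_value, iter_, max_iter,
                stall_iters, stall_limit, start_time, time_limit):
    """Return True when any of the common stopping criteria is met."""
    return (best_f <= target_value                      # reached the specified objective value
            or iter_ >= max_iter                        # fixed number of iterations exhausted
            or stall_iters >= stall_limit               # no improvement for a specified period
            or time.time() - start_time >= time_limit)  # wall-clock budget reached
```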

Implementation of FuFiO

Based on the concepts developed in the previous sections, the FuFiO algorithm is implemented in two levels as follows (a compact sketch of the full loop is given after the step list):

Level 1: Initialization

  • Step 1: Determine the number of nuclei (\(nPop\)), the maximum number of iterations (\(MaxIter\)), and the variable bounds \(UB\) and \(LB\).

  • Step 2: Determine the parameters of FuFiO, namely \({L}_{s}\) and \({U}_{s}\).

  • Step 3: Define initial solutions (Eqs. (1) and (2)).

  • Step 4: Calculate the objective function of initial solutions.

Level 2: Nuclear reaction

In each iteration of the FuFiO algorithm, the following steps are performed for all of the agents:

  • Step 1: \({S}_{z}\) is updated (Eq. (9)).

  • Step 2: Population is sorted according to \(F(X)\).

  • Step 3: Stable and unstable nuclei are determined.

  • Step 4: The group of the current nucleus is determined.

  • Step 5: The new solution is generated using the selected reaction (Eqs. (3), (4), (5), (6), (7), and (8)).

  • Step 6: The new solution is clamped as Eq. (10).

  • Step 7: The new solution is evaluated and objective function \(F(X)\) is calculated.

  • Step 8: The new solution is checked to replace the current solution as Eq. (11).

  • Step 9: Nuclear reaction level is repeated until a terminating criterion is satisfied.
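The two levels above can be assembled into the compact sketch below, which reuses the helper functions sketched earlier (init_population, stable_size, update_stable, update_unstable, clamp). All names, the iteration-based budget, and the sphere objective used in the example are assumptions, not the authors' reference implementation.

```python
import numpy as np

def fufio(f, d, lb, ub, n=50, max_iter=1000, ls=0.10, us=0.70, seed=0):
    """Minimize f over [lb, ub]^d with the FuFiO update rules sketched above."""
    rng = np.random.default_rng(seed)
    lb, ub = np.full(d, lb, float), np.full(d, ub, float)
    X = init_population(n, d, lb, ub, rng)               # Level 1: initialization
    F = np.array([f(x) for x in X])
    for it in range(max_iter):                           # Level 2: nuclear reactions
        sz = stable_size(n, it, max_iter, ls, us)        # Step 1: update S_z
        order = np.argsort(F)                            # Steps 2-3: sort and split the population
        stable_idx, unstable_idx = order[:sz], order[sz:]
        for i in range(n):                               # Steps 4-5: group-dependent reaction
            if i in stable_idx:
                x_new = update_stable(i, X, stable_idx, unstable_idx, lb, ub, rng)
            else:
                x_new = update_unstable(i, X, stable_idx, unstable_idx, rng)
            x_new = clamp(x_new, lb, ub)                 # Step 6: boundary handling, Eq. (10)
            f_new = f(x_new)                             # Step 7: evaluate the new nucleus
            if f_new < F[i]:                             # Step 8: greedy replacement, Eq. (11)
                X[i], F[i] = x_new, f_new
    best = np.argmin(F)
    return X[best], F[best]

# Example run on a 30-dimensional sphere function
x_best, f_best = fufio(lambda x: float(np.sum(x**2)), d=30, lb=-100, ub=100)
```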

The flowchart of the FuFiO algorithm is illustrated in Fig. 12.

Figure 12
figure 12

Flowchart of the Fusion–Fission Optimization (FuFiO) algorithm.

FuFiO validation

The No Free Lunch (NFL) theorem95 is one of the most famous theorems in the field and has been cited many times in the literature to pave the way for introducing new metaheuristic algorithms. The theorem logically proves that no single algorithm can outperform all others on all types of problems. Here, however, the NFL theorem is used for a different purpose: to validate the capability of the FuFiO algorithm in solving various problems in comparison with other algorithms. To this end, 120 benchmark test functions are considered to challenge the performance of the proposed algorithm on different types of problems. These problems are also used to create a dataset for the non-parametric statistical analyses that examine the performance of the proposed algorithm more thoroughly.

In this section, the test problems are first described; then, the rival metaheuristics and their settings are reviewed. Subsequently, the evaluation metrics and comparative results are explained; finally, the results of the non-parametric statistical methods are presented.

Test functions

To evaluate the capability of the proposed algorithm in handling various types of benchmark functions with different properties, a set of 120 mathematical problems has been used. Based on their dimensions, these problems have been categorized into two groups: (1) fixed-dimensional problems, and (2) N-dimensional problems.

Amongst these functions, F1 to F60 are fixed-dimensional functions with dimensions of 2 to 10. The second group, F61 to F120, includes 60 N-dimensional test functions whose dimensions are set to 30. The details of the mathematical functions in these two groups are presented in Tables 2 and 3, respectively. In these tables, C, NC, D, ND, S, NS, Sc, NSc, U, and M denote Continuous, Non-Continuous, Differentiable, Non-Differentiable, Separable, Non-Separable, Scalable, Non-Scalable, Unimodal, and Multimodal, respectively. In addition, R, D, and Min represent the variable range, the number of variables, and the global minimum of the function, respectively.

Table 2 Details of the fixed-dimensional benchmark mathematical functions.
Table 3 Details of the N-dimensional benchmark mathematical functions.

Metaheuristic algorithms for comparative studies

To investigate the overall performance of the FuFiO algorithm, its results should be compared with those of other methods. The metaheuristics selected for this purpose are the FA, CS, Jaya, TEO, SCA, MVO, and CSA algorithms, of which the most recent and improved versions are utilized here. Among the selected methods, only SCA is parameter-free; the other metaheuristics have specific parameters that should be tuned carefully. Table 4 summarizes these parameters, adopted from the literature, as used in our evaluations.

Table 4 Summary of parameters associated with the methods used for comparative analyses.

Generally speaking, the performance of a powerful and versatile algorithm should be largely independent of the problem being solved; in other words, for a good algorithm, parameter tuning should not be of crucial importance. With this in mind, we developed the FuFiO algorithm so that it has only two extra parameters, namely Ls and Us. A statistical study of the effect of these parameters showed that, as long as they are chosen within predefined limits, their exact values are not critical. Since Ls and Us are respectively the minimum and maximum percentages of stable nuclei at the beginning and end of the run, Ls should be small, e.g., 0.1–0.4, whereas Us should be in the range 0.5–0.9. In this study, we set Ls and Us to 0.1 and 0.7, respectively.

Numerical results

This section presents the results of FuFiO and the other methods on the benchmark problems. Due to the random nature of metaheuristics, each algorithm is run independently 50 times for each problem, and the statistical results of these runs are used to analyze the algorithms. The population size for each method is set to 50, and the maximum Number of Function Evaluations (NFEs) is 150,000 for all metaheuristics. A tolerance of 1 × 10−12 from the optimal solution is used as the terminating criterion, and the NFEs are counted until the algorithm stops. The statistical results for the fixed-dimensional and N-dimensional benchmark problems are presented in Tables 5 and 6, respectively. These results include the minimum (Min), average (Mean), maximum (Max), standard deviation (Std. Dev.), and mean NFEs of each algorithm. Moreover, the last row for each function shows the rank of each algorithm, where the ranking is based on the Mean values.

Table 5 Comparative results of algorithms for the fixed-dimensional functions.
Table 6 Comparative results of algorithms for the N-dimensional functions.

Non-parametric statistical analyses

Non-parametric statistical methods are useful tools for comparing and ranking the performance of metaheuristic algorithms. In this study, four well-known non-parametric tests, namely the Wilcoxon Signed-Rank98, Friedman99, Friedman Aligned Ranks100, and Quade101 tests, are used to analyze the ability of the algorithms to solve the benchmark problems; in all of these tests, the significance level \(\alpha\) is 0.05102.
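As an illustration of how two of these tests can be applied to per-function results, the snippet below uses SciPy's Wilcoxon signed-rank and Friedman routines; the numbers are invented placeholders rather than values from Tables 5 and 6, and the Friedman Aligned Ranks and Quade tests are not shown.

```python
from scipy import stats

# Hypothetical mean errors of three algorithms on four benchmark functions
fufio_means = [1.2e-9, 3.4e-3, 0.0, 7.8e-1]
rival_means = [4.5e-6, 5.1e-3, 2.2e-8, 9.1e-1]
other_means = [3.3e-5, 6.0e-3, 1.0e-7, 8.5e-1]

w_stat, w_p = stats.wilcoxon(fufio_means, rival_means)                       # pairwise comparison
f_stat, f_p = stats.friedmanchisquare(fufio_means, rival_means, other_means)
print(f"Wilcoxon p = {w_p:.3f}, Friedman p = {f_p:.3f}")                     # compare against alpha = 0.05
```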

The results of the Wilcoxon Signed-Rank test are presented in Table 7, which shows that the R+ value of FuFiO is less than its R− value against each of the other methods, meaning that FuFiO performs better than all of the compared algorithms. Furthermore, the p-values show that the FuFiO algorithm significantly outperforms the other algorithms on the benchmark problems, except against the CS and CSA algorithms on the fixed-dimensional problems.

Table 7 The Wilcoxon Signed-Rank test results.

The Friedman test is a ranking method whose results are presented in Table 8. According to this test, the FuFiO algorithm ranks first for all types of problems.

Table 8 The Friedman test results.

In the Friedman Aligned Ranks test, the average of each set of values is calculated and then subtracted from the results. The method then ranks the algorithms based on the resulting shifted values, which are called aligned ranks. The results of this test, presented in Table 9, show that the FuFiO algorithm ranks first on both the fixed-dimensional and N-dimensional benchmark problems.

Table 9 The Friedman aligned ranks test results.

The Quade test can be considered an extension of the Wilcoxon Signed-Rank test to comparisons of multiple algorithms, which often makes it more powerful than the previous tests. The results of the Quade test, presented in Table 10, show that the FuFiO method ranks first compared with the other methods for all types of problems.

Table 10 The Quade test results.

The final statistical method considered here is the analysis of variance (ANOVA) test, which compares the variation of the results around the means of the various algorithms. In this research, the ANOVA test is employed at a significance level of 5% to study the efficiency and relative performance of the optimizers. The results of this test are presented in Table 11; the p-values indicate significant differences between the means for the majority of the considered problems. In addition, the ANOVA results for four fixed-dimensional and four N-dimensional problems are plotted in Figs. 13 and 14, respectively.

Table 11 Results of the ANOVA test.
Figure 13
figure 13

ANOVA test results for fixed-dimension functions.

Figure 14
figure 14

ANOVA test results for N-dimension functions.

Analyses based on competitions on evolutionary computation (CEC)

In this section, the performance of the FuFiO algorithm is investigated using the single-objective real-parameter numerical optimization problems of two recent Competitions on Evolutionary Computation, namely the CEC-2017 and CEC-2019 benchmark test functions. The computational time and complexity of FuFiO are then compared with those of other state-of-the-art algorithms.

Comparative analyses based on the CEC-2017 test functions

To investigate the ability of FuFiO to solve more difficult problems, the single-objective problems of the CEC 2017 Special Session are utilized in this sub-section. For the comparative analysis, four state-of-the-art algorithms are considered: the Effective Butterfly Optimizer with Covariance Matrix Adapted Retreat (EBOwithCMAR)103, the ensemble sinusoidal differential covariance matrix adaptation with Euclidean neighborhood (LSHADE-cnEpSin)104, the Multi-Method-based Orthogonal Experimental Design (MM_OED)105, and Teaching Learning Based Optimization with Focused Learning (TLBO-FL)106. Table 12 lists these problems, whose mathematical details were presented by the CEC 2017 committee107.

Table 12 Summary of the CEC-2017 test functions.

The statistical results of FuFiO and the other algorithms on the 10-, 30-, 50-, and 100-dimensional problems are presented in Tables 13, 14, 15, and 16, respectively. These results are based on 51 independent runs. Error values smaller than 10−8 are treated as zero. The total number of function evaluations for each test problem is 10000D, where D is the problem dimension. The results confirm that the FuFiO method provides very competitive results.

Table 13 Statistical results of different algorithms for the 10-dimensional CEC-2017 problems.
Table 14 Statistical results of different algorithms for the 30-dimensional CEC-2017 problems.
Table 15 Statistical results of different algorithms for the 50-dimensional CEC-2017 problems.
Table 16 Statistical results of different algorithms for the 100-dimensional CEC-2017 problems.

Computational time and complexity analyses

A complete computational time and complexity analysis is conducted to evaluate the FuFiO algorithm. Awad et al. proposed a simple procedure for analyzing the complexity of metaheuristic algorithms in the CEC-2017 instructions107, in which complexity is characterized by four times, namely \({T}_{0}\), \({T}_{1}\), \({T}_{2}\), and \(\widehat{{T}_{2}}\), as follows: \({T}_{0}\) is the computing time of the test program shown in Fig. 15; \({T}_{1}\) is the time required for 200,000 evaluations of \({F}_{18}\) by itself in D dimensions; \({T}_{2}\) is the total computing time of the FuFiO algorithm for 200,000 evaluations of the same D-dimensional \({F}_{18}\); and \(\widehat{{T}_{2}}\) denotes the mean of five independent measurements of \({T}_{2}\).

Figure 15
figure 15

Procedure of T0 assessment.

The complexity results of the FuFiO algorithm and the other methods in 10, 30, 50, and 100 dimensions are presented in Table 17, which demonstrates that FuFiO performs competitively.

Table 17 Computational complexity of the FuFiO algorithm versus the other algorithms.

The key metric in evaluating the running time of an algorithm is its computational complexity, which is determined by its structure. In Big O notation, the complexity of the FuFiO algorithm is expressed in terms of the number of nuclei n, the number of design variables d, the maximum number of iterations t, and the sorting of nuclei in each iteration, as follows:

$$ \begin{aligned}O\left(FuFiO\right)&=O\left(t\times \left[O\left(sort\right)+O\left(nuclear\, reaction\, level\right)\right]\right)\\&=O\left(t\times \left[{n}^{2}+n\times d\right]\right)\\&=O\left(t{n}^{2}+tnd\right)\end{aligned} $$

Comparative analyses based on the CEC-2019 test functions

In this sub-section, the problems defined for the CEC-2019 Special Session are utilized. Two physics-based methods, the Gravitational Search Algorithm (GSA)86 and Electromagnetic Field Optimization (EFO)56, as well as three recently developed evolutionary methods, the Farmland Fertility Algorithm (FFA)35, African Vultures Optimization Algorithm (AVOA)37, and Artificial Gorilla Troops Optimizer (GTO)42, are considered for this comparative study. Table 18 presents the properties of the CEC-2019 examples108.

Table 18 Summary of the CEC 2019 test functions.

The statistical results of the algorithms are presented in Table 19. These results are based on 50 independent runs, of which the best 25 are used for reporting the final results, in accordance with the CEC-2019 rules. Error values smaller than 10−10 are treated as zero. The total number of function evaluations for each test problem is 106. A summary conclusion of the statistical results is also added to the table. The final outcome shows that FuFiO ranks second by a very small margin, while, based on the standard deviation values, its stability in finding results is far better than that of the other methods. Moreover, the ANOVA test is employed at a significance level of 5%, and the corresponding results for all problems are plotted in Fig. 16. The results show a good performance of the present method on many of the examined functions.

Table 19 Statistical results of different algorithms for the CEC-2019 problems.
Figure 16
figure 16

ANOVA test results for the CEC-2019 functions.

Conclusions and future work

Inspired by the concept of nuclear stability in physics, we developed a swarm-intelligence-based metaheuristic method, called Fusion–Fission Optimization (FuFiO), to deal with various optimization problems. In this method, three nuclear reactions, namely fusion, fission, and \(\beta\)-decay, are modeled to simulate the tendency of unstable nuclei to turn into stable ones.

The effectiveness of the FuFiO algorithm in solving optimization problems can be attributed to its mechanism for striking the right balance between exploration and exploitation. In the FuFiO method, three different reactions with novel formulations are proposed for each group. The search behavior of each reaction in each group can be interpreted as follows:

  • Fusion: Through this reaction, a nucleus in the stable group slams with another stable nucleus and exploits the search space. On the other hand, this operator explores the search space in the unstable group because the unstable nuclei slam with each other.

  • Fission: Through this reaction, in the first group, a stable nucleus slams with an unstable one that explores the search space around the stable nucleus. On the other hand, in the second group, the fission operator guides the unstable nuclei toward the stable region to exploit it.

  • \({\varvec{\beta}}\)-decay: In the first group, this operator combines a stable nucleus with a randomly generated nucleus, which results in exploration. In the second group, \(\beta\)-decay generates the new solution by a uniform crossover between the unstable nucleus and a stable one, transferring some stable features to the unstable nucleus.

The right balance between exploration and exploitation is ensured by the randomness in selecting a reaction within each group.

To examine the performance of FuFiO in comparison with seven well-known optimizers, an extensive set of 120 benchmark problems was considered, and the obtained results were used as inputs to several non-parametric statistical methods. The statistical analyses showed that the FuFiO algorithm has superior performance on all considered types of problems. To further investigate the ability of FuFiO to solve complex optimization problems, the CEC 2017 and CEC 2019 benchmark suites were utilized. The results showed that the FuFiO algorithm performs competitively when compared with state-of-the-art algorithms.

Despite the good performance of FuFiO on a variety of well-studied mathematical problems, this method, like other metaheuristics, may have limitations in solving difficult constrained or engineering problems. The main reason is the influence of the adopted constraint-handling approach on the performance of the proposed method. In addition, for more complex problems in which each function evaluation requires considerable time, applying this method may need further investigation. Importantly, it is not the advantages of the new method but its limitations that open up new avenues for improving or adapting it for applications in other fields.

Future studies concerning the FuFiO algorithm can be classified into two main categories. The first category contains investigations in which FuFiO is utilized as an optimization solver in dealing with complex real-world optimization problems. The second category concerns modifying the FuFiO algorithm to enhance its computational accuracy and efficiency. To this end, various kinds of modification can be designed, some of which are as follows:

  1. The proposed algorithm has two parameters, namely \({U}_{s}\) and \({L}_{s}\). The value of \({U}_{s}\) is determined according to the natural ratio of stable nuclei, whereas the value of \({L}_{s}\) is chosen empirically. These parameters and their effects should be studied more thoroughly.

  2. In this paper, as the first version of the algorithm, the value of \({S}_{z}\) is determined through a deterministic procedure. A more advanced approach could be developed to define the number of stable nuclei.

  3. For updating the positions of nuclei, three different reactions are modeled in each group. Developing new formulations for these reactions could enhance the performance of the algorithm.

  4. In each reaction, another stable or unstable nucleus, \({X}_{j}\), is selected randomly. A more thoughtful, systematic selection method could improve the performance of the algorithm.

  5. During the updating process, a reaction is selected randomly without any specific rule. A deterministic, adaptive, or self-adaptive approach to choosing an appropriate reaction could improve the algorithm.

In addition to the abovementioned approaches, alternative strategies may be used to improve the FuFiO algorithm. For example, as a conventional approach, hybridizing the proposed algorithm with other popular metaheuristic algorithms could lead to more robust optimizers.