Introduction

Shipbuilding is a quintessential example of a discrete manufacturing industry characterized by a non-linear, multi-objective, and uncertain complex environment. Throughout the construction process, it encounters challenges such as low production efficiency1, extended manufacturing cycles, and discontinuous production. Currently, the majority of shipbuilding enterprises in China operate under the principles of overall planning and optimization, focusing on segmented components and employing group technology to facilitate ship assembly and welding tasks. However, the processing efficiency of plate processing workshops within these segmented production environments remains suboptimal. This inefficiency primarily stems from the diverse range of product types2, intricate processing contours, and the various precedence and parallelism constraints associated with plate-cutting operations. Consequently, there is a significant disruption in information flow during the plate workshop’s construction process, making it challenging to synchronize construction resources and production processes in a timely manner. Furthermore, when disturbances arise, production scheduling typically reverts to conventional on-site methods, with most scheduling plans relying on manual intervention3. This reliance on manual scheduling often leads to issues such as resource wastage and delays in updating scheduling plans.

Currently, relevant scholars have conducted extensive research on production scheduling. Kim et al. described the multi-shop curved plate cutting scheduling problem using the theory of CSP (Constraint Satisfaction Problem). They focused on the constraints of the plate cutting process, including time, space, and manpower, to determine the optimal plate processing workshop and start time, aiming to minimize unfinished points. Their optimization objectives include balancing the number of segments and the manpower load, and they proposed a solution method based on constraint satisfaction. ASSIA et al. aimed to minimize both completion time and energy consumption, formulating the scheduling problem as a binary integer linear programming model, which was validated experimentally using the branch-and-bound algorithm4. From an energy-saving perspective, Jiang et al. investigated the flexible job shop scheduling problem under dual resource constraints5, with the goal of maximizing the minimum processing energy consumption. They proposed a dual vector coding method based on rule-making decoding and ultimately combined this approach with a migration algorithm to demonstrate the effectiveness of their solution. Additionally, Li Ming et al. introduced a new imperialist competitive algorithm addressing the flexible job shop scheduling problem with cycle-time and constraint objectives6, emphasizing the need to consider targets while improving non-targets. Although the aforementioned research has advanced the field of scheduling, it primarily focuses on simulation studies under ideal conditions, neglecting the operational challenges posed by disturbances in workshop environments.

Recently, scholars from diverse fields have increasingly focused on multi-objective scheduling algorithms for flexible job shops. JIANG proposed an improved elite selection strategy, which dynamically adjusts the elite operator of the NSGA-II algorithm7, and validated its effectiveness through public examples, demonstrating both the global search capability and computational efficiency of the algorithm. Song enhanced the process coding and device encoding methods and designed a greedy mutation operator to ensure robust search performance across the entire solution space8. LIANG, by analyzing the deficiencies in the NSGA-II algorithm, proposed a mutation operator that integrates individual crowding degree with population crowding degree9, and implemented hierarchical selection within the elite selection strategy. This approach not only improved the algorithm’s convergence speed but also preserved population diversity. In summary, the NSGA-II algorithm remains a prominent research focus for workshop scheduling problems, with most studies concentrating on improvements in initialization and local search mechanisms10, as well as enhancements to genetic operators and elite selection strategies, yielding significant optimization results. However, these improvements often overlook the varying needs during different stages of population evolution.

This study focuses on production rescheduling triggered by machine failures, aiming to minimize completion time, processing costs, and machine load. A mathematical model incorporating process priority and machine selection constraints is established. The main contributions are: (1) Introducing an adaptive mechanism for crossover and mutation probabilities based on population fitness to prevent premature convergence. (2) Proposing a novel elite selection strategy integrating a simulated annealing cooling mechanism to balance convergence and diversity. (3) Validating the proposed method through public benchmarks and a real-world case study from a shipyard plate processing workshop, demonstrating its superiority over standard NSGA-II and NSGA-III.

Dynamic scheduling method for ship plate processing

During the operation of the ship plate processing shop, various disturbance events may occur. Common disturbances include process interruptions, machine failures, material delays, and labor issues. These disturbance events can significantly impact the stability and efficiency of workshop processing11. Currently, there are two rescheduling solutions available: full rescheduling and right-shift rescheduling. Full rescheduling primarily involves reshuffling and resequencing the affected processes after a disturbance event occurs, effectively improving machine utilization and reducing completion time. However, this approach disrupts the original production rhythm of the shop floor, is often challenging to implement, and is typically employed to address disturbances that impact delivery deadlines. In contrast, right-shift rescheduling postpones the processing of the affected processes without altering their original order12. This method minimally affects the workshop’s production rhythm, making it suitable for disturbances that do not influence the delivery date. Within the context of Flexible Job Shop Scheduling (FJSP), the right-shift strategy is widely adopted due to its advantages of low impact on production stability and high computational efficiency13,14, whereas full rescheduling is more appropriate for severe disruptions such as machine failures15,16. Therefore, based on the current ship plate processing scheduling model, and utilizing real-time production data alongside the disturbance event rescheduling mechanism, a dynamic scheduling method tailored for ship plate processing is proposed. This method aims to achieve a rapid response to dynamic events within the ship plate processing workshop. The specific scheduling process is illustrated in Fig. 1.
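The right-shift strategy described above can be sketched as follows. This is a minimal illustration assuming a flat list of operations with known start and end times; the `Operation` class, the `right_shift` function, and the repair-window arguments are illustrative names, not the production implementation:

```python
from dataclasses import dataclass

@dataclass
class Operation:
    job: int        # plate index i
    index: int      # operation index j within the plate
    machine: int
    start: float
    end: float

def right_shift(ops, down_machine, down_until):
    """Delay affected operations past a machine's repair window without
    changing either the operation order or the machine assignments."""
    job_ready = {}                            # job -> finish time of its previous op
    mach_ready = {down_machine: down_until}   # machine -> earliest free time
    for op in sorted(ops, key=lambda o: o.start):   # keep the original sequence
        duration = op.end - op.start
        op.start = max(op.start, job_ready.get(op.job, 0.0),
                       mach_ready.get(op.machine, 0.0))
        op.end = op.start + duration
        job_ready[op.job] = op.end
        mach_ready[op.machine] = op.end
    return ops
```

Because the loop visits operations in their original start order and only pushes start times later, the relative processing sequence and machine assignments are preserved, which is exactly what distinguishes right-shift rescheduling from full rescheduling.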

Fig. 1

Dynamic scheduling method based on ship plate processing.

First, analyze the geometric characteristics of the production resources within the ship plate processing workshop and create a three-dimensional model of the workshop using SolidWorks. Second, leverage the existing Internet of Things (IoT) platform communication interface technology and MySQL database to facilitate the data collection, transmission, and storage of various production equipment and resources within the physical workshop. By integrating the 3D model of the workshop with the modeling and simulation platform, Plant Simulation, and the SimTalk language, we can effectively support the mapping of production behaviors in the ship plate processing workshop. Subsequently, the production data collected from the physical workshop will be transmitted to the scheduling system, where an initial scheduling plan will be generated based on the improved NSGA-II algorithm17,18. Finally, the scheduling simulation platform will be utilized for verification; if the scheduling plan is deemed reasonable, it will be communicated to the physical workshop via the workshop’s IoT infrastructure.

During the production process in the ship plate processing workshop, real-time detection of disturbance events is conducted using the workshop’s data collection system. In the absence of disturbance events, production proceeds according to the established plan. However, if a disturbance event occurs, the right-shift rescheduling plan is initially implemented, and the new plan is evaluated to ascertain whether it exceeds the delivery date. If the lead time remains within acceptable limits, the production plan is verified and communicated to the shop floor. Conversely, if the evaluation indicates that the delivery date has been exceeded, the improved NSGA-II algorithm is employed to completely reschedule the unfinished processes. Through continuous iterative optimization of the scheduling plan, dynamic scheduling in response to disturbance events is effectively achieved.
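The decision logic above (right-shift first, full rescheduling only when the due date would be violated) can be sketched as a small dispatch routine. The function names `right_shift_fn` and `full_reschedule_fn` are placeholders for the actual rescheduling routines and are supplied by the caller:

```python
def handle_disturbance(schedule, due_date, right_shift_fn, full_reschedule_fn):
    """Apply the two-stage dynamic rescheduling policy.

    schedule: list of (job, machine, start, end) tuples.
    Returns the new schedule and the strategy that produced it.
    """
    shifted = right_shift_fn(schedule)
    makespan = max(end for (_, _, _, end) in shifted)
    if makespan <= due_date:           # delivery date still met: keep stability
        return shifted, "right-shift"
    # delivery date violated: fully reschedule the unfinished operations
    return full_reschedule_fn(schedule), "full-reschedule"
```

This mirrors the flow in Fig. 1: the cheap, stability-preserving strategy is always attempted first, and the expensive optimization run is triggered only by the delivery-date check.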

Shop floor scheduling problem description and modeling

Description of the scheduling problem

Through the analysis of the process flow in the plate processing workshop, the ship plate processing operation scheduling problem can be articulated as follows: Assume that the operation to be processed, Oij (where i = 1,2,3,…,n and j = 1,2,3,…,m), is assigned to processing machine Mk (where k = 1,2,3,…,s). Each operation Oij follows a specific processing sequence on machine Mk, and the processing times for operation Oij vary across different machines Mk. All processing steps for the plates are allocated to the processing machines for scribing, cutting, and hoisting in a manner that ultimately minimizes the overall completion time for plate cutting.

Conditional assumptions and parameter definitions

According to the actual production situation of the above-mentioned plate processing workshop, the following assumptions are made for the construction of the ship plate cutting operation scheduling model19:

(1) Each plate must be processed in accordance with the determined process production route, and there are process sequence constraints.

(2) All machines and plates are available at time zero, and no operation, once started, may be interrupted unless the processing equipment fails.

(3) Each machine can process only one operation at a time.

(4) Each operation of a plate can be processed by only one machine at any given time.

(5) The circulation time of the plate between the machines is not considered.

(6) The plate processing cycle cannot exceed the specified delivery time.

In order to clearly express the mathematical model of the ship’s plate processing workshop20, the following parameters are defined, as shown in Table 1.

Table 1 Definition of specific parameters and variables for scheduling problems.

Mathematical model building

In summary, the goal is to minimize completion time, processing cost, and machine load, subject to process-sequence, machine, and delivery-time constraints. The mathematical model of the scheduling problem is constructed as follows.

(1) Completion time: Completion time is defined as the completion time of the last operation in the task list; the smaller the completion time, the higher the production efficiency of plate processing. The specific formula is shown in Eq. (1):

$${f_1}=A_{{\hbox{max} }}^{t}$$
(1)

(2) Processing cost: Processing cost is defined as the sum of energy consumed by the processing machine in the plate processing workshop, and the specific formula is shown in Eq. (2):

$${f_2}=\sum\limits_{{i \in I,j \in J,k \in K}} {{P_{ijke}}+\sum\limits_{{k \in K}} {C_{{ke}}^{t}} } ,\forall I \in \{ 1,2, \ldots ,n\} ,J \in \{ 1,2, \ldots ,m\} ,K \in \{ 1,2, \ldots ,s\}$$
(2)

(3) Machine load: Machine load is defined as the sum of the time from start-up to shutdown of all machines in the plate processing workshop after the last process in the task list is completed, as shown in Eq. (3):

$${f_3}=\sum\limits_{{i \in I,j \in J}} {(E_{{ij}}^{t} - F_{{ij}}^{t})} +\sum\limits_{{k \in K}} {C_{k}^{t}} ,\forall I \in \{ 1,2, \ldots ,n\} ,J \in \{ 1,2, \ldots ,m\} ,K \in \{ 1,2, \ldots ,s\}$$
(3)

(4) Processing sequence constraints: the following constraints formalize each processing step of the plate, ensuring that the operation order is not violated during scheduling. The specific constraints are shown in the formulas below.

Equation (4) means that each operation of a plate is assigned to exactly one machine at any point in time.

$$\sum\limits_{{k \in K}} {{X_{ijk}}} =1,\forall I \in \{ 1,2, \ldots ,n\} ,J \in \{ 1,2, \ldots ,m\}$$
(4)

Equation (5) indicates that the start time of every operation is non-negative, i.e., each machine and plate may begin processing at time 0 at the earliest.

$$F_{{ij}}^{t} \geqslant 0,\forall I \in \{ 1,2, \ldots ,n\} ,J \in \{ 1,2, \ldots ,m\}$$
(5)

Equation (6) indicates that the same machine can only process one plate at a time.

$$\sum\limits_{{i \in I,j \in J,k \in K}} {Y_{{ijk}}^{{j+1}}=1} ,\forall I \in \{ 1,2, \ldots ,n\} ,J \in \{ 1,2, \ldots ,m\} ,K \in \{ 1,2, \ldots ,s\}$$
(6)

Equation (7) indicates that, within the same plate, the previous operation must be completed before the next operation can start, i.e., there is an operation sequence constraint.

$$F_{{ij}}^{t} \geqslant E_{{i(j - 1)}}^{t},i \in I,j \in \{ 2,3, \cdots ,m+1\}$$
(7)

Equation (8) indicates that operation j of the next plate can start only after operation j of the previous plate has been completed, preserving the processing sequence constraint.

$$F_{{ij}}^{t} \geqslant E_{{(i - 1)j}}^{t},i \in \{ 2,3, \cdots ,n+1\} ,j \in J$$
(8)

Equation (9) indicates that the completion time cannot exceed the delivery time constraint.

$${f_{\text{4}}}=A_{{\hbox{max} }}^{t} \leqslant D$$
(9)
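For a concrete reading of Eqs. (1)–(3), the three objectives can be evaluated from a finished schedule as sketched below. The dictionary layout and the names `energy`, `standby_energy`, and `standby_time` (standing in for the P and C terms of Table 1) are illustrative assumptions, not the paper's exact data structures:

```python
def evaluate(schedule, energy, standby_energy, standby_time):
    """schedule[i][j] = (machine k, start F_ij, end E_ij);
    energy[(i, j, k)] = processing energy of operation (i, j) on machine k."""
    makespan, cost, load = 0.0, 0.0, 0.0
    for i, job_ops in schedule.items():
        for j, (k, start, end) in job_ops.items():
            makespan = max(makespan, end)        # f1: completion time (Eq. 1)
            cost += energy[(i, j, k)]            # f2: processing energy (Eq. 2)
            load += end - start                  # f3: machine busy time (Eq. 3)
    cost += sum(standby_energy.values())         # per-machine C term of Eq. (2)
    load += sum(standby_time.values())           # per-machine C term of Eq. (3)
    return makespan, cost, load
```

In an NSGA-II run these three values form the objective vector of one individual; the delivery-time bound of Eq. (9) is then a feasibility check on the makespan.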

The improved NSGA-II algorithm solves the scheduling problem

Algorithm description

The operation scheduling of a ship plate processing workshop represents a typical Flexible Job Shop Scheduling Problem (FJSP). Although the Non-dominated Sorting Genetic Algorithm II (NSGA-II) is widely used for multi-objective optimization tasks21, it suffers from limitations such as fixed crossover and mutation rates and a simplistic elite retention strategy, often leading to premature convergence and local optima22. To address these drawbacks, based on the classic NSGA-II framework proposed by Deb et al.23, adaptive mechanisms and an improved elite selection strategy are introduced. The overall flowchart of the proposed improved NSGA-II algorithm is depicted in Fig. 2.

Fig. 2

Improved NSGA-II algorithm flow.

Decoding design

Traditional workshop scheduling only considers the problem of operation sequence constraints and generally adopts an operation-based encoding method. However, the scheduling problem of ship plate processing workshops also needs to account for machine selection. Therefore, the MSOS (Machine Selection and Operation Sequence) approach is adopted. The MSOS encoding mainly consists of two parts: one is operation-based encoding, which is used to clarify operation sequence constraints; the other is machine-based encoding, which specifies the machine assigned to each operation to ensure machine selection constraints are satisfied.

Operation-based Encoding: In the encoding process, i denotes the index of the plate to be processed, and j represents the index of the operation to be processed. For example, O12 indicates the 2nd operation of plate 1. Operations are encoded sequentially, with the length of the operation-based encoding equal to the total number of operations to be processed.

Machine-based Encoding: Each operation must be assigned to a corresponding machine for processing. Therefore, the machine-based encoding must have the same length as the operation-based encoding, and it should correspond to the encoding sequence of the plates and operations to be processed.

Table 2 presents a case study of the Flexible Job Shop Scheduling Problem (FJSP), with its encoding process illustrated in Fig. 3. Taking an example involving 4 processing machines, 3 jobs, and 9 operations, encoding is performed in accordance with the aforementioned encoding method. Based on the provided case data, if the operations to be processed are as follows: \({O_{11}}{\text{-}}{O_{31}}{\text{-}}{O_{21}}{\text{-}}{O_{22}}{\text{-}}{O_{32}}{\text{-}}{O_{12}}{\text{-}}{O_{33}}{\text{-}}{O_{13}}{\text{-}}{O_{23}}\), the corresponding operation-based encoding is: 1-3-2-2-3-1-3-1-2. Since no single machine is capable of processing all operations, the sequence of processing machines corresponding to the aforementioned operations is as follows: \({M_1}{\text{-}}{M_2}{\text{-}}{M_1}{\text{-}}{M_2}{\text{-}}{M_3}{\text{-}}{M_2}{\text{-}}{M_3}{\text{-}}{M_4}{\text{-}}{M_3}\). Ordering these machine assignments by plate and operation index (i.e., positions corresponding to O11, O12, O13, O21, …, O33), the corresponding machine-based encoding is: 1-2-4-1-2-3-2-3-3.
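A minimal decoder for this MSOS representation is sketched below. It assumes, consistently with the example values, that the machine-based string is ordered by plate and operation index (positions O11, O12, O13, O21, …, O33); the function and parameter names are illustrative:

```python
def decode_msos(os_vec, ms_vec, ops_per_job):
    """Turn an MSOS chromosome into a (job, operation, machine) sequence.

    os_vec: job ids in processing order (operation-based encoding).
    ms_vec: machine ids ordered by (job, operation) (machine-based encoding).
    ops_per_job: number of operations of each job.
    """
    offsets, total = {}, 0
    for job in sorted(ops_per_job):           # first position of each job in ms_vec
        offsets[job], total = total, total + ops_per_job[job]
    next_op = {job: 1 for job in ops_per_job}  # next operation index per job
    sequence = []
    for job in os_vec:
        j = next_op[job]
        sequence.append((job, j, ms_vec[offsets[job] + j - 1]))
        next_op[job] += 1
    return sequence
```

With the example data, `decode_msos([1,3,2,2,3,1,3,1,2], [1,2,4,1,2,3,2,3,3], {1:3, 2:3, 3:3})` reproduces the processing-order machine sequence M1-M2-M1-M2-M3-M2-M3-M4-M3 given above.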

Table 2 Examples of flexible job shop scheduling problems.
Fig. 3

MSOS encoding.

Crossover and mutation operations

Crossover and mutation, fundamental strategies for generating new populations, significantly influence algorithm convergence and population diversity. The traditional NSGA-II algorithm employs fixed crossover and mutation probabilities throughout the iterative process24, which often leads to premature convergence and entrapment in local optimal solutions due to inadequacies in probability settings. To address these issues, adaptive methods for adjusting crossover and mutation probabilities are introduced to meet the varying requirements at different stages of the algorithm. During the initial phase of algorithm iteration, when individual quality is low25, larger crossover and mutation probabilities are necessary to encourage individuals to actively explore the entire solution space, thereby avoiding premature convergence. Conversely, in the later stages of algorithm iteration, smaller crossover and mutation probabilities are essential to safeguard the optimal solution set and facilitate convergence. Formula (10) represents the adaptive crossover operation, while formula (11) denotes the adaptive mutation operation.

$${p_a}=\left\{ {\begin{array}{*{20}{c}} {1+\frac{{({N_{avg}} - N_{a}^{{\hbox{max} }})}}{{{N_{\hbox{max} }} - {N_{avg}}}} \times 0.1,N_{a}^{{\hbox{max} }} \geqslant {N_{avg}}} \\ {1,N_{a}^{{\hbox{max} }}<{N_{avg}}} \end{array}} \right.$$
(10)
$${p_b}=\left\{ {\begin{array}{*{20}{c}} {0.1+\frac{{({N_{avg}} - {N_b})}}{{{N_{\hbox{max} }} - {N_{avg}}}} \times 0.09,{N_b} \geqslant {N_{avg}}} \\ {0.1,{N_b}<{N_{avg}}} \end{array}} \right.$$
(11)

The crossover probability boundary value is defined as [0.9, 1]. The mutation probability boundary value is set at [0.01, 0.1]. Among them, the fitness values \({N_{\hbox{max} }}\),\({N_{avg}}\),\(N_{a}^{{\hbox{max} }}\),\({N_b}\) used to calculate the adaptive probabilities in formulas (10) and (11) are not directly derived from the original objective function values, but are scalar fitness values calculated based on the non-dominated rank (Pareto Front Rank) and crowding distance of the individual in the population. Specifically, the fitness of each individual is determined as follows:

First, perform fast non-dominated sorting on the population to obtain the non-dominated rank R of each individual (where R = 1 represents the optimal front).

Within each non-dominated rank, calculate the crowding distance CD of the individual.

The scalar fitness value N of an individual is calculated as:

\(N=\frac{1}{R}+\varepsilon *\frac{{CD}}{{C{D_{\hbox{max} }}}}\)

where ϵ is an extremely small positive number, and CDmax is the maximum crowding distance in the current population. This formula ensures that individuals with a higher-ranked (smaller R) non-dominated rank have higher fitness, and within the same rank, it slightly favors individuals with a larger crowding distance to maintain diversity.

Therefore, \({N_{\hbox{max} }}\)and \({N_{avg}}\) denote the maximum and average fitness values within the population.\(N_{a}^{{\hbox{max} }}\) represents the maximum fitness value of the selected parent during the crossover operation. \({N_b}\) indicates the fitness value of the current mutated individual. This ranking and crowding distance-based fitness definition method does not require normalization of multi-objective function values, can be directly used for the calculation of adaptive probabilities, and ensures that the probability adjustment can effectively respond to the convergence state and diversity distribution of the population.
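The fitness construction and the adaptive probabilities of Eqs. (10)–(11) can be sketched as follows. This is a minimal illustration under the stated bounds; the 0.09 scale in the mutation rule is used so that the probability stays inside the [0.01, 0.1] range, and the function names are illustrative:

```python
def scalar_fitness(rank, cd, cd_max, eps=1e-6):
    """N = 1/R + eps * CD/CD_max; boundary individuals (infinite CD) get the cap."""
    if cd_max <= 0:
        ratio = 0.0
    elif cd == float("inf"):
        ratio = 1.0
    else:
        ratio = cd / cd_max
    return 1.0 / rank + eps * ratio

def adaptive_pc(n_parent_max, n_avg, n_max):
    """Adaptive crossover probability of Eq. (10), kept within [0.9, 1]."""
    if n_parent_max < n_avg:
        return 1.0
    if n_max == n_avg:                 # fully converged population
        return 0.9
    pc = 1.0 + (n_avg - n_parent_max) / (n_max - n_avg) * 0.1
    return max(0.9, min(1.0, pc))

def adaptive_pm(n_b, n_avg, n_max):
    """Adaptive mutation probability of Eq. (11), kept within [0.01, 0.1]."""
    if n_b < n_avg:
        return 0.1
    if n_max == n_avg:
        return 0.01
    pm = 0.1 + (n_avg - n_b) / (n_max - n_avg) * 0.09
    return max(0.01, min(0.1, pm))
```

Note how the rules behave at the extremes: an individual at the population maximum receives the smallest probabilities (0.9 and 0.01), protecting elite material, while below-average individuals receive the largest (1.0 and 0.1), encouraging exploration.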

Improved elite retention strategy

The conventional elite strategy in NSGA-II selects the top N individuals from the combined parent-offspring population based on non-dominated rank and crowding distance26,27,28. While effective in preserving elites, this strategy often reduces population diversity in later iterations as individuals congregate in the first few non-dominated fronts, increasing the risk of local optima.

To achieve a better balance between elitism and diversity, this paper proposes a novel elite selection method inspired by the cooling schedule of simulated annealing. This method dynamically adjusts the number of individuals selected from each non-dominated front based on the iteration count and the front’s hierarchy. The improved algorithm steps are as follows:

(1) Merge the individuals from the parent and offspring generations to form a new population of size 2N, then perform non-dominated sorting and crowding distance calculation on this new population.

(2) Beginning from the highest-ranked non-dominated front, determine the number of individuals to retain based on formula (12) and the crowding distance.

(3) Incorporate the hierarchical cooling coefficient alongside the temperature coefficient, and use this combined value with the crowding distance to sequentially calculate the number of retained individuals across the different Pareto layers.

(4) This process continues until N individuals are selected. If a single pass does not meet the target, non-dominated sorting and crowding distance calculations are repeated for the unselected individuals, and steps 2 and 3 are reiterated until the total number of selected individuals reaches N.

This approach not only ensures that elite operators are retained but also considers the diversity within the population. The relevant diagram is presented in Fig. 4, and the specific calculation formula is provided in Eq. (12).

$${n_{ij}}=\left\{ {\begin{array}{*{20}{c}} {{N_{ij}}(C{G_i} - {B_j}),C{G_i}>{B_j}} \\ {{\text{ }}0{\text{ }},C{G_i} \leqslant {B_j}} \end{array}} \right.$$
(12)

In the formula, nij is the number of selected individuals in layer j of the i-th iteration; Nij denotes the total number of individuals in layer j of the i-th iteration; C is the temperature coefficient; Gi is the cooling coefficient after i iterations, with boundary value [0,1]; Bj is the cooling coefficient for layer j, with boundary value [0,1]. This formula dynamically adjusts the selection pressure across different fronts and iterations, balancing convergence and diversity.
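Steps (1)–(4) with the quota rule of Eq. (12) can be sketched as the selection loop below; using crowding distance to break ties inside a layer, with illustrative names throughout:

```python
def select_next_population(fronts, crowding, pop_size, C, G_i, B):
    """Cooling-based elite selection (Eq. 12).

    fronts:   list of non-dominated fronts (each a list of individuals).
    crowding: individual -> crowding distance.
    C, G_i:   temperature and per-iteration cooling coefficients.
    B:        per-layer cooling coefficients B[j].
    """
    selected = []
    for j, front in enumerate(fronts):
        if len(selected) >= pop_size:
            break
        # Eq. (12): n_ij = N_ij * (C*G_i - B_j) when C*G_i > B_j, else 0
        keep = int(len(front) * (C * G_i - B[j])) if C * G_i > B[j] else 0
        keep = min(keep, pop_size - len(selected))
        ranked = sorted(front, key=lambda ind: crowding[ind], reverse=True)
        selected.extend(ranked[:keep])
    if len(selected) < pop_size:       # top up from the unselected remainder
        rest = [ind for front in fronts for ind in front if ind not in selected]
        rest.sort(key=lambda ind: crowding[ind], reverse=True)
        selected.extend(rest[:pop_size - len(selected)])
    return selected
```

Early in the run (large G_i) the first fronts contribute many individuals; as the schedule cools, their quotas shrink and deeper layers retain a share, which is how the strategy trades convergence pressure for diversity over the iterations.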

Fig. 4

Improved elite retention policy schematic.

Pseudo code for elite selection strategy

The specific pseudocode is shown below.

Algorithm 1

Improved elite selection strategy with SA cooling mechanism.

Engineering case verification

Simulation conditions

In this paper, PyCharm 2022.1.3 is used to simulate and verify the proposed improved NSGA-II algorithm. The experiments were conducted on a computer equipped with an Intel Core i5-12500H processor (2.5 GHz) and 16 GB of RAM.

Algorithm performance evaluation index

To comprehensively evaluate the performance of the improved algorithm, two widely adopted multi-objective optimization metrics, Hypervolume (HV) and Inverted Generational Distance (IGD), were employed. The HV metric, proposed by Zitzler and Thiele29, is used to measure the convergence and diversity of the solution set; the IGD metric, on the other hand, is utilized to assess the proximity of the solution set to the true Pareto front30. A larger HV value indicates better comprehensive performance in terms of both convergence and diversity of the obtained solution set. A smaller IGD value signifies that the solution set is closer to the true Pareto front, reflecting superior convergence.
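Both metrics can be computed directly from objective vectors. The sketch below shows IGD for any number of objectives and HV for the two-objective minimization case only (the experiments here use three objectives, for which HV is typically computed with a library such as pymoo); the function names are illustrative:

```python
import math

def igd(reference_front, solution_set):
    """Mean Euclidean distance from each true-front point to its nearest
    obtained solution; smaller is better."""
    def dist(p, q):
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))
    return sum(min(dist(r, s) for s in solution_set)
               for r in reference_front) / len(reference_front)

def hv_2d(front, ref):
    """Hypervolume of a 2-objective minimization front relative to the
    reference point `ref`; larger is better."""
    hv, prev_f2 = 0.0, ref[1]
    for f1, f2 in sorted(set(front)):        # ascending in f1, f2 shrinking
        if f1 < ref[0] and f2 < prev_f2:
            hv += (ref[0] - f1) * (prev_f2 - f2)   # new dominated slab
            prev_f2 = f2
    return hv
```

The reference point for HV must dominate no solution (i.e., be worse in every objective), and the reference front for IGD is usually the union of all non-dominated solutions found by the compared algorithms when the true Pareto front is unknown.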

The performance of the improved NSGA-II algorithm is significantly influenced by its parameters, primarily the population size and the maximum number of iterations. Using the MK02 benchmark instance, a parameter sensitivity analysis was conducted. The improved algorithm was run independently ten times under different parameter combinations, and the average HV and IGD values were recorded, as presented in Table 3.

Table 3 Results of MK02 studies at different scales.

The results show that a population size of 150 with 200 iterations yields the highest HV value, while a population size of 200 with 50 iterations achieves the lowest IGD value. This indicates that a larger population size facilitates faster convergence (better IGD) in early stages, whereas more generations (iterations) are beneficial for enhancing the diversity and overall quality (HV) of the solution set. Considering the trade-off between computational cost and solution quality, a population size of 150 and 200 iterations were selected as the parameter settings for subsequent experiments.

Case verification

To validate the effectiveness of the proposed improvements, the enhanced NSGA-II algorithm was compared against the standard NSGA-II and the more recent NSGA-III algorithm31 on ten public benchmark instances (MK01-MK10). Each algorithm was executed independently 10 times on each instance to ensure statistical reliability. The results of the comparative verification are presented in Table 4.

Table 4 MK01-MK10 experimental comparison.

As evident from Table 4, the improved NSGA-II algorithm consistently outperforms both NSGA-II and NSGA-III across most benchmark instances regarding both HV and IGD metrics. This demonstrates that the proposed adaptive strategies and elite retention mechanism significantly enhance the algorithm’s ability to approximate the true Pareto front with better diversity and convergence.

The distribution of HV and IGD values for instances MK01, MK05, and MK09 is further illustrated using box plots in Fig. 5. The box plots clearly show that the improved algorithm achieves not only higher median values for HV and lower median values for IGD but also exhibits greater stability (smaller interquartile range) and fewer outliers compared to the other two algorithms. This confirms the robustness and superiority of the proposed approach.

Fig. 5

Algorithm performance comparison box plot.

Engineering case

To further assess the algorithm’s performance in a practical setting, a real-world case from a shipyard’s plate processing workshop (Manufacturing Order N1072) was adopted. The scheduling problem involves 16 plates and 7 machines, with a delivery time constraint of 3 h (180 min). The detailed processing times for each operation are presented in Table 5.

Table 5 N1072 processing Information.

The three algorithms were each run 10 times on this engineering case. Table 6 summarizes the best and average values of the three objectives (Completion Time, Machine Load, Processing Cost) and the performance metrics (HV, IGD).

Table 6 The result of the case run.

The results indicate that the improved NSGA-II algorithm achieves the best performance in minimizing completion time and machine load, while yielding competitive results in processing cost optimization. The significantly higher HV and lower IGD values further confirm that it provides a superior and more stable Pareto front compared to the other algorithms. The distribution of the objective values from ten runs is also visualized using box plots in Fig. 6, reinforcing the statistical advantage of the proposed algorithm.

Fig. 6

Box plot of the objective function.

To more clearly illustrate the distribution of the three objective function solution sets within the solution space, each of the three algorithms was executed once to obtain the distributions of completion time, machine load, and processing cost on the Pareto front, as depicted in Fig. 7.

Figure 7(b) presents a comparison chart illustrating the relationship between completion time and machine load. It is evident that the solution set derived from the improved algorithm is more concentrated within the optimal solution range. Figure 7(c) indicates that a decrease in machine load leads to more favorable processing costs under identical conditions. Figure 7(d) demonstrates that, when considering the dual objectives of completion time and processing cost, the performance of the improved algorithm is slightly inferior to that of the NSGA-III algorithm. This limitation represents a shortcoming of the current algorithm, which will be addressed in future research. Furthermore, based on the three-objective conditions depicted in Fig. 7(a), it is apparent that the improved NSGA-II algorithm outperforms the other two algorithms overall.

Fig. 7

Pareto front distribution map. (a) Comprehensive comparison figure. (b) Completion time - machine load comparison figure. (c) Machine load - processing cost comparison figure. (d) Completion time - processing cost comparison figure.

(1) Initial scheduling plan: Shipbuilding companies currently face a high volume of orders, necessitating a strong focus on time costs. Figure 8 presents the Gantt chart depicting the initial scheduling outcome achieved under the minimum-makespan criterion. Using the improved NSGA-II algorithm, the minimum completion time is 118 min, with a machine load of 512 min and a processing cost of 273.75 kW·h.

Fig. 8

Initial scheduling Gantt chart.

(2) Rework order scheduling: After Job 7-1 is processed, Jobs 13-2 and 13-3 from the historical orders need to be reworked and assigned to machines 2# (11 min) and 5# (4 min). At this point, the data collection system in the plate processing workshop detects a disturbance in the production area, prompting an immediate activation of the dynamic scheduling process for the plate processing workshop. The affected operations are then shifted to the right and rescheduled. The rescheduling plan is illustrated in Fig. 9. Using the improved NSGA-II algorithm, it is determined that the minimum completion time is 129 min, the machine load totals 534 min, and the processing cost amounts to 281.85 kW·h. According to the scheduling results, the rescheduling plan adheres to the delivery time constraint, thus confirming that this schedule is executable following simulation verification.

Fig. 9

Insert orders to reschedule the Gantt chart.

(3) Machine fault scheduling: The 2# plasma cutting machine experienced a failure at 0 min, and maintenance efforts continued until production capacity was restored at 50 min. Under the right-shift rescheduling scheme, the resulting schedule exceeds the delivery date, necessitating a full rescheduling of this cutting task. The scheduling results obtained through the improved NSGA-II algorithm are presented in Fig. 10. The calculations indicate that the minimum completion time is 130 min, the machine load is 514 min, and the processing cost is 297 kW·h, all of which comply with the delivery time constraints.

Fig. 10 Gantt chart for machine failure rescheduling.

Comparative analysis of dynamic scheduling strategies

To verify the comprehensive performance of the proposed dynamic scheduling method (i.e., the “right-shift preliminary judgment + improved NSGA-II full rescheduling when necessary” illustrated in Fig. 1, hereinafter referred to as Method P), it is compared with the following two benchmark scheduling strategies:

(a) Benchmark Method A (Pure Right-Shift): After detecting a disturbance event, only the affected operations and their subsequent operations are subjected to a right-shift operation, without altering the original processing sequence or machine assignment of any operations.

(b) Benchmark Method B (Right-Shift + Standard NSGA-II Full Rescheduling): It adopts the same dynamic scheduling logical framework as this study, but when full rescheduling is required, the standard NSGA-II algorithm is used for the solution.

The experiments are conducted on the engineering case described in Section "Engineering case" (Manufacturing Order N1072), simulating two typical disturbance scenarios: "urgent order insertion" (corresponding to Sect. 4.4.2) and "machine failure" (corresponding to Sect. 4.4.3). In each scenario, the three dynamic scheduling strategies are applied and the performance indicators of the generated scheduling schemes are recorded; each strategy is executed 10 times independently, and the average value is taken as the final result. The comparative results are presented in Table 7.

Table 7 Performance comparison of different dynamic scheduling strategies in disturbance Scenarios.

Within an acceptable range, the pure right-shift strategy (Method A) offers extremely fast computation and minimal interference with the original plan, making it an efficient and practical choice when disturbances are minor and do not threaten delivery deadlines; this aligns with the design philosophy of prioritizing the right-shift strategy in the preliminary judgment phase of the proposed method. When disturbances are severe (e.g., machine failures leading to missed delivery deadlines), the pure right-shift strategy becomes ineffective, and optimization-driven full rescheduling (Method P and Method B) becomes necessary. Comparing the two, the proposed Method P outperforms the benchmark Method B in all optimization objectives (makespan, machine load, and processing cost), which directly verifies the superiority of the improved NSGA-II algorithm over the standard version in solving rescheduling problems.
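The preliminary-judgment logic of Method P (right-shift first, trigger full rescheduling only when the delivery date would be violated) can be sketched as follows. The operation representation and function names are illustrative assumptions, not the paper's implementation:

```python
# Illustrative sketch of the right-shift preliminary judgment (Method P).
# Each operation is a hypothetical (start, duration) pair in minutes.

def right_shift(ops, disturb_start, delay):
    """Delay every operation starting at or after the disturbance by `delay` min,
    keeping the original sequence and machine assignments unchanged."""
    return [(start + delay if start >= disturb_start else start, dur)
            for start, dur in ops]

def needs_full_reschedule(ops, due_date):
    """Full rescheduling is triggered only when the right-shifted makespan
    exceeds the delivery deadline."""
    makespan = max(start + dur for start, dur in ops)
    return makespan > due_date
```

If `needs_full_reschedule` returns False, the right-shifted plan is kept as-is; otherwise the improved NSGA-II is invoked to regenerate the schedule.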

Robustness and statistical analysis under disturbance scenarios

To evaluate the robustness and statistical performance of the proposed dynamic scheduling method and the improved NSGA-II algorithm in an uncertain environment, Monte Carlo simulation experiments with random disturbances are carried out.

(1) Experimental Design.

Taking the basic case (Order N1072) in Section “Engineering case” as the initial scheduling scheme, two types of random disturbance scenarios are designed as follows:

Scenario A: Random Rework Insertion. This scenario simulates the random insertion of rework operations at random time points during the production process. The specific parameter settings are as follows: 1–3 operations are randomly selected from historical orders as rework orders; the insertion time is randomly generated within the first 30% of the total production cycle; each random experiment is run independently.

Scenario B: Random Machine Failure. This scenario simulates failures of random machines that occur at random times and last for random durations. The specific parameter settings are as follows: the failed machine is randomly selected from 7 machines; the failure start time is randomly generated within the first 50% of the total production cycle; the failure maintenance duration follows a uniform distribution U(20, 80) minutes; each random experiment is run independently.

For each disturbance scenario, 100 random disturbance instances are generated respectively. For each instance, the dynamic scheduling method proposed in this paper (i.e., the process in Fig. 1: first conduct a right-shift judgment, and perform full rescheduling with the improved NSGA-II if the delivery date is exceeded) is applied for response.
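The disturbance-instance generation above can be sketched as follows. The parameter ranges follow the experimental design; the function names and dictionary layout are hypothetical:

```python
import random

# Sketch of random disturbance-instance generators for the Monte Carlo
# experiments; parameter ranges follow the experimental design above.

def gen_rework_instance(rng, n_history_ops, cycle_len):
    k = rng.randint(1, 3)                    # 1-3 rework operations
    ops = rng.sample(range(n_history_ops), k)
    t = rng.uniform(0, 0.3 * cycle_len)      # within first 30% of the cycle
    return {"type": "rework", "ops": ops, "time": t}

def gen_failure_instance(rng, n_machines, cycle_len):
    m = rng.randrange(n_machines)            # one of the 7 machines
    t = rng.uniform(0, 0.5 * cycle_len)      # within first 50% of the cycle
    repair = rng.uniform(20, 80)             # maintenance duration ~ U(20, 80)
    return {"type": "failure", "machine": m, "start": t, "repair": repair}

rng = random.Random(42)
instances = [gen_failure_instance(rng, 7, 130) for _ in range(100)]
```

Each of the 100 generated instances is then fed to the dynamic scheduling process of Fig. 1.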

(2) Experimental Results and Analysis.

The proposed method (improved NSGA-II) is compared with the standard NSGA-II on the same set of random disturbance instances. Both algorithms are run independently 5 times for each instance to mitigate the effect of randomness, and the optimal solutions are selected for statistics. The results are presented in Table 8.

Table 8 Statistical comparison of algorithm performance and robustness under random disturbance Scenarios.

As shown in Table 8, under the two types of random disturbance scenarios, the improved NSGA-II outperforms the standard NSGA-II in terms of the average values of all objectives (makespan, machine load, and processing cost), and also has smaller standard deviations, indicating its superior average performance and stronger stability. Especially in the random machine failure scenario, the improved NSGA-II achieves a higher scheduling success rate (96% vs. 92%) and a lower maximum delay, further demonstrating its reliable ability to cope with severe disturbances and ensure delivery deadlines.

Significance test and ablation experiment analysis

Significance test

To statistically confirm the significance of the performance advantages of the improved algorithm (INSGA-II), we adopted the Wilcoxon signed-rank test (significance level α = 0.05) to conduct pairwise comparisons of the HV and IGD results from 10 independent runs of INSGA-II, NSGA-II, and NSGA-III on all MK benchmark problems. The test results are presented in Table 9.

Table 9 Results of Wilcoxon Signed-Rank test (p-values).

As shown in Table 9, for all 10 test instances, the p-values of the comparisons between INSGA-II and the two comparison algorithms in terms of HV and IGD indicators are all far less than 0.05. This indicates that the performance improvement of the improved algorithm INSGA-II is highly significant in statistics, and is not caused by random fluctuations.

Ablation experiment and analysis

To further investigate the individual effects and synergistic effect of the adaptive genetic operator and the improved elite selection strategy proposed in this paper, ablation experiments are designed. On MK02, MK05, MK09 and the engineering case N1072, the following four algorithm configurations are compared: NSGA-II: Standard algorithm; NSGA-II-A: The standard algorithm with only the adaptive crossover and mutation operators described in Section “Decoding design” introduced; NSGA-II-B: The standard algorithm with only the improved elite selection strategy described in Section “Crossover and mutation operations” introduced; INSGA-II: The complete algorithm integrating both improvement strategies.

Each algorithm is run independently 10 times, and the average HV and IGD values are recorded. The results are shown in Table 10.

Table 10 Ablation experiment results (Average HV / Average IGD).
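For reference, the IGD indicator reported here is the mean Euclidean distance from each point of a reference front to its nearest obtained solution (lower is better). A minimal sketch for point sets in objective space:

```python
import math

def igd(reference_front, obtained_front):
    """Inverted Generational Distance: average distance from each reference
    point to the nearest obtained solution (lower is better)."""
    def dist(p, q):
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))
    return sum(min(dist(r, s) for s in obtained_front)
               for r in reference_front) / len(reference_front)
```

In practice the reference front for MK benchmarks is usually the non-dominated union of all solutions found by all compared algorithms.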

NSGA-II-A and NSGA-II-B outperform the standard NSGA-II in all indicators, indicating that both the adaptive genetic operator and the improved elite selection strategy can effectively improve the performance of the algorithm. Among them, the adaptive operator (A) contributes more prominently to improving convergence (reducing IGD), while the elite selection strategy (B) plays a more obvious role in expanding the front distribution (improving HV). The complete algorithm (INSGA-II) significantly outperforms either single improved variant in both HV and IGD. This indicates that the adaptive operator facilitates effective exploration in the early stage of iteration, while the improved elite selection strategy better maintains population diversity in the late stage of iteration; the two complement each other, jointly yielding a substantial improvement in the comprehensive performance of the algorithm.
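The two mechanisms compared above can be illustrated schematically. The formulas below are one common form of each mechanism (a fitness-based adaptive crossover probability for minimization, and a temperature-cooled acceptance rule for non-elite individuals), not necessarily the exact formulas used in this paper; all constants are illustrative:

```python
import math, random

def adaptive_pc(f_parent_best, f_avg, f_min, p_max=0.9, p_min=0.6):
    """Adaptive crossover probability (minimization): better-than-average
    parents get a lower probability, protecting promising individuals,
    while worse parents keep the maximum probability for exploration."""
    if f_parent_best >= f_avg or f_avg == f_min:
        return p_max
    return p_min + (p_max - p_min) * (f_parent_best - f_min) / (f_avg - f_min)

def accept_non_elite(delta_rank, iteration, max_iter, t0=1.0, rng=random):
    """Simulated-annealing-style acceptance of a worse (higher non-dominated
    rank) individual; the temperature cools as iterations proceed, so
    diversity is favored early and elitism dominates late."""
    temp = t0 * (1 - iteration / max_iter) + 1e-9
    return rng.random() < math.exp(-delta_rank / temp)
```

Early in the run a non-elite individual with a small rank gap is accepted with non-trivial probability; near the end of the run the acceptance probability collapses toward zero.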

Conclusion

To address the impact of frequent disturbance events on the production schedule in the ship plate processing workshop, this paper proposes a dynamic scheduling method for plate processing in response to disturbance events. First, by integrating the existing scheduling logic of the plate processing workshop with the rescheduling mechanism for disturbance events, a coupled response between processing equipment and the production process is achieved. On this basis, with the optimization objectives of minimizing makespan, machine load, and processing cost, and subject to constraints such as delivery deadlines, machine selection, and operation priorities, a multi-objective scheduling mathematical model for plate processing is constructed. Regarding the solution method, in view of the problems of the traditional NSGA-II algorithm, such as being prone to falling into local optima and having insufficient population diversity in workshop scheduling applications, an improved NSGA-II algorithm is proposed. By introducing adaptive crossover and mutation probabilities based on the population evolution status, premature convergence of the algorithm is avoided. Meanwhile, combined with the simulated annealing cooling mechanism, an elite selection strategy that dynamically adjusts with the number of iterations and non-dominated levels is proposed, which can maintain population diversity while retaining elite individuals. Finally, simulation verification is carried out through cases from public test sets and a real-world engineering case in a shipyard’s plate processing workshop. The results show that the proposed improved algorithm outperforms the standard NSGA-II and NSGA-III algorithms in evaluation indicators such as HV and IGD. Moreover, the constructed dynamic scheduling method demonstrates good feasibility and superiority in responding to disturbance events.

The research in this paper provides valuable solutions and practical references for the production scheduling problem in the shipbuilding industry. However, the study mainly focuses on static constraints such as machine selection and operation priority, failing to fully reflect the dynamic uncertainties in the actual plate production process. In future research, more practical constraints related to workshop scheduling (such as fluctuations in material supply, multi-workshop collaboration, and human factors) will be further incorporated to enhance the model’s ability to depict the real production environment. Meanwhile, the algorithm structure will be continuously optimized to improve its generalization performance and real-time response ability in different production scenarios.