Introduction

With advancements in computer technology, modern engineering design increasingly relies on high-fidelity simulation models, such as Finite Element (FE) analysis and computational fluid dynamics simulations, to reduce the cost of physical experiments. However, these high-fidelity simulations inevitably incur significant computational costs. In the engineering optimization process in particular, traditional numerical optimization methods (e.g., Grey Wolf Optimizer1, Genetic Algorithm2, Particle Swarm Optimization3) require numerous numerical simulations, making them inefficient for high-cost black-box optimization problems.

To address this issue, the field of engineering optimization has widely adopted approximate modeling, commonly referred to as Approximation-Based Design Optimization (ABDO). This approach reduces the need for costly simulation analyses by constructing mathematical models based on a limited number of sample points to represent the relationship between design variables and responses. Over the past few decades, various approximate modeling techniques have been developed, including Response Surface Model (RSM), Kriging interpolation model (KRG), Radial Basis Function neural networks (RBF), Back Propagation Neural Networks (BPNN), Least Squares Model (LSM), Moving Least Squares Model (MLSM), and Support Vector Regression (SVR). These methods have been successfully applied in engineering optimization.

With the continuous advancement of the automotive industry, safety design has become one of the core requirements of vehicle performance. As a critical component of the automobile, the seat not only provides a comfortable seating experience for passengers but also plays a key protective role. However, as the requirements for seating comfort and safety continue to increase, the structural complexity also increases, leading to a significant rise in weight. Therefore, optimizing the design of automotive seat skeletons is particularly important and represents a key area of research in the automotive industry. The actual design of seat structures must consider factors such as cost efficiency, weight reduction, and structural stiffness, making the optimization of seat skeletons a complex multi-objective collaborative process. Shan et al.4 and Dai et al.5 explicitly identified two primary integrated framework strategies for solving the Multi-Objective Optimization Problem (MOOP) in automotive structural components: local optimization and global optimization approaches (Fig. 1). One of these methods is local optimization, which treats the sample data obtained through the Design of Experiments (DOE) as the solution set of the MOOP and subsequently employs the Multi-Criteria Decision-Making (MCDM) method to identify the best compromise from these samples. This approach does not require consideration of the construction of approximation models and eliminates the step of solving the MOOP through a multi-objective optimization algorithm; however, it tends to yield suboptimal solutions. The other method is the global optimization approach, in which an approximation model is constructed based on the sample data obtained from DOE, and this approximation model is then used to replace the expensive high-fidelity FE simulation model, combined with a multi-objective intelligent algorithm to solve the MOOP and obtain the Pareto Frontier Solutions (PFSs).
Subsequently, the MCDM method is employed to identify the optimal compromise solutions from the PFSs. This approach exploits the advantages of the multi-objective intelligent optimization algorithm for solving MOOP, and the optimized solutions obtained are typically of high quality. Clearly, the use of a global optimization strategy based on an approximation model is an ideal method for achieving multi-objective optimization in automotive seat design.

Fig. 1

Two primary optimization methods in multi-objective optimization. MOOP, multi-objective optimization problem; MCDM, multi-criteria decision-making.

However, automotive seats are subjected to a range of crash conditions, and the underlying theory behind FE modeling of these conditions is highly complex, making the FE solver runs submitted to a central processor highly time-consuming. In ABDO design for automotive seats, the responses typically exhibit highly nonlinear characteristics, and efficiently fitting these responses with simple approximation techniques to obtain highly accurate models becomes increasingly difficult. Consequently, constructing highly accurate predictive models for all responses in the multi-objective optimization design of automotive seats remains a challenging task. Dai et al.5 compared three classical approximation models, RSM, KRG, and RBF, in the multi-objective optimization of front car seats and found that RSM was more suitable for this application. However, that study showed the RSM achieving a low coefficient of determination (R2) of approximately 0.7 when fitting the response indicator AH1. Li et al.6 stated that the R2 of the approximation model for the multi-objective optimization design of automotive structural components should exceed 0.8. Since a single approximation model may not be sufficient to achieve higher prediction accuracy for all responses, Li et al. proposed a hybrid RBF-KRG approximation model for the lightweight optimization of the body-in-white skeleton. To enhance the prediction performance of the approximation models, Long et al.7 proposed a heuristic weighting method (EG method) to determine the weighting coefficients of three approximation models: RSM, KRG, and RBF. Based on these weighting coefficients, they combined the three approximation models to develop a new approximation model for the multi-objective optimization of a rear passenger car seat.
By comparing the accuracy of the genetic aggregation response surface (GARS) model with that of RSM, KRG, and RBF, Zhou et al.8 found that the GARS model exhibits high global predictive accuracy and strong local predictive capability, making it more suitable for the multi-objective optimization of automotive seat design. Considering the studies mentioned above, it is evident that (1) the approximation models predominantly used in current multi-objective optimization of automotive seats are based on RSM, KRG, and RBF, with limited exploration of other approximation model fitting methods, and (2) a single approximation model may be insufficient to accurately fit the highly nonlinear response data under collision conditions. Therefore, this paper proposes an Approximation-Based Global Multi-Objective Optimization Design (ABGMOOD) for automotive seats under crash conditions, which combines response surface prediction models (Taylor Polynomial (TP) and Optimal RSM (ORSM)), machine learning prediction models (Shallow Neural Networks (SNN), Deep Neural Networks (DNN), Random Forest Regression (RFR), and Support Vector Regression (SVR)), and interpolated predictive models (KRG and RBF). These models represent state-of-the-art approximation methods. Subsequently, multiple approximation models are employed to construct the hybrid approximation models, thereby improving the prediction accuracy of the entire system.

This paper proposes an ABGMOOD strategy for the multi-objective optimization design of automotive seats based on HAM-MSAM for the MOOP of the rear seat of a passenger car. The PFSs are obtained by integrating the Hybrid Approximation Models based on the Multi-Species Approximation Model (HAM-MSAM) with the Non-dominated Sorting Genetic Algorithm-III (NSGA-III) optimization algorithm. Finally, the recently proposed combination weights-comprehensive and weighted improvement degree (CW-CWID) MCDM method is employed to identify the optimal compromise solution from the PFSs. The results indicate that the ABGMOOD method effectively allocates the materials and thicknesses of components within the seat skeleton structure to achieve improvements in lightweight design, cost efficiency, and safety. Furthermore, it offers reliable theoretical guidance for the multi-objective optimization design of automotive seats.

Optimization strategy and methodology

Optimization strategy

This study carries out a multi-objective optimization of the rear seat skeleton of a passenger car, aiming to improve the safety protection of the driver and passengers, structural lightweighting, and production cost. To achieve this multi-objective collaborative design efficiently, an ABGMOOD strategy is proposed based on HAM-MSAM and the global optimization concept shown in Fig. 1.

First, an accurate finite element model is created according to the finite element modeling specifications, and an FE analysis model for the rear-seat luggage compartment collision condition of passenger cars is developed in compliance with international safety codes and industry standards. The accuracy of the finite element model is then validated through the strong correlation between the physical test results and the FE simulation results. On this basis, the MOOP for the backrest skeleton of the rear seat of a passenger car is formulated, considering material cost, mass, seat safety performance, and other factors. Next, sample points are collected as a training set for constructing the HAM-MSAM using the Optimal Latin Hypercube Design (OLHD) method, and the HAM-MSAM is employed as a surrogate for the FE model to solve the MOOP and obtain the PFSs in combination with the NSGA-III optimization algorithm. Finally, the recently proposed CW-CWID MCDM method is applied to select the best compromise solution from the PFSs, which serves as the final optimized design solution. Further details of the ABGMOOD optimization are provided in Fig. 2.

Fig. 2

ABGMOOD optimization strategy.

Methodology and theory

Radial basis function neural network

RBF networks are widely used in solving many technical and non-technical problems9,10. Constructing the RBF approximation model can be viewed as a constrained quadratic optimization problem in which a given dataset is approximated by a function of the form

$$f\left( x \right) = \sum\limits_{j = 1}^{{n_{c} }} {w_{j} \phi \left( {\left\| {x - c^{\left( j \right)} } \right\|} \right)}$$
(1)

where \(c^{\left( j \right)}\) represents the center of the \(j\)th kernel function, \(n_{c}\) represents the total number of kernel functions, \(\phi\) denotes the distance function (calculated as the Euclidean distance from the predicted point of the basis function to the center point), and \(w_{j}\) represents the weight coefficient. The RBF model can be adapted to different functions based on varying weights, and its response changes according to the distance from the center.
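The construction of Eq. (1) can be illustrated with a minimal pure-Python sketch. The Gaussian kernel, the two training points, and the width parameter below are illustrative choices, not values from this study; the centers are placed at the sample points so the weights follow from exact interpolation (a 2 × 2 linear system solved with Cramer's rule).

```python
import math

# Gaussian basis function phi(d) = exp(-d^2 / (2 * sigma^2)) -- an
# illustrative kernel choice
def phi(d, sigma=1.0):
    return math.exp(-d**2 / (2 * sigma**2))

# Toy 1-D training data; centers c^(j) placed at the sample points
xs = [0.0, 1.0]
ys = [1.0, 3.0]

# Exact interpolation: solve the 2x2 system K w = y with Cramer's rule
k11, k12 = phi(abs(xs[0] - xs[0])), phi(abs(xs[0] - xs[1]))
k21, k22 = phi(abs(xs[1] - xs[0])), phi(abs(xs[1] - xs[1]))
det = k11 * k22 - k12 * k21
w1 = (ys[0] * k22 - ys[1] * k12) / det
w2 = (k11 * ys[1] - k21 * ys[0]) / det

def f(x):
    # Eq. (1): weighted sum of radial basis functions
    return w1 * phi(abs(x - xs[0])) + w2 * phi(abs(x - xs[1]))
```

Because the model interpolates, `f` reproduces the training responses exactly at the sample points, while predictions between them decay with distance from the centers.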

Kriging interpolation model

The KRG model was originally developed in geostatistics by Danie Krige, a South African geostatistician. In general, the KRG model can be viewed as a combination of a global model and local deviations11, denoted as Eq. (2).

$$Y\left( x \right) = \sum\limits_{j = 1}^{m} {\beta_{j} f_{j} \left( x \right) + Z\left( x \right)}$$
(2)

where \(f_{j} \left( {j = 1,2, \ldots ,m} \right)\) is a known basis function, \(\beta_{j} \in R\) is an unknown parameter, \(Z\left( x \right)\) is a zero-mean Gaussian random process, \(Z\left( x \right) \sim N\left( {0,k} \right)\), and \(k\) is the covariance function. The advantage of the KRG model is that it not only generates an interpolated spatial model but also provides an estimate of the mean squared deviation of the predictions. The main difference between the Kriging model and a polynomial regression or radial basis function model is that it is constructed from the observed sample data rather than relying on a predefined model form. If the data are not spatially correlated or if the dataset is limited in size, then the accuracy of the model is constrained12,13,14.
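The two distinguishing KRG outputs, the interpolating prediction and its mean squared deviation estimate, can be sketched in pure Python. This is a simplified zero-trend variant of Eq. (2) (the global term \(\sum \beta_j f_j\) is dropped for brevity), with an assumed squared-exponential covariance and a toy two-point dataset.

```python
import math

def k(x1, x2):
    # Squared-exponential covariance for Z(x) -- an assumed choice
    return math.exp(-(x1 - x2)**2 / 2.0)

xs = [0.0, 1.0]
ys = [1.0, 3.0]

def predict(x):
    # Covariance matrix K of the training points and vector k* to x
    K = [[k(xs[0], xs[0]), k(xs[0], xs[1])],
         [k(xs[1], xs[0]), k(xs[1], xs[1])]]
    ks = [k(x, xs[0]), k(x, xs[1])]
    det = K[0][0] * K[1][1] - K[0][1] * K[1][0]
    # Solve K a = k* with Cramer's rule
    a0 = (ks[0] * K[1][1] - ks[1] * K[0][1]) / det
    a1 = (K[0][0] * ks[1] - K[1][0] * ks[0]) / det
    mean = a0 * ys[0] + a1 * ys[1]               # Kriging prediction
    mse = k(x, x) - (a0 * ks[0] + a1 * ks[1])    # prediction variance estimate
    return mean, mse

m0, v0 = predict(0.0)
```

At a training point the prediction matches the observation and the variance estimate collapses to zero; between samples the variance grows, which is the property exploited by Kriging-based adaptive sampling.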

Support vector regression

SVR is used for nonlinear regression estimation, relying on the kernel function to establish the relationship between inputs and outputs15,16. Given a training set \(\left( {x_{i} ,y_{i} } \right)\left( {i = 1,2, \ldots n} \right)\) with a nonlinear relationship, the linear regression equation in the feature space is represented as Eq. (3).

$$f\left( x \right) = w \cdot \phi \left( x \right) + b$$
(3)

The optimal regression function is obtained by solving the following mathematical program.

$$\begin{aligned} & \min \, \frac{1}{2}\left\| w \right\|^{2} + C\sum\limits_{i = 1}^{n} {\left( {\xi_{i} + \xi_{i}^{*} } \right)} \\ & s.t. \, \left\{ {\begin{array}{*{20}l} {y_{i} - w \cdot \phi \left( {x_{i} } \right) - b \le \varepsilon + \xi_{i} } \hfill \\ {w \cdot \phi \left( {x_{i} } \right) + b - y_{i} \le \varepsilon + \xi_{i}^{*} } \hfill \\ {\xi_{i} ,\xi_{i}^{*} \ge 0} \hfill \\ \end{array} } \right. \\ \end{aligned}$$
(4)

where \(C\) is the regularization parameter, \(\xi_{i}\) and \(\xi_{i}^{*}\) are slack variables, and \(\varepsilon\) is the parameter indicating the approximation accuracy. The Gaussian function is one of the most common kernel functions in SVR.

$$k\left( {x,x_{i} } \right) = \exp \left( { - \frac{{\left( {x - x_{i} } \right)\left( {x - x_{i} } \right)^{T} }}{{2\sigma^{2} }}} \right)$$
(5)
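Problem (4) can be illustrated with a minimal pure-Python sketch that minimizes the primal objective by subgradient descent. For brevity this uses a linear feature map \(\phi(x) = x\) rather than the Gaussian kernel of Eq. (5), and the data, \(C\), \(\varepsilon\), learning rate, and iteration count are all illustrative assumptions.

```python
# Toy data generated from y = 2x + 1
xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [1.0, 3.0, 5.0, 7.0, 9.0]

C, eps = 10.0, 0.1   # regularization weight and tube half-width (illustrative)
w, b = 0.0, 0.0
lr = 0.001

def loss(w, b):
    # Primal objective of Eq. (4), with the slack variables written as the
    # epsilon-insensitive loss max(0, |y - f(x)| - eps)
    hinge = sum(max(0.0, abs(y - (w * x + b)) - eps) for x, y in zip(xs, ys))
    return 0.5 * w * w + C * hinge

first = loss(w, b)
for _ in range(5000):
    gw, gb = w, 0.0          # gradient of the regularization term
    for x, y in zip(xs, ys):
        r = y - (w * x + b)
        if r > eps:          # point above the epsilon-tube
            gw -= C * x; gb -= C
        elif r < -eps:       # point below the epsilon-tube
            gw += C * x; gb += C
    w -= lr * gw; b -= lr * gb

final = loss(w, b)
```

The fitted slope settles near 2 but slightly below it, because the \(\tfrac{1}{2}\left\| w \right\|^{2}\) term shrinks \(w\) until the residuals reach the edge of the \(\varepsilon\)-tube, which is exactly the trade-off that the constraints in Eq. (4) encode.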

Random forest regression

RFR is a decision tree-based learning method, classified as an ensemble learning technique, with a wide range of applications in machine learning and data mining17,18. The RFR model obtains \(N\) sample sets \(\Gamma\) by repeatedly sampling the learning set \(H\) with replacement, builds a CART regression tree on each sample set \(\Gamma\), and derives the estimation function \(h\left( x \right)\) by calculating the arithmetic mean of the trees' response values at each \(x\).

$$h\left( {x,\theta_{1} , \cdots ,\theta_{M} ,\Gamma } \right) = \frac{1}{M}\sum\limits_{m = 1}^{M} {h\left( {x,\theta_{m} ,\Gamma } \right)}$$
(6)

where \(h\left( {x,\theta_{1} , \cdots ,\theta_{M} ,\Gamma } \right)\) is the ensemble of decision trees, \(h\left( {x,\theta_{m} ,\Gamma } \right)\) is the regression tree constructed with the CART model, and \(\theta_{m}\) is an independently and identically distributed random vector for the \(m\)th decision tree, representing the growth process of that tree. Although random forests typically achieve higher accuracy than single decision trees, they sacrifice the inherent interpretability of individual decision trees.
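The averaging in Eq. (6) can be sketched in pure Python. To stay short, each "tree" here is a one-level CART stump rather than a full regression tree, and the data, number of trees, and random seed are illustrative assumptions; the bootstrap resampling plays the role of \(\theta_{m}\).

```python
import random

random.seed(0)

xs = [0.0, 1.0, 2.0, 3.0, 4.0, 5.0]
ys = [0.0, 0.0, 0.0, 10.0, 10.0, 10.0]

def fit_stump(px, py):
    # One-level CART tree: choose the split minimizing the sum of
    # squared errors of the two leaf means
    best = None
    order = sorted(set(px))
    for s in [(a + b) / 2 for a, b in zip(order, order[1:])]:
        left = [y for x, y in zip(px, py) if x <= s]
        right = [y for x, y in zip(px, py) if x > s]
        ml, mr = sum(left) / len(left), sum(right) / len(right)
        sse = (sum((y - ml)**2 for y in left)
               + sum((y - mr)**2 for y in right))
        if best is None or sse < best[0]:
            best = (sse, s, ml, mr)
    if best is None:                    # degenerate sample: constant leaf
        m = sum(py) / len(py)
        return lambda x: m
    _, s, ml, mr = best
    return lambda x: ml if x <= s else mr

# Eq. (6): average the predictions of M trees, each grown on a
# bootstrap sample of the learning set (theta_m = the resampling)
M = 20
trees = []
for _ in range(M):
    idx = [random.randrange(len(xs)) for _ in range(len(xs))]
    trees.append(fit_stump([xs[i] for i in idx], [ys[i] for i in idx]))

def h(x):
    return sum(t(x) for t in trees) / M
```

Averaging over bootstrapped trees smooths the step-shaped individual predictions, which is the variance-reduction effect that gives random forests their accuracy advantage over a single tree.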

Taylor polynomial and optimal response surface models

LSM, also known as the least squares method, is a fitting technique that identifies the best-fitting curve by minimizing the sum of squared residuals between the curve and a set of sample points. Noesis Optimus employs two methods to obtain the optimal solution for least squares modeling: the TP method and the ORSM method19.

The Taylor method uses TP as the basic formulas, and in Noesis Optimus, users can select any combination of Taylor terms to best fit their design of experiments (DOE) results. The following equation gives a complete two-factor cubic TP.

$$\begin{aligned} Y & = \alpha_{1} + \alpha_{2} X_{1} + \alpha_{3} X_{2} + \alpha_{4} X_{1} X_{2} + \alpha_{5} X_{1}^{2} + \alpha_{6} X_{2}^{2} + \alpha_{7} X_{1} X_{2}^{2} + \alpha_{8} X_{1}^{2} X_{2} \\ & \quad + \alpha_{9} X_{1}^{3} + \alpha_{10} X_{2}^{3} \\ \end{aligned}$$
(7)

In the ORSM method, the following equation defines the operational objective.

$$X = \left( {X_{1} , \ldots ,X_{b} } \right)$$
(8)

Consider an acceptance parameter \(r_{i} ,i = 1, \ldots ,b\) and represent it as a vector:

$$r = \left( {r_{1} , \ldots ,r_{b} } \right)^{T}$$
(9)

Let \(c\) represent any integer less than \(b\), then we have the following equation:

$$X^{j} = \left( {X_{1}^{j} , \ldots ,X_{c}^{j} } \right)$$
(10)

Depending on the value of \(c\) chosen for \(X\), the least squares problem can be computed by the following linear formulation.

$$y\left( {x,\alpha } \right) = \sum\limits_{i = 1}^{c} {a_{i} X_{i}^{j} \left( X \right)}$$
(11)

Let \(\varepsilon_{j}\) represent the measurement error for each term in \(X\) and set:

$$r_{i} = r_{i} + \varepsilon_{j} ,\quad j = 1, \ldots ,c$$
(12)

When the above iterations are sufficient, the term can be obtained as a solution in the fitted formula, structured similarly to Eq. (11). The specific iteration algorithm is as follows:

(1) Set \(j = 1\), \(r_{i} = 0\), \(i = 1, \ldots ,b\).

(2) Randomly select \(c\) terms from \(X\).

(3) Approximate the selection from step (2) using the standard formula derived from the least squares method.

(4) Calculate \(\varepsilon_{j}\) from the sample point results.

(5) Set \(r_{i} = r_{i} + \varepsilon_{j}\), \(\forall i = 1, \ldots ,c\).

(6) Set \(j = j + 1\).

(7) If \(j\) has reached the maximum value, skip to step (9).

(8) Otherwise, return to step (2).

(9) Select the optimal values of \(c\) and \(r\) within \(\left( {r_{1} , \cdots ,r_{b} } \right):\left( {r_{1}^{best} , \cdots ,r_{c}^{best} } \right)\).

(10) The iteration terminates when the fitting equation \(y\left( {x,\alpha } \right) = \sum\nolimits_{i = 1}^{c} {a_{i} X_{i}^{best} }\) is satisfied.
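The core of the iteration above, fitting least squares models on subsets of \(c\) candidate terms and keeping the best-scoring subset, can be sketched in pure Python. The basis terms and sample data are hypothetical, and for reproducibility the sketch sweeps all \(c = 2\) subsets deterministically instead of sampling them randomly as the ORSM iteration does.

```python
from itertools import combinations

# Candidate basis terms X_i (hypothetical one-variable example)
terms = {
    "1":   lambda x: 1.0,
    "x":   lambda x: x,
    "x^2": lambda x: x * x,
    "x^3": lambda x: x**3,
}

# Sample data generated from y = 3x^2 + 1
xs = [-2.0, -1.0, 0.0, 1.0, 2.0]
ys = [3 * x * x + 1 for x in xs]

def fit_pair(f1, f2):
    # Least squares fit y ~ a*f1(x) + b*f2(x) via the normal equations,
    # solved with Cramer's rule; returns the sum of squared residuals
    s11 = sum(f1(x) * f1(x) for x in xs)
    s12 = sum(f1(x) * f2(x) for x in xs)
    s22 = sum(f2(x) * f2(x) for x in xs)
    t1 = sum(f1(x) * y for x, y in zip(xs, ys))
    t2 = sum(f2(x) * y for x, y in zip(xs, ys))
    det = s11 * s22 - s12 * s12
    a = (t1 * s22 - t2 * s12) / det
    b = (s11 * t2 - s12 * t1) / det
    return sum((y - (a * f1(x) + b * f2(x)))**2 for x, y in zip(xs, ys))

# Sweep all c = 2 subsets and keep the best-fitting one
best_names, best_sse = None, None
for (n1, f1), (n2, f2) in combinations(terms.items(), 2):
    sse = fit_pair(f1, f2)
    if best_sse is None or sse < best_sse:
        best_names, best_sse = {n1, n2}, sse
```

The sweep recovers the generating terms \(\{1, x^{2}\}\) with essentially zero residual, mirroring how the ORSM iteration accumulates error scores per term and finally retains the best-performing subset.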

Shallow neural network and deep neural network

Artificial neural networks (ANN) are computational models that replicate the structure and function of neurons in the human brain and are widely applied in the field of machine learning. ANN can recognize patterns and make predictions by learning from extensive datasets, excelling in various domains such as image recognition, natural language processing, and financial forecasting. Noesis Optimus offers two types of neural network machine learning approximation models: Shallow Neural Network (SNN)20 and Deep Neural Network (DNN)21.

SNN are artificial neural networks with fewer layers, characterized by faster computational speed and the ability to effectively address most classification problems of lower complexity. According to the Optimus lightweight neural network training function, the number of samples per epoch was set to \(2 \times 10^{5}\), the maximum allowable training error was set to 0.01, and 8 hidden layers were used. The regression model between the outputs and inputs was constructed using the neural network. The neural network processes the input data to generate output, simultaneously calculating the error and adjusting the weights to minimize it. Through continuous iteration, the desired target value is achieved. Figure 3 shows the calculation iteration process.

Fig. 3

Neural network workflow.

Simple neural networks usually contain only one or two hidden layers, or even fewer. They are suitable for handling relatively simple tasks such as binary classification or regression problems (left side of Fig. 4). The basic principle of DNN is to learn the representation and features of data by passing data through successive layers. It consists of an input layer, multiple hidden layers, and an output layer (right side of Fig. 4). Data is passed through each layer of the network, with each layer transforming the input data and forwarding it to the next layer. Each of these layers consists of multiple neurons, each connected to all the neurons in the previous layer, with weights to adjust the influence of the input. During training, the weights are adjusted to minimize the loss function so that the network can make accurate predictions or classifications of the input data.

Fig. 4

Schematic illustration of the principles of simple and deep neural networks.

The goal of DNN in Noesis Optimus is to bridge the gap between interpolation models, which are highly accurate but limited to small datasets, and regression models, which can handle large datasets but lack accuracy with highly nonlinear data. The aim is to provide a model capable of fitting very large datasets in reasonable time, while providing the user with the capability to tune the speed/accuracy trade-off. DNNs offer this capability as the number of parameters in the model can be tuned (almost) arbitrarily. The Noesis DNN is a multi-layer perceptron with rectified linear unit (ReLU) activation Eq. (13) used for a regression purpose.

$$ReLU\left( x \right) = \left\{ {\begin{array}{*{20}l} 0 \hfill & {if\;\;x < 0} \hfill \\ x \hfill & {if\;\;x \ge 0} \hfill \\ \end{array} } \right.$$
(13)

The Noesis DNN uses the ADAM optimizer, a first-order gradient-based algorithm for the optimization of stochastic objective functions, and the weights are initialized with a uniform distribution between −0.08 and 0.08. The learning rate is fixed at 0.001. The output layer has a linear activation function (to achieve regression instead of classification), and the loss function is the mean square error.
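The structure described above, ReLU hidden units from Eq. (13) feeding a linear output layer, can be illustrated with a minimal pure-Python forward pass. The weights below are hand-picked for illustration (not trained with ADAM, and not related to the Noesis implementation): the two hidden units compute \(\mathrm{ReLU}(x)\) and \(\mathrm{ReLU}(-x)\), so the linear output layer reproduces \(|x|\), a simple nonlinear regression target.

```python
def relu(x):
    # Eq. (13): rectified linear unit
    return x if x >= 0 else 0.0

def mlp(x):
    # One hidden layer with two ReLU units, then a linear output layer.
    # Hand-picked (illustrative) weights make the network compute |x|:
    #   hidden: h1 = ReLU(1*x + 0), h2 = ReLU(-1*x + 0)
    #   output: 1*h1 + 1*h2 + 0   (linear activation -> regression)
    h1 = relu(1.0 * x + 0.0)
    h2 = relu(-1.0 * x + 0.0)
    return 1.0 * h1 + 1.0 * h2 + 0.0
```

Even this two-unit network represents a function no single linear layer can, which is the mechanism by which deeper ReLU stacks fit the highly nonlinear crash responses discussed above.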

CW-CWID MCDM method

The CW-CWID MCDM method is an innovative MCDM method introduced by Long et al. in October 202422. In comparison to the classical TOPSIS method, this approach offers simpler calculation steps and yields more scientifically valid evaluation results. The specific calculation steps are as follows:

Step 1: The subjective weight \(w_{j}^{*}\) for each indicator is determined using the G1 method23.

Step 2: The objective weight \(w^{\prime}_{j}\) for each indicator is derived using the entropy weight method (EWM)24.

Step 3: Integrate the subjective and objective weights using the principle of minimum discriminatory information to align them as closely as possible, and define the objective function as follows:

$$\left\{ {\begin{array}{*{20}l} {\min F\left( w \right) = \sum\limits_{j = 1}^{m} {\left( {w_{j} \ln \frac{{w_{j} }}{{w^{\prime}_{j} }} + w_{j} \ln \frac{{w_{j} }}{{w_{j}^{*} }}} \right)} } \hfill \\ {s.t. \, \sum\limits_{j = 1}^{m} {w_{j} = 1,w_{j} \ge 0,\quad j = 1,2, \ldots ,m} } \hfill \\ \end{array} } \right.$$
(14)

This objective function is solved using the Lagrangian multiplier method to obtain the combined weights from the subjective and objective assessments:

$$w_{j} = \frac{{\sqrt {w_{j}^{*} w^{\prime}_{j} } }}{{\sum\nolimits_{j = 1}^{m} {\sqrt {w_{j}^{*} w^{\prime}_{j} } } }}$$
(15)

Step 4: Construct a weighted comprehensive improvement degree objective function, designated as \(W_{P}\). Program rankings are based on the magnitude of the \(W_{P}\) values, with larger values indicating a superior comprehensive improvement effect.

$$W_{P} = 100 \times \sum\limits_{j = 1}^{m} {w_{j} \left( { - 1} \right)^{{\partial_{j} }} \frac{{\left( {P_{j} - P_{{j_{0} }} } \right)}}{{P_{{j_{0} }} }}}$$
(16)

where \(m\) represents the total number of optimization objectives, \(w_{j}\) denotes the combination weight determined by G1-EWM for the \(j\)th objective, \(P_{j}\) is the actual value associated with the \(j\)th objective in the optimized scenario, and \(P_{{j_{0} }}\) is the actual value associated with the \(j\)th objective in the initial scenario. \(\partial_{j}\) serves as the objective type judgment coefficient, with \(\partial_{j}\) assigned a value of 2 for objectives where larger values are preferable and a value of 1 for objectives where smaller values are preferable.
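Steps 3 and 4 can be sketched numerically in pure Python. The subjective and objective weights and the baseline/optimized response values below are hypothetical placeholders, not values from this study; the code evaluates Eq. (15) and Eq. (16) directly.

```python
import math

# Hypothetical weights for m = 3 smaller-is-better objectives
w_subj = [0.5, 0.3, 0.2]   # G1 subjective weights
w_obj  = [0.2, 0.3, 0.5]   # entropy weight method (EWM) objective weights

# Eq. (15): combined weights from the minimum discriminatory
# information principle (normalized geometric mean)
geo = [math.sqrt(ws * wo) for ws, wo in zip(w_subj, w_obj)]
w = [g / sum(geo) for g in geo]

# Eq. (16): weighted comprehensive improvement degree W_P
def wp(p, p0, d):
    # d[j] = 1 for smaller-is-better objectives, 2 for larger-is-better
    return 100 * sum(wj * (-1)**dj * (pj - p0j) / p0j
                     for wj, dj, pj, p0j in zip(w, d, p, p0))

p0 = [10.0, 5.0, 100.0]    # baseline design responses (illustrative)
p  = [9.0, 5.0, 90.0]      # candidate design responses (illustrative)
score = wp(p, p0, [1, 1, 1])
```

A 10% reduction in the first and third objectives yields a positive \(W_{P}\), and candidate solutions from the PFSs would be ranked by this score, with the largest value taken as the best compromise.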

Engineering multi-objective optimization case study

Rear luggage compartment crash test

In accordance with GB15083-2019 requirements, a passenger car rear seat assembly and two test specimens, each measuring 300 mm × 300 mm × 300 mm with an edge chamfer of 20 mm and a mass of 18 kg, were positioned on the test bench vehicle. Subsequently, the acceleration-time curves depicted in Fig. 5 were applied to the test rig to simulate a passenger car crash. This requirement is deemed fulfilled if the seat backrest and its locking mechanism remain in their original positions during and after the test. However, deformation of the seat back and its fasteners is allowed during the test, provided that the front profile of the seat back and/or headrest (Shore A hardness greater than 50) does not move forward past the transverse vertical plane passing through:

(a) A point 150 mm forward of point R on the seat (for the headrest section).

(b) A point 100 mm forward of point R on the seat (for the backrest portion).

Fig. 5

RLCCT load application curve. RLCCT, Rear Luggage Compartment Crash Test.

Figure 6 shows the simulated and test deformation diagrams for the Rear Luggage Compartment Crash Test (RLCCT). From Fig. 6, it can be seen that the rear center seat area is inwardly concave and deformed, and the simulated deformation closely aligns with the test deformation. Table 1 provides the simulated and tested values of the maximum X-direction displacement change for the two measurement points shown in Fig. 7, with a maximum error of no more than 6.5%. These results demonstrate that the FE simulation model constructed in this study is highly accurate and suitable for subsequent optimization design.

Fig. 6

Comparison of test and simulation results.

Table 1 Analysis of experimental and simulation errors.
Fig. 7

Physical test and simulation model measurement points.

Selection of design variables, optimization objectives and constraints

The seat consists of multiple components, and before modeling, it is essential to analyze the overall seat structure to determine the influence of each component on seat stiffness and strength characteristics, and then filter components based on their degree of influence. The seat cushion and backrest are separate structures; the impact from luggage primarily affects the seat backrest, with minimal correlation to the cushion. Therefore, only the geometric model of the seat backrest is established for strength analysis. The seat backrest must withstand external loads, with the primary load-bearing component being the backrest skeleton. The skin, cushion, and other coverings on the skeleton contribute minimally to external loads, and due to their complex and irregular shapes, modeling these elements would not significantly affect subsequent stiffness and strength analyses. However, it would considerably increase the workload and computation. Therefore, the geometric model does not include the skin or soft cushion25. Based on the above analysis, the seat skeleton primarily consists of backrest steel tubes, backrest steel wires, a latching mechanism, and brackets, as shown in Fig. 8a.

Fig. 8

RLCCT FE model details.

In the luggage compartment crash test, the primary force-bearing and energy-absorbing component is the backrest skeleton. Therefore, based on the symmetry and functionality of the seat backrest skeleton structure, the tubes and plates of the backrest skeleton are simplified and categorized into five groups of optimized components (see Fig. 8b). The thickness and material of the optimized components are defined as design variables and labeled T1 ~ T5 and M1 ~ M5, respectively. Table 2 presents the primary performance parameters of the alternative materials for the optimized components, including yield strength (YS), ultimate tensile strength (UTS), percentage elongation (PE), and material cost (Cost). Based on the yield strength values, the candidate materials are categorized into three groups, each containing four alternatives. Table 3 outlines the range of design variable values. As illustrated in Fig. 8b, the selected measurement points for headrest and backrest performance indices were used as safety performance optimization targets in subsequent analyses. Per GB15083 standards, the headrest and backrest should not exceed 150 mm and 100 mm, respectively, in front of the R-point during the luggage compartment crash test. Consequently, two planes were created in the FE model in accordance with these regulations. The distances of the measurement points from the respective planes along the X direction in Fig. 8b are 456 mm and 430 mm, as measured by HyperMesh.

Table 2 Candidate materials for optimization design.
Table 3 Range of design variables for lightweight design.

The collision process of the rear seat luggage compartment is a complex nonlinear dynamic phenomenon that can be used to evaluate safety performance across various indicators, including speed, displacement, and deformation. According to the test requirements of GB15083, the backrest frame and its fasteners are allowed to deform to a certain extent, but not fail. This study uses the maximum stress and strain criterion as the failure criterion, introducing a failure index to determine whether the components have failed, as expressed in the following formula. If the failure index exceeds 1, the component is considered to be in a state of failure; conversely, a value of 1 or less indicates that the component is in a safe state.

$$F_{i} = \max \left\{ {\frac{{\sigma_{i} }}{{\sigma_{Si} }},\frac{{\varepsilon_{i} }}{{\varepsilon_{Si} }}} \right\}$$
(17)

where \(F_{i}\) represents the failure index of each component, \(\sigma_{i}\) and \(\sigma_{Si}\) represent the maximum stress and stress limit of the \(i\)th component of the seat skeleton, and \(\varepsilon_{i}\) and \(\varepsilon_{Si}\) represent the maximum strain and strain limit of the \(i\)th component of the seat skeleton. If the value of \(F\) exceeds 1, it indicates a potential risk of seat structure failure. Considering the discrepancy between simulation and actual testing, and aiming to establish a reasonably safe design margin for the automobile seat structure, this paper defines the upper threshold value of \(F\) as 0.95. In other words, if \(F\) exceeds 0.95, the seat structure is deemed to have failed.
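Equation (17) and the 0.95 design-margin threshold reduce to a short helper. The stress and strain values below are illustrative placeholders, not results from the study.

```python
def failure_index(sigma, sigma_limit, strain, strain_limit):
    # Eq. (17): the larger of the stress and strain utilization ratios
    return max(sigma / sigma_limit, strain / strain_limit)

def is_safe(F, threshold=0.95):
    # The paper treats F > 0.95 (not 1.0) as failed, keeping a margin
    # between simulation and physical testing
    return F <= threshold

# Illustrative component check: 300 MPa vs a 400 MPa limit,
# 0.08 strain vs a 0.10 strain limit
F = failure_index(300.0, 400.0, 0.08, 0.10)
```

Here the strain ratio (0.8) governs, so the component passes the 0.95 criterion even though neither quantity has reached its limit.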

In summary, the design variables for this study include the thickness and material of backrest skeleton components. The responses encompass maximum X-displacement at measurement points L1 and L2, denoted as \(Dis_{1}\) and \(Dis_{2}\), respectively, material cost \(Cost\), total backrest skeleton weight \(Mass\), and the failure index of optimized components \(F\). Specifically, \(Dis_{1}\) and \(Dis_{2}\) serve as safety evaluation indices, \(Cost\) represents economic considerations, and \(Mass\) corresponds to the lightweight objective, defining the optimization goals for this analysis.

Design of experiments

The design of sample points is crucial for constructing an effective approximation model, as the accuracy of this model significantly impacts the success of the subsequent optimization process. To evaluate the quality of the constructed approximation model, this study compares its predicted response values with the actual output values for the corresponding input parameters. To ensure that the evaluation process is independent of the sample points, 80 sets of training samples are generated using the OLHD, while 20 sets of test samples are generated using the Hammersley method. The relevant sample information is provided in Tables 4 and 5.
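The two sampling schemes can be sketched in pure Python. Plain Latin hypercube sampling is shown in place of OLHD (the "optimal" variant additionally optimizes a space-filling criterion such as maximin, which is omitted here), and the Hammersley points use the standard radical-inverse construction; the indexing convention and prime bases are common choices, not necessarily those of the software used in the study.

```python
import random

def latin_hypercube(n, dims, rng):
    # Plain LHS: exactly one sample per stratum in each dimension.
    # OLHD would further optimize the pairing of strata across dims.
    cols = []
    for _ in range(dims):
        perm = list(range(n))
        rng.shuffle(perm)
        cols.append([(p + rng.random()) / n for p in perm])
    return list(zip(*cols))

def van_der_corput(i, base):
    # Radical-inverse function underlying the Hammersley sequence
    v, denom = 0.0, 1.0
    while i > 0:
        denom *= base
        i, rem = divmod(i, base)
        v += rem / denom
    return v

def hammersley(n, dims):
    # First coordinate i/n, remaining coordinates radical inverses
    # in successive prime bases (a common convention)
    primes = [2, 3, 5, 7, 11, 13, 17, 19, 23]
    return [tuple([i / n] + [van_der_corput(i, primes[d])
                             for d in range(dims - 1)])
            for i in range(n)]

# 80 training points and 20 test points in the 10-D design space
train = latin_hypercube(80, 10, random.Random(42))
test = hammersley(20, 10)
```

Using a deterministic low-discrepancy sequence for the test set keeps the accuracy evaluation independent of the randomly stratified training samples, which is the separation the paragraph above requires.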

Table 4 Partial sampling samples bank information by OLHD.
Table 5 Partial sampling samples bank information by Hammersley.

Optimization function

In “Selection of design variables, optimization objectives and constraints” section, the three key components of optimal design were described. The next step involves formulating a mathematical model specifically designed for multi-objective optimization relevant to this problem. The relevant equation is presented below.

$$\left\{ {\begin{array}{*{20}l} {find\;\left( {x,y} \right) = \left( {x_{k} ,y_{u} } \right)} \hfill \\ {Minimize\;Dis_{1} \left( {x_{k} ,y_{u} } \right),Dis_{2} \left( {x_{k} ,y_{u} } \right)} \hfill \\ {\quad \quad \quad \quad \quad Cost\left( {x_{k} ,y_{u} } \right),Mass\left( {x_{k} ,y_{u} } \right)} \hfill \\ {S.t. \, F_{i} \left( {x_{k} ,y_{u} } \right) \le 0.95} \hfill \\ {\quad \;Dis_{1} \left( {x_{k} ,y_{u} } \right) \le Dis_{{1_{O} }} } \hfill \\ {\quad \;Dis_{2} \left( {x_{k} ,y_{u} } \right) \le Dis_{{2_{O} }} } \hfill \\ {\quad \;x_{k}^{\left( L \right)} \le x_{k} \le x_{k}^{\left( U \right)} } \hfill \\ {\quad \;y_{u}^{\left( L \right)} \le y_{u} \le y_{u}^{\left( U \right)} } \hfill \\ {\quad \;i \in \left[ {1,2, \ldots ,5} \right]} \hfill \\ {\quad \;k \in \left[ {1,2, \ldots ,5} \right]} \hfill \\ {\quad \;u \in \left[ {1,2, \ldots ,5} \right]} \hfill \\ \end{array} } \right.$$
(18)

where \(Dis_{1} \left( {x_{k} ,y_{u} } \right)\) and \(Dis_{2} \left( {x_{k} ,y_{u} } \right)\) represent the maximum X-direction displacements of the headrest and backrest measurement points, respectively, \(Cost\left( {x_{k} ,y_{u} } \right)\) represents the total material cost, and \(Mass\left( {x_{k} ,y_{u} } \right)\) represents the total mass of the optimized components. \(F_{i} \left( {x_{k} ,y_{u} } \right)\) represents the failure index of the \(i\)th optimized component. To ensure that the optimized design does not degrade safety performance, \(Dis_{{1_{O} }}\) and \(Dis_{{2_{O} }}\) are set to the maximum X-direction displacements of the headrest and backrest measurement points of the original design. \(x_{k}^{\left( L \right)} \le x_{k} \le x_{k}^{\left( U \right)}\) defines the thickness design variable of the \(k\)th optimized part and its range of values, and \(y_{u}^{\left( L \right)} \le y_{u} \le y_{u}^{\left( U \right)}\) defines the material-grade design variable of the \(u\)th optimized part and its range of values.
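The constraint block of Eq. (18) reduces to a simple feasibility check on a candidate design. The sketch below is illustrative only: in practice the failure indices and displacements would come from the FE model or its surrogate, and the bounds in the usage example are hypothetical, not the actual design ranges of this study.

```python
def is_feasible(F, dis1, dis2, x, y, x_bounds, y_bounds,
                dis1_o, dis2_o, f_limit=0.95):
    """Check the constraints of Eq. (18) for one candidate design.
    F: failure indices of the optimized parts; dis1/dis2: maximum X-direction
    displacements of the headrest/backrest measurement points; dis1_o/dis2_o:
    original-design baselines; x/y: thickness and material-grade variables
    with their respective bounds."""
    if any(fi > f_limit for fi in F):            # F_i(x_k, y_u) <= 0.95
        return False
    if dis1 > dis1_o or dis2 > dis2_o:           # Dis_1 <= Dis_1_O, Dis_2 <= Dis_2_O
        return False
    if any(not (lo <= xi <= hi) for xi, (lo, hi) in zip(x, x_bounds)):
        return False                             # x_k^(L) <= x_k <= x_k^(U)
    if any(not (lo <= yi <= hi) for yi, (lo, hi) in zip(y, y_bounds)):
        return False                             # y_u^(L) <= y_u <= y_u^(U)
    return True
```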

Construction of approximation models and global optimization

The data from the experimental design in the “Design of experiments” section were used to construct and evaluate eight approximation models, from which HAM-MSAM was developed. The principle of HAM-MSAM is to select, for each response, the approximation model with the highest accuracy as measured by \(R^{2}\). The calculation of \(R^{2}\) is shown in Eq. (19).

$$R^{2} = 1 - \frac{{\sum\nolimits_{i = 1}^{{n_{test} }} {\left( {y_{i} - y_{ei} } \right)^{2} } }}{{\sum\nolimits_{i = 1}^{{n_{test} }} {\left( {y_{ave} - y_{i} } \right)^{2} } }}$$
(19)

In the above equation, \(n_{test}\) is the number of test sample points, \(y_{i}\) is the true value of each response, \(y_{ei}\) is the value predicted by the approximation model, and \(y_{ave}\) is the mean of the true response values.
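Eq. (19) translates directly into code; a minimal sketch:

```python
def r_squared(y_true, y_pred):
    """Coefficient of determination per Eq. (19): 1 - SSE/SST over the test set."""
    n = len(y_true)
    y_ave = sum(y_true) / n
    sse = sum((yi - ye) ** 2 for yi, ye in zip(y_true, y_pred))
    sst = sum((y_ave - yi) ** 2 for yi in y_true)
    return 1.0 - sse / sst
```

A perfect surrogate gives \(R^{2} = 1\); a surrogate no better than predicting the mean gives \(R^{2} = 0\).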

Wang et al.26 and Dai et al.27 proposed two different hybrid approximation model frameworks, namely the RBF-KRG and RSM-RBF-KRG modeling strategies, respectively. Li et al.6 pointed out that the \(R^{2}\) value of an approximation model for multi-objective optimization of automotive structural components should exceed 0.8 to achieve a reliable optimization outcome. To validate the effectiveness of the proposed method, HAM-MSAM is compared with the methods of Wang et al. and Dai et al. in terms of prediction accuracy, as presented in Table 6. HAM-MSAM achieves the highest prediction accuracy for all responses, addressing the limitation of single approximation models, which often struggle to fit highly nonlinear data. Figure 9 presents the error analysis results of HAM-MSAM; the \(R^{2}\) of every response exceeds 0.8, indicating that HAM-MSAM is accurate enough to serve as the surrogate for the NSGA-III optimization algorithm.
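The HAM-MSAM selection rule itself reduces to a per-response argmax over the candidate models’ test-set \(R^{2}\) values. A minimal sketch, where the model names and scores in the usage example are hypothetical, not the values reported in Table 6:

```python
def select_best_models(r2_table):
    """HAM-MSAM selection rule: for each response, keep the candidate
    approximation model with the highest test-set R^2.
    r2_table: {response_name: {model_name: r2_value}}."""
    return {resp: max(scores, key=scores.get)
            for resp, scores in r2_table.items()}

# Illustrative (invented) scores: one surrogate rarely wins on every response.
r2 = {"Dis1": {"KRG": 0.91, "RBF": 0.88, "RSM": 0.74},
      "Cost": {"KRG": 0.83, "RBF": 0.95, "RSM": 0.90}}
best = select_best_models(r2)  # {"Dis1": "KRG", "Cost": "RBF"}
```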

Table 6 Results of error analysis of approximation models obtained by different methods.
Fig. 9 Results of the HAM-MSAM error analysis.

The NSGA-III intelligent multi-objective optimization algorithm was selected to solve the above optimization problem28, with the number of reference points set to 200 and the population size set to 300. The algorithm generated the PFSs after 54,151 operations, as shown in Fig. 10. The CW-CWID MCDM method was then applied to determine the weights of the optimization objectives, as shown in Table 7, and the combined weighted improvement ranking results are presented in Fig. 11.
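While NSGA-III’s reference-direction niching is beyond a short sketch, the defining property of the PFSs it returns, non-dominance, can be illustrated compactly (all objectives minimized, as in Eq. (18)):

```python
def dominates(a, b):
    """a dominates b: no worse in every objective, strictly better in at least one."""
    return (all(x <= y for x, y in zip(a, b))
            and any(x < y for x, y in zip(a, b)))

def pareto_front(points):
    """Return the non-dominated subset of a list of objective vectors."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q != p)]
```

For example, among the objective vectors (1, 2), (2, 1), and (2, 2), the first two are non-dominated and the third is dominated by both.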

Fig. 10 Pareto frontier solutions obtained by NSGA-III.

Table 7 Weighting factors for different design responses.
Fig. 11 Combined weighted improvement ranking results.

Figure 11 shows the combined weighted improvement calculated by CW-CWID, which reaches a maximum of 3.41%; the 15th solution set, which corresponds to this maximum, is therefore selected as the optimal solution in this study. A comparison of the test metrics for the optimized and initial solutions is provided in Table 8. In the initial solution, the failure index of the 4th optimized part exceeds 1, indicating failure. In the optimized design, this failure index is reduced to 0.862, meeting regulatory requirements. The safety performance, lightweight properties, and cost-effectiveness of the optimized rear car seat are improved simultaneously: the displacements of the headrest skeleton and backrest skeleton are reduced by 4.41% and 3.26%, respectively; material cost is reduced by 4.55%; and mass is reduced by 3.11%. All optimization objectives are met without compromising performance, yielding substantial reductions in material cost and mass, consistent with the requirements of this study’s multi-objective collaborative optimization design. Additionally, as shown in the table, the maximum prediction error of HAM-MSAM does not exceed 4.5%, confirming the high predictive accuracy of the approximation model used in this study.
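For illustration, one plausible form of a combined weighted improvement score is a weight-summed relative reduction of each minimized objective against the baseline design. The exact CW-CWID formulation used in this study may differ, so the sketch below is a labeled assumption, and the numbers in the usage example are hypothetical:

```python
def weighted_improvement(candidate, baseline, weights):
    """ASSUMED scoring form (not necessarily the paper's CW-CWID): weighted sum
    of the relative reduction of each minimized objective versus the baseline.
    The objective weights would come from Table 7."""
    return sum(w * (b - c) / b
               for c, b, w in zip(candidate, baseline, weights))

# Hypothetical two-objective case: 10% and 5% reductions, equal weights.
score = weighted_improvement([90.0, 95.0], [100.0, 100.0], [0.5, 0.5])
```

Under this form, the Pareto solution maximizing the score would be selected as the best compromise, mirroring the selection of the 15th solution set above.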

Table 8 Comparison of seat performance before and after optimization.

Local optimization

Starting from the OLHD sample data for the design variables in Table 4 and excluding sample points that fail to satisfy the constraints, the remaining four feasible solutions are presented in Table 9. These solutions are then ranked using the CW-CWID MCDM method, and the fourth solution set is identified as the best compromise of the local optimization.

Table 9 Local optimization results.

Comparison of optimization results of different multi-objective optimization methods

Table 10 and Fig. 12 compare the global and local optimization results. For the passenger car rear seat ABGMOOD, the global optimization results improved the evaluation metrics Dis1, Dis2, Cost, and Mass by 4.41%, 3.26%, 4.55%, and 3.11%, respectively, over the pre-optimization design. The local optimization results improved Dis1 and Dis2 by 5.57% and 3.26%, respectively, whereas Cost and Mass deteriorated. The global optimization scheme therefore offers clear advantages over the local approach, particularly in economic efficiency and lightweight performance, achieving simultaneous improvements across all four objectives. For the multi-objective optimization design of automobile seats, the global optimization results are better aligned with practical industrial demands.

Table 10 Comparison of optimization results of different multi-objective optimization methods.
Fig. 12 Comparison of optimization results of different optimization methods.

Notably, both global and local optimization require 80 sample points, with global optimization involving the additional steps of constructing approximation models and running the optimization algorithm. The RLCCT FE model used in this paper is solved with LS-DYNA on the host computer’s CPU. A single solver run takes about 3 h, so the 80 runs require a total of 240 h, or 10 days. Training a single approximation model typically takes only 1–3 min, and the HAM-MSAM constructed in this paper requires approximately 8–24 min. Solving with the NSGA-III optimization algorithm on the approximation model instead of the FE model is even faster, typically requiring only 5–10 min. Thus, the global optimization in this paper requires only about half an hour more computation time than local optimization. This additional time is exchanged for a better optimization solution and, compared with the 240 h needed to compute the 80 sample points, is entirely acceptable for multi-objective optimization in automotive seat engineering.

Conclusion

This paper proposes an ABGMOOD strategy based on HAM-MSAM, applies it to the multi-objective optimization design of the rear seat of a passenger car, and draws the following conclusions.

(1) The established FE model of the automobile seat working condition exhibits high accuracy when compared with the physical test; therefore, the optimized design based on this model is reliable.

(2) The HAM-MSAM predictive modeling framework proposed in this paper effectively compensates for the limitations of single approximation models in fitting response data with highly nonlinear characteristics under car seat crash conditions. Compared with recently published hybrid approximation model frameworks, HAM-MSAM achieves the highest prediction accuracy across all responses with an equivalent number of training sample points.

(3) The ABGMOOD strategy in this paper acts as a global optimization strategy that explores the design space more effectively and achieves superior results compared with local optimization.

(4) The optimized seat is significantly improved in terms of economy, weight, and safety performance. No parts fail, thereby meeting the corresponding regulatory requirements for the seat luggage crash test and providing a reliable reference for the future design of automotive seats.