Introduction

In recent years, with the rapid development of aerospace technology, parametric curves and surfaces such as Bézier curves, NURBS splines, and PH curves have been widely used to describe aerospace components because of their simple modeling and high-order continuity, and they provide a general mathematical tool for analytical calculation. Although CNC machining paths can now be described by spline curves, most machining paths are still created with G01 commands. The tangential discontinuities between consecutive line segments make the path poorly smoothed, forcing the machining feedrate to drop to very low values or even zero and causing large fluctuations within the acceleration and jerk limits. Therefore, to improve machining efficiency, the machining path needs to be smoothed at the geometric level. Current smoothing methods fall into two categories: local corner smoothing and global smoothing. Global smoothing remains challenging because its error analysis is complex and its errors are difficult to control precisely1,2,3. Most recent research has therefore focused on local corner smoothing.

The analytical calculation method is the most commonly used approach for local corner smoothing. References4,5,6,7,8,9,10,11,12,13,14,15,16,17,18 have provided analytical calculations explaining why asymmetric and symmetric local machining paths can enhance machining efficiency, and these findings have been validated through experiments and simulations. The differences among these works lie in the order of continuity, the degree of the curves, the application scenarios, and the complexity of the calculation. Lu4 employed the PSO algorithm to optimize the local velocity; however, the efficiency of PSO is affected by factors such as population size and computational resources, which can reduce the solution efficiency. Xu6 provided an analytical method to control the approximation error, but room for optimization remains, since the curvature can still be reduced by adjusting the positions of the control points. Huang7 proposed a real-time local corner smoothing method that significantly reduces the acceleration of each axis while ensuring accuracy of the tool tip and tool axis vector errors; the method has been integrated into an open-architecture CNC system, validating its effectiveness. Hu et al.8 presented an analytically computed local smoothing method with cubic continuity, which locally inserts B-spline curves to limit the maximum approximation error during smoothing; experiments verified that the method yields smoother acceleration and smaller contour errors. Hu9 showed through numerical and analytical calculations why overlapping spline curves are not sufficiently smooth and proposed removing the overlapping asymmetric curves, which significantly reduces the curvature and saves machining time. Han et al.15 performed analytical calculations together with a parameter sensitivity analysis of the local control points at different corner angles, and used S-shaped acceleration/deceleration planning for the residual distance between speed intervals to improve machining efficiency. Zhang et al.16 considered both chord error and feedrate, which improved efficiency as well as machining quality compared with point-to-point curvature optimization. Huang17 proposed a new curve, the Airthoid, which represents curvature more concisely, used it for local corner smoothing, and established a time synchronization strategy that maximizes the acceleration process to improve machining efficiency.

In summary, analytical calculation methods are predominantly used in recent research on local corner smoothing, and they are usually applied only within or on the control polygon, leaving room for further optimization. In this paper, a new optimization region is first established near the NURBS control polygon, and local corner smoothing is formulated as an optimization problem. Intelligent optimization algorithms are employed to solve it, and deep learning methods are used to accelerate the optimization process. The intelligent optimization algorithm, the deep learning models and their corresponding loss and optimization functions, the machining-process simulation parameters, and the comparative methods selected in this paper are introduced in the following sections19,20,21,22,23,24.

The remainder of this paper is organized as follows: “The optimization architecture of the local tool path” introduces the optimization architecture of intelligent optimization (as shown on the left in Fig. 1). Section “Deep learning optimization” presents the deep learning model to accelerate the optimization process (as shown in the center of Fig. 1). To ensure processing quality, “Feedrate planning based on multiple constraints” discusses the relevant constraints (as shown on the right in Fig. 1). Section “Simulation results” describes simulations to verify the effectiveness of the proposed method.

Fig. 1
figure 1

The flow chart of this paper.

The optimization architecture of the local tool path

The curvature of the machining path has a significant effect on the feedrate. Various constraints, including geometric constraints, acceleration constraints, and contour error constraints, are converted from curvature to feedrate on the machining path. Therefore, it is essential to smooth the machining path at the geometry level.

NURBS

NURBS provides a generalized mathematical tool for free-form curves and has become one of the most common descriptions of machining paths in recent years. A NURBS curve is defined in Eq. (1):

$$ C(u) = \frac{{\sum\nolimits_{i = 0}^{n} {N_{i,p} ({\text{u}})w_{i} P_{i} } }}{{\sum\nolimits_{i = 0}^{n} {N_{i,p} ({\text{u}})w_{i} } }},\quad u_{0} \le u < u_{n + p + 1} $$
(1)

where \(P_{i}\) is the i-th control point of the NURBS curve, \(w_{i}\) is the corresponding weight, and \(N_{i,p}\)(u) is the basis function of the NURBS curve. The basis functions are calculated recursively, as shown in Eq. (2):

$$ N_{i,p} (u) = \frac{{u - u_{i} }}{{u_{i + p} - u_{i} }}N_{i,p - 1} (u) + \frac{{u_{i + p + 1} - u}}{{u_{i + p + 1} - u_{i + 1} }}N_{i + 1,p - 1} (u) $$
(2)

It is necessary to specify that “0/0 = 0” in Eq. (2). The first- and second-order derivatives of the basis functions of NURBS curves are expressed in Eqs. (3) and (4):

$$ N{\prime}_{i,p} (u) = \frac{p}{{u_{i + p} - u_{i} }}N_{i,p - 1} (u) - \frac{p}{{u_{i + p + 1} - u_{i + 1} }}N_{i + 1,p - 1} (u) $$
(3)

and

$$ N^{^{\prime\prime}}_{i,p} (u) = \frac{p}{{u_{i + p} - u_{i} }}N{\prime}_{i,p - 1} (u) - \frac{p}{{u_{i + p + 1} - u_{i + 1} }}N{\prime}_{i + 1,p - 1} (u) $$
(4)

Further, the first-order derivative and the second-order derivative of the NURBS curve can be expressed as:

$$ \begin{aligned} C^{\prime } (u) & = \frac{{\sum\nolimits_{i = 0}^{n} {N_{i,p}^{\prime } (u)w_{i} P_{i} - C(u)\sum\nolimits_{i = 0}^{n} {N_{i,p}^{\prime } (u)w_{i} } } }}{{\sum\nolimits_{i = 0}^{n} {N_{i,p} (u)w_{i} } }} \\ C^{\prime \prime } (u) & = \frac{{\sum\nolimits_{i = 0}^{n} {N_{i,p}^{\prime \prime } (u)w_{i} P_{i} - 2C^{\prime } (u)\sum\nolimits_{i = 0}^{n} {N_{i,p}^{\prime } (u)w_{i} } - C(u)\sum\nolimits_{i = 0}^{n} {N_{i,p}^{\prime \prime } (u)w_{i} } } }}{{\sum\nolimits_{i = 0}^{n} {N_{i,p} (u)w_{i} } }} \\ \end{aligned} $$
(5)

The radius of curvature of the NURBS curve can be expressed as:

$$\rho (u) = \frac{{\left| {C^{\prime}(u)} \right|^{3} }}{{\left| {C^{\prime}(u) \times C^{\prime\prime}(u)} \right|}} $$
(6)
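To make Eqs. (1)–(6) concrete, the following Python sketch evaluates a planar NURBS point and its radius of curvature at a single parameter value. It is only an illustrative implementation: the function names (basis_func, nurbs_point, radius_of_curvature) are not from the original work, the Cox–de Boor recursion of Eq. (2) is coded directly (with the 0/0 = 0 convention) for clarity rather than speed, and the analytical derivatives of Eqs. (3)–(5) are replaced by central differences to keep the example short.

```python
import numpy as np

def basis_func(i, p, u, knots):
    """Cox-de Boor recursion of Eq. (2), with the 0/0 = 0 convention."""
    if p == 0:
        return 1.0 if knots[i] <= u < knots[i + 1] else 0.0
    left_den = knots[i + p] - knots[i]
    right_den = knots[i + p + 1] - knots[i + 1]
    left = 0.0 if left_den == 0 else (u - knots[i]) / left_den * basis_func(i, p - 1, u, knots)
    right = 0.0 if right_den == 0 else (knots[i + p + 1] - u) / right_den * basis_func(i + 1, p - 1, u, knots)
    return left + right

def nurbs_point(u, p, ctrl, w, knots):
    """Rational combination of Eq. (1) for a planar NURBS curve.
    ctrl: (n+1, 2) array of control points, w: (n+1,) array of weights."""
    n = len(ctrl) - 1
    N = np.array([basis_func(i, p, u, knots) for i in range(n + 1)])
    num = ((N * w)[:, None] * ctrl).sum(axis=0)   # sum of N_i(u) w_i P_i
    den = np.sum(N * w)                           # sum of N_i(u) w_i
    return num / den

def radius_of_curvature(u, p, ctrl, w, knots, h=1e-5):
    """Eq. (6): rho = |C'|^3 / |C' x C''|, with derivatives approximated
    numerically (central differences) instead of Eqs. (3)-(5)."""
    c_m, c_0, c_p = (nurbs_point(u + d, p, ctrl, w, knots) for d in (-h, 0.0, h))
    d1 = (c_p - c_m) / (2 * h)                    # first derivative C'(u)
    d2 = (c_p - 2 * c_0 + c_m) / h ** 2           # second derivative C''(u)
    cross = d1[0] * d2[1] - d1[1] * d2[0]         # z-component of C' x C'' in 2-D
    return np.linalg.norm(d1) ** 3 / abs(cross)
```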

Each local corner that needs to be smoothed can be simplified into the form shown in Fig. 2b, which shows the distributions of the NURBS control points obtained with two different methods. In Fig. 2b, the blue circles inside the black circle mark the control points added by method 2 relative to method 1; since the curve is symmetric about the y-axis, a corresponding control point is added on the right side of the y-axis in the same way. The remaining NURBS control points, not circled in black, coincide with each other. Method 2 uses 7 control points, whereas method 1 uses 5. The NURBS splines generated by the two methods are shown in Fig. 2a, and their curvatures are compared in Fig. 2c. The spline obtained by distributing the NURBS control points with method 2 exhibits a significantly greater reduction in curvature than method 1. Therefore, this paper focuses primarily on method 2.

Fig. 2
figure 2

Comparison of the two different methods of distributing control points.

It should be noted that although method 2 brings the locally smoothed path closer to the central control point (as shown in Fig. 2a), adding the two control points may cause the approximation error δ1 of the local machining path (as shown in Fig. 3) to exceed the user-defined maximum approximation error δmax relative to method 1. Typically, δ2 is greater than δ1, and δ2 is also required to be smaller than δmax. The subsequent optimization process ensures that max(δ2, δ1) < δmax.

Fig. 3
figure 3

The approximation error of the local corner.

The optimization process description

The optimization process in this paper can be described as finding two symmetric NURBS control points Pi+3 and Pi+4 in the pink region such that the curvature within the region is optimal, while satisfying the constraint max(δ2, δ1) < δmax. The optimal control point positions must not leave the pink region and must not coincide with its boundary (the control points Pi+3 and Pi+4 shown in Fig. 4 cannot coincide with Pi−1, Pi, or Pi+1). To ensure that the maximum curvature of the optimization region occurs as far as possible within the region shown in Fig. 2a, the angles ∠Pi−2Pi−1Pi+3 and ∠Pi−1Pi+3Pi should both be greater than ∠Pi+3PiPi+4.

Fig. 4
figure 4

Schematic diagram of the optimization process.

In summary, the three essential components of optimization can be described as follows:

$$ \left\{ {\begin{array}{*{20}l} {To\;find\;P_{i + 3}^{i} ,P_{i + 4}^{i} } \hfill \\ {Minimize\;\left( {{\text{max}}\left( {\sum\limits_{i = 1}^{N} {cur_{i} } } \right)} \right)} \hfill \\ {s.t.\;cur_{i} < cur_{{i,{\text{max}}}} {\kern 1pt} } \hfill \\ {\quad P_{i - 1,x}^{i} < P_{i + 3,x}^{i} < P_{i,x}^{i} } \hfill \\ {\quad P_{{i - 1,{\text{y}}}}^{i} < P_{i + 3,y}^{i} < P_{i,y}^{i} } \hfill \\ {\quad P_{i,x}^{i} < P_{i + 4,x}^{i} < P_{i + 1,x}^{i} } \hfill \\ {\quad P_{i,y}^{i} < P_{i + 4,y}^{i} < P_{i + 1,y}^{i} } \hfill \\ {\quad \angle P_{i - 2}^{i} P_{i - 1}^{i} P_{i + 3}^{i} > \angle P_{i + 3}^{i} P_{i}^{i} P_{i + 4}^{i} } \hfill \\ {\quad \angle P_{i - 1}^{i} P_{i + 3}^{i} P_{i}^{i} > \angle P_{i + 3}^{i} P_{i}^{i} P_{i + 4}^{i} } \hfill \\ {\quad {\text{max}}\left( {\delta_{2}^{i} ,\delta_{1}^{i} } \right) \le \delta_{{{\text{max}}}} } \hfill \\ {\quad \left\| {\frac{{N_{i + 3,p} w_{i + 3} P_{i + 3}^{i} + N_{i,p} w_{i} P_{i}^{i} + N_{i + 4,p} w_{i + 4} P_{i + 4}^{i} }}{{N_{i + 3,p} w_{i + 3} + N_{i,p} w_{i} + N_{i + 4,p} w_{i + 4} }} - P_{i}^{i} } \right\| = \delta_{\lim } } \hfill \\ \end{array} } \right. $$
(7)

where N is the number of local corners to be smoothed, \(cur_{i}\) is the curvature of the ith local corner, and \(cur_{{i,{\text{max}}}}\) is the maximum allowable curvature of each local corner. If the curvature of a local corner is too large, the feedrate must be reduced to satisfy the constraints; therefore, to preserve machining efficiency, the curvature is required to be less than \(cur_{{i,{\text{max}}}}\). \(P_{i + 3,x}^{i}\) denotes the x-coordinate of the control point Pi+3 of the ith local corner requiring smoothing, as shown in Fig. 4, and \(P_{i + 3,y}^{i}\) denotes the corresponding y-coordinate. To prevent δ2 from exceeding δmax, the last constraint limits δ2 by solving an equation. Since the location of the NURBS main control point cannot be determined in advance (the main control point is the control point that has the greatest influence on a point of the NURBS curve), two cases are considered separately: when the main control point is \(P_{i}^{i}\), wi is set to 1, \(w_{i + 3}\) and \(w_{i + 4}\) are set to equal values, and the equation is solved; when the main control point is \(P_{i + 3}^{i}\) or \(P_{i + 4}^{i}\), \(w_{i + 3}\) and \(w_{i + 4}\) are set to 1, and wi is solved for.
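As an illustration of how the last constraint of Eq. (7) can be handled when the main control point is \(P_{i}^{i}\), the sketch below solves numerically for the centre weight wi such that the deviation of the weighted combination from the corner point equals δlim. The basis-function values N3, Ni, N4 evaluated at the parameter of maximum influence, the fixed weights w3 = w4, and the use of scipy.optimize.brentq are assumptions of this example rather than the authors' closed-form procedure; the bracket assumes that with a very small wi the deviation exceeds δlim.

```python
import numpy as np
from scipy.optimize import brentq

def solve_center_weight(P3, Pi, P4, N3, Ni, N4, w3, w4, delta_lim):
    """Find w_i such that the distance between the weighted combination of
    (P_{i+3}, P_i, P_{i+4}) and the corner point P_i equals delta_lim,
    mirroring the last constraint of Eq. (7). P3, Pi, P4 are 2-D points."""
    def deviation(wi):
        num = N3 * w3 * P3 + Ni * wi * Pi + N4 * w4 * P4
        den = N3 * w3 + Ni * wi + N4 * w4
        return np.linalg.norm(num / den - Pi) - delta_lim
    # the deviation shrinks monotonically toward zero as w_i grows, so a sign
    # change exists inside the bracket whenever the unweighted deviation > delta_lim
    return brentq(deviation, 1e-6, 1e6)
```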

The selection of optimization algorithm

The Particle Swarm Optimization (PSO) algorithm22 is a heuristic intelligent optimization algorithm. It is particularly effective for multi-dimensional nonlinear problems and has strong global search capability; it is therefore selected as the optimizer in this paper. The update strategy of the PSO algorithm is as follows:

$$ \begin{aligned} v_{id}^{k + 1} & = wv_{id}^{k} + c_{1} r_{1} (p_{id,pbest}^{k} - x_{id}^{k} ) + c_{2} r_{2} (p_{id,gbest}^{k} - x_{id}^{k} ) \\ & \quad + c_{3} r_{3} (p_{id,pbest}^{k} - p_{id,gbest}^{k} ) \\ \end{aligned} $$
(8)

where \(c_{1}\) represents the weight of the influence of the particle's own experience on its next step, accelerating the particle towards its individual best position; \(c_{2}\) represents the weight of the influence of the other particles' experience, accelerating the particle towards the global best position; and \(c_{3}\) denotes the weight that reflects the contribution of the group's experience relative to the particle's own experience. The positions of the two added control points (Pi+3 and Pi+4) in Fig. 4 are treated as the particles to be optimized. Under the constraints of Eq. (7), the particle positions are updated iteratively within the feasible domain until the algorithm converges. The resulting NURBS control point positions and the corresponding NURBS weights are stored as datasets for the subsequent deep learning training.
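A minimal sketch of how the two added control points could be optimized with a PSO update of the kind in Eq. (8) is given below. It uses the standard two-term velocity update (the additional c3 term is omitted), treats the constraints of Eq. (7) as a black-box feasibility check, and leaves the peak-curvature cost of Eq. (6) to the caller; the names pso_smooth_corner, curvature_cost, and in_feasible_region are placeholders introduced for this example.

```python
import numpy as np

def pso_smooth_corner(curvature_cost, in_feasible_region, bounds,
                      n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5):
    """Optimise the 4-D vector [P_{i+3,x}, P_{i+3,y}, P_{i+4,x}, P_{i+4,y}]
    that minimises the peak curvature of the smoothed corner."""
    lo, hi = bounds                                      # each of shape (4,)
    x = lo + np.random.rand(n_particles, 4) * (hi - lo)  # particle positions
    v = np.zeros_like(x)                                 # particle velocities
    pbest, pbest_cost = x.copy(), np.full(n_particles, np.inf)
    for _ in range(iters):
        for k in range(n_particles):
            if in_feasible_region(x[k]):                 # constraints of Eq. (7)
                cost = curvature_cost(x[k])              # peak curvature via Eq. (6)
                if cost < pbest_cost[k]:
                    pbest_cost[k], pbest[k] = cost, x[k].copy()
        gbest = pbest[np.argmin(pbest_cost)]
        r1, r2 = np.random.rand(2)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)                       # keep particles in the box
    return pbest[np.argmin(pbest_cost)], np.min(pbest_cost)
```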

Deep learning optimization

Intelligent optimization algorithms are affected by the population size, the search space, and the available computational resources, which can make the optimization inefficient. Deep learning, with its efficient feature extraction and strong autonomous learning capability, has made significant progress in many applications: it relies on feature transformations between network layers and layer-by-layer training to handle complex data and specialized tasks efficiently. The ResNet23 architecture denotes the desired underlying mapping as H(x) and lets the stacked nonlinear layers fit another mapping, f(x) = H(x) − x, so that the original mapping is reformulated as f(x) + x. The term f(x) + x is realized through a “shortcut connection”, which prevents the vanishing-gradient problem that often degrades training performance as network depth increases. The network structure is shown in Fig. 5.

Fig. 5
figure 5

Shortcut connection.

The Double-ResNet network structure

In this paper, a Double-ResNet local corner smoothing algorithm is employed, consisting of FDLS (First-Double-ResNet local smoothing algorithm) and SDLS (Second-Double-ResNet local smoothing algorithm). The two networks are used, respectively, to predict the control point positions and the weights w of the NURBS curve. The overall structure of the FDLS is shown in Fig. 6. The FDLS inputs are the five NURBS control points before smoothing (as shown in Fig. 6 (1)), and its outputs are the two control points added after smoothing (as shown in Fig. 6 (2)). The SDLS inputs consist of all control points after smoothing (as shown in Fig. 6 (3)), and its output is the weights (as shown in Fig. 6 (4)) obtained by solving the equation of the last constraint, which prevents the error δ2 from becoming excessively large.

Fig. 6
figure 6

DRLS network structure.

The residual block is shown in Fig. 7: a shortcut connection is made every 8 layers, and the whole structure contains 64 fully connected layers with dropout interspersed among them to prevent overfitting. Dropout reduces co-adaptation among neurons, preventing certain neurons from becoming overly reliant on others, and thereby enables the network to adapt better to varying input data by decreasing the dependencies between neurons.

Fig. 7
figure 7

The residual block.
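A hedged PyTorch sketch of the residual structure described above is given below. The text specifies only a shortcut connection every 8 layers, 64 fully connected layers in total, and interspersed dropout, so the layer width (256), dropout rate (0.1), and the input/output dimensions chosen to match FDLS (five planar control points in, two added control points out) are illustrative assumptions, not the authors' exact configuration.

```python
import torch
import torch.nn as nn

class ResidualFCBlock(nn.Module):
    """Eight fully connected layers with a shortcut connection, f(x) + x."""
    def __init__(self, width=256, p_drop=0.1):
        super().__init__()
        layers = []
        for _ in range(8):
            layers += [nn.Linear(width, width), nn.ReLU(), nn.Dropout(p_drop)]
        self.body = nn.Sequential(*layers)

    def forward(self, x):
        return self.body(x) + x          # shortcut connection

class DoubleResNetHead(nn.Module):
    """Stack of 8 residual blocks (64 FC layers, plus input/output projections),
    used here as a stand-in for FDLS: 5 planar control points in, 2 added points out."""
    def __init__(self, in_dim=10, out_dim=4, width=256):
        super().__init__()
        self.inp = nn.Linear(in_dim, width)
        self.blocks = nn.Sequential(*[ResidualFCBlock(width) for _ in range(8)])
        self.out = nn.Linear(width, out_dim)

    def forward(self, x):
        return self.out(self.blocks(self.inp(x)))
```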

The selection of the loss function and the optimization algorithm

In deep learning, a loss function is needed to evaluate how good the model is. Since the task is a typical regression task, the MSE (Mean Squared Error) loss function is selected.

$$ {\text{MSE}} = \frac{1}{s}\sum\limits_{i = 1}^{s} {(y_{i} - \widehat{y}_{i} )^{2} } $$
(9)

where s denotes the number of samples, which is the total count of data points in the dataset. \(y_{i}\) is the true value of the i-th sample. \(\widehat{y}_{i}\) is the predicted value for the i-th sample.

Since the process of finding the NURBS control points that yield the optimal curvature is highly nonlinear, this paper adopts the Adam optimization algorithm, which improves the global search capability and accelerates convergence. Its optimization process can be expressed as follows24:

$$ m_{t} = \beta_{1} m_{t - 1} + (1 - \beta_{1} )g_{t} $$
(10)
$$ v_{t} = \beta_{2} v_{t - 1} + (1 - \beta_{2} )g_{t}^{2} $$
(11)
$$ \hat{m}_{t} = \frac{{m_{t} }}{{1 - \beta_{1}^{t} }} $$
(12)
$$ \hat{v}_{t} = \frac{{v_{t} }}{{1 - \beta_{2}^{t} }} $$
(13)
$$ \theta_{t + 1} = \theta_{t} - \frac{\eta }{{\sqrt {\hat{v}_{t} } + \varepsilon }}\hat{m}_{t} $$
(14)

where \(m_{t}\) and \(v_{t}\) denote the first-order and second-order moment estimates of the gradient at time step t, respectively; \(\hat{m}_{t}\) and \(\hat{v}_{t}\) are the corresponding bias-corrected estimates. \(\beta_{1}\) and \(\beta_{2}\) are the exponential decay rates of the first-order and second-order moment estimates, respectively, and \(\beta_{1}^{t}\) and \(\beta_{2}^{t}\) denote the t-th powers of \(\beta_{1}\) and \(\beta_{2}\). \(\varepsilon\) is a small constant that ensures numerical stability.

After extensive experiments, the settings for FDLS and SDLS in this paper are as follows: FDLS: batch size = 16, learning rate = 0.005; SDLS: batch size = 64, learning rate = 0.001. The training processes are illustrated in Figs. 8 and 9. FDLS achieves a final convergence magnitude of 10−4 after 1000 iterations, while SDLS reaches a final convergence magnitude of 10−2 after 10,000 iterations. Both models reach the expected error magnitudes well before the preset maximum number of iterations: the FDLS model approaches convergence after around 2000 iterations, and the SDLS model after approximately 15,000 iterations. Neither model shows signs of overfitting. The detailed results of the training process can be found in ESM Appendix A.

Fig. 8
figure 8

FDLS training process.

Fig. 9
figure 9

SDLS training process.
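For completeness, a minimal training loop combining the MSE loss of Eq. (9), the Adam optimizer of Eqs. (10)–(14), and the FDLS hyperparameters reported above (batch size 16, learning rate 0.005) could look as follows; the dataset object and the tensor pairs it yields are assumptions of this sketch, and SDLS would be trained analogously with batch size 64 and learning rate 0.001.

```python
import torch
from torch.utils.data import DataLoader

def train_fdls(model, dataset, epochs=1000, batch_size=16, lr=0.005):
    """Train the control-point prediction network with MSE loss and Adam."""
    loader = DataLoader(dataset, batch_size=batch_size, shuffle=True)
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)  # Eqs. (10)-(14)
    loss_fn = torch.nn.MSELoss()                              # Eq. (9)
    for _ in range(epochs):
        for ctrl_points, added_points in loader:              # inputs / targets
            pred = model(ctrl_points)
            loss = loss_fn(pred, added_points)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
    return model
```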

Feedrate planning based on multiple constraints

The geometric error and the contour error

The chord error can be calculated by Eq. (15):

$$ \Delta = \rho - \sqrt {\rho^{2} - \left( \frac{vT}{2} \right)^{2} } $$
(15)

where \(\rho\) represents the radius of curvature, \(T\) denotes the interpolation period, and v is the feedrate. Consequently, the feedrate constrained by the chord error can be determined:

$$ v_{\Delta } (\rho ) = \frac{{2\sqrt {2\rho \vartriangle_{\lim } - \vartriangle_{\lim }^{2} } }}{T} $$
(16)

The relationship between contour error and feed rate can be expressed as Eq. (17):

$$ v_{c} = \rho w_{n} \sqrt {1 - 2\zeta^{2} + \sqrt {(2\zeta^{2} - 1)^{2} - \frac{{\varepsilon_{\lim }^{2} - 2\varepsilon_{\lim } \rho }}{{(\rho - \varepsilon_{\lim } )^{2} }}} } $$
(17)

where \(\varepsilon_{\lim }\) is the contour error limit, \(\zeta = B/(2\sqrt {JK_{p} } )\) is the damping ratio, \(w_{n} = \sqrt {K_{p} /J}\) is the undamped natural frequency, Kp is the position-loop proportional gain, J is the equivalent rotational inertia of the feed-drive system, and B is the equivalent damping factor.

The feedrate constrained by normal acceleration and normal jerk

When the feedrate is v, the normal acceleration can be derived from Eq. (18)19:

$$ a_{n} = \frac{{v^{2} }}{\rho } $$
(18)

Therefore, the feedrate under the normal acceleration constraint is described as follows:

$$ v_{a} (\rho ) = \sqrt {\rho a_{n,\lim } } $$
(19)

where \(a_{n,\lim }\) is the limit of the normal acceleration.

The jerk can be expressed by the rate of change of acceleration as shown in Fig. 10:

$$ \left\{ {\begin{array}{*{20}c} {\vartriangle a_{n} = \left\| {{\mathbf{a}}_{n + } - {\mathbf{a}}_{n - } } \right\| = 2\frac{{v^{2} }}{\rho }\sin \left( {\frac{\vartriangle \theta }{2}} \right)} \\ {\vartriangle t = \frac{\rho \vartriangle \theta }{v}} \\ {j_{n} = \mathop {\lim }\limits_{\vartriangle t \to 0} \frac{{\vartriangle a_{n} }}{\vartriangle t} = \mathop {\lim }\limits_{\vartriangle \theta \to 0} \frac{{2\frac{{v^{2} }}{\rho }\sin \left( {\frac{\vartriangle \theta }{2}} \right)}}{{\frac{\rho \vartriangle \theta }{v}}} = \frac{{v^{3} }}{{\rho^{2} }}} \\ \end{array} } \right. $$
(20)
Fig. 10
figure 10

The calculation of normal jerk.

The relationship between the jerk limit and the radius of curvature can be expressed as:

$$ v_{j} (\rho ) = \sqrt[3]{{\rho^{2} j_{n,\lim } }} $$
(21)

where \(j_{n,\lim }\) is the limit of the jerk.

Based on the above constraints, the feedrate at each point on the NURBS spline can be determined:

$$ v_{f} = \min [v_{\vartriangle } ,v_{a} ,v_{j} ,v_{c} ] $$
(22)

where \(v_{f}\) is the maximum feedrate for each point.
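The four limits of Eq. (22) can be evaluated pointwise along the discretized path. The sketch below collects Eqs. (16), (17), (19), and (21) into a single function; the argument names and the direct use of the servo parameters Kp, J, and B are assumptions made for illustration only.

```python
import numpy as np

def feedrate_limit(rho, T, chord_lim, eps_lim, a_n_lim, j_n_lim, Kp, J, B):
    """Return v_f = min(v_chord, v_a, v_j, v_c) at a point with curvature radius rho."""
    v_chord = 2.0 * np.sqrt(2.0 * rho * chord_lim - chord_lim ** 2) / T   # Eq. (16)
    v_a = np.sqrt(rho * a_n_lim)                                          # Eq. (19)
    v_j = (rho ** 2 * j_n_lim) ** (1.0 / 3.0)                             # Eq. (21)
    wn = np.sqrt(Kp / J)                                                  # undamped natural frequency
    zeta = B / (2.0 * np.sqrt(J * Kp))                                    # damping ratio
    inner = (2 * zeta ** 2 - 1) ** 2 - (eps_lim ** 2 - 2 * eps_lim * rho) / (rho - eps_lim) ** 2
    v_c = rho * wn * np.sqrt(1 - 2 * zeta ** 2 + np.sqrt(inner))          # Eq. (17)
    return min(v_chord, v_a, v_j, v_c)                                    # Eq. (22)
```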

The feedrate planning method follows the approach described in reference19. The NURBS machining path is discretized into a finite number of sample points, and the curvature at each point is converted to a feedrate limit according to Eq. (22). Points whose limit is below the maximum feedrate are defined as sensitive interval points, and the minimum limit within each sensitive interval is taken as the feedrate of that interval. Points whose limit is greater than or equal to the maximum feedrate are set to the maximum feedrate and grouped into non-sensitive intervals. Since the feedrate must change continuously during machining, acceleration and deceleration are performed within the non-sensitive intervals so that the feedrate of each sensitive interval is reached smoothly. Considering the limited drive capability of each axis, some sensitive interval feedrates need to be updated to ensure that acceleration and deceleration can be completed within the non-sensitive intervals. The simulation parameters are set as shown in Table 1.

Table 1 The parameter setting for simulations.
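A minimal sketch of the sensitive/non-sensitive interval classification described above (following reference19 only in outline) is shown below; rho_samples is assumed to hold the radius of curvature at each discretized point, limit_fn stands for a pointwise limit such as feedrate_limit above, and the subsequent acceleration/deceleration planning between intervals, including the axis-drive feasibility update, is omitted.

```python
import numpy as np

def classify_intervals(rho_samples, v_max, limit_fn):
    """Mark each sample as sensitive (constrained below v_max) or non-sensitive,
    and assign each contiguous sensitive interval its minimum constrained feedrate."""
    v = np.minimum([limit_fn(r) for r in rho_samples], v_max)
    sensitive = v < v_max            # True where the constraints cap the feedrate
    feedrates = v.copy()
    start = None
    for k, s in enumerate(np.append(sensitive, False)):   # sentinel closes last interval
        if s and start is None:
            start = k
        elif not s and start is not None:
            feedrates[start:k] = v[start:k].min()          # interval feedrate = its minimum
            start = None
    return sensitive, feedrates
```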

Simulation results

In this section, three simulations are performed to verify the effectiveness of the proposed methods. Case 1 demonstrates that the proposed method can indeed optimize the machining path in terms of curvature. Case 2 and Case 3 show that, compared with other methods20,21, the proposed method is better suited to paths containing both large and small curvatures (Case 2) and to paths with a higher frequency of closely spaced large-curvature segments (Case 3). To further verify the generality and optimization efficiency of the proposed deep learning model, comparative data between the PSO algorithm and the proposed deep learning model are provided in ESM Appendix B.

Case 1

Case 1 selects the W-shaped machining path shown in Fig. 11. As indicated in Table 2 and Fig. 12a, the approximation error at each local corner did not exceed the maximum local approximation error. Additionally, the curvature after optimization decreased to varying degrees compared with before optimization, with a maximum reduction of up to 92.7%. These results demonstrate that the proposed method can optimize the curvature of each local corner while preserving the shape of the machining path to a certain extent. After optimizing the curvature, the tool tip can pass through the local corners at a higher feedrate (as shown in Fig. 12b). Because of the large curvature of the local corners and the short arc lengths between neighboring corners, the feedrate is low (as shown in Fig. 12b). Figure 12c–h show that none of the processing parameters exceeds its limit value (guaranteeing machining quality), proving that the constraints proposed in this paper are effective.

Fig. 11
figure 11

W-shaped machining path.

Table 2 The data about the machining path “W”.
Fig. 12
figure 12figure 12figure 12

Comparing result.

Case 2

The “Dress” machining path is selected as case 2, which is characterized by both large and small curvatures. Because of the short arc lengths left for acceleration and deceleration, neither the methods proposed in references20,21 nor the method presented in this paper was able to reach the maximum feedrate vp. The feedrate distributions of the three methods are shown in Fig. 13a. The machining times shown in Fig. 13b indicate that the method proposed in this paper achieved the shortest machining time, followed by the method in Ref.20, while the method in Ref.21 resulted in the longest machining time. The proposed method reduced machining time by 8% and 36.9% compared with the methods in Refs.20,21, respectively. Although the maximum values of chord error, normal acceleration, normal jerk, contour error, tangential acceleration, and tangential jerk of the proposed method are still larger (see Fig. 13c–h), they are all smaller than the limit values, which proves that the aforementioned constraints are effective. The approximation error for case 2 is shown in Fig. 13i. It should be noted that the machining path in case 2 has a symmetric geometry, so only the approximation errors of the local corners in one half of the symmetric structure are shown. The approximation error at each local corner for all three methods is less than the maximum allowable value of 0.1 mm.

Fig. 13
figure 13figure 13

Comparison of data for the “Dress”-shaped machining path.

Case 3

In case 3, the “Torch” machining path shown in Fig. 14a is selected; the large-curvature local corners of this path are close to each other. The overall curvature of the path is larger, so the feedrate is lower than in case 2. Since the beginning and ending portions of the path consist of segments with zero curvature and comparatively long arc lengths, the feedrate can be accelerated to the maximum feedrate vp, as shown in Fig. 14b. It can also be observed that the proposed method results in relatively high values of chord error, normal acceleration, normal jerk, contour error, tangential acceleration, and tangential jerk (see Fig. 14c–h), but none of these exceeds the limit values. It is clear from Fig. 14i that the method proposed in this paper achieves smaller approximation errors than the methods in Refs.20,21, indicating that the deep learning approach offers better optimization for machining paths with multiple closely spaced high-curvature features. Based on the analysis of case 2 and case 3, the proposed method not only ensures the accuracy of the machining process but also optimizes the curvature to a certain extent. It improves processing efficiency while maintaining processing quality and satisfying the specified constraints.

Fig. 14
figure 14figure 14

Comparison of data for the “Torch”-shaped machining path.

Conclusion

This paper proposes a novel method that transforms the local corner smoothing problem into an optimization problem. The optimization objective, design variables, and constraints are defined, and the Particle Swarm Optimization (PSO) algorithm is employed to solve it. Considering the impact of population size and computational resources on intelligent optimization algorithms, a deep learning method is employed to establish the mapping between inputs and outputs and thereby improve optimization efficiency. The deep learning network FDLS is used to optimize the positions of the NURBS control points, while the network SDLS is utilized to optimize the NURBS weights. To ensure processing quality, this paper considers chord error, normal acceleration, normal jerk, and contour error as constraints limiting the feedrate. Finally, the case studies demonstrate that the proposed method can improve processing efficiency while maintaining processing quality. Future research will extend the method to five-axis CNC machining.