Introduction

The space-time adaptive processing (STAP) technology achieves clutter suppression and moving target detection by accurately estimating the clutter covariance matrix (CCM)1,2,3,4,5,6,7,8,9,10,11,12,13. However, the performance of uniform linear array (ULA) STAP degrades because of the small element spacing, which causes stronger mutual coupling between elements14,15,16,17. To improve performance, higher system degrees of freedom (DOF) are typically obtained by increasing the numbers of elements and pulses. However, this approach inevitably raises hardware requirements, leading to higher power consumption, greater weight, and reduced payload capacity, which severely limits the overall detection performance of the radar.

By exploiting the difference operation, sparse structures can effectively address issues such as element coupling and hardware resource limitations18,19,20,21,22,23. The adaptive processing method based on the nested array structure can form deep suppression notches in the clutter directions, achieve a narrow beam in the target direction, and increase the signal processing resolution23. However, it requires prior knowledge of the frequency information of all interference and clutter sources. Minimum redundancy arrays24,25 can also improve space–time resolution and the detection capability for slowly moving targets, but their design becomes very complex for large-scale arrays and multi-pulse systems. In contrast, coprime structures are much simpler to design and implement.

However, most STAP methods with coprime sampling structures26 are only applicable to ideal array models, which are difficult to realize in practical applications. Therefore, it is crucial to design a STAP method suitable for coprime sampling structures in the presence of gain-phase errors. Researchers have transformed the gain-phase uncertainty of the array reception model into a variable error model treated as an additive error matrix27, and designed an orthogonal matching pursuit least-squares algorithm based on it. However, this method does not fully exploit the DOF advantage of sparse arrays. Ref.28 proposed a blind calibration technique based on compressed sensing for reconstructing multiple discrete signals. This technique effectively estimates discrete signals and corrects gain-phase biases caused by insufficient calibration, demonstrating good performance under small errors. Recently, a covariance matrix eigenvalue decomposition method has also been proposed, whose core is the covariance matrix formed from the array output vectors and their conjugates. By minimizing the atomic norm for non-uniform linear arrays, a joint optimization problem over the clutter and signal subspaces is derived and solved iteratively with the alternating direction method of multipliers29. This algorithm is tailored to the special structure of non-uniform linear arrays, but it requires a large number of matrix selection operations during implementation, which increases the computational complexity. Ref.30 established a channel gain-phase error model for single-carrier and multi-carrier frequency modes to study the influence of gain-phase errors on the optimal weight vector; however, these studies did not fully exploit multi-carrier frequency systems. An enhanced model separates the array gain-phase error coefficients from the clutter responses and the CCM by estimating the array phase error information, thereby improving the CCM estimation accuracy31. However, this method requires solving semi-definite programming problems, which makes the computation a considerable challenge and difficult to meet real-time requirements. A discretized angle-Doppler plane can also be used to establish signal models under gain-phase errors, but the performance may deteriorate due to off-grid issues32,33.

To overcome these problems, a robust STAP algorithm with a coprime sampling structure based on optimal singular value thresholding is proposed. While maintaining the coprime structure, four calibrated auxiliary elements and four auxiliary pulses are introduced into the original configuration. The gain and phase errors estimated from the sub-arrays are sorted and recombined, and the CCM is compensated accordingly. Subsequently, the compensated matrix is expanded and zero-filled into a high-dimensional matrix. Finally, the singular value thresholding algorithm is used to solve the nuclear norm convex optimization problem, thereby recovering the CCM.

The structure of this article is as follows: "The gain-phase errors model with coprime sampling structure" introduces the gain-phase errors model with coprime sampling structure. The STAP based on optimal singular value thresholding is provided in "The STAP with singular value thresholding". "Simulation results" verifies the performance of the proposed algorithm. Finally, "Conclusion" provides a summary of this article.

The gain-phase errors model with coprime sampling structure

Assume that \(N_{1}\), \(N_{2}\), \(M_{1}\), and \(M_{2}\) are coprime integers satisfying \(N_{1} < N_{2}\) and \(M_{1} < M_{2}\). The dense sub-ULA of the side-looking pulse-Doppler radar contains \(N_{2}\) elements positioned at \(\{ N_{1} p_{2} d,0 \le p_{2} \le N_{2} - 1\}\), and the sparser sub-ULA comprises \(2N_{1} - 1\) elements located at \(\{ N_{2} p_{1} d,1 \le p_{1} \le 2N_{1} - 1\}\), where \(d\) represents the minimum element spacing, as illustrated in Fig. 1(a). Similarly, the coprime PRI contains \(M_{2}\) and \(2M_{1} - 1\) pulses in a coherent processing interval (CPI), with time positions \(\{ M_{1} q_{2} T_{r} ,0 \le q_{2} \le M_{2} - 1\}\) and \(\{ M_{2} q_{1} T_{r} ,1 \le q_{1} \le 2M_{1} - 1\}\), where \(T_{r}\) represents the minimum pulse repetition interval (PRI), as shown in Fig. 1(b).
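For concreteness, the short sketch below (not from the original paper) generates the coprime element and pulse index sets described above; the pair \(N_{1} = M_{1} = 3\), \(N_{2} = M_{2} = 5\) matches the later simulation settings, while the values of \(d\) and \(T_{r}\) are placeholders.

```python
import numpy as np

def coprime_positions(n_small, n_large):
    """Union of the dense grid {n_small*p, 0 <= p <= n_large-1} and the
    sparse grid {n_large*p, 1 <= p <= 2*n_small-1}, in unit steps."""
    dense = n_small * np.arange(0, n_large)        # n_large dense samples
    sparse = n_large * np.arange(1, 2 * n_small)   # 2*n_small - 1 sparse samples
    return np.unique(np.concatenate([dense, sparse]))

N1, N2, M1, M2 = 3, 5, 3, 5        # coprime pairs (values used in the simulations)
d, Tr = 0.1, 0.5e-3                # element spacing [m] and minimum PRI [s] (assumed)

element_idx = coprime_positions(N1, N2)   # e.g. [0 3 5 6 9 10 12 15 20 25]
pulse_idx = coprime_positions(M1, M2)     # same pattern on the pulse axis
element_pos = element_idx * d             # physical element positions
pulse_times = pulse_idx * Tr              # pulse transmit instants within a CPI
```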

Fig. 1 Coprime array and coprime PRI configuration.

In the absence of range ambiguity, the snapshot received in a range cell is

$${\mathbf{x}} = a_{t} {\mathbf{v}}(\varphi_{t} ,f_{t} ) + {\mathbf{x}}_{u}$$
(1)

where \(a_{t}\) represents the target amplitude and \({\mathbf{v}}(\varphi_{t} ,f_{t} ) = {\mathbf{v}}(\varphi_{t} ) \otimes {\mathbf{v}}(f_{t} )\) denotes the target space-time steering vector. The target spatial and temporal steering vectors are

$${\mathbf{v}}(\varphi_{t} ) = [1,e^{{2\pi jn_{1} \varphi_{t} }} , \ldots ,e^{{2\pi jn_{N - 1} \varphi_{t} }} ]^{T}$$
(2)

and

$${\mathbf{v}}(f_{t} ) = [1,e^{{2\pi jm_{1} f_{t} }} , \ldots ,e^{{2\pi jm_{M - 1} f_{t} }} ]^{T}$$
(3)

respectively, where \(\varphi_{t} = d\cos (\theta )/\lambda\) and \(f_{t} = 2v_{r} T_{r} \cos (\theta )/\lambda\). Here, \(\lambda\) is the radar wavelength, \(v_{r}\) is the radar velocity, and \(\theta\) is the target direction. Assuming that the clutter in a range cell can be decomposed into \(N_{c}\) independent clutter patches, the clutter-plus-noise component \({\mathbf{x}}_{u}\) is

$${\mathbf{x}}_{u} = \sum\limits_{i = 1}^{{N_{c} }} {a_{c,i} } {\mathbf{v}}(\varphi_{c,i} ,f_{c,i} ) + {\mathbf{n}} = \sum\limits_{i = 1}^{{N_{c} }} {a_{c,i} } {\mathbf{v}}(\varphi_{c,i} ) \otimes {\mathbf{v}}(f_{c,i} ) + {\mathbf{n}}$$
(4)

where \(\varphi_{c,i}\) and \(f_{c,i}\) denote the normalized spatial and temporal frequencies of the \(i\)-th clutter patch, respectively, \(a_{c,i}\) represents the clutter amplitude, and \({\mathbf{n}}\) denotes the thermal noise.

The spatial and temporal steering vectors of the \(i\)-th clutter patch are given by Eqs. (5) and (6), respectively, namely

$${\mathbf{v}}(\varphi_{c,i} ) = [1,e^{{2\pi jn_{1} \varphi_{c,i} }} , \ldots ,e^{{2\pi jn_{N - 1} \varphi_{c,i} }} ]^{T}$$
(5)
$${\mathbf{v}}(f_{c,i} ) = [1,e^{{2\pi jm_{1} f_{c,i} }} , \ldots ,e^{{2\pi jm_{M - 1} f_{c,i} }} ]^{T}$$
(6)

The corresponding spatial–temporal steering vector is

$${\mathbf{v}}(\varphi_{c,i} ,f_{c,i} ) = {\mathbf{v}}(\varphi_{c,i} ) \otimes {\mathbf{v}}(f_{c,i} ){ = }\left[ {\begin{array}{*{20}c} 1 \\ {e^{{2\pi jn_{1} \varphi_{c,i} }} } \\ \vdots \\ {e^{{2\pi jn_{N - 1} \varphi_{c,i} }} } \\ \end{array} } \right] \otimes \left[ {\begin{array}{*{20}c} 1 \\ {e^{{2\pi jm_{1} f_{c,i} }} } \\ \vdots \\ {e^{{2\pi jm_{M - 1} f_{c,i} }} } \\ \end{array} } \right] = \left[ {\begin{array}{*{20}c} {v_{0,i} } \\ {v_{1,i} } \\ \vdots \\ {v_{NM - 1,i} } \\ \end{array} } \right]$$
(7)

where \(v_{lM + r - 1,i} = e^{{2\pi j(n_{l} \varphi_{c,i} + m_{r - 1} f_{c,i} )}}\), \(l = 0, \cdots ,N - 1\), \(r = 1, \cdots ,M\), \(i = 1, \cdots ,N_{c}\). Therefore, the clutter plus noise covariance matrix (CNCM) \({\mathbf{R}}_{u}\) can be represented as

$$\begin{gathered} {\mathbf{R}}_{u} = E\left[ {{\mathbf{x}}_{u} {\mathbf{x}}_{u}^{H} } \right] \hfill \\ \, = \sum\limits_{i = 1}^{{N_{c} }} {E(\left| {a_{c,i} } \right|^{2} } ){\mathbf{v}}(\varphi_{c,i} ,f_{c,i} ){\mathbf{v}}^{H} (\varphi_{c,i} ,f_{c,i} ) + {\mathbf{Q}} \hfill \\ \, = {\mathbf{VPV}}^{H} + {\mathbf{Q}} \hfill \\ \, = {\mathbf{R}}_{c} + {\mathbf{Q}}, \hfill \\ \end{gathered}$$
(8)

where \({\mathbf{V}} = [{\mathbf{v}}(\varphi_{c,1} ,f_{c,1} ),{\mathbf{v}}(\varphi_{c,2} ,f_{c,2} ),...,{\mathbf{v}}(\varphi_{{c,N_{c} }} ,f_{{c,N_{c} }} )]\) denotes the clutter spatial–temporal steering matrix, \({\mathbf{P}} = diag([p_{1} ,p_{2} ,...,p_{{N_{c} }} ]^{T} ), \, p_{k} = E(\left| {a_{c,k} } \right|^{2} )\) denotes the clutter power matrix, \({\mathbf{R}}_{c}\) is the CCM, and \({\mathbf{Q}}\) is the noise covariance matrix (NCM).
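As an illustration, the following sketch (with assumed clutter parameters, not the paper's simulation code) builds the clutter space–time steering matrix \({\mathbf{V}}\) and the CNCM of Eq. (8) for the coprime index sets generated in the earlier sketch; half-wavelength spacing, equal patch powers, and unit noise power (\({\mathbf{Q}} = {\mathbf{I}}\)) are assumed.

```python
import numpy as np

def steering(indices, freq):
    """Spatial or temporal steering vector [exp(2*pi*j*index*freq)]."""
    return np.exp(2j * np.pi * np.asarray(indices) * freq)

Nc = 181                                   # number of clutter patches (assumed)
theta = np.linspace(0.0, np.pi, Nc)        # clutter patch azimuth angles
phi_c = 0.5 * np.cos(theta)                # normalized spatial frequencies (d = lambda/2)
f_c = phi_c.copy()                         # beta = 1: temporal frequency equals spatial
p_c = np.full(Nc, 1e3)                     # patch powers E|a_{c,i}|^2 (assumed equal)

V = np.stack([np.kron(steering(element_idx, phi_c[i]),
                      steering(pulse_idx, f_c[i])) for i in range(Nc)], axis=1)
R_u = V @ np.diag(p_c) @ V.conj().T + np.eye(V.shape[0])   # Eq. (8) with Q = I
```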

In real radar operating scenarios, the element gain-phase errors alter the spatial–temporal steering vectors. Let \(\gamma_{i}\) and \(\phi_{i}\) denote the gain and phase errors of the \(i\)-th element, respectively, and let \(\Upsilon = {\text{diag}}[\gamma_{1} e^{{j\phi_{1} }} ,\gamma_{2} e^{{j\phi_{2} }} , \cdots \gamma_{N} e^{{j\phi_{N} }} ]\) be the corresponding gain-phase error matrix. In this case, the spatial–temporal steering vector becomes

$${\hat{\mathbf{v}}}(\varphi_{t} ,f_{t} ) = \Upsilon {\mathbf{v}}(\varphi_{t} ) \otimes {\mathbf{v}}(f_{t} )$$
(9)

The assumed space–time steering vector does not match the actual one under gain-phase errors, leading to partial signal cancellation and thus reducing the output signal-to-noise ratio (SNR). In this case, the CNCM becomes

$${\hat{\mathbf{R}}}_{u} = \Gamma {\mathbf{R}}_{c} \Gamma^{H} + {\hat{\mathbf{Q}}}$$
(10)

where \(\Gamma = \Upsilon \otimes {\text{I}}\), and \({\text{I}}\) denotes the identity matrix.
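The effect of Eqs. (9)-(10) can be sketched as follows, continuing the previous snippets; the error levels (5% gain, \(0.05\pi\) phase) are taken from the simulation section, while the random error distribution is an assumption. Only the clutter part of \({\mathbf{R}}_{u}\) is perturbed by \(\Gamma\); the noise part is left unchanged.

```python
N, M = len(element_idx), len(pulse_idx)
rng = np.random.default_rng(0)

gain = 1.0 + 0.05 * rng.standard_normal(N)       # 5% gain errors (assumed distribution)
phase = 0.05 * np.pi * rng.standard_normal(N)    # 0.05*pi phase errors (assumed distribution)
Upsilon = np.diag(gain * np.exp(1j * phase))     # per-element gain-phase error matrix
Gamma = np.kron(Upsilon, np.eye(M))              # Gamma = Upsilon kron I

R_c = R_u - np.eye(N * M)                        # clutter-only part (Q = I assumed above)
R_u_err = Gamma @ R_c @ Gamma.conj().T + np.eye(N * M)   # Eq. (10)
```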

The STAP with singular value thresholding

In order to estimate the CCM accurately, the gain-phase errors are corrected by introducing calibrated elements and pulses, and an estimation method for the gain-phase error matrix is provided.

The estimation method for the gain-phase errors matrix

In order to correct the gain-phase errors, two additional elements are added to the left of the dense and sparse uniform linear sub-arrays, respectively. The new dense and sparse uniform linear sub-arrays share the third element, which is set as the reference element and marked as 0. The corrected array still maintains the coprime structure. The set of element positions for the new array can be represented as

$${\mathbb{P}} = \{ p_{2} N_{1} d, - 2 \le p_{2} \le N_{2} - 1\} \cup \{ p_{1} N_{2} d, - 2 \le p_{1} \le 2N_{1} - 1\}$$
(11)

At this point, the total number of array elements is \(N\).
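A minimal sketch of the calibrated geometry of Eq. (11): two elements are prepended to each sub-array, and the union of positions gives the total element number \(N\) (14 for the simulation values \(N_{1} = 3\), \(N_{2} = 5\)).

```python
dense_cal = N1 * np.arange(-2, N2)            # {p2*N1, -2 <= p2 <= N2-1}
sparse_cal = N2 * np.arange(-2, 2 * N1)       # {p1*N2, -2 <= p1 <= 2*N1-1}
pos_cal = np.unique(np.concatenate([dense_cal, sparse_cal]))
N = len(pos_cal)                              # N = 14 for N1 = 3, N2 = 5
```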

In order to preserve the coprime structure of the pulse sequence, two additional pulses are added to the left of the dense and sparse sub-pulse sequences, respectively. The total number of pulses is then denoted by \(M\). According to Eq. (8), the corresponding CCM can be obtained as

$${\tilde{\mathbf{R}}}_{c} = \left[ {\begin{array}{*{20}c} {R_{0,0} } & {R_{0,1} } & \cdots & {R_{0,NM - 1} } \\ {R_{1,0} } & {R_{1,1} } & \cdots & {R_{1,NM - 1} } \\ \vdots & \vdots & \ddots & \vdots \\ {R_{NM - 1,0} } & {R_{NM - 1,1} } & \cdots & {R_{NM - 1,NM - 1} } \\ \end{array} } \right]$$
(12)

Here,

$$R_{{(l_{1} M + r_{1} - 1),(l_{2} M + r_{2} - 1)}} = \sum\limits_{k = 1}^{{N_{c} }} {p_{k} e^{{2\pi j[(n_{{l_{1} }} - n_{{l_{2} }} )\varphi_{c,k} + (m_{{r_{1} - 1}} - m_{{r_{2} - 1}} )f_{c,k} ]}} }$$
(13)

where \(l_{1} ,l_{2} = 0, \cdots ,N - 1, \, r_{1} ,r_{2} = 1, \cdots ,M\). \({\mathbf{Q}} = diag(\sigma_{1}^{2} ,\sigma_{2}^{2} ,...,\sigma_{MN}^{2} )\) is the corresponding NCM without gain-phase errors. In this case, Eq. (10) becomes

$${\tilde{\mathbf{R}}} = \tilde{\Gamma }{\tilde{\mathbf{R}}}_{c} \tilde{\Gamma }^{H} + {\tilde{\mathbf{Q}}}$$
(14)

where \(\tilde{\Gamma } = \tilde{\Upsilon } \otimes {\text{I}}_{M}\), \(\tilde{\Upsilon } = {\text{diag}}[\tilde{\gamma }_{1} e^{{j\tilde{\phi }_{1} }} ,\tilde{\gamma }_{2} e^{{j\tilde{\phi }_{2} }} , \cdots \tilde{\gamma }_{N} e^{{j\tilde{\phi }_{N} }} ]\), and \(\tilde{\gamma }_{1} e^{{j\tilde{\phi }_{1} }} = \tilde{\gamma }_{2} e^{{j\tilde{\phi }_{2} }} = \tilde{\gamma }_{3} e^{{j\tilde{\phi }_{3} }} = \tilde{\gamma }_{4} e^{{j\tilde{\phi }_{4} }} = 1\) for the four calibrated elements. The dimension of \({\tilde{\mathbf{R}}}\) is \(MN \times MN\).

Given \(L\) independent and identically distributed target-free training samples, the CNCM can be estimated by

$${\hat{\mathbf{R}}} = \frac{1}{L}\sum\limits_{l = 1}^{L} {{\tilde{\mathbf{x}}}(l){\tilde{\mathbf{x}}}^{H} (l)}$$
(15)

Each entry of the CCM corresponds to a pair of element positions. By selecting the entries of the CCM associated with the two sub-arrays, the corresponding clutter covariance sub-matrices \({\tilde{\mathbf{R}}}_{1}\) and \({\tilde{\mathbf{R}}}_{2}\) can be obtained, respectively.

Assume that \(p\) and \(q\) are positions of elements belonging to the dense sub-array; the corresponding entries of matrix \({\tilde{\mathbf{R}}}\) can be described as

$$\left( {{\tilde{\mathbf{R}}}} \right)_{p,q} = \sum\limits_{k = 1}^{{N_{c} }} {p_{k} e^{{2\pi j[(p - q)\varphi_{c,k} + (m_{u} - m_{v} )f_{c,k} ]}} } + ({\mathbf{Q}})_{p,q}$$
(16)

The mean of all such entries is taken as the value of element \(\left( {{\tilde{\mathbf{R}}}_{1} } \right)_{p,q}\) of matrix \({\tilde{\mathbf{R}}}_{1}\), where \({\tilde{\mathbf{R}}}_{1}\) is the \((N_{2} + 2) \times (N_{2} + 2)\) sub-matrix of \({\tilde{\mathbf{R}}}\) corresponding to the dense sub-array. Its form can be expressed as

$${\tilde{\mathbf{R}}}_{1} = \tilde{\Upsilon }_{1} {\tilde{\mathbf{R}}}_{c1} \tilde{\Upsilon }_{1}^{H} + {\tilde{\mathbf{Q}}}_{1}$$
(17)

where \(\tilde{\Upsilon }_{1} = {\text{diag}}[\tilde{\gamma }_{1} e^{{j\tilde{\phi }_{1} }} ,\tilde{\gamma }_{2} e^{{j\tilde{\phi }_{2} }} , \cdots \tilde{\gamma }_{{N_{2} + 2}} e^{{j\tilde{\phi }_{{N_{2} + 2}} }} ]\) represents the error matrix. Since the first two elements are calibrated elements, \(\tilde{\gamma }_{1} e^{{j\tilde{\phi }_{1} }} = \tilde{\gamma }_{2} e^{{j\tilde{\phi }_{2} }} = 1\). \({\tilde{\mathbf{Q}}}_{1}\) is the noise matrix, and \({\tilde{\mathbf{R}}}_{c1}\) is the covariance matrix of the dense sub-array without gain-phase errors. The elements of \({\tilde{\mathbf{R}}}_{1}\) can be obtained by

$$\tilde{R}_{1} (i,j) = \left\{ \begin{gathered} \tilde{\gamma }_{i} \tilde{\gamma }_{j} e^{{j(\tilde{\phi }_{i} - \tilde{\phi }_{j} )}} \sum\limits_{k = 1}^{{N_{c} }} {p_{k} e^{{2\pi j[(i - j)\varphi_{c,k} ]}} \, , \, i \ne j} \hfill \\ \tilde{\gamma }_{i} \tilde{\gamma }_{j} e^{{j(\tilde{\phi }_{i} - \tilde{\phi }_{j} )}} \sum\limits_{k = 1}^{{N_{c} }} {p_{k} e^{{2\pi j[(i - j)\varphi_{c,k} ]}} } + \sigma_{i}^{2} , \, i = j \hfill \\ \end{gathered} \right.$$
(18)

Through calculation, it can be obtained that

$$\frac{{\tilde{R}_{1} (p + 1,p)}}{{\tilde{R}_{1} (2,1)}} = \frac{{\tilde{\gamma }_{p + 1} \tilde{\gamma }_{p} e^{{j(\tilde{\phi }_{p + 1} - \tilde{\phi }_{p} )}} }}{{\tilde{\gamma }_{2} \tilde{\gamma }_{1} e^{{j(\tilde{\phi }_{2} - \tilde{\phi }_{1} )}} }} = \tilde{\gamma }_{p + 1} \tilde{\gamma }_{p} e^{{j(\tilde{\phi }_{p + 1} - \tilde{\phi }_{p} )}}$$
(19)

By performing the elimination operation on Eq. (19), it can be further concluded that

$$\tilde{\gamma }_{p + 1} e^{{j\tilde{\phi }_{p + 1} }} = \frac{{\tilde{R}_{1} (p + 1,p)}}{{\tilde{R}_{1} (2,1)}}\left[ {(\tilde{\gamma }_{p} e^{{j\tilde{\phi }_{p} }} )^{ * } } \right]^{ - 1}$$
(20)

where \(p \in \left[ {2,N_{2} + 1} \right]\). Subsequently, matrix \(\tilde{\Upsilon }_{1}\) can be obtained through successive iteration of Eq. (20).
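A sketch of the recursion in Eqs. (19)-(20), assuming the averaged dense sub-matrix (denoted `R1_tilde` here, a hypothetical variable name) has already been extracted from the sample covariance matrix; indices are zero-based, so entries 0 and 1 correspond to the two calibrated elements.

```python
import numpy as np

def estimate_gain_phase(R_sub):
    """Recover g[p] = gamma_p * exp(j*phi_p) from a sub-array covariance matrix
    whose first two elements are calibrated (g[0] = g[1] = 1)."""
    n = R_sub.shape[0]
    g = np.ones(n, dtype=complex)
    ref = R_sub[1, 0]                       # R(2,1) in the paper's 1-based notation
    for p in range(1, n - 1):
        ratio = R_sub[p + 1, p] / ref       # = g[p+1] * conj(g[p]), Eq. (19)
        g[p + 1] = ratio / np.conj(g[p])    # Eq. (20)
    return g

Upsilon1 = np.diag(estimate_gain_phase(R1_tilde))   # dense sub-array error matrix
# Applying the same routine to the sparse sub-matrix R2_tilde yields Upsilon2 (Eq. (25)).
```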

Similarly, when \(p\) and \(q\) are positions of elements belonging to the sparse sub-array, the corresponding entries of matrix \({\tilde{\mathbf{R}}}\) can be described as

$$\left( {{\tilde{\mathbf{R}}}} \right)_{p,q} = \sum\limits_{k = 1}^{{N_{c} }} {p_{k} e^{{2\pi j[(p - q)\varphi_{c,k} + (m_{u} - m_{v} )f_{c,k} ]}} } + ({\mathbf{Q}})_{p,q}$$
(21)

The average of all such entries is taken as the value of element \(\left( {{\tilde{\mathbf{R}}}_{2} } \right)_{p,q}\) of matrix \({\tilde{\mathbf{R}}}_{2}\), where \({\tilde{\mathbf{R}}}_{2}\) is the \((2N_{1} + 1) \times (2N_{1} + 1)\) sub-matrix of \({\tilde{\mathbf{R}}}\) corresponding to the sparse sub-array. Its form can be written as

$${\tilde{\mathbf{R}}}_{2} = \tilde{\Upsilon }_{2} {\tilde{\mathbf{R}}}_{c2} \tilde{\Upsilon }_{2}^{H} + {\tilde{\mathbf{Q}}}_{2}$$
(22)

where \(\tilde{\Upsilon }_{2} = {\text{diag}}[1,1, \cdots \tilde{\gamma }_{{2N_{1} + 1}} e^{{j\tilde{\phi }_{{2N_{1} + 1}} }} ]\) represents the error matrix generated by the array gain-phase errors. Since the first two elements are calibrated elements, \(\tilde{\gamma }_{1} e^{{j\tilde{\phi }_{1} }} = \tilde{\gamma }_{2} e^{{j\tilde{\phi }_{2} }} = 1\). \({\tilde{\mathbf{Q}}}_{2}\) is the noise matrix, and \({\tilde{\mathbf{R}}}_{c2}\) is the covariance matrix of the sparse sub-array without gain-phase errors. The elements of \({\tilde{\mathbf{R}}}_{2}\) can be written as

$$\tilde{R}_{2} (i,j) = \left\{ \begin{gathered} \tilde{\gamma }_{i} \tilde{\gamma }_{j} e^{{j(\tilde{\phi }_{i} - \tilde{\phi }_{j} )}} \sum\limits_{k = 1}^{{N_{c} }} {p_{k} e^{{2\pi j[(i - j)\varphi_{c,k} ]}} \, ,i \ne j} \hfill \\ \tilde{\gamma }_{i} \tilde{\gamma }_{j} e^{{j(\tilde{\phi }_{i} - \tilde{\phi }_{j} )}} \sum\limits_{k = 1}^{{N_{c} }} {p_{k} e^{{2\pi j[(i - j)\varphi_{c,k} ]}} } + \sigma_{i}^{2} ,i = j \hfill \\ \end{gathered} \right.$$
(23)

Through calculation,

$$\frac{{\tilde{R}_{2} (p + 1,p)}}{{\tilde{R}_{2} (2,1)}} = \frac{{\tilde{\gamma }_{p + 1} \tilde{\gamma }_{p} e^{{j(\tilde{\phi }_{p + 1} - \tilde{\phi }_{p} )}} }}{{\tilde{\gamma }_{2} \tilde{\gamma }_{1} e^{{j(\tilde{\phi }_{2} - \tilde{\phi }_{1} )}} }} = \tilde{\gamma }_{p + 1} \tilde{\gamma }_{p} e^{{j(\tilde{\phi }_{p + 1} - \tilde{\phi }_{p} )}}$$
(24)

It can be inferred from (24) that

$$\tilde{\gamma }_{p + 1} e^{{j\tilde{\phi }_{p + 1} }} = \frac{{\tilde{R}_{2} (p + 1,p)}}{{\tilde{R}_{2} (2,1)}}\left[ {(\tilde{\gamma }_{p} e^{{j\tilde{\phi }_{p} }} )^{ * } } \right]^{ - 1}$$
(25)

where \(p \in \left[ {2, \, 2N_{1} } \right]\). The matrix \(\tilde{\Upsilon }_{2}\) can be obtained through iterative application of Eq. (25). The new gain-phase error matrix \(\tilde{\Upsilon }\) is obtained by sorting and reorganizing the corresponding elements of matrices \(\tilde{\Upsilon }_{1}\) and \(\tilde{\Upsilon }_{2}\) according to the positions of the elements in the uniform linear sub-arrays. The matrix \(\tilde{\Gamma }\) then becomes

$$\tilde{\Gamma } = \tilde{\Upsilon } \otimes {\text{I}}_{M}$$
(26)

STAP method with the gain-phase errors calibration

By compensating for errors in the CCM, the advantages of the coprime sampling structure are fully utilized to improve system performance. Subsequently, the compensated matrix is expanded into a high-dimensional matrix whose voids are filled to increase the system DOF.

Specifically, the corrected CCM is obtained by multiplying on the left by \(\tilde{\Gamma }^{ - 1}\) and on the right by \((\tilde{\Gamma }^{H} )^{ - 1}\), namely

$${\mathbf{R}}_{c} = \tilde{\Gamma }^{ - 1} {\hat{\mathbf{R}}}_{c} \left( {\tilde{\Gamma }^{H} } \right)^{ - 1}$$
(27)
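A sketch of assembling the full error matrix and compensating the covariance estimate (Eqs. (26)-(27)). Here `dense_pos` and `sparse_pos` are assumed to list the physical positions associated with the diagonal entries of `Upsilon1` and `Upsilon2`, and `R_hat_c` is the estimated clutter covariance component; these names are hypothetical.

```python
g_all = np.concatenate([np.diag(Upsilon1), np.diag(Upsilon2)])
pos_all = np.concatenate([dense_pos, sparse_pos])     # positions matching g_all entries
order = np.argsort(pos_all)                           # sort coefficients by element position
Upsilon_tilde = np.diag(g_all[order])
Gamma_tilde = np.kron(Upsilon_tilde, np.eye(M))       # Eq. (26)

Gamma_inv = np.linalg.inv(Gamma_tilde)                # diagonal, so inversion is cheap
R_c_corr = Gamma_inv @ R_hat_c @ Gamma_inv.conj().T   # Eq. (27): error-compensated CCM
```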

The virtual difference coarray and copulse formed by the new coprime sampling structure still have voids. To enhance the utilization of virtual elements and pulses, the corrected \({\mathbf{R}}_{c}\) is expanded with zero filling to form the \(N^{ * } M^{ * } \times N^{ * } M^{ * }\)-dimensional matrix \({\mathbf{R}}_{{\text{E}}}\), where \(N^{ * } = 4N_{1} N_{2} - 2N_{2} + 1\) and \(M^{ * } = 4M_{1} M_{2} - 2M_{2} + 1\). The completed matrix \({\mathbf{R}}_{{\text{V}}}\) is then obtained from the rank minimization model

$${\text{min rank}}({\mathbf{R}}_{{\text{V}}} ) \, , \, {\text{s}} .t. \, {\mathbf{P}} \bullet {\mathbf{R}}_{\text{E}} = {\mathbf{P}} \bullet {\mathbf{R}}_{{\text{V}}}$$
(28)

where \({\text{rank}}( \bullet ) \,\) represents the matrix rank, and \({\mathbf{R}}_{{\text{V}}}\) is the matrix to be filled. The elements of the mapping matrix \({\mathbf{P}}\) are given by

$$\, P(i,j{) = }\left\{ \begin{gathered} 1, \, (i - j)d = d_{x} \hfill \\ 0, \, (i - j)d \ne d_{x} \hfill \\ \end{gathered} \right.$$
(29)

For ease of solution, the nuclear norm is used instead of the rank function for convex relaxation. Equation (28) becomes

$${\text{min }}\left\| {{\mathbf{R}}_{{\text{V}}} } \right\|_{ * } \, , \, {\text{s}} .t. \, {\mathbf{P}} \bullet {\mathbf{R}}_{\text{E}} = {\mathbf{P}} \bullet {\mathbf{R}}_{{\text{V}}}$$
(30)

where \(\left\| \bullet \right\|_{ * }\) represents the nuclear norm. According to the singular value thresholding (SVT) method34, Eq. (30) can be approximated as

$${\text{min }}\mu \left\| {{\mathbf{R}}_{{\text{V}}} } \right\|_{ * } { + }\frac{1}{2}\left\| {{\mathbf{R}}_{{\text{V}}} } \right\|_{{\text{F}}}^{2} \, , \, {\text{s}} .t. \, {\mathbf{P}} \bullet {\mathbf{R}}_{\text{E}} = {\mathbf{P}} \bullet {\mathbf{R}}_{{\text{V}}}$$
(31)

As \(\, \mu\) increases, the solution of Eq. (31) approaches that of Eq. (30); when \(\, \mu \to \infty\), the two become essentially identical.

The singular value decomposition of matrix \({\mathbf{R}}_{{\text{V}}} \in {\mathbb{R}}^{{N^{ * } M^{ * } \times N^{ * } M^{ * } }}\) is given by

$$\left\{ \begin{gathered} {\mathbf{R}}_{{\text{V}}} = {\mathbf{U}}\Sigma {\mathbf{V}}^{ * } \hfill \\ \Sigma = {\text{diag}} (\{ \sigma_{i} \}_{1 \le i \le r} ) \hfill \\ \end{gathered} \right.$$
(32)

where \(r\) is the rank of matrix \({\mathbf{R}}_{{\text{V}}}\), and the left and right singular vector matrices \({\mathbf{U}}\) and \({\mathbf{V}}\) both have dimension \(N^{ * } M^{ * } \times r\). \(\sigma_{i}\) represents the non-negative singular values of \({\mathbf{R}}_{{\text{V}}}\). When \(\, \mu > 0\), the singular value shrinkage operator \(D_{\mu }\) can be expressed as

$$\left\{ \begin{gathered} D_{\mu } ({\mathbf{R}}_{{\text{V}}} ) \triangleq {\mathbf{U}}D_{\mu } (\Sigma ){\mathbf{V}}^{ * } \hfill \\ D_{\mu } (\Sigma ) = {\text{diag}} (\{ \sigma_{i} - \mu \}_{ + } ) \hfill \\ \end{gathered} \right.$$
(33)

where \((\sigma_{i} - \mu )_{ + }\) represents the non-negative part of \((\sigma_{i} - \mu )\), i.e., \((\sigma_{i} - \mu )_{ + } = \max (0, \, \sigma_{i} - \mu )\). The operator sets the smaller singular values in matrix \({\mathbf{R}}_{{\text{V}}}\) to zero. When most of its singular values are less than the threshold \(\, \mu\), the rank of \(D_{\mu } ({\mathbf{R}}_{{\text{V}}} )\) will be smaller than that of matrix \({\mathbf{R}}_{{\text{V}}}\).
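As a minimal numpy sketch, the shrinkage operator of Eq. (33) can be written as follows (illustrative only):

```python
import numpy as np

def svt_shrink(X, mu):
    """Singular value shrinkage D_mu: soft-threshold the singular values of X by mu."""
    U, s, Vh = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(np.maximum(s - mu, 0.0)) @ Vh   # (sigma_i - mu)_+
```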

When \(\, \mu > 0\), \(D_{\mu } ({\mathbf{Y}})\) can be characterized as

$$D_{\mu } ({\mathbf{Y}}) = \arg \mathop {\min }\limits_{{{\mathbf{R}}_{{\text{V}}} }} \frac{1}{2}\left\| {{\mathbf{R}}_{{\text{V}}} - {\mathbf{Y}}} \right\|_{\text{F}}^{2} + \mu \left\| {{\mathbf{R}}_{{\text{V}}} } \right\|_{ * }$$
(34)

where \({\mathbf{Y}} \in {\mathbb{R}}^{{N^{ * } M^{ * } \times N^{ * } M^{ * } }}\). Assuming \(f_{\mu } ({\mathbf{R}}_{{\text{V}}} ) = \mu \left\| {{\mathbf{R}}_{{\text{V}}} } \right\|_{ * } + \frac{1}{2}\left\| {{\mathbf{R}}_{{\text{V}}} } \right\|_{\text{F}}^{2}\), when \(\, \mu > 0\), the Lagrangian function corresponding to Eq. (31) is

$$L({\mathbf{R}}_{{\text{V}}} ,{\mathbf{Y}}) = \mu \left\| {{\mathbf{R}}_{{\text{V}}} } \right\|_{ * } + \frac{1}{2}\left\| {{\mathbf{R}}_{{\text{V}}} } \right\|_{\text{F}}^{2} + \left\langle {\left. {{\mathbf{Y}},{\mathbf{P}} \bullet ({\mathbf{R}}_{{\text{E}}} - {\mathbf{R}}_{{\text{V}}} )} \right\rangle } \right.$$
(35)

According to strong duality,

$$\mathop {\sup }\limits_{{\mathbf{Y}}} \mathop {\inf }\limits_{{{\mathbf{R}}_{{\text{V}}} }} L({\mathbf{R}}_{{\text{V}}} ,{\mathbf{Y}}) = L({\mathbf{R}}_{{\text{V}}}^{ * } ,{\mathbf{Y}}^{ * } ) = \mathop {\inf }\limits_{{{\mathbf{R}}_{{\text{V}}} }} \mathop {\sup }\limits_{{\mathbf{Y}}} L({\mathbf{R}}_{{\text{V}}} ,{\mathbf{Y}})$$
(36)

where \({\text{g}}({\mathbf{Y}}) = \mathop {\inf }\limits_{{{\mathbf{R}}_{{\text{V}}} }} L({\mathbf{R}}_{{\text{V}}} ,{\mathbf{Y}})\) is the dual function. Solving the dual problem by gradient ascent gives the iteration

$$\left\{ \begin{gathered} L({\mathbf{R}}_{{\text{V}}}^{k} ,{\mathbf{Y}}^{k - 1} ) = \mathop {\min }\limits_{{{\mathbf{R}}_{{\text{V}}} }} L({\mathbf{R}}_{{\text{V}}} ,{\mathbf{Y}}^{k - 1} ) \hfill \\ {\mathbf{Y}}^{k} = {\mathbf{Y}}^{k - 1} + \delta_{k} {\mathbf{P}} \bullet ({\mathbf{R}}_{{\text{E}}} - {\mathbf{R}}_{{\text{V}}}^{k} ) \hfill \\ \end{gathered} \right.$$
(37)

where \({\mathbf{Y}}^{0} = {\mathbf{0}}\) and \(\delta_{k}\) is the step size. The minimization in Eq. (37) is equivalent to

$$\arg \min f_{\mu } ({\mathbf{R}}_{{\text{V}}} ) + \left\langle {\left. {{\mathbf{Y}},{\mathbf{P}} \bullet ({\mathbf{R}}_{{\text{E}}} - {\mathbf{R}}_{{\text{V}}} )} \right\rangle } \right. = \arg \min \mu \left\| {{\mathbf{R}}_{{\text{V}}} } \right\|_{ * } + \frac{1}{2}\left\| {{\mathbf{R}}_{{\text{V}}} - {\mathbf{P}} \bullet {\mathbf{Y}}} \right\|_{\text{F}}^{2}$$
(38)

where the minimizer of the right-hand side is \(D_{\mu } ({\mathbf{P}} \bullet {\mathbf{Y}})\). Since \({\mathbf{Y}}^{k} = {\mathbf{P}} \bullet {\mathbf{Y}}^{k}\) for all \(k \ge 0\), we obtain

$$\left\{ \begin{gathered} {\mathbf{R}}_{{\text{V}}}^{k} = D_{\mu } ({\mathbf{Y}}^{k - 1} ) \hfill \\ {\mathbf{Y}}^{k} = {\mathbf{Y}}^{k - 1} + \delta_{k} {\mathbf{P}} \bullet ({\mathbf{R}}_{{\text{E}}} - {\mathbf{R}}_{{\text{V}}}^{k} ) \hfill \\ \end{gathered} \right.$$
(39)

where \({\mathbf{Y}}\) is the transition matrix. To ensure that the sequence \(\{ {\mathbf{R}}_{{\text{V}}}^{k} \}\) converges to the solution of Eq. (30), \(\, \mu\) is chosen relatively large. There is a large body of literature on step-size selection, but for simplicity a fixed step size is used here, i.e., \(\delta_{k} = \delta ,k = 1,2, \cdots\). According to Ref.34, when \(0 < \delta_{k} < 2\), Eq. (39) converges to the unique solution of Eq. (30). To accelerate convergence, \(\delta = 1.2\frac{{n^{2} }}{L}\) is set, where \(n\) is the dimension of matrix \({\mathbf{R}}_{{\text{V}}}\) and \(L\) represents the number of samples.

In the \(k\)-th iteration, the number of singular values of \({\mathbf{Y}}^{k - 1}\) to be computed is denoted by

$$z_{k} = r_{k - 1} + 1$$
(40)
$$r_{k} = rank({\mathbf{Y}}^{k - 1} )$$
(41)

If the \(z_{k}\)-th singular value obtained during the iteration is less than \(\, \mu\), then \(z_{k}\) is sufficient; if it is greater than or equal to \(\, \mu\), then \(z_{k}\) is increased and the computation repeated.

When

$$\frac{{\left\| {{\mathbf{P}} \bullet ({\mathbf{R}}_{{\text{V}}}^{k} - {\mathbf{R}}_{{\text{E}}} )} \right\|_{{\text{F}}} }}{{\left\| {{\mathbf{P}} \bullet {\mathbf{R}}_{{\text{E}}} } \right\|_{{\text{F}}} }} \le \varepsilon$$
(42)

is satisfied, the iteration stops and the matrix \({\mathbf{R}}_{{\text{V}}}^{opt}\) is output, where \(\varepsilon\) is the stopping threshold. The optimal weight vector of the filter can then be obtained from \({\mathbf{R}}_{{\text{V}}}^{opt}\). Because the voids in the virtual matrix must be filled, the algorithm has a considerably increased computational complexity.
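Putting the pieces together, the following sketch implements the iteration of Eq. (39) with the stopping rule of Eq. (42). It treats \(\bullet\) as the element-wise product with the 0/1 mapping matrix, reuses `svt_shrink` defined above, and assumes the mask `P_mask`, the zero-filled matrix `R_E`, the threshold `mu`, and the step size `delta` as inputs rather than the paper's exact settings.

```python
import numpy as np

def svt_complete(R_E, P_mask, mu, delta, eps=1e-4, max_iter=500):
    """SVT matrix completion: fill the voids of R_E observed through P_mask."""
    Y = np.zeros_like(R_E)
    R_V = np.zeros_like(R_E)
    for _ in range(max_iter):
        R_V = svt_shrink(Y, mu)                   # R_V^k = D_mu(Y^{k-1}), Eq. (39)
        residual = P_mask * (R_E - R_V)           # mismatch on the observed entries only
        Y = Y + delta * residual                  # dual update with step size delta, Eq. (39)
        if np.linalg.norm(residual) <= eps * np.linalg.norm(P_mask * R_E):
            break                                 # stopping rule of Eq. (42)
    return R_V

# Example call with a fixed step size inside the convergence range 0 < delta < 2:
# R_V_opt = svt_complete(R_E, P_mask, mu=5 * R_E.shape[0], delta=1.2)
```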

Simulation results

This section compares the performance of the proposed SVT-RC-STAP with traditional STAP (T-STAP) and traditional coprime STAP (C-STAP). The parameters are set as \(\lambda = 0.2{\text{ m}}\), \(T_{r} = 0.5{\text{ ms}}\), \(v = 100{\text{ m/s}}\), \(\sigma_{{\text{n}}}^{2} = 1\), \(N_{c} = 361\), \(\beta = 1\), \(N_{1} = M_{1} = 3\), and \(N_{2} = M_{2} = 5\). The clutter-to-noise ratio (CNR) is 30 dB, and the signal-to-noise ratio (SNR) is 10 dB. The original number of elements is \(2N_{1} + N_{2} - 1 = 10\), and the corrected number of elements is 14. The normalized spatial and Doppler frequencies of the target are 0.1 and -0.2, respectively. The other parameters are set as \(\rho = 1.5\), \(\mu_{\max } = 10^{10}\), and \(\mu_{0} = 1.25/\left\| {{\mathbf{R}}_{E} } \right\|_{2}\).

Clutter Eigenspectrum

The comparison of the eigenvalues of the CCM for the three algorithms is shown in Fig. 2. Without gain-phase errors, the number of virtual elements and pulses of the C-STAP reaches 40. The proposed SVT-RC-STAP effectively fills the holes generated by the difference operation, increasing this number to 64. When the element gain error is 5% and the element phase error is 0.05π, denoted \(\left( {5\% ,0.05\pi } \right)\), a large number of pseudo clutter components are generated in the clutter subspace, which increases the number of large eigenvalues and decreases the CCM estimation accuracy. The proposed SVT-RC-STAP introduces correction elements to adjust the CCM, making its clutter DOF almost identical to that without errors.

Fig. 2 Eigenspectrum of CCM.

Output SINR

Figure 3 shows the output SINR of the three algorithms under different element gain-phase errors with a sample size of 200. From Fig. 3(a), the output SINR of the proposed SVT-RC-STAP under three different gain-phase error levels is almost identical to that without errors. This indicates that the proposed algorithm has strong error adaptability and can effectively suppress the signal distortion and performance degradation caused by element gain-phase errors. Figure 3(b) and (c) show the output SINR of the T-STAP and the C-STAP, respectively. It can be observed from the plots that the output SINR of these two algorithms decreases in the presence of gain-phase errors.

Fig. 3 Output SINR.

To further verify the convergence of the algorithm, Fig. 4 presents the relationship between the number of training samples and the output SINR under gain-phase errors \(\left( {10\% ,0.1\pi } \right)\). From the simulation results, the proposed SVT-RC-STAP has the best SINR performance. The C-STAP typically divides multiple equivalent sub-array signals of the same structure into virtual equivalent snapshots to calculate the CCM. However, it is difficult for the C-STAP to improve the clutter suppression performance using the virtual DOF in the presence of gain-phase errors. In contrast, the proposed SVT-RC-STAP still achieves good performance by using sparse recovery.

Fig. 4 The relationship between output SINR and training sample numbers.

Spatial Doppler clutter spectrum

This section further analyzes the proposed SVT-RC-STAP by comparing the spatial Doppler clutter spectra. The comparison results are shown in Fig. 5 with gain-phase errors \(\left( {5\% ,0.05\pi } \right)\). Figure 5 clearly demonstrates the widening of the spatial Doppler clutter spectrum caused by gain-phase errors in the T-STAP and C-STAP, along with the significant interference components generated in the clutter regions. The proposed SVT-RC-STAP corrects the impact of gain-phase errors by adding correction elements and calculating the correction matrix, thereby obtaining a better spatial Doppler clutter spectrum. The T-STAP and C-STAP are highly sensitive to gain-phase errors, resulting in significantly degraded performance. In contrast, the SVT-RC-STAP is not sensitive to these errors and maintains excellent performance.

Fig. 5 Spatial Doppler clutter spectrum.

Spatial Doppler beam pattern

As shown in Fig. 6, the proposed SVT-RC-STAP achieves maximum gain in the target direction and forms narrow and deep notches in the main clutter region, effectively suppressing clutter. Because the influence of the gain-phase errors cannot be completely eliminated, the T-STAP and C-STAP exhibit a certain deviation of the beam main lobe, resulting in reduced gain in the target direction. Moreover, the clutter notches of the T-STAP and C-STAP are relatively widened and fail to reach sufficient depth.

Fig. 6 Spatial Doppler patterns.

Target detection

This experiment evaluates the detection performance of the proposed SVT-RC-STAP by the detection probability (Pd), with the false alarm probability set to \(10^{ - 3}\). Figure 7 presents the detection performance of the three algorithms as a function of the target SNR. As shown in Fig. 7, the performance of the T-STAP, C-STAP, and the proposed SVT-RC-STAP improves with the increase of system DOF when there are no gain-phase errors. However, the proposed SVT-RC-STAP exhibits approximately optimal detection performance compared with the other two algorithms at various SNR values in the presence of gain-phase errors. This is because the proposed SVT-RC-STAP accurately compensates for the effects of model mismatch on the steering vector, which ensures that the filtering weight vector is correct and forms a narrow and deep notch in the main clutter region, ultimately achieving approximately optimal detection performance.

Fig. 7 Pd.

Conclusion

The focus of this study is to address the detrimental impact on system performance of amplitude and phase errors in airborne radar arrays. By incorporating four calibrated array elements into the original array, error compensation is performed on the clutter covariance matrix, which is subsequently extended and its missing entries filled. The singular value thresholding optimization algorithm is then utilized to recover the clutter covariance matrix. Comprehensive simulations comparing the clutter eigenspectrum, algorithm robustness, convergence, clutter space–time spectrum, space–time beam pattern, and target detection performance demonstrate the superiority of the proposed algorithm in complex scenarios. However, the computational complexity of the algorithm poses a challenge for real-time processing in engineering applications. Future research can delve deeper into methods to reduce this complexity.