Introduction

The Allen-Cahn (AC) equation, originally introduced to model phase separation phenomena in binary alloys, governs the spatial and temporal evolution of the order parameter \(u(t,\varvec{x})\)1,2,3. This parameter serves as a phase field variable that distinguishes the different phases of diverse physical systems. The equation is widely used to model phase transitions and interface dynamics in fluid dynamics, materials science, and biological systems1,2,4,5. In two dimensions, the AC equation is expressed as:

$$\begin{aligned} \frac{\partial u}{\partial t} = - \frac{F'(u)}{\varepsilon ^2} + \Delta u, \quad t \in (0, \infty ), \quad \varvec{x}\in \Omega , \end{aligned}$$
(1.1)

where \(F(u) = 0.25(1 - u^2)^2\) denotes the double-well potential, \(u(t, \varvec{x})\) indicates the phase field at \(\varvec{x}\), and \(\Omega \subset \mathbb {R}^2\) represents the two-dimensional spatial domain. The term \(\frac{F'(u)}{\varepsilon ^2}\) drives the phase transition, with \(\varepsilon\) governing the width of the interface between the two phases.
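As a quick concreteness check, the double-well potential and its derivative can be evaluated directly. A minimal Python sketch (the function names are ours, not from the paper) confirms that the two pure phases \(u=\pm 1\) are the minima of \(F\):

```python
import numpy as np

# Double-well potential F(u) = 0.25 (1 - u^2)^2 and its derivative
# F'(u) = u^3 - u; the minima at u = +/-1 are the two pure phases.
def F(u):
    return 0.25 * (1.0 - u**2)**2

def F_prime(u):
    return u**3 - u

print(F(1.0), F_prime(1.0), F_prime(-1.0))   # 0.0 0.0 0.0
```

Since \(F'(\pm 1)=0\) and \(F'(0)=0\), the dynamics drive \(u\) away from the unstable state \(u=0\) toward one of the two wells.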

The AC equation is mathematically characterized as a gradient flow within the \(L^2\)-inner product space, corresponding to the minimization of the Ginzburg–Landau free energy functional:

$$\begin{aligned} E(u) = \int _{\Omega } \left( \frac{F(u)}{\varepsilon ^2} + \frac{1}{2} |\nabla u|^2 \right) d\varvec{x}. \end{aligned}$$
(1.2)

The system’s total energy, represented by the functional \(E(u)\), decreases over time until it reaches a minimum at equilibrium. Differentiating \(E(u)\) with respect to time shows that the energy never increases:

$$\begin{aligned} \frac{d}{dt} E(u) = \int _{\Omega } \left( \frac{F'(u)}{\varepsilon ^2} - \Delta u \right) u_t \, d\varvec{x}= - \int _{\Omega } (u_t)^2 \, d\varvec{x}\le 0. \end{aligned}$$

To solve the AC equation, we begin by defining the initial condition for the phase field \(u(0, \varvec{x})\), representing the system’s state at the starting time. This condition is typically expressed through a predefined function \(u_0(\varvec{x})\) that specifies the spatial distribution of the phase at \(t = 0\). The initial configuration is formally given by:

$$\begin{aligned} u(0, \varvec{x}) = u_0(\varvec{x}), \end{aligned}$$
(1.3)

where \(u_0(\varvec{x})\) is selected based on the physical problem being modeled.

Despite significant interest in finding analytical solutions, no general closed-form solution of this nonlinear equation is available. This gap highlights the importance of efficient numerical methods for the AC equation; at the same time, obtaining accurate numerical solutions is itself challenging. Advancing numerical techniques for this problem is therefore essential to address theoretical questions and to facilitate practical applications in science and engineering.

In recent studies, many numerical techniques have been proposed by researchers. For instance, Poochinapan and Wongsaijai in6 developed a fourth-order compact difference scheme for one- and two-dimensional Allen–Cahn problems, treating the nonlinearity via a hybrid Crank–Nicolson and Adams–Bashforth framework. Lee et al.7 and Kim et al.8 proposed explicit finite difference methods, with the latter integrating graphics processing unit (GPU) acceleration and convolutional operations to boost computational efficiency. In9, the authors examined numerical solutions that are energy-dissipative and mass-conservative for simulating the evolution of phase separation dynamics, energy decay, and mass preservation. Lan et al.10 addressed the mass-conserving convective Allen–Cahn equation that is central to multiphase fluid modeling, by devising structure-preserving operator-splitting schemes. Additionally, Lee et al.11 formulated a high-order, unconditionally energy-stable method using a nonlocal Lagrange multiplier to enforce mass conservation and energy dissipation simultaneously. These efforts collectively advance the accuracy, stability, and applicability of numerical solutions for the Allen–Cahn equation.

The design of numerical methods that strictly preserve physical principles, such as energy dissipation and structure preservation, remains a central challenge, especially for complex gradient flows and dynamic systems12,13,14,15,16,17,18. Recently, various techniques were proposed to solve the AC equation, including the meshless RBF method19, localized RBF method20, maximum principle-preserving computational algorithms21, explicit numerical methods on cubic surfaces22, high-order Runge-Kutta schemes23, dimension splitting methods for two-phase flows24, third-order accurate schemes25, stability analysis via bifurcation and perturbation26, Crank-Nicolson schemes27, and linear second-order maximum bound principle-preserving schemes28.

Radial basis function-based techniques have been developed over recent decades as flexible computational tools for addressing diverse classes of partial differential equations (PDEs). These methods are recognized as rigorous tools for tackling high-dimensional problems, particularly for approximating scattered data. The growing importance of such meshless approaches in numerical methods for PDEs is due to their intrinsic advantages, including scalability to higher dimensions, adaptability to unstructured data, and the potential for spectral convergence rates. Some recent works employing RBF-based methods include the analysis of nonlinear Sobolev equations29, the investigation of generalized biharmonic equations under Cahn–Hilliard conditions30, introduction of the direct RBF partition of unity method for solving surface PDEs31, and the use of localized RBF approaches for incompressible two-phase fluid flows32. Additionally, meshless RBF-finite difference (RBF-FD) methods have been applied for 3D simulations in selective laser melting33, while inverse Cauchy problems have been solved using RBF techniques34.

Several investigations have explored local meshless techniques for solving various problems such as elliptic PDEs subject to multipoint boundary conditions35, Sobolev equations incorporating Burgers-type nonlinearity36, and coupled Klein–Gordon–Schrödinger equations37. Local methods based on the RBFs present distinct advantages over their global counterparts in the context of solving PDEs. By focusing on the local regions, these methods often require fewer computations compared to global methods that need to consider the entire domain. This efficiency is crucial for large-scale problems. Furthermore, local RBF methods allow for adaptive refinement, making them more effective in handling complex geometries and varying solution behaviors38,39.

Furthermore, the RBF-CFD method has proven effective in addressing computational challenges associated with global methods for solving differential equations40, demonstrating its versatility in applications such as solving Sobolev equations related to fluid dynamics41 and reaction-diffusion equations on surfaces42. A compact RBF-based partition of unity method built on polyharmonic spline kernels is proposed in43.

In this study, the RBF-CFD method is applied to solve two-dimensional AC equations. The RBF-CFD method is employed for spatial discretization of the problem, while the Strang splitting method is utilized for temporal discretization. The Strang splitting strategy is an effective method for solving differential equations, particularly useful for nonlinear equations and complex systems44. This method works by decomposing the equation into simpler components and solving each part separately, enhancing both accuracy and efficiency in simulating the dynamics of systems. Its importance lies in its ability to preserve the physical characteristics of the system and reduce numerical errors, making it especially valuable for long-term and complex simulations45,46,47,48. An overview of recent studies on numerical techniques for the AC equation is presented in Table 1.

Table 1 Overview of recent studies (2020–2025) on numerical techniques for the AC equation.

Despite these significant advancements, there remains a need for highly accurate, efficient, and meshless numerical schemes that handle the stiff nonlinear term of the 2D Allen-Cahn equation while preserving its critical physical properties. Localized RBF-FD methods, while addressing the ill-conditioning of global RBFs, often fall short in accuracy. Consequently, this study couples the high-order RBF-CFD method for spatial discretization with the efficient and accurate Strang splitting method for temporal discretization. The proposed method offers spectral-like accuracy on scattered nodes, a significant improvement over standard second-order local RBF-FD or conventional low-order finite difference methods, while avoiding the dense matrices of global RBFs. Furthermore, the Strang splitting method decomposes the AC equation into a linear part and a nonlinear part, which significantly enhances computational efficiency: the stiff linear term is handled implicitly and the nonlinear term explicitly, which is crucial for long-term simulations. The proposed scheme also preserves the fundamental energy decay law of the Allen-Cahn equation, a key physical characteristic.

The remainder of this paper is organized as follows: Section “RBF-CFD formulas” introduces the RBF-CFD method, focusing on the formulation of the RBF-CFD weights. In “Discrete setting: application of RBF-CFD weights and Strang splitting method”, we provide a detailed description of the numerical implementation, in which the spatial domain is discretized using the RBF-CFD method, and the Strang splitting technique is employed to efficiently split and solve the nonlinear and linear components in time. Section “Test examples” presents a series of numerical experiments to evaluate the accuracy, convergence, and robustness of the proposed method. Finally, Section “Conclusion” concludes the paper by summarizing the key findings and highlighting the method’s effectiveness in capturing the dynamic behavior of solutions to the Allen–Cahn equation.

RBF-CFD formulas

In this section, we discuss the Hermite-Birkhoff interpolation and the determination of RBF-CFD weights, as presented in the references40,41,42,43.

Lagrange form of conditionally positive definite RBF interpolation

Let \(K: \mathbb {R}^d\rightarrow \mathbb {R}\) be a conditionally positive definite RBF of order m. This means that it is positive definite with respect to polynomials of degree \(m-1\) in \(\mathbb {R}^d\), which we denote as \(\Pi _{m-1}(\mathbb {R}^d)\). Suppose that \(\mathbb {X}=\{\varvec{x}_1,\varvec{x}_2,...,\varvec{x}_N\}\) is a set of scattered points in an open bounded region \(\Omega\) in \(\mathbb {R}^d\). Suppose that \(\{u(\varvec{x}_1),u(\varvec{x}_2),...,u(\varvec{x}_N)\}\) are the values of the function u that correspond to the points \(\mathbb {X}\). Then the interpolant of u on the points \(\mathbb {X}\) can be written as

$$\begin{aligned} u_N(\varvec{x})=\Sigma _{i=1}^N \alpha _i K(\varvec{x}-\varvec{x}_i)+\Sigma _{j=1}^Q \beta _j p_j(\varvec{x}), \end{aligned}$$
(2.1)

where \(\{p_j\}_{j=1}^Q\) are the basis of \(\Pi _{m-1}(\mathbb {R}^d)\) with \(Q=(m-1+d)!/(d!(m-1)!)\). Now, the unknown coefficients \(\varvec{\alpha }=\{\alpha _1,\alpha _2,...,\alpha _N\}\) and \(\varvec{\beta }=\{\beta _1,\beta _2,...,\beta _Q\}\) are identified by imposing the interpolation conditions:

$$\begin{aligned}u_N(\varvec{x}_i)=u(\varvec{x}_i),\hspace{.5 cm}i=1,2,...,N,\end{aligned}$$

combined with the side conditions:

$$\begin{aligned}\Sigma _{i=1}^N \alpha _i p_j(\varvec{x}_i)=0,\hspace{.5 cm} j=1,2,...,Q.\end{aligned}$$

In the matrix form, we have:

$$\begin{aligned} \left( \begin{array}{cc} \varvec{B}_{K} & \varvec{P} \\ \varvec{P}^T& \varvec{0} \end{array}\right) \left( \begin{array}{cc} \varvec{\alpha } \\ \varvec{\beta } \end{array}\right) =\left( \begin{array}{cc} \varvec{u}_e \\ \varvec{0} \end{array}\right) , \end{aligned}$$
(2.2)

where

$$\begin{aligned} \begin{aligned}&\textbf{u}_e = \big (u(\varvec{x}_1),\ u(\varvec{x}_2),\ \ldots ,\ u(\varvec{x}_N)\big )^T, \\&(\varvec{B}_{K})_{ij} = K(\varvec{x}_j - \varvec{x}_i), \quad i,j = 1,2,\ldots ,N, \\&\varvec{P}_{ij} = p_j(\varvec{x}_i), \quad i = 1,2,\ldots ,N,\quad j = 1,2,\ldots ,Q. \end{aligned} \end{aligned}$$
(2.3)

Let \(\mathbb {X}\) be a \(\Pi _{m-1}(\mathbb {R}^d)\)-unisolvent set. Then, the system (2.2) has a unique solution49. To obtain the Lagrange form of the interpolant (2.1), we can rewrite (2.1) using (2.2) as follows:

$$\begin{aligned} \begin{array}{ll} u_N(\varvec{x})& =\left( \varvec{K}^T(\varvec{x}) \hspace{.3 cm} \varvec{p}^T(\varvec{x}) \right) \left( \begin{array}{cc} \varvec{\alpha } \\ \varvec{\beta } \end{array}\right) \\ & =\left( \varvec{K}^T(\varvec{x}) \hspace{.3 cm} \varvec{p}^T(\varvec{x}) \right) \left( \begin{array}{cc} \varvec{B}_{K} & \varvec{P} \\ \varvec{P}^T& \varvec{0} \end{array}\right) ^{-1} \left( \begin{array}{cc} \varvec{u}_e \\ \varvec{0} \end{array}\right) \\ & =:\left( \varvec{\vartheta }(\varvec{x}) \hspace{.3 cm} \varvec{\theta }(\varvec{x}) \right) \left( \begin{array}{cc} \varvec{u}_e \\ \varvec{0} \end{array}\right) \\ & =\varvec{\vartheta }(\varvec{x}) \varvec{u}_e=\Sigma _{i=1}^N \vartheta _i(\varvec{x}) u_e(\varvec{x}_i), \\ \end{array} \end{aligned}$$
(2.4)

where

$$\begin{aligned} \begin{array}{ll} \varvec{K}^T(\varvec{x})& =\left( K(\varvec{x}-\varvec{x}_1), K(\varvec{x}-\varvec{x}_2),...,K(\varvec{x}-\varvec{x}_N)\right) , \\ \varvec{p}^T(\varvec{x})& = \left( p_1(\varvec{x}), p_2(\varvec{x}),...,p_Q(\varvec{x})\right) ,\\ \varvec{\vartheta }(x) & = \left( \vartheta _1(\varvec{x}), \vartheta _2(\varvec{x}),...,\vartheta _N(\varvec{x}) \right) ,\\ \varvec{\theta }(\varvec{x})& =\left( \theta _1(\varvec{x}),\theta _2(\varvec{x}),...,\theta _Q(\varvec{x})\right) .\\ \end{array} \end{aligned}$$
(2.5)

Let \(\mathbb {L}\) be a linear operator. By applying \(\mathbb {L}\) to both sides of (2.4), we obtain an approximation of \(\mathbb {L}u(\varvec{x}_0)\) at a specific test point \(\varvec{x}_0\) as

$$\begin{aligned} \mathbb {L}u(\varvec{x}_0)\approx \mathbb {L}u_N(\varvec{x}_0)= \Sigma _{i=1}^N \mathbb {L}\vartheta _i(\varvec{x}_0) u_e(\varvec{x}_i). \end{aligned}$$
(2.6)
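For intuition, the weights \(\mathbb {L}\vartheta _i(\varvec{x}_0)\) in (2.6) can be computed by solving the saddle-point system (2.2) with the operator applied to its right-hand side. Below is a minimal Python sketch (our own code, not the paper's implementation) for \(\mathbb {L}=\Delta\) in 2D with the polyharmonic kernel \(K(r)=r^3\) augmented by quadratic polynomials (\(Q=6\)); note \(\Delta r^3 = 9r\) in two dimensions:

```python
import numpy as np

# Sketch: weights gamma such that Delta(u)(x0) ~ sum_i gamma_i u(x_i),
# obtained from the augmented saddle system with K(r) = r^3 and the
# quadratic polynomial basis {1, x, y, x^2, xy, y^2}.
def laplacian_weights(x0, X):
    n = len(X)
    r = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
    B = r**3                                        # kernel matrix B_K
    P = np.column_stack([np.ones(n), X[:, 0], X[:, 1],
                         X[:, 0]**2, X[:, 0]*X[:, 1], X[:, 1]**2])
    A = np.block([[B, P], [P.T, np.zeros((6, 6))]])
    r0 = np.linalg.norm(X - x0, axis=1)
    rhs = np.concatenate([9.0 * r0,                 # Delta(r^3) = 9r in 2D
                          [0.0, 0.0, 0.0, 2.0, 0.0, 2.0]])  # Delta of basis
    return np.linalg.solve(A, rhs)[:n]              # drop polynomial part

# The polynomial side conditions make the weights exact for quadratics:
rng = np.random.default_rng(0)
X = rng.random((12, 2))
w = laplacian_weights(np.array([0.5, 0.5]), X)
u = X[:, 0]**2 + 3.0*X[:, 1]**2                     # Delta(u) = 8
print(np.dot(w, u))                                 # ~ 8.0
```

The side conditions \(\varvec{P}^T\varvec{\gamma } = \Delta \varvec{p}(\varvec{x}_0)\) guarantee that the resulting stencil reproduces the Laplacian exactly on the augmented polynomial space, which is the mechanism behind the method's algebraic consistency order.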

The Hermite-Birkhoff interpolation

Let \(\varvec{\eta } = \{\eta _1, \eta _2, \dots , \eta _N \} \subset \mathbb {N}_0^d\) be a set of multi-indices such that \(|\eta _i| \le n_0\), where \(n_0\) is a predetermined integer in \(\mathbb {N}_0\). The Hermite-Birkhoff interpolation of a function \(u\) can then be expressed as50:

$$\begin{aligned} u_N(\varvec{x})=\Sigma _{i=1}^N \alpha _i \mathbb {D}^{\eta _i}_2 K(\varvec{x}-\varvec{x}_i). \end{aligned}$$
(2.7)

The subscript 2 on a differential operator \(\mathbb {D}^{\eta _i}\) indicates that the operator is applied to the kernel K with respect to its second argument. The unknown coefficients in the expression above can be determined by the following conditions:

$$\begin{aligned} \mathbb {D}^{\eta _j} u_N(\varvec{x}_j)=\mathbb {D}^{\eta _j} u(\varvec{x}_j), \hspace{.5 cm} j=1,2,...,N. \end{aligned}$$
(2.8)

It should be noted that the conditions must be distinct, at least at the points or with respect to the differential operators. The matrix formulation of this interpolation system can be expressed as:

$$\begin{aligned} \varvec{B}_{K,\varvec{\eta }} \varvec{\alpha }=\mathbb {D}^{\varvec{\eta }}u|_{\mathbb {X}}, \end{aligned}$$
(2.9)

where

$$\begin{aligned} \begin{aligned}&(\varvec{B}_{K,\varvec{\eta }})_{ij} = \mathbb {D}_1^{\eta _i} \mathbb {D}_2^{\eta _j} K(\varvec{x}_i - \varvec{x}_j), \quad i,j = 1,2,\ldots ,N, \\&\mathbb {D}^{\varvec{\eta }}u|_{\mathbb {X}} = \big ( \mathbb {D}^{\eta _1} u(\varvec{x}_1),\ \mathbb {D}^{\eta _2} u(\varvec{x}_2),\ \ldots ,\ \mathbb {D}^{\eta _N} u(\varvec{x}_N) \big )^T, \\&\varvec{\alpha } = \big ( \alpha _1,\ \alpha _2,\ \ldots ,\ \alpha _N \big )^T. \end{aligned} \end{aligned}$$
(2.10)

The subscripts 1 and 2 denote that the differential operator is applied to the kernel K with respect to its first and second arguments, respectively. Similar to (2.1), the interpolant (2.7) can be enhanced by adding polynomial terms as follows:

$$\begin{aligned} u_N(\varvec{x})=\Sigma _{i=1}^N \alpha _i \mathbb {D}^{\eta _i}_2 K(\varvec{x}-\varvec{x}_i)+\Sigma _{j=1}^Q \beta _j p_j(\varvec{x}), \end{aligned}$$
(2.11)

with the conditions in (2.8) in conjunction with the following conditions

$$\begin{aligned}\Sigma _{i=1}^N \alpha _i \mathbb {D}^{\eta _i}p_j(\varvec{x}_i)=0,\hspace{.5 cm} j=1,2,...,Q.\end{aligned}$$

Therefore, the matrix system would be as follows:

$$\begin{aligned} \left( \begin{array}{cc} \varvec{B}_{K,\varvec{\eta }} & \varvec{P}_{\varvec{\eta }} \\ \varvec{P}_{\varvec{\eta }}^T& \varvec{0} \end{array}\right) \left( \begin{array}{cc} \varvec{\alpha } \\ \varvec{\beta } \end{array}\right) =\left( \begin{array}{cc} \mathbb {D}^{\varvec{\eta }}u|_{\mathbb {X}} \\ \varvec{0} \end{array}\right) , \end{aligned}$$
(2.12)

where

$$\begin{aligned}\left( \varvec{P}_{\varvec{\eta }}\right) _{ij}=\mathbb {D}^{\eta _i}p_j(\varvec{x}_i), \hspace{.5 cm} i=1,2,...,N,\quad j=1,2,...,Q.\end{aligned}$$

It can be proven that for an m-order conditionally positive definite function K and linearly independent operators \(\mathbb {D}^{\eta _i}\), \(i=1,2,...,N\), if the only polynomial \(p \in \Pi _{m-1}(\mathbb {R}^d)\) satisfying \(\mathbb {D}^{\eta _i} p = 0\) for all \(i=1,2,\ldots ,N\) is \(p = 0\), then the system (2.12) has a unique solution39.

Let \(\eta\) be a multi-index such that \(|\eta |\le n_0\). To approximate \(\mathbb {D}^\eta u(\varvec{x}_0)\) using the values of \(\mathbb {D}^{\eta _i} u(\varvec{x}_i)\) for \(i = 1, 2, \dots , N\), we can write:

$$\begin{aligned} \mathbb {D}^\eta u(\varvec{x}_0)\approx \Sigma _{i=1}^N \gamma _i \mathbb {D}^{\eta _i} u(\varvec{x}_i). \end{aligned}$$
(2.13)

To obtain the coefficients \(\gamma _1, \gamma _2, \dots , \gamma _N\), we operate \(\mathbb {D}^\eta\) on both sides of (2.11), and we have:

$$\begin{aligned} \mathbb {D}^\eta u_N(\varvec{x}_0)=\Sigma _{i=1}^N \alpha _i \mathbb {D}^\eta _1\mathbb {D}^{\eta _i}_2 K(\varvec{x}_0-\varvec{x}_i)+\Sigma _{j=1}^Q \beta _j \mathbb {D}^\eta p_j(\varvec{x}_0), \end{aligned}$$
(2.14)

and then, utilizing (2.12), we can get:

$$\begin{aligned} \left( \begin{array}{cc} \varvec{B}_{K,\varvec{\eta }} & \varvec{P}_{\varvec{\eta }} \\ \varvec{P}_{\varvec{\eta }}^T& \varvec{0} \end{array}\right) \left( \begin{array}{cc} \varvec{\gamma } \\ \varvec{\nu } \end{array}\right) =\left( \begin{array}{cc} \mathbb {D}^\eta _1\mathbb {D}^{\varvec{\eta }}_{2} \varvec{K}(\varvec{x}_0)\\ \mathbb {D}^\eta \varvec{p}(\varvec{x}_0)\\ \end{array}\right) , \end{aligned}$$
(2.15)

where

$$\begin{aligned} \begin{array}{cc} \mathbb {D}^\eta _1\mathbb {D}^{\varvec{\eta }}_2 \varvec{K}(\varvec{x}_0)=\left( \mathbb {D}^\eta _1\mathbb {D}^{\eta _1}_2 K(\varvec{x}_0-\varvec{x}_1), \mathbb {D}^\eta _1\mathbb {D}^{\eta _2}_2 K(\varvec{x}_0-\varvec{x}_2),..., \mathbb {D}^\eta _1\mathbb {D}^{\eta _N}_2 K(\varvec{x}_0-\varvec{x}_N)\right) ^{T},\\ \varvec{\gamma } = \mathbb {D}^\eta _1\mathbb {D}^{\varvec{\eta }}_2 \varvec{\vartheta }(\varvec{x}_0),\hspace{.5 cm} \varvec{\nu } = \mathbb {D}^\eta \varvec{\theta }(\varvec{x}_0).\\ \end{array} \end{aligned}$$

Determination of RBF-CFD weights

Consider the partial differential equation \(\mathbb {L}u = g\) defined on a domain \(\Omega\), where \(\mathbb {L}\) is a linear operator with constant coefficients and g is a given function. Let \(\mathbb {X}=\{\varvec{x}_1,\varvec{x}_2,...,\varvec{x}_N\}\) be a set of discrete trial points within the domain \(\Omega\) and \(\mathbb {X}_0=\{\varvec{x}_1, \varvec{x}_2, \ldots , \varvec{x}_n\}\) be a stencil for a specific test point \(\varvec{x}_0\). Let \(\mathbb {I}=\{1,2,...,n\}\), \(\bar{\mathbb {I}}\) be an index set of size \(\bar{n}\) such that \(\bar{\mathbb {I}}\subseteq \mathbb {I}\), and \(\bar{\mathbb {X}}_0=\{\varvec{x}_j:j\in \bar{\mathbb {I}}\}\subseteq \mathbb {X}_0\). The RBF-CFD approximation of \(\mathbb {L}u(\varvec{x}_0)\) can be expressed as follows:

$$\begin{aligned} \begin{array}{ll} \mathbb {L}u(\varvec{x}_0)& \approx \Sigma _{i\in \mathbb {I}} \gamma _i u(\varvec{x}_i)+\Sigma _{i\in \bar{\mathbb {I}}} \bar{\gamma }_i \mathbb {L}u(\varvec{x}_i) \\ & =\Sigma _{i\in \mathbb {I}} \mathbb {L}\varrho _i(\varvec{x}_0) u(\varvec{x}_i)+\Sigma _{i\in \bar{\mathbb {I}}} \mathbb {L}\mathbb {L}\bar{\varrho }_i(\varvec{x}_0) \mathbb {L}u(\varvec{x}_i), \end{array} \end{aligned}$$
(2.16)

where \(\varrho _i\) and \(\bar{\varrho }_i\) are the Lagrange functions on sets \(\mathbb {X}_0\) and \(\bar{\mathbb {X}}_0\). The Lagrange functions are obtained using the following system

$$\begin{aligned} \left( \begin{array}{lll} \varvec{B} & \varvec{B}_{\mathbb {L}}^1 & \varvec{P} \\ \varvec{B}_{\mathbb {L}}^2 & \varvec{B}_{\mathbb {L}\mathbb {L}}& \varvec{P}_{\mathbb {L}}\\ \varvec{P}^T& \varvec{P}_{\mathbb {L}}^T& 0 \end{array}\right) \left( \begin{array}{cc} \varvec{\gamma } \\ \bar{\varvec{\gamma }}\\ \varvec{\nu }\\ \end{array}\right) = \left( \begin{array}{cc} \mathbb {L}\varvec{K} \\ \mathbb {L}\mathbb {L}\bar{\varvec{K}} \\ \mathbb {L}\varvec{p} \\ \end{array}\right) , \end{aligned}$$
(2.17)

where \(\varvec{B}_{K,\varvec{\eta }}=\left( \begin{array}{ll} \varvec{B} & \varvec{B}_{\mathbb {L}}^1 \\ \varvec{B}_{\mathbb {L}}^2 & \varvec{B}_{\mathbb {L}\mathbb {L}}\\ \end{array}\right)\) and \(\varvec{P}_{\varvec{\eta }}= \left( \begin{array}{l} \varvec{P} \\ \varvec{P}_{\mathbb {L}}\\ \end{array}\right)\) such that

$$\begin{aligned} \begin{array}{ll} (\varvec{B})_{kj} & = K(\varvec{x}_k-\varvec{x}_j), \hspace{.5 cm} k,j \in \mathbb {I},\\ (\varvec{B}_{\mathbb {L}}^1)_{kj} & = \mathbb {L}_2K(\varvec{x}_k-\varvec{x}_j),\hspace{.5 cm} k\in \mathbb {I},\ j\in \bar{\mathbb {I}}, \\ (\varvec{B}_{\mathbb {L}}^2)_{kj} & = \mathbb {L}_1K(\varvec{x}_k-\varvec{x}_j),\hspace{.5 cm} k\in \bar{\mathbb {I}},\ j\in \mathbb {I}, \\ (\varvec{B}_{\mathbb {L}\mathbb {L}})_{kj} & =\mathbb {L}_1 \mathbb {L}_2 K(\varvec{x}_k-\varvec{x}_j), \hspace{.5 cm} k,j\in \bar{\mathbb {I}}, \\ (\varvec{P})_{kj}& =p_j(\varvec{x}_k),\hspace{.5 cm} k\in \mathbb {I},\ j=1,2,...,Q,\\ (\varvec{P}_{\mathbb {L}})_{kj}& =\mathbb {L}p_j(\varvec{x}_k),\hspace{.5 cm} k\in \bar{\mathbb {I}},\ j=1,2,...,Q,\\ \end{array} \end{aligned}$$
(2.18)

and the right hand side vectors are

$$\begin{aligned} \begin{array}{ll} \mathbb {L}\varvec{K}& =\left( \mathbb {L}_1 K(\varvec{x}_0-\varvec{x}_1), \mathbb {L}_1 K(\varvec{x}_0-\varvec{x}_2),...,\mathbb {L}_1 K(\varvec{x}_0-\varvec{x}_n) \right) ^T, \\ \mathbb {L}\mathbb {L}\bar{\varvec{K}}& =\left( \mathbb {L}_1 \mathbb {L}_2 K(\varvec{x}_0-\varvec{x}_{\bar{\mathbb {I}}_1}), \mathbb {L}_1 \mathbb {L}_2 K(\varvec{x}_0-\varvec{x}_{\bar{\mathbb {I}}_2}),...,\mathbb {L}_1 \mathbb {L}_2 K(\varvec{x}_0-\varvec{x}_{\bar{\mathbb {I}}_{\bar{n}}}) \right) ^T, \\ \mathbb {L}\varvec{p}& =\left( \mathbb {L}p_1(\varvec{x}_0), \mathbb {L}p_2(\varvec{x}_0),..., \mathbb {L}p_Q(\varvec{x}_0)\right) ^T. \\ \end{array} \end{aligned}$$
(2.19)

To construct a discrete version of \(\mathbb {L}u\) using the RBF-CFD method, we consider a set of test points \(\mathbb {Y} = \{y_1, y_2, ..., y_m\}\), which may differ from the trial set \(\mathbb {X}\) within the domain \(\Omega\). For each test point \(y_k\), we create its corresponding stencil \(\mathbb {X}_k \subset \mathbb {X}\) and replace \(\varvec{x}_0\) in the previous formulation with \(y_k\) to obtain the weight vectors \(\varvec{\gamma }_k\) and \(\bar{\varvec{\gamma }}_k\) by solving the local system (2.17) associated with the stencil \(\mathbb {X}_k\). These vectors contain the non-zero coefficients needed to approximate \(\mathbb {L}u(y_k)\) using the function values \(u(\textbf{x}_i)\) and the \(\mathbb {L}u(\textbf{x}_i)\) values at the stencil points. To assemble the global matrix operators, we map these local weight vectors into rows corresponding to \(y_k\). Specifically, the k-th row of the \(m \times N\) matrix \(\mathbb {M}^{(\mathbb {L})}\) is constructed by placing the n elements of \(\varvec{\gamma }_k\) into the columns corresponding to their respective indices in the global trial set \(\mathbb {X}\), and setting all other \(N-n\) entries in that row to zero. A similar procedure is followed for the matrix \(\widehat{\mathbb {M}}^{({\mathbb {L}})}\) using \(\bar{\varvec{\gamma }}_k\).

Therefore, by expanding both \(\varvec{\gamma }_k\) and \(\bar{\varvec{\gamma }}_k\) by adding zeros and organizing them into the rows of the global matrices \(\mathbb {M}^{(\mathbb {L})}\) and \(\widehat{\mathbb {M}}^{(\mathbb {L})}\), we derive the RBF-CFD approximation:

$$\begin{aligned} \left( \mathbb {L} u \right) \big |_{\mathbb {Y}} \approx \mathbb {M}^{(\mathbb {L})} \, \varvec{u}_e + \widehat{\mathbb {M}}^{(\mathbb {L})} \, \mathbb {L} \, \varvec{u}_e, \end{aligned}$$

where \(\mathbb {L} \, \varvec{u}_e=\left( \mathbb {L} u(x_1), \mathbb {L} u(x_2),...,\mathbb {L} u(x_N) \right) ^T=\left( g(x_1), g(x_2),...,g(x_N) \right) ^T\) and \(\varvec{u}_e\) is defined as before.
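The row-wise scattering of local weights into the global operator matrix described above can be sketched as follows (a dense NumPy illustration of our own; in practice a sparse format such as CSR would be used, and the stencil indices and weight vectors are assumed precomputed):

```python
import numpy as np

# Scatter each local weight vector gamma_k into row k of the global
# m x N operator matrix; the other N - n entries of the row stay zero.
def assemble_global(weights, stencils, m, N):
    M = np.zeros((m, N))
    for k, (w, idx) in enumerate(zip(weights, stencils)):
        M[k, idx] = w
    return M

# Usage: two test points with stencils of size 2 in a trial set of 4.
M = assemble_global(weights=[np.array([1.0, 2.0]), np.array([3.0, 4.0])],
                    stencils=[[0, 2], [1, 3]], m=2, N=4)
print(M)   # rows [1, 0, 2, 0] and [0, 3, 0, 4]
```

The same routine applied to the \(\bar{\varvec{\gamma }}_k\) vectors yields \(\widehat{\mathbb {M}}^{(\mathbb {L})}\); since each row holds only n (or \(\bar{n}\)) nonzeros, both matrices are sparse by construction.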

In the RBF-based methods, the selection of the RBF shape parameter is of significant importance, as its proper choice crucially impacts the method’s effectiveness and accuracy51,52. This sensitivity can pose a major challenge in practical implementation. In this work, we utilize the polyharmonic radial kernel in our local approximation. This choice is advantageous because polyharmonic kernels are generally parameter-free, meaning they yield efficient interpolation results for scattered data without the need to select or tune a shape parameter53. This strategy eliminates the stability and accuracy complexities associated with shape parameter optimization.

Discrete setting: application of RBF-CFD weights and Strang splitting method

Let the set \(\mathbb {X}=\{ \varvec{x}_k \mid k=1, 2, \ldots , N \}\) represent a set of scattered points within the domain \(\Omega\). For a fixed point \(\varvec{x}\in \Omega\), we have:

$$\begin{aligned} \Delta u(t,\varvec{x})\approx (\varvec{m}^{(\Delta )}(\varvec{x}))^{T}\,\varvec{u}_e(t)+(\widehat{\varvec{m}}^{(\Delta )}(\varvec{x}))^{T}\,\Delta \varvec{u}_e(t),~\varvec{x}\in \Omega , \end{aligned}$$
(3.1)

where

$$\begin{aligned} \varvec{u}_e(t) =u(t,\cdot )|_{\mathbb {X}}. \end{aligned}$$

From (3.1), we derive:

$$\begin{aligned} \Delta u(t,\varvec{x}_k)\approx (\varvec{m}_{k}^{(\Delta )})^{T}\,\varvec{u}_e(t)+(\widehat{\varvec{m}}_{k}^{(\Delta )})^{T}\,\Delta \varvec{u}_e(t),~ k=1,2,\ldots ,N. \end{aligned}$$
(3.2)

As a result, we can express it as:

$$\begin{aligned} \Delta \varvec{u}_e(t)\approx (\mathbb {I}-\widehat{\mathbb {M}}^{(\Delta )})^{-1} \mathbb {M}^{(\Delta )}\, \varvec{u}_e(t), \end{aligned}$$
(3.3)

where

$$\begin{aligned} & \mathbb {M}^{(\Delta )}=\left[ \begin{array}{cccc} (\varvec{m}_{1}^{(\Delta )})^{T} |&(\varvec{m}_{2}^{(\Delta )})^{T} |&\ldots |&(\varvec{m}_{N}^{(\Delta )})^{T} \end{array} \right] ^T,\\ & \widehat{\mathbb {M}}^{(\Delta )}=\left[ \begin{array}{cccc} (\widehat{\varvec{m}}_{1}^{(\Delta )})^{T} |&(\widehat{\varvec{m}}_{2}^{(\Delta )})^{T} |&\ldots |&(\widehat{\varvec{m}}_{N}^{(\Delta )})^{T} \end{array} \right] ^T. \end{aligned}$$

Thus, the approach for solving (1.1)-(1.3) can be expressed as follows:

$$\begin{aligned} \frac{d}{dt} \varvec{u}(t) - (\mathbb {A}^{(\Delta )})^{-1} \mathbb {M}^{(\Delta )} \varvec{u}(t) = -\frac{1}{\varepsilon ^2} F^{'}(\varvec{u}(t)), \quad t \in (0, \infty ), \end{aligned}$$
(3.4)

with the following initialization:

$$\begin{aligned} \varvec{u}(0) = \varvec{u}_0, \end{aligned}$$
(3.5)

where \(\mathbb {A}^{(\Delta )} := \mathbb {I} - \widehat{\mathbb {M}}^{(\Delta )}\).

We now apply the Strang splitting method to (3.4). For a PDE of the form \(\frac{\partial u}{\partial t} = (A + B) u\), the Strang splitting method advances the solution from \(u^n\) to \(u^{n+1}\) over a time step \(\Delta t\) as follows44,54:

$$\begin{aligned} u^{n+1}=\left( A^{\Delta t/2}\circ B^{\Delta t}\circ A^{\Delta t/2}\right) u^n, \end{aligned}$$

where \(A^{\Delta t/2}\) and \(B^{\Delta t}\) are the evolution operators for \(\frac{\partial u}{\partial t} = A u\) and \(\frac{\partial u}{\partial t} = B u\), respectively.
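As a toy illustration (our own example, not from the paper), consider the scalar ODE \(u' = (a+b)u\), where each sub-flow is known in closed form. Since scalars commute, the Strang composition reproduces the exact solution here; for non-commuting operators it is second-order accurate in the step size:

```python
import numpy as np

# Strang composition A^{dt/2} o B^{dt} o A^{dt/2} for u' = (a + b) u,
# with each sub-flow given by its exact exponential propagator.
a, b, dt, u0 = -1.0, 0.5, 0.1, 2.0
half_A = np.exp(a * dt / 2.0)        # propagator of u' = a u over dt/2
full_B = np.exp(b * dt)              # propagator of u' = b u over dt
u1 = half_A * full_B * (half_A * u0)
print(np.isclose(u1, u0 * np.exp((a + b) * dt)))   # True
```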

For solving (3.4)-(3.5), the time interval \([0, t_F]\) is divided into \(N_t\) subintervals of step size \(\delta t\), where \(t_n = n\delta t\). The Strang splitting procedure for (3.4) consists of the following three steps:

  1. 1.

    First step: Solve the following equation for \(\varvec{v}(t)\) over the time interval \((t_n, t_{n+1}]\):

    $$\begin{aligned} \left\{ \begin{array}{ll} \dfrac{d}{dt}\varvec{v}(t) = -\dfrac{1}{2\varepsilon ^2}\, F'(\varvec{v}(t)), & t \in (t_n,t_{n+1}], \\[2ex] \varvec{v}(t_n) = \varvec{u}_{spl}(t_n). \end{array} \right. \end{aligned}$$
    (3.6)
  2. 2.

    Second step: Solve the following equation for \(\varvec{w}(t)\) over the same time interval:

    $$\begin{aligned} \left\{ \begin{array}{ll} \dfrac{d}{dt}\varvec{w}(t) = (\mathbb {A}^{(\Delta )})^{-1} \mathbb {M}^{(\Delta )}\, \varvec{w}(t), & t \in (t_n,t_{n+1}], \\[2ex] \varvec{w}(t_n) = \varvec{v}(t_{n+1}). \end{array} \right. \end{aligned}$$
    (3.7)
  3. 3.

    Third step: Solve the following equation for \(\varvec{u}(t)\):

    $$\begin{aligned} \left\{ \begin{array}{ll} \dfrac{d}{dt}\varvec{u}(t) = -\dfrac{1}{2\varepsilon ^2}\, F'(\varvec{u}(t)), & t \in (t_n,t_{n+1}], \\[2ex] \varvec{u}(t_n) = \varvec{w}(t_{n+1}), \\[2ex] \varvec{u}_{spl}(t_{n+1}):= \varvec{u}(t_{n+1}). \end{array} \right. \end{aligned}$$
    (3.8)

Here, \(n=0:N_t-1\), and \(\varvec{u}_{spl}(t_{n+1})= \varvec{u}(t_{n+1})\) represents the solution at time \(t_{n+1}\).

To solve equation (3.7), we employ a finite difference method based on the \(\theta\)-rule55. Furthermore, we exploit the exact solutions of equations (3.6) and (3.8) to develop a three-step algorithm:

  1. 1.

    First step: In this step, the updated value of \(\varvec{u}^{n*}\) is calculated using the previous value \(\varvec{u}^n\):

    $$\begin{aligned} \varvec{u}^{n*} = \dfrac{\varvec{u}^{n}}{\sqrt{\exp \left( -\dfrac{\delta t}{\varepsilon ^2}\right) + (\varvec{u}^{n})^2 \left( 1 - \exp \left( -\dfrac{\delta t}{\varepsilon ^2}\right) \right) }}. \end{aligned}$$
  2. 2.

    Second step: At this stage, we solve the following system for \(\varvec{u}^{n**}\):

    $$\begin{aligned} \varvec{u}^{n**} - \theta \, \delta t \, (\mathbb {A}^{(\Delta )})^{-1} \mathbb {M}^{(\Delta )} \varvec{u}^{n**} = \varvec{u}^{n*} + (1-\theta ) \, \delta t \, (\mathbb {A}^{(\Delta )})^{-1} \mathbb {M}^{(\Delta )} \varvec{u}^{n*}. \end{aligned}$$
  3. 3.

    Third step: Finally, in this step, we compute \(\varvec{u}^{n+1}\) using the previous value \(\varvec{u}^{n**}\):

    $$\begin{aligned} \varvec{u}^{n+1} = \dfrac{\varvec{u}^{n**}}{\sqrt{\exp \left( -\dfrac{\delta t}{\varepsilon ^2}\right) + (\varvec{u}^{n**})^2 \left( 1 - \exp \left( -\dfrac{\delta t}{\varepsilon ^2}\right) \right) }}, \end{aligned}$$

where \(n=0:N_t-1\); \(\varvec{u}^n\) represents the approximate value of \(\varvec{u}(t_n)\), and \(\varvec{u}^{n*}\) and \(\varvec{u}^{n**}\) are the intermediate solutions.
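One full time step of the scheme above can be sketched in Python as follows; here `L` stands in for \((\mathbb {A}^{(\Delta )})^{-1} \mathbb {M}^{(\Delta )}\) (any matrix approximating the Laplacian works for illustration, and a periodic 1D finite-difference Laplacian is substituted below), and all function names are ours:

```python
import numpy as np

# Exact solution of du/dt = -F'(u)/(2 eps^2) over one step (steps 1 and 3).
def nonlinear_substep(u, dt, eps):
    e = np.exp(-dt / eps**2)
    return u / np.sqrt(e + u**2 * (1.0 - e))

# Theta-rule for the linear substep (step 2), wrapped by the two halves.
def strang_step(u, dt, eps, L, theta=0.5):
    u = nonlinear_substep(u, dt, eps)                # first step
    I = np.eye(len(u))
    rhs = u + (1.0 - theta) * dt * (L @ u)
    u = np.linalg.solve(I - theta * dt * L, rhs)     # second step
    return nonlinear_substep(u, dt, eps)             # third step

# Usage: a pure phase u = 1 is a steady state of the full scheme.
N, dt, eps = 16, 0.01, 0.1
L = np.roll(np.eye(N), 1, 0) + np.roll(np.eye(N), -1, 0) - 2*np.eye(N)
u = strang_step(np.ones(N), dt, eps, L)
print(np.allclose(u, 1.0))   # True
```

Note that the closed-form nonlinear substep maps \([-1,1]\) into itself, which is consistent with the bound-preserving behavior expected of the Allen-Cahn dynamics.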

For the second step, the linear system can be solved via the singular value decomposition (SVD), as follows:

$$\begin{aligned} \varvec{u}^{n**} - \theta \, \delta t\, (\mathbb {A}^{(\Delta )})^{-1} \mathbb {M}^{(\Delta )} \, \varvec{u}^{n**} = \varvec{u}^{n*} + (1 - \theta )\, \delta t\, (\mathbb {A}^{(\Delta )})^{-1}\, \mathbb {M}^{(\Delta )} \varvec{u}^{n*}. \end{aligned}$$

We rewrite it in the matrix form:

$$\begin{aligned} \mathbb {W} \varvec{u}^{n**} = \varvec{b}, \end{aligned}$$

where

$$\begin{aligned} \mathbb {W} = I - \theta \, \delta t\, (\mathbb {A}^{(\Delta )})^{-1} \, \mathbb {M}^{(\Delta )}, \end{aligned}$$

and

$$\begin{aligned} \varvec{b} = \varvec{u}^{n*} + (1 - \theta ) \delta t (\mathbb {A}^{(\Delta )})^{-1} \mathbb {M}^{(\Delta )} \varvec{u}^{n*}. \end{aligned}$$

To solve for \(\varvec{u}^{n**}\) using SVD, we proceed as follows:

  1. First step: calculate the SVD of \(\mathbb {W}\):

    $$\begin{aligned} \mathbb {W} = \mathbb {U} \, \mathbb {S} \, \mathbb {V}^T, \end{aligned}$$

    where

    • \(\mathbb {U}\) and \(\mathbb {V}\) denote orthogonal matrices.

    • \(\mathbb {S}\) is a diagonal matrix containing the singular values.

  2. Second step: compute the inverse of \(\mathbb {S}\):

    $$\begin{aligned} \mathbb {S}^{-1} = \text {diag} \left( \frac{1}{\sigma _i} \right) , \end{aligned}$$

    where \(\sigma _i\) are the nonzero singular values of \(\mathbb {W}\), i.e., the diagonal entries of \(\mathbb {S}\).

  3. Third step: compute the pseudoinverse of \(\mathbb {W}\):

    $$\begin{aligned} \mathbb {W}^{-1} = \mathbb {V}\, \mathbb {S}^{-1} \, \mathbb {U}^T. \end{aligned}$$
  4. Fourth step: solve for \(\varvec{u}^{n**}\):

    $$\begin{aligned} \varvec{u}^{n**} = \mathbb {W}^{-1} \, \varvec{b}. \end{aligned}$$
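The four SVD steps above can be sketched in a few lines of NumPy; `solve_via_svd` is a hypothetical helper name, and a tolerance is used so that only nonzero singular values are inverted:

```python
import numpy as np

def solve_via_svd(W, b, tol=1e-12):
    """Solve W u = b via the pseudoinverse W^+ = V S^{-1} U^T."""
    U, s, Vt = np.linalg.svd(W)          # step 1: W = U S V^T
    s_inv = np.where(s > tol, 1.0 / s, 0.0)  # step 2: invert nonzero sigma_i
    # steps 3-4: apply V S^{-1} U^T to b without forming the matrix explicitly
    return Vt.T @ (s_inv * (U.T @ b))
```

For a well-conditioned \(\mathbb {W}\) this coincides with a direct solve; the SVD route additionally handles (near-)singular systems gracefully.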

Test examples

In this section, we present several numerical experiments to evaluate the performance, accuracy, and stability of the proposed method when applied to the two-dimensional Allen-Cahn equation. We examine the convergence behavior with respect to spatial and temporal discretization parameters and investigate the capability of the method to preserve qualitative properties of the solution, such as energy decay over time. In the numerical examples, we employed polyharmonic splines of degree 5, augmented by polynomials of degree 5, collectively denoted as \(PHS5+P5\).

Convergence and accuracy tests

Example 1

For \(\varepsilon =1\) and \(\Omega =[0,1]^2\), the 2D AC equation (1.1) is considered with the following initial condition:

$$\begin{aligned} u(0, x_1,x_2) = \tanh \left( x_1\right) \tanh \left( x_2\right) ,\end{aligned}$$

together with Dirichlet boundary conditions. The exact solution is:

$$\begin{aligned} u(t, x_1,x_2) = \tanh \left( x_1 - t\right) \tanh \left( x_2 - t\right) . \end{aligned}$$

In Fig. 1, we present the root mean square error (RMSE) and maximum absolute error (MaxError) for \(t_F = 1\) with \(\delta t = 0.0005\) as functions of \(h\) on the domain \(\Omega\), using \(\text {PHS5+P5}\). The error decreases as \(h\) decreases, which confirms the accuracy of the proposed framework. Table 2 presents the MaxError and RMSE for different values of \(\delta t\) and h at the final time \(t_F=1\). The results show that as \(\delta t\) decreases, the errors decrease as well, indicating improved accuracy with smaller time step sizes.
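For reference, the two error measures can be computed as follows; this is a generic sketch (the function names are ours), with the exact solution of this example included for comparison:

```python
import numpy as np

def exact_u(t, x1, x2):
    # exact solution of Example 1
    return np.tanh(x1 - t) * np.tanh(x2 - t)

def max_error_and_rmse(u_num, u_ref):
    """MaxError = max |u_num - u_ref|; RMSE = sqrt(mean((u_num - u_ref)^2))."""
    diff = np.asarray(u_num) - np.asarray(u_ref)
    return np.max(np.abs(diff)), np.sqrt(np.mean(diff**2))
```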

Figure 1

The MaxError and RMSE values as a function of \(h\) at \(t_F = 1\) with \(\delta t = 0.0005\) in Example 1.

Table 2 MaxError, RMSE, and temporal convergence rate for different values of h and \(\delta t\) at final time \(t_F=1\) in Example 1.

Example 2

For \(\varepsilon =1\) and \(\Omega = [0,1]^2\), the 2D AC equation (1.1) is considered with the following initial condition:

$$\begin{aligned} u(0,x_1,x_2) = \tanh \left( \sqrt{x_1^2 + x_2^2}\right) \sin \left( \frac{\pi x_1}{4}\right) \sin \left( \frac{\pi x_2}{4}\right) , \end{aligned}$$

together with Dirichlet boundary conditions. The exact solution to this equation is:

$$\begin{aligned} u(t, x_1,x_2) = \tanh \left( \sqrt{x_1^2 + x_2^2} - t\right) \cdot \sin \left( \frac{\pi x_1}{4}\right) \sin \left( \frac{\pi x_2}{4}\right) . \end{aligned}$$

Figure 2 illustrates the MaxError and RMSE for \(t_F = 1\) with \(\delta t = 0.0005\) as functions of \(h\), using \(\text {PHS5+P5}\). The results show that the error decreases as \(h\) is refined, validating the accuracy of the proposed method. Table 3 displays the MaxError and RMSE at the final time \(t_F=1\) for different values of \(\delta t\) and \(h\). Table 4 shows the MaxError, RMSE, and temporal convergence rate at the final time \(t_F=5\) for various values of \(\delta t\) and \(h=0.08\).

Figure 2

The MaxError and RMSE values as a function of \(h\) for \(t_F = 1\) with \(\delta t = 0.0005\) in Example 2.

Table 3 MaxError, RMSE, and temporal convergence rate for different values of h and \(\delta t\) at final time \(t_F=1\) in Example 2.
Table 4 MaxError, RMSE, and temporal convergence rate for different values \(\delta t\) at final time \(t_F=5\) in Example 2.

Example 3

For \(\varepsilon >0\) and \(\Omega =[0,1]^2\), the 2D AC equation (1.1) is considered with the following initial condition:

$$\begin{aligned} u(0, x_1,x_2) = 0,\end{aligned}$$

together with Dirichlet boundary conditions. The exact solution is:

$$\begin{aligned} u(t, x_1,x_2) = \sin (t)\sin (\pi x_1) \sin (\pi x_2). \end{aligned}$$

We first set \(\varepsilon = 1\). Table 5 presents the \(L^2\)-norm error and CPU time at final times \(t_F = 1\) and \(t_F = 2\) for different values of \(h\). The results are compared for different time step sizes \(\delta t = 4h^2\). The table also includes the \(L^2\)-norm error values from the reference56 and the RBF-FD method for comparison. Table 6 presents the \(L^2\)-norm error and CPU time at final time \(t_F = 1\) for different values of \(\varepsilon = 0.2, 0.3, 0.6\). The table shows how the \(L^2\)-norm error and computational time vary with different values of \(h\) and the corresponding time step sizes \(\delta t = 4h^2\).

Table 5 \(L^2\)-norm error at final time \(t_F = 1,2\) with \(\varepsilon = 1\) in Example 3.
Table 6 \(L^2\)-norm error at final time \(t_F = 1\) with \(\varepsilon = 0.2,0.3,0.6\) in Example 3.

Figures 1 and 2 show that the error decreases as the spatial step size \(h\) is refined. The slope of the error curves indicates a spatial convergence order of 2 or higher, confirming the effectiveness of the spatial discretization. Tables 2, 3, and 4 report the temporal convergence rates (Rate) for the RMSE. The observed rate is close to 1 in all examples, showing that the time-stepping scheme maintains stable first-order accuracy even for the nonlinear dynamics of the Allen-Cahn equation.

Table 5 compares the proposed approach with the RBF-FD method and with the results reported in56. At \(t_F = 1\) with \(h=1/32\), our method gives an \(L^2\)-norm error of \(1.8898 \times 10^{-4}\), which is lower than the \(3.9658 \times 10^{-4}\) obtained with RBF-FD and the \(1.24 \times 10^{-3}\) reported in56. Although the CPU times are comparable, this comparison shows that the proposed method achieves better accuracy.

Table 6 demonstrates the robustness of the method for various values of the parameter \(\varepsilon\). As mentioned in equation (1.1), \(\varepsilon\) controls the width of the interface between phases. When \(\varepsilon\) is decreased, the gradients become sharper and the interface thinner, which increases the stiffness of the problem and makes the numerical approximation more challenging. The results in Table 6 reflect this behavior: as \(\varepsilon\) decreases from 0.6 to 0.2, the errors grow because of the steep transitions in the solution profile. Even for small values of \(\varepsilon\), the method remains stable and delivers accurate results, demonstrating its ability to handle the sharp interface dynamics of the Allen-Cahn equation.
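The temporal convergence rates reported in the tables can be obtained from two error values at successive time step sizes via the standard formula \(\text {Rate} = \log (e_1/e_2)/\log (\delta t_1/\delta t_2)\); a minimal sketch (function name is ours):

```python
import numpy as np

def temporal_rate(err_coarse, err_fine, dt_coarse, dt_fine):
    """Observed order of convergence from errors at two time step sizes."""
    return np.log(err_coarse / err_fine) / np.log(dt_coarse / dt_fine)
```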

Dumbbell shaped

Example 4

For \(\varepsilon >0\) and \(\Omega = [-1,1]^2\), we consider the 2D AC equation (1.1) with the initial condition:

$$\begin{aligned} u(0, x_1, x_2) = {\left\{ \begin{array}{ll} \tanh \left( \frac{3}{\varepsilon } \left( (x_1 - 0.5)^2 + x_2^2 - (0.39)^2 \right) \right) , & x_1 > 0.14, \\ \tanh \left( \frac{3}{\varepsilon } \left( x_2^2 - 0.15^2 \right) \right) , & -0.3 \le x_1 \le 0.14, \\ \tanh \left( \frac{3}{\varepsilon } \left( (x_1 + 0.5)^2 + x_2^2 - (0.25)^2 \right) \right) , & x_1 < -0.3. \end{array}\right. } \end{aligned}$$
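The piecewise initial condition above can be evaluated on a grid as follows; the helper name is ours, and `np.where` selects among the three regions:

```python
import numpy as np

def dumbbell_ic(x1, x2, eps=0.05):
    """Dumbbell-shaped initial condition of Example 4."""
    right = np.tanh(3.0/eps * ((x1 - 0.5)**2 + x2**2 - 0.39**2))  # x1 > 0.14
    neck  = np.tanh(3.0/eps * (x2**2 - 0.15**2))                  # -0.3 <= x1 <= 0.14
    left  = np.tanh(3.0/eps * ((x1 + 0.5)**2 + x2**2 - 0.25**2))  # x1 < -0.3
    return np.where(x1 > 0.14, right, np.where(x1 >= -0.3, neck, left))
```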
Figure 3

Snapshots at different times with \(h=0.06\) and \(\delta t=0.0005\) in Example 4.

Figure 4

Energy as a function of time for \(h=0.06\) and \(\delta t=0.0005\) in Example 4.

We take \(\varepsilon =0.05\). The initial condition describes a dumbbell-shaped interface divided into three distinct regions. The evolution of the solution over time is shown in Fig. 3, with snapshots taken at different times for \(h = 0.06\) and \(\delta t = 0.0005\). Figure 4 illustrates the steady decrease in total energy over time, consistent with the energy dissipation property of the Allen-Cahn equation.
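The energy curves in Figs. 4, 6, 8, and 10 monitor the Ginzburg–Landau functional (1.2). A simple quadrature sketch (not the authors' implementation) on a uniform grid, using central-difference gradients via `np.gradient`, is:

```python
import numpy as np

def discrete_energy(u, h, eps):
    """Approximate E(u) = int F(u)/eps^2 + 0.5*|grad u|^2 dx on a uniform grid."""
    F = 0.25 * (1.0 - u**2)**2          # double well potential
    gx, gy = np.gradient(u, h, h)       # central differences in each direction
    integrand = F / eps**2 + 0.5 * (gx**2 + gy**2)
    return np.sum(integrand) * h**2     # simple Riemann-sum quadrature
```

Evaluating this quantity at each time step reproduces the monotone decay seen in the figures.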

Merging of bubbles

Example 5

For \(\varepsilon >0\) and \(\Omega = [-1,1]^2\), we consider the 2D AC equation (1.1) with the initial condition:

$$\begin{aligned} u(0, x_1, x_2) =- \tanh \left( \frac{(x_1-0.3)^2+x_2^2-(0.2)^2}{\varepsilon } \right) \tanh \left( \frac{(x_1+0.3)^2+x_2^2-(0.2)^2}{\varepsilon } \right) \\ \times \tanh \left( \frac{x_1^2-(0.2)^2+(x_2-0.3)^2}{\varepsilon } \right) \tanh \left( \frac{x_1^2-(0.2)^2+(x_2+0.3)^2}{\varepsilon } \right) . \end{aligned}$$
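The four-bubble initial condition above translates directly into code (the helper name is ours):

```python
import numpy as np

def bubbles_ic(x1, x2, eps=0.06):
    """Four-bubble initial condition of Example 5: minus the product of
    four tanh profiles, one per circle of radius 0.2."""
    b1 = np.tanh(((x1 - 0.3)**2 + x2**2 - 0.2**2) / eps)
    b2 = np.tanh(((x1 + 0.3)**2 + x2**2 - 0.2**2) / eps)
    b3 = np.tanh((x1**2 + (x2 - 0.3)**2 - 0.2**2) / eps)
    b4 = np.tanh((x1**2 + (x2 + 0.3)**2 - 0.2**2) / eps)
    return -b1 * b2 * b3 * b4
```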
Figure 5

Snapshots at different times with \(h=0.06\) and \(\delta t=0.0006\) in Example 5.

Figure 6

Energy as a function of time with \(h=0.06\) and \(\delta t=0.0006\) in Example 5.

We take \(\varepsilon =0.06\). Simulations are carried out to observe the evolution of the solution over time. Figure 5 shows snapshots of the bubble dynamics at different times, using a time step size of \(\delta t = 0.0006\) and a spatial step size of \(h = 0.06\). As illustrated in the snapshots, the bubbles merge and change shape over time. Figure 6 shows the steady decrease in total energy over time, consistent with the energy dissipation property of the Allen-Cahn equation.

Star-like shaped

Example 6

For \(\Omega = [0,1]^2\), the 2D AC equation (1.1) is considered with the following initial condition:

$$\begin{aligned} u(0, x_1, x_2) = \tanh \left( \frac{0.25 + 0.1 \cos \left( 7 \cdot \arctan (x_2 - 0.5, x_1 - 0.5)\right) - \sqrt{(x_1 - 0.5)^2 + (x_2 - 0.5)^2}}{\sqrt{2} \varepsilon }\right) . \end{aligned}$$
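The star-like initial condition can be evaluated as follows, interpreting the two-argument \(\arctan\) as `atan2` (the helper name is ours):

```python
import numpy as np

def star_ic(x1, x2, eps=0.05):
    """Seven-pointed star initial condition of Example 6."""
    theta = np.arctan2(x2 - 0.5, x1 - 0.5)                 # angle about (0.5, 0.5)
    r = np.sqrt((x1 - 0.5)**2 + (x2 - 0.5)**2)             # radius about (0.5, 0.5)
    return np.tanh((0.25 + 0.1*np.cos(7.0*theta) - r) / (np.sqrt(2.0)*eps))
```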
Figure 7

Snapshots at different times with \(h=0.03\) and \(\delta t=0.0002\) in Example 6.

Figure 8

Energy as a function of time with \(h=0.03\) and \(\delta t=0.0002\) in Example 6.

We take \(\varepsilon = 0.05\) in this example. The evolution of the solution is displayed in Fig. 7, showing snapshots at different times. The results demonstrate the dynamics of the solution over time with \(h=0.03\) and \(\delta t=0.0002\). Figure 8 shows the steady decrease in total energy over time, consistent with the energy dissipation property of the Allen-Cahn equation.

Double axe shaped

Example 7

For \(\Omega = [-2,2]^2\), the 2D AC equation (1.1) is considered with the following initial condition:

$$\begin{aligned} u(0, x_1, x_2) = -\tanh \left( \frac{d(x_1,x_2)}{\sqrt{2} \varepsilon }\right) , \end{aligned}$$

where \(d(x_1,x_2)=\max \{-d_1(x_1,x_2),\,d_2(x_1,x_2),\,-d_3(x_1,x_2)\}\), with

$$\begin{aligned} & d_1(x_1,x_2)=\sqrt{x_1^2+(x_2-2)^2}-2+\frac{3}{2}\varepsilon ,\\ & d_2(x_1,x_2)=\sqrt{x_1^2+x_2^2}-\frac{3}{2},\\ & d_3(x_1,x_2)=\sqrt{x_1^2+(x_2+2)^2}-2+\frac{3}{2}\varepsilon . \end{aligned}$$
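The double-axe profile, built from the three signed distance functions \(d_1, d_2, d_3\) above, can be evaluated as follows (the helper name is ours):

```python
import numpy as np

def double_axe_ic(x1, x2, eps=0.15):
    """Double-axe initial condition of Example 7: u0 = -tanh(d / (sqrt(2)*eps))."""
    d1 = np.sqrt(x1**2 + (x2 - 2.0)**2) - 2.0 + 1.5*eps
    d2 = np.sqrt(x1**2 + x2**2) - 1.5
    d3 = np.sqrt(x1**2 + (x2 + 2.0)**2) - 2.0 + 1.5*eps
    d = np.maximum(np.maximum(-d1, d2), -d3)   # max{-d1, d2, -d3}
    return -np.tanh(d / (np.sqrt(2.0) * eps))
```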
Figure 9

Snapshots at different times with \(h=0.1\) and \(\delta t=0.001\) in Example 7.

Figure 10

Energy as a function of time with \(h=0.1\) and \(\delta t=0.001\) in Example 7.

In this example, we set \(\varepsilon = 0.15\). Figure 9 displays the evolution of the solution, showing snapshots at different times with \(h=0.1\) and \(\delta t=0.001\). Figure 10 depicts a consistent decrease in total energy over time, in agreement with the energy dissipation property of the Allen-Cahn equation.

Conclusion

This study presents a robust and accurate local meshless method for solving the two-dimensional Allen-Cahn equation, employing the RBF-CFD approach in combination with the Strang splitting technique. The Allen-Cahn equation is critical for modeling phase transitions and interface dynamics, and its numerical solution is essential for applications spanning materials science, fluid dynamics, and biology. By combining the RBF-CFD method with the Strang splitting technique, we have significantly enhanced both the accuracy and computational efficiency of the solution process.

The proposed method achieves high-order spatial accuracy through the Hermite RBF interpolation technique, which efficiently approximates the model operators on local stencils. This approach has proven especially effective for nonlinear equations and complex systems, ensuring that key physical properties, such as energy decay, are preserved over time. Furthermore, the Strang splitting technique improves efficiency by decomposing the equation into simpler components while maintaining high accuracy.

Extensive numerical simulations have been conducted to assess the performance of the method, evaluating its accuracy, stability, and convergence across different configurations. The results confirm the method’s capability to preserve crucial qualitative features, making it a powerful tool for solving complex problems related to phase transitions and pattern formation in various physical systems. In conclusion, the demonstrated effectiveness and flexibility of this approach point to its significant potential for broad practical applications in scientific computing and engineering.