Fig. 1: vRBA accelerates convergence and improves accuracy across all benchmarks.

From: A variational framework for residual-based adaptivity in neural PDE solvers and operator learning

A Relative L2 error convergence for the PINN benchmarks. Different potentials are marked in different colors, while the black line represents the Baseline. vRBA models generally converge faster and to a lower final error. Notably, for the Korteweg-de Vries (KdV) equation, the Baseline fails to converge, whereas vRBA achieves low error. We observe that potentials lacking an exact closed-form solution for the weighting parameter ϵ can exhibit instability: the superexponential potential \(\Phi(r)={e}^{{r}^{2}}\) becomes unstable due to the sharp gradients in Burgers' equation, and \(\Phi(r)=\cosh(r)\) leads to divergence in the Allen-Cahn equation. B Relative L2 error convergence for the operator learning tasks. For these purely data-driven problems, vRBA models consistently demonstrate accelerated convergence and lower final errors for Bubble Growth Dynamics (DeepONet), the Sod Shock Tube (SVD-DeepONet), Navier-Stokes (FNO), and the Wave Equation (TC-UNet). C, D Mean error norms (solid lines) with standard deviations (shaded areas) evaluated over time for the FNO and TC-UNet models, respectively. The vRBA-enhanced models consistently exhibit a lower mean error, smaller variance, and a significantly slower rate of error accumulation.
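For reference, the relative L2 error plotted in panels A and B is presumably the standard normalized metric (the precise definition used in the article is given in its methods; the form below is the conventional one, not quoted from the source):

\[{\rm{Rel}}.\,{L}^{2}\,{\rm{error}}=\frac{{\Vert \hat{u}-u\Vert }_{2}}{{\Vert u\Vert }_{2}},\]

where \(u\) denotes the reference solution and \(\hat{u}\) the model prediction.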