Table 1 Summary of main information for each example.

| Example | 1 | 2.1 | 2.2 | 2.3 | 3 | 4 |
|---|---|---|---|---|---|---|
| Degrees of freedom | \([128 \times 128]\) | 3993 | 3993 | 70602 | 2548 | 5823 |
| Parameters \(\varvec{\mu}\) | \(\varvec{\kappa}(x, y)\) | \(\nu, \mathrm{In_{D}}\) | \(\mathrm{In_{R}}, \mathrm{In_{D}}\) | \(x, y\) | \(t, \varvec{\kappa}_{\mathrm{top}}\) | \(t, \mathrm{F}\) |
| Training set (\(\mathrm{M_{train}}\)) | 9000 | 1600 | 1600 | 1600 | 13600 (\(N_t \mathrm{M_{train}}\)) | 4509 (\(N_t \mathrm{M_{train}}\)) |
| Validation set (\(\mathrm{M_{validation}}\)) | 500 | \(5\%\) of \(\mathrm{M_{train}}\) | \(5\%\) of \(\mathrm{M_{train}}\) | \(5\%\) of \(\mathrm{M_{train}}\) | \(5\%\) of \(N_t \mathrm{M_{train}}\) | \(5\%\) of \(N_t \mathrm{M_{train}}\) |
| Testing set (\(\mathrm{M_{test}}\)) | 500 | 100 | 100 | 100 | 1900 (\(N_t \mathrm{M_{test}}\)) | 1002 (\(N_t \mathrm{M_{test}}\)) |
| **Training time (h)** |  |  |  |  |  |  |
| cGAN-ROM | 4.00 | \(\times\) | \(\times\) | \(\times\) | \(\times\) | \(\times\) |
| BT-ROM | \(\times\) | 0.67 | 0.67 | 1.00 | \(\times\) | \(\times\) |
| BBT-ROM | \(\times\) | 0.50 | 0.50 | 0.92 | 0.75 | 0.45 |
| in-ROM | \(\times\) | 1.00 | 1.00 | 1.83 | \(\times\) | \(\times\) |
| **Number of nonlinear iterations (–)** |  |  |  |  |  |  |
| Default init. | 15.04 | 6.10 | 6.30 | 6.16 | 934.68 | 6986.00 |
| cGAN-ROM | 4.12 | \(\times\) | \(\times\) | \(\times\) | \(\times\) | \(\times\) |
| BT-ROM | \(\times\) | neg. | neg. | 5.40 | \(\times\) | \(\times\) |
| BBT-ROM | \(\times\) | 2.62 | 3.04 | 4.76 | 758.57 | 3412.00 |
| in-ROM | \(\times\) | 1.09 | 1.08 | neg. | \(\times\) | \(\times\) |
| **Speed up (%)** |  |  |  |  |  |  |
| cGAN-ROM | 72.63 | \(\times\) | \(\times\) | \(\times\) | \(\times\) | \(\times\) |
| BT-ROM | \(\times\) | neg. | neg. | 12.33 | \(\times\) | \(\times\) |
| BBT-ROM | \(\times\) | 57.05 | 51.75 | 22.72 | 18.84 | 49.00 |
| in-ROM | \(\times\) | 82.12 | 82.86 | neg. | \(\times\) | \(\times\) |

  1. Example 1 has heterogeneous parameters, and its FOM relies on structured grids, hence \(\mathrm{DOF} = [\mathrm{DOF}_x, \mathrm{DOF}_y]\). Examples 2, 3, and 4 have homogeneous parameters, and their FOMs use unstructured meshes. Speed up is calculated as the difference between the number of nonlinear iterations with default initialization and with ROM-assisted initialization, divided by the number of nonlinear iterations with default initialization. neg. denotes a case where ROM-assisted initialization incurs an additional cost (i.e., a negative effect), default init. is short for default initialization, and \(\times\) denotes not applicable. The prediction cost of in-ROM is much higher than that of the other models, which also reduces its actual speed up; we discuss this effect on the actual cost saving in Example 2. Since BBT-ROM shows better and more stable performance in Example 2, we apply only BBT-ROM to Examples 3 and 4. For steady-state problems (Examples 1 and 2), the training set contains \(\mathrm{M_{train}}\) samples; for transient problems (Examples 3 and 4), it contains \(N_t \mathrm{M_{train}}\) samples. The same holds for the validation and testing sets.
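As a concrete check of the speed-up definition above, write \(N_{\mathrm{default}}\) for the number of nonlinear iterations with default initialization and \(N_{\mathrm{ROM}}\) for the number with ROM-assisted initialization (this shorthand is ours, not notation from the paper). The footnote's definition then reads

\[ \text{speed up}\,(\%) = \frac{N_{\mathrm{default}} - N_{\mathrm{ROM}}}{N_{\mathrm{default}}} \times 100 . \]

For Example 1 with cGAN-ROM, this gives \((15.04 - 4.12)/15.04 \times 100 \approx 72.6\), consistent with the 72.63% reported in the table; the small discrepancy comes from rounding of the averaged iteration counts.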