Table 3 Leading-order scaling for various properties of PMMs and a fixed-width MLP

From: Parametric matrix models

Quantity                 | In terms of…                 | MLP  | AE-PMM | AO-PMM
-------------------------|------------------------------|------|--------|-----------
Non-analytic points, ξ   | Architecture hyperparameters | lm   | n²     | n²
                         | Inference complexity, χ      | χ/m  | χ/q    | χ/(qr²)
                         | Trainable parameters, Σ      | Σ/m  | Σ/p    | Σ/(p+qr²)
Inference complexity, χ  | Architecture hyperparameters | lm²  | qn²    | qr²n²
                         | Non-analytic points, ξ       | mξ   | qξ     | qr²ξ
                         | Trainable parameters, Σ      | Σ    | Σ      | Σ
Trainable parameters, Σ  | Architecture hyperparameters | lm²  | pn²    | (p+qr²)n²
                         | Non-analytic points, ξ       | mξ   | pξ     | (p+qr²)ξ
                         | Inference complexity, χ      | χ    | χ      | χ
  1. The two constructed model PMMs considered in this work are shown: the affine eigenvalue PMM (AE-PMM, Methods section “Eigenvalue and Eigenstate Observable Emulation”) and the affine observable PMM (AO-PMM, Methods section “Regression and Classification PMMs”). All models are considered to have p input features and q output values. Each of the l hidden layers of the MLP has m neurons. The size of the matrices in the PMMs is n × n, and the number of eigenvectors used in the AO-PMM is denoted by r. We assume that l ≫ p, l ≫ q, and p ~ q.
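To make the trainable-parameter column concrete, below is a minimal Python sketch (not from the paper) that tallies parameter counts for each architecture and compares them with the leading-order expressions Σ ~ lm², pn², and (p + qr²)n² from the table. The exact bookkeeping of biases and constant matrices here is an illustrative assumption; the table only fixes the leading terms.

```python
# Illustrative parameter counts for Table 3 (a sketch; lower-order terms
# such as biases and the constant matrix are assumptions, not the paper's).

def mlp_params(p, l, m, q):
    """MLP with p inputs, l hidden layers of width m, q outputs."""
    first = (p + 1) * m              # input layer -> first hidden layer
    hidden = (l - 1) * (m + 1) * m   # remaining hidden layers
    last = (m + 1) * q               # last hidden layer -> output
    return first + hidden + last     # leading order: l * m**2

def ae_pmm_params(p, n):
    """AE-PMM: ~one n x n matrix per input feature, plus a constant matrix."""
    return (p + 1) * n ** 2          # leading order: p * n**2

def ao_pmm_params(p, q, r, n):
    """AO-PMM: p feature matrices plus ~q*r**2 observable-related matrices."""
    return (p + 1 + q * r ** 2) * n ** 2   # leading order: (p + q*r**2) * n**2

p, q, l, m, n, r = 4, 4, 64, 32, 16, 2
print(f"MLP     Sigma ~ l m^2          = {l * m**2:8d}  (exact: {mlp_params(p, l, m, q)})")
print(f"AE-PMM  Sigma ~ p n^2          = {p * n**2:8d}  (exact: {ae_pmm_params(p, n)})")
print(f"AO-PMM  Sigma ~ (p + q r^2) n^2 = {(p + q * r**2) * n**2:7d}  (exact: {ao_pmm_params(p, q, r, n)})")
```

With l ≫ p and l ≫ q, as assumed in the footnote, the MLP's input and output terms (p + q)m are negligible next to lm², which is why the exact counts above track the leading-order column.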