Table 2 Model details and descriptions

From: Data-driven dynamic modeling for inverter-based resources using neural networks

| Model | Number of parameters | Description |
| --- | --- | --- |
| LSTM-16 | 1922 | LSTM with 16 hidden units. |
| RNN | 1946 | RNN with 24 hidden units. |
| GRU | 1964 | GRU with 18 hidden units. |
| TCN | 1946 | TCN with 3 convolutional layers (13 channels each). |
| MLP | 1924 | MLP with 7 layers (17 neurons each), using Hardtanh as the activation function. |
| PINN | 1924 | PINN enforcing the physics of the second-generation generic model as hard constraints, based on the architecture in ref. 51. |
| Transformer | 1939 | Transformer with 5 encoder and 5 decoder layers, 7 attention heads, an input feature dimension of 7, an 8-dimensional feed-forward network, and a dropout rate of 0.1. |
| LSTM-8 | 578 | LSTM with 8 hidden units. |
| LSTM+Inv. | 588 | LSTM-8 with the inverter model. |
| LSTM+Cro. | 1207 | LSTM-8 with the cross-layer. |
| RNN+Cro.+Inv. | 1628 | RNN (8 hidden units) with the cross-layer and inverter model. |
| LSTM+DCN+Inv. | 652 | LSTM-8 with a DCN and the inverter model. |
| LSTMCI | 1940 | LSTM-8 with the cross-layer and inverter model (also denoted LSTM+Cro.+Inv.). |

  1. The inverter dynamic model is denoted Inv., and the cross-layer is denoted Cro. The fully connected (FC) layers are configured identically across all models and are omitted from the descriptions.
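The per-cell parameter counts of the recurrent baselines can be sketched from their standard gate structure. The snippet below is a minimal illustration, assuming the input feature dimension of 7 stated for the Transformer also applies to the other models, and following the common convention of separate input and hidden bias vectors; the shared FC layers noted in the footnote are excluded, so the totals are not expected to match the table exactly.

```python
# Sketch of parameter counting for the recurrent cells in Table 2.
# Assumptions (not stated for these rows in the source): input feature
# dimension d = 7, separate input/hidden biases, FC layers excluded.

def rnn_params(hidden: int, d: int) -> int:
    # W_ih (hidden x d) + W_hh (hidden x hidden) + two bias vectors.
    return hidden * d + hidden * hidden + 2 * hidden

def gru_params(hidden: int, d: int) -> int:
    # A GRU has 3 gates, each shaped like one plain RNN cell.
    return 3 * rnn_params(hidden, d)

def lstm_params(hidden: int, d: int) -> int:
    # An LSTM has 4 gates.
    return 4 * rnn_params(hidden, d)

if __name__ == "__main__":
    d = 7  # assumed input feature dimension
    print("RNN  (24 hidden units):", rnn_params(24, d))
    print("GRU  (18 hidden units):", gru_params(18, d))
    print("LSTM (16 hidden units):", lstm_params(16, d))
    print("LSTM ( 8 hidden units):", lstm_params(8, d))
```

The gap between these cell-only counts and the totals in the table corresponds to the omitted FC layers.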