Fig. 2

Overview of the deep IRK-SINDy framework: (a) The dataset is prepared for training the neural network. (b) The network inputs are split into two groups: the time variable and the state variables. The neurons in the output layer are partitioned into s segments of d neurons each; the i-th segment predicts the d stage values associated with \(\chi _{i}\). (c) Forward propagation through the DNN yields the stage values, which are then used in the IRK steps, i.e. eq. (8), to produce both forward and backward predictions. (d) The loss is computed by comparing these predictions against the data, followed by an optimization step. After a specified number of epochs, once the loss is sufficiently small, the sparsity-promoting algorithm is applied only to the coefficient matrix \(\xi\). Finally, the non-zero coefficients of \(\xi\) indicate the active terms in the nonlinear feature library.
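The forward/backward predictions of step (c) can be sketched numerically. The following is a minimal NumPy illustration, not the paper's implementation: it assumes a 2-stage Gauss–Legendre Butcher tableau, a toy polynomial library `theta` playing the role of the nonlinear feature library, and network-predicted stage values passed in as an array `U`; the update formulas are the standard implicit RK relations that eq. (8) refers to.

```python
import numpy as np

# 2-stage Gauss-Legendre Butcher tableau (order 4) -- an assumed choice
s3 = np.sqrt(3.0)
A = np.array([[0.25, 0.25 - s3 / 6.0],
              [0.25 + s3 / 6.0, 0.25]])
b = np.array([0.5, 0.5])

def theta(u):
    """Toy polynomial feature library [1, u, u^2] per state (illustrative)."""
    return np.concatenate([np.ones_like(u[:1]), u, u**2])

def irk_predictions(U, xi, h):
    """From stage values U (s, d) and coefficients xi, return state predictions.

    The dynamics are f(u) = theta(u) @ xi.  Each stage j yields one estimate
    of the current state and one of the next state:
        x_n      ~ u_j - h * sum_k A[j, k] f(u_k)      (backward)
        x_{n+1}  ~ u_j + h * sum_k (b[k] - A[j, k]) f(u_k)  (forward)
    Comparing both against data gives the loss of step (d).
    """
    F = np.stack([theta(u) @ xi for u in U])      # (s, d) stage derivatives
    x_n_pred = U - h * (A @ F)                    # backward predictions
    x_np1_pred = U + h * ((b[None, :] - A) @ F)   # forward predictions
    return x_n_pred, x_np1_pred
```

For exact stage values of a linear system \(\dot{x} = -x\) (i.e. \(\xi\) selecting only the linear term), the backward predictions recover \(x_n\) exactly and the forward predictions agree with the fourth-order IRK approximation of \(x_{n+1}\).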