Fig. 1
From: Jacobian Granger causality for count and binary data with applications to causal network inference

The neural network architecture for JGC: an MLP whose first hidden layer has one-to-one connections, followed by subsequent densely connected layers. In this paper, the network is trained with the Adam algorithm at its default learning rate of 1e-3, which we found to perform well across all test cases.
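The architecture described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the layer sizes (`p`, `hidden`) and the ReLU activation are assumptions. The point is the one-to-one first layer, realized as an elementwise (diagonal) weight rather than a dense matrix, so each input variable retains its own pathway into the network.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

p = 5        # number of input variables (hypothetical size)
hidden = 8   # width of the dense hidden layer (hypothetical size)

# One-to-one first hidden layer: each input connects only to its own
# hidden unit, so the weight is a vector applied elementwise instead
# of a dense p x p matrix.
w1 = rng.standard_normal(p)
b1 = rng.standard_normal(p)

# Subsequent layers are densely connected.
W2 = rng.standard_normal((p, hidden))
b2 = rng.standard_normal(hidden)
W3 = rng.standard_normal((hidden, 1))
b3 = rng.standard_normal(1)

def forward(x):
    h1 = relu(w1 * x + b1)   # one-to-one: elementwise product
    h2 = relu(h1 @ W2 + b2)  # dense
    return h2 @ W3 + b3      # dense scalar output

y = forward(rng.standard_normal(p))
print(y.shape)
```

Because the first layer is elementwise, its Jacobian with respect to the input is diagonal, which is what makes per-variable Jacobian-based Granger attributions straightforward in this design.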