Table 1 Optimal structures of five neural networks.
| Models | Nodes in layer 1 | Nodes in layer 2 | Activation function in layer 1 | Activation function in layer 2 | Dropout | Batch size | Learning rate |
|---|---|---|---|---|---|---|---|
| Logon | 1024 | 1024 | Tanh | Tanh | 0.5 | 20 | 0.01 |
| File | 1024 | 1024 | Tanh | Tanh | 0.5 | 20 | 0.01 |
| Device | 492 | 262 | ReLU | ReLU | 0.13 | 5 | 0.01 |
|  | 1011 | 1011 | ReLU | ReLU | 0.49 | 19 | 0.098 |
| Http | 1024 | 1024 | Tanh | Tanh | 0.5 | 20 | 0.01 |
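As an illustration of how one of these configurations could be realized, the sketch below assembles the Logon network from Table 1 (two hidden layers of 1024 nodes with Tanh activations, dropout 0.5, learning rate 0.01, batch size 20). The input dimension, output head, optimizer, loss function, and placement of dropout after each hidden layer are not specified in the table and are assumptions made purely for this example.

```python
# Minimal sketch of the "Logon" network from Table 1 (Keras).
# Assumptions not given in the table: input dimension, output layer,
# SGD optimizer, binary cross-entropy loss, dropout after each hidden layer.
import tensorflow as tf
from tensorflow.keras import layers, models, optimizers

INPUT_DIM = 64  # placeholder; the real feature dimension is not stated in Table 1

model = models.Sequential([
    layers.Input(shape=(INPUT_DIM,)),
    layers.Dense(1024, activation="tanh"),   # layer 1: 1024 nodes, Tanh
    layers.Dropout(0.5),                     # dropout rate from Table 1
    layers.Dense(1024, activation="tanh"),   # layer 2: 1024 nodes, Tanh
    layers.Dropout(0.5),
    layers.Dense(1, activation="sigmoid"),   # assumed output head, not from Table 1
])

model.compile(
    optimizer=optimizers.SGD(learning_rate=0.01),  # learning rate from Table 1
    loss="binary_crossentropy",                    # assumed loss
)

# Training would then use the batch size listed in Table 1:
# model.fit(x_train, y_train, batch_size=20, epochs=...)
```

The other networks in Table 1 would follow the same pattern with their respective node counts, activations (e.g., ReLU for the Device model), dropout rates, batch sizes, and learning rates substituted in.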