Table 2. Deep learning model architecture and configuration.

| Model | Parameters |
|---|---|
| Layer 1 | Dense [No. of nodes = 11, Activation = softmax, Kernel initializer = he_uniform, Kernel regularizer = l1(0.1), Bias regularizer = l1(0.1), Activity regularizer = l2(0.1)] |
| Layer 2 | Batch Normalization |
| Layer 3 | Dropout [Rate = 0.6] |
| Layer 4 | Dense [No. of nodes = 6, Activation = softmax, Kernel initializer = he_uniform, Kernel regularizer = l1(0.1), Bias regularizer = l1(0.1), Activity regularizer = l2(0.1)] |
| Layer 5 | Batch Normalization |
| Layer 6 | Dropout [Rate = 0.3] |
| Layer 7 | Dense [No. of nodes = 1, Activation = relu] |
| Model compilation | Loss function = mean squared error, Optimizer = Adam [Learning rate = 1 × 10⁻², beta_1 = 0.9, beta_2 = 0.999, epsilon = 1 × 10⁻⁷, amsgrad = False], Metrics = mean squared error |
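
The parameter names in Table 2 (kernel_initializer, he_uniform, amsgrad) follow the Keras API, so a minimal TensorFlow/Keras sketch of the same configuration is given below. The choice of library and the input dimensionality (`n_features`) are assumptions, as neither is stated in the table.

```python
# Minimal sketch of the Table 2 configuration, assuming TensorFlow/Keras.
import tensorflow as tf
from tensorflow.keras import layers, regularizers

n_features = 11  # placeholder: the input dimensionality is not given in the table

model = tf.keras.Sequential([
    layers.Input(shape=(n_features,)),
    # Layers 1-3: regularized dense block with batch normalization and dropout
    layers.Dense(
        11,
        activation="softmax",
        kernel_initializer="he_uniform",
        kernel_regularizer=regularizers.l1(0.1),
        bias_regularizer=regularizers.l1(0.1),
        activity_regularizer=regularizers.l2(0.1),
    ),
    layers.BatchNormalization(),
    layers.Dropout(0.6),
    # Layers 4-6: second regularized dense block
    layers.Dense(
        6,
        activation="softmax",
        kernel_initializer="he_uniform",
        kernel_regularizer=regularizers.l1(0.1),
        bias_regularizer=regularizers.l1(0.1),
        activity_regularizer=regularizers.l2(0.1),
    ),
    layers.BatchNormalization(),
    layers.Dropout(0.3),
    # Layer 7: single-node output with ReLU activation
    layers.Dense(1, activation="relu"),
])

model.compile(
    loss="mean_squared_error",
    optimizer=tf.keras.optimizers.Adam(
        learning_rate=1e-2,
        beta_1=0.9,
        beta_2=0.999,
        epsilon=1e-7,
        amsgrad=False,
    ),
    metrics=["mean_squared_error"],
)
```

The mean-squared-error loss together with the single-node ReLU output indicates a regression head whose predictions are constrained to be non-negative.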