Table 3 Detailed structural information of the BiLSTM-CNN-Attention model.
| Layers | Input | Output | Activation function |
|---|---|---|---|
| Bidirectional_1 (Bidirectional layer_1) | 2×23 | 256×1 | ReLU |
| Bidirectional_2 | 256×1 | 1024×1 | ReLU |
| Reshape | 1024×1 | 32×32×1 | None |
| Conv2D_1 (Convolutional 2D layer_1) | 32×32×1 | 16×16×1 | ReLU |
| Conv2D_2 | 16×16×1 | 9×9×1 | ReLU |
| Pooling2D_1 | 9×9×1 | 2×2×1 | None |
| Flatten layer | 2×2×1 | 4×1 | None |
| Attention layer | 4×1 | 64×1 | ReLU |
| Fully connected layer | 64×1 | 5×1 | Softmax |
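The layer sizes in Table 3 can be sketched as a Keras model. This is a minimal illustration, not the authors' implementation: the LSTM unit counts (128, 512), convolution kernel sizes and strides, and pool size are assumptions chosen so that each tensor shape matches the table, and the attention layer is stood in for by a plain Dense layer because its internal form is not specified.

```python
# Hypothetical sketch of the BiLSTM-CNN-Attention model from Table 3.
# All hyper-parameters below (units, kernels, strides, pool size) are
# assumptions picked to reproduce the tabulated shapes, not published values.
from tensorflow.keras import layers, models

inputs = layers.Input(shape=(2, 23))                       # 2 x 23 input
# Bidirectional_1: 128 units per direction -> 256 features per timestep
x = layers.Bidirectional(layers.LSTM(128, return_sequences=True))(inputs)
# Bidirectional_2: 512 units per direction -> 1024 x 1
x = layers.Bidirectional(layers.LSTM(512))(x)
x = layers.Reshape((32, 32, 1))(x)                         # 1024 -> 32 x 32 x 1
# Conv2D_1: stride 2 with "same" padding halves each spatial dim -> 16 x 16 x 1
x = layers.Conv2D(1, kernel_size=3, strides=2, padding="same",
                  activation="relu")(x)
# Conv2D_2: 8x8 kernel, valid padding: 16 - 8 + 1 = 9 -> 9 x 9 x 1
x = layers.Conv2D(1, kernel_size=8, activation="relu")(x)
# Pooling2D_1: 4x4 pool, stride 4: floor((9 - 4) / 4) + 1 = 2 -> 2 x 2 x 1
x = layers.MaxPooling2D(pool_size=4)(x)
x = layers.Flatten()(x)                                    # 2 * 2 * 1 = 4
# Placeholder for the attention layer (4 -> 64); the actual mechanism
# is not described in the table, so a Dense layer stands in here.
x = layers.Dense(64, activation="relu")(x)
outputs = layers.Dense(5, activation="softmax")(x)         # 5-class softmax
model = models.Model(inputs, outputs)
```

Note that the first bidirectional layer must return full sequences for the second one to consume them, so its per-timestep width (256) is what the table's "256×1" most plausibly refers to.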