Table 27 CNN-BiLSTM-Attention model architecture.
Layer | Output shape | Parameters |
---|---|---|
Input layer | (None, sequence_length, 1) | 0 |
Conv1D (filters = 256) | (None, sequence_length-1, 256) | 768 |
MaxPooling1D | (None, (sequence_length-1)/2, 256) | 0 |
Dropout (rate = 0.2) | (None, (sequence_length-1)/2, 256) | 0 |
Conv1D (filters = 128) | (None, (sequence_length-1)/2-1, 128) | 65,664 |
MaxPooling1D | (None, ((sequence_length-1)/2-1)/2, 128) | 0 |
Dropout (rate = 0.2) | (None, ((sequence_length-1)/2-1)/2, 128) | 0 |
Conv1D (filters = 64) | (None, ((sequence_length-1)/2-1)/2-1, 64) | 16,448 |
MaxPooling1D | (None, (((sequence_length-1)/2-1)/2-1)/2, 64) | 0 |
Dropout (rate = 0.2) | (None, (((sequence_length-1)/2-1)/2-1)/2, 64) | 0 |
Flatten | (None, final_flatten_size) | 0 |
Dense (units = 50) | (None, 50) | 3,250 |
RepeatVector | (None, sequence_length/8, 50) | 0 |
Bidirectional LSTM (units = 100) | (None, sequence_length/8, 200) | 120,800 |
Dropout (rate = 0.2) | (None, sequence_length/8, 200) | 0 |
Attention | (None, sequence_length/8, 200) | 0 |
Bidirectional LSTM (units = 50) | (None, 100) | 100,400 |
Dense (units = output_size) | (None, output_size) | 1,111 |
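
For reference, the following is a minimal Keras sketch of the architecture in Table 27. The kernel size (2), pool size (2), ReLU activations, and dot-product self-attention wiring are assumptions inferred from the output shapes and parameter counts rather than stated in the table; `sequence_length = 16` and `output_size = 11` are hypothetical values chosen to match the listed parameter counts (a flattened size of 64 feeding the 3,250-parameter Dense layer, and a 1,111-parameter output layer).

```python
# Minimal sketch of the Table 27 architecture.
# Assumptions (not stated in the table): kernel_size=2, pool_size=2,
# ReLU activations, and dot-product self-attention (query = value).
from tensorflow.keras import layers, models

sequence_length = 16  # hypothetical; yields the flattened size of 64 implied by 3,250 Dense parameters
output_size = 11      # hypothetical; 11 * (100 + 1) = 1,111 parameters in the output layer

inputs = layers.Input(shape=(sequence_length, 1))

# Three Conv1D -> MaxPooling1D -> Dropout stages.
x = layers.Conv1D(256, kernel_size=2, activation="relu")(inputs)  # 768 params
x = layers.MaxPooling1D(pool_size=2)(x)
x = layers.Dropout(0.2)(x)
x = layers.Conv1D(128, kernel_size=2, activation="relu")(x)       # 65,664 params
x = layers.MaxPooling1D(pool_size=2)(x)
x = layers.Dropout(0.2)(x)
x = layers.Conv1D(64, kernel_size=2, activation="relu")(x)        # 16,448 params
x = layers.MaxPooling1D(pool_size=2)(x)
x = layers.Dropout(0.2)(x)

# Bottleneck: flatten, project to 50 units, repeat as a short sequence.
x = layers.Flatten()(x)
x = layers.Dense(50, activation="relu")(x)                        # 3,250 params
x = layers.RepeatVector(sequence_length // 8)(x)

# BiLSTM encoder, self-attention, BiLSTM decoder, output head.
x = layers.Bidirectional(layers.LSTM(100, return_sequences=True))(x)  # 120,800 params
x = layers.Dropout(0.2)(x)
x = layers.Attention()([x, x])                # parameter-free dot-product attention
x = layers.Bidirectional(layers.LSTM(50))(x)  # 100,400 params
outputs = layers.Dense(output_size)(x)        # 1,111 params

model = models.Model(inputs, outputs)
model.summary()
```

Under these assumptions, `model.summary()` reproduces the output shapes and parameter counts listed in Table 27.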