Table 2 Symbols used and their description
From: Criminal emotion detection framework using convolutional neural network for public safety
| Symbols | Description | What it actually represents in the proposed framework |
|---|---|---|
\(C_u\) | Entity representing different criminals. | Set/list of all criminals in a given area. |
\(c_1, c_2, ..., c_n\) | Individual criminal instances within \(C_u\). | Represents each identified criminal. |
\(C_r\) | Set of criminal activities. | Different types of crime activities under detection. |
\(D_E\) | Dataset used in CSV format for both crime and emotion detection. | Complete dataset, including image URLs, emotion labels, and descriptions. |
F | Function that maps an input image to classified emotions. | The criminal emotion detection function. |
I | Input image to the emotion detection system. | Criminal or suspect image fed into the CNN model. |
\(I_{img}\) | Processed input image. | Actual image data passed into the CNN. |
\(K_1\) | Kernel size of the convolution layer. | The size of the filter used for feature extraction in convolution operations. |
\(U_{i0}\) | Initial weight matrix or feature input. | The starting state of feature input or weight initialization in the network. |
E | Set of classified emotions \((e_1, e_2, ..., e_m)\). | Emotions detected from the criminal’s face. |
M | AI model used for detection. | The CNN or other neural network model. |
\(\mathcal {O}\) | Objective function to maximize detection accuracy. | Sum of accuracies over all crime and emotion classes. |
\(\alpha\) | Crime classification result. | Output indicating whether the image is classified as crime or non-crime. |
NP | Non-criminal class image set. | Images labeled as non-crime (safe scenes). |
AP | Criminal class image set. | Images labeled as crime activity. |
\(\Omega\) | Set of file paths combining directory and filename. | Directory structures and filenames for all training/test images. |
\(X_{train}, X_{test}\) | Training and testing image data. | Dataset split into training and testing subsets for model evaluation. |
\(X_{train\,rescaled}, X_{test\,rescaled}\) | Rescaled datasets (divided by 255). | Normalized training and test images for input into CNN. |
\(X_{train\,sheared}, X_{test\,sheared}\) | Random shearing transformations applied to training and test images. | Augmented training and testing data with geometric distortion to improve model robustness. |
\(X_{train\,zoomed}, X_{test\,zoomed}\) | Random zooming transformations applied to training and testing data. | Augmented training and testing data with different scales for learning multi-scale features. |
\(X_{train\,flipped},X_{test\,flipped}\) | Random horizontal flips applied to training and testing data. | Augmented training and testing data with mirrored images for more variety. |
\(shear(\cdot ), zoom(\cdot ), flip(\cdot )\) | Data augmentation functions applied. | Techniques used to diversify and enhance the dataset during preprocessing. |
\(Conv2D(\cdot )\) | Convolution operation used in CNN layers. | Filters applied to input images to extract feature maps. |
\(B_1, B_2\) | Bias terms added in CNN convolutions. | Learnable parameters added to the filter responses after convolution. |
\(U_l, Eml, d\) | URL, emotion label, and description in the emotion dataset. | Each row of the emotion dataset CSV contains an image link, an emotion label, and a description. |
\(U_{ret}\) | Image retrieval operation. | Downloading and loading an image from a provided URL. |
\(P_d\) | Preprocessing function for resizing and normalization. | Function to standardize image size and scale values. |
\(label \rightarrow int, int \rightarrow label\) | Mappings between categorical labels and integer indices. | Used to convert emotion names into numerical classes and vice versa. |
\(C_1, C_2\) | Feature maps from the first and second convolution layers. | Intermediate feature-map outputs of the convolution layers. |
P | MaxPooling output. | Reduced feature map dimension after pooling. |
\(D_1, D_3\) | Dropout layers output. | Outputs after dropout regularization, used to reduce overfitting. |
F (Flattened) | Flattened vector before dense layers. | Linear array from multidimensional feature maps. |
\(D_2\) | Dense layer output before the second dropout. | Fully connected layer output with ReLU activation. |
\(N_1, N_2\) | Number of neurons in Dense layers 1 and 2. | Defines layer sizes in the fully connected parts of the CNN. |
O | Final output prediction after softmax. | Probability distribution over emotion classes. |
B | Batch size. | Number of images processed per training batch. |
E (epochs) | Number of complete training cycles over the dataset. | One epoch corresponds to one full pass of the model over the training data. |
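Several of the symbols above describe routine preprocessing steps: rescaling \(X_{train}\) and \(X_{test}\) by 255, the \(label \rightarrow int\) / \(int \rightarrow label\) mappings, and the \(flip(\cdot)\) augmentation. A minimal sketch of these steps is shown below; the emotion label names and image shape are hypothetical placeholders, not taken from the paper's dataset \(D_E\):

```python
import numpy as np

# Hypothetical emotion classes -- the actual labels come from the
# emotion dataset D_E described in the table above.
labels = ["angry", "fearful", "neutral"]

# label -> int and int -> label mappings, as listed in the table.
label_to_int = {lab: i for i, lab in enumerate(labels)}
int_to_label = {i: lab for lab, i in label_to_int.items()}

# Placeholder batch of grayscale images (batch, height, width, channels);
# the true input size is whatever P_d resizes images to.
rng = np.random.default_rng(0)
X_train = rng.integers(0, 256, size=(4, 48, 48, 1)).astype("float32")

# X_train_rescaled: normalize pixel values into [0, 1] by dividing by 255.
X_train_rescaled = X_train / 255.0

# flip(.): horizontal flip as one augmentation example (mirror the width axis).
X_train_flipped = X_train_rescaled[:, :, ::-1, :]
```

In practice the shear, zoom, and flip augmentations would be applied randomly per batch (e.g. by an image-augmentation pipeline) rather than deterministically as sketched here.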