Table 6 Exploring the predictive relationship between facial expressions and depression.
From: A systematic review on automated clinical depression diagnosis
| Author | Main findings |
|---|---|
Li et al.102 | A deep residual regression model that evaluates depression levels using image-enhancement techniques can reduce the influence of external factors on the image, significantly improving prediction performance. |
Wang et al.62 | Facial analysis is effective in automated depression diagnosis with an accuracy of 78%, recall of 80%, and F1 score of 79%. |
Hao et al.103 | A bidirectional LSTM network with an attention mechanism achieved an accuracy of 82% and F1 score of 81%. |
Hunter et al.63 | Individuals with depressive symptomatology showed a different eye-tracking pattern when processing emotional expressions. |
Jan et al.61 | A linear regression method applied to the AVEC 2014 dataset can predict Beck Depression Inventory (BDI) scores from natural facial expressions. |
Mohan et al.183 | The proposed LSTM achieved higher accuracy than the baseline models. |
Lee et al.184 | An accessible depression diagnosis system using real-time object recognition and facial expressions obtained with a smartphone camera. |
Liu et al.104 | Proposed a Part-and-Relation Attention Network for depression recognition, which outperforms state-of-the-art models with smaller prediction errors and higher stability. |
Hamid et al.105 | Designed a model for depression detection using electroencephalogram (EEG) and facial features. A hybrid model is proposed, outperforming existing diagnosis systems. |
Nasir et al.106 | A multimodal classification system for depression detection using geometrical facial features. The proposed visual feature sets show potential for robust and knowledge-driven depression classification. |
Dai et al.107 | A multimodal model with high performance on the AVEC 2013, AVEC 2014, and Emotion-Gait datasets. They concluded that the visual model is accurate. |
Shangguan et al.108 | An aggregation method that achieved comparable performance to 3D models with fewer parameters. The study suggests that video stimuli can be used for automatic depression detection. |
Sumali et al.185 | Significant differences were observed in facial landmark features (e.g., average right nose (speed), median left ear top (speed), and left pupil-right pupil positions) between healthy and depressive volunteers. |
Dadiz et al.186 | The uniform local binary pattern extracted from videos for depression detection focuses on specific facial areas. |
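The final entry relies on the uniform local binary pattern (LBP), a common texture descriptor for facial analysis. As a rough illustration of the idea (not the authors' implementation, and assuming the standard 3×3, 8-neighbour configuration), a pure-Python sketch might look like:

```python
# Illustrative sketch of the uniform local binary pattern (LBP) texture
# descriptor; the 3x3 / 8-neighbour setup is an assumption, not taken
# from the cited study.

def lbp_code(img, y, x):
    """8-bit LBP code for the pixel at (y, x): each neighbour that is
    >= the centre pixel contributes a 1 bit, clockwise from top-left."""
    c = img[y][x]
    neighbours = [
        img[y - 1][x - 1], img[y - 1][x], img[y - 1][x + 1],
        img[y][x + 1], img[y + 1][x + 1], img[y + 1][x],
        img[y + 1][x - 1], img[y][x - 1],
    ]
    code = 0
    for bit, n in enumerate(neighbours):
        if n >= c:
            code |= 1 << bit
    return code

def is_uniform(code):
    """A pattern is 'uniform' if its circular bit string has at most
    two 0/1 transitions (e.g. 00111100 is uniform, 01010101 is not)."""
    bits = [(code >> i) & 1 for i in range(8)]
    transitions = sum(bits[i] != bits[(i + 1) % 8] for i in range(8))
    return transitions <= 2

def uniform_lbp_histogram(img):
    """Histogram of uniform LBP codes over the image interior; all
    non-uniform patterns share a single bin, as in the standard scheme."""
    hist = {}
    for y in range(1, len(img) - 1):
        for x in range(1, len(img[0]) - 1):
            code = lbp_code(img, y, x)
            key = code if is_uniform(code) else "non-uniform"
            hist[key] = hist.get(key, 0) + 1
    return hist
```

In facial-analysis pipelines such histograms are typically computed per facial region (eyes, mouth) and concatenated into a feature vector for a classifier, which matches the paper's observation that the descriptor focuses on specific facial areas.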