Table 7 Comparison of various approaches for mental health detection and sentiment analysis.
From: Multi-task opinion enhanced hybrid BERT model for mental health analysis
Citation | Dataset | Problem statement | Proposed methodology | Reported performance
---|---|---|---|---
Kokane et al.42 | Twitter dataset, Reddit dataset | Detecting mental illness using NLP Transformers on social media. Analyzing mental health status through text analysis on Twitter and Reddit. | DistilBERT | 91% (Twitter), 84% (Reddit) |
Chen et al.43 | Hotel review datasets | Traditional CNN ignores contextual semantic information. Traditional RNN has information memory loss and vanishing gradient. | BERT + CNN + BiLSTM + Attention | 92.35% |
Selva Mary et al.44 | User-generated content from Twitter, Facebook, and Instagram | Detecting depression signs in social media content. Enhancing early intervention and support for mental health challenges. | BiLSTM | 98.5%
Sowbarnigaa et al.45 | English language social media postings. Shared task introduced by ACL 2022. | Detecting signs of depression from social media postings. Utilizing sentiment analysis to categorize depression indicators. | CNN-LSTM | Precision: 93% |
Atapattu et al.46 | EmoMent corpus (2802 Facebook posts from Sri Lanka and India) | Detect mental health issues from text using NLP techniques. Develop emotion-annotated mental health corpus from South Asian countries. | RoBERTa | F1: 0.76, Macro F1: 0.77 |
Wu et al.47 | NLPCC 2020 Shared Task 2 MAMS dataset | Re-formalize aspect-based sentiment analysis (ABSA) as a multi-aspect sentiment analysis task. Address the complexity of the MAMS dataset with Transformer-based Multi-aspect Modeling. | RoBERTa-TMM ensemble | F1: 85.24% (ATSA), F1: 79.41% (ACSA)
Our work | Mental health dataset | Classifying mental health-related text into status categories and sentiment labels. | Opinion-BERT: a multi-input neural network combining token embeddings, CNN, BiGRU, Transformer blocks, and attention mechanisms | Sentiment: 96.25%, Status: 93.74%
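The Opinion-BERT row above lists an attention mechanism that pools sequence features (e.g. BiGRU outputs) into a single vector before classification. As a rough illustration only, not the authors' implementation, a minimal numpy sketch of such additive attention pooling might look like:

```python
import numpy as np

def attention_pool(h, w, v):
    """Additive attention pooling: score each timestep, softmax, weighted sum.

    h: (T, d) sequence of hidden states (e.g. BiGRU outputs)
    w: (d, d) projection matrix, v: (d,) scoring vector -- these would be
    learned parameters in a real model; here they are placeholders.
    """
    scores = np.tanh(h @ w) @ v           # (T,) unnormalized attention scores
    alpha = np.exp(scores - scores.max())
    alpha /= alpha.sum()                  # softmax over timesteps
    return alpha @ h                      # (d,) attended summary vector

# Toy usage with random states in place of real BiGRU features.
rng = np.random.default_rng(0)
T, d = 12, 8                              # toy sequence length / feature size
h = rng.normal(size=(T, d))
pooled = attention_pool(h, rng.normal(size=(d, d)), rng.normal(size=d))
print(pooled.shape)                       # (8,)
```

In a multi-input hybrid like the one described, a vector of this kind from each branch (CNN, BiGRU, Transformer) would typically be concatenated and fed to separate status and sentiment output heads.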