Table 5 Overall performance comparison of various models.
| Model | Precision | Recall | F1-Score | Accuracy | Key strengths |
|---|---|---|---|---|---|
| ConvoseqNet (Proposed) | 0.84 | 0.84 | 0.84 | 0.84 | Excels at capturing complex language patterns through its CNN-LSTM architecture, achieving the highest overall performance. |
| MetaFusionNetwork (Proposed) | 0.71 | 0.73 | 0.70 | 0.73 | Combines ConvoseqNet features with a random forest, blending interpretability with classification power at solid accuracy. |
| Conventional LSTM | 0.72 | 0.74 | 0.73 | 0.73 | Effective at modeling sequential data, offering strong performance on sentiment analysis tasks. |
| Conventional CNN | 0.68 | 0.67 | 0.68 | 0.69 | Good at local feature extraction, but limited by its inability to model sequential dependencies. |
| K-Nearest Neighbors (KNN) | 0.56 | 0.62 | 0.57 | 0.62 | Simple to implement; moderate recall, serving as a baseline for more complex models. |
| Decision Tree Classifier | 0.58 | 0.56 | 0.57 | 0.56 | Highly interpretable, providing useful feature-importance insights. |
| Naive Bayes | 0.60 | 0.17 | 0.08 | 0.17 | Simple and fast, and scales well to large datasets, but struggles with complex, nuanced language, as reflected in its low recall and F1-score. |
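For reference, metrics of the kind reported in Table 5 can be computed directly from a model's predictions. The sketch below uses scikit-learn; the label arrays and the weighted averaging scheme are assumptions for illustration (recall equals accuracy in every row of the table, which is consistent with weighted averaging, but the paper does not state the scheme), not the study's actual data or pipeline.

```python
# Minimal sketch of computing the Table 5 metrics from predictions.
# y_true and y_pred are illustrative placeholders, not the study's data.
from sklearn.metrics import accuracy_score, precision_recall_fscore_support

y_true = [0, 1, 2, 1, 0, 2, 1, 0]   # ground-truth sentiment labels (hypothetical)
y_pred = [0, 1, 2, 2, 0, 2, 1, 1]   # model predictions (hypothetical)

# Weighted averaging accounts for class imbalance across sentiment labels;
# this averaging choice is an assumption, not stated in the source.
precision, recall, f1, _ = precision_recall_fscore_support(
    y_true, y_pred, average="weighted", zero_division=0
)
accuracy = accuracy_score(y_true, y_pred)

print(f"Precision: {precision:.2f}  Recall: {recall:.2f}  "
      f"F1-Score: {f1:.2f}  Accuracy: {accuracy:.2f}")
```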