Table 1. Comparison of selected fake news detection models by key features and accuracy.
| Authors (Ref.) | Model Type | Core Technique | Dataset Used | Accuracy/Performance | Limitation | Novelty Level |
|---|---|---|---|---|---|---|
| Almandouh et al. (2024) | Deep Learning Ensemble | Ensemble DL | Custom/Not stated | High | Risk of overfitting | High |
| Praseed et al. (2024) | Survey | Graph Neural Networks (GNN) | Multiple | Not applicable | No benchmarking | Medium |
| Liu et al. (2024) | Graph Fusion | Inter-modal fusion + GNN | Fakeddit | Strong F1-score | High complexity | High |
| Sudhakar & Kaliyamurthie (2023) | Ensemble ML | Voting-based classifiers | Not stated | Good | Feature selection bias | Medium |
| Song et al. (2022) | Dynamic GNN | Time-aware GNN | BuzzFeed | High accuracy | Sparsity in graph | High |
| Wang et al. (2022) | Multimodal Transformer | Visual + text transformer fusion | Fakeddit | Robust | GPU-intensive | High |
| Jing et al. (2023) | Fusion DL | Progressive multimodal fusion | Twitter/PolitiFact | High | Complex training | High |
| Xu et al. (2023) | Multi-view GCN | Graph convolution over views | GossipCop | Consistent accuracy | Requires large graph construction | Medium |
| Luo & Xie (2023) | Multi-task GNN | Joint learning of tasks | GossipCop | High accuracy | Task-level overfitting | High |
| Zhang & Zhao (2023) | Survey | GNN architectures | Multiple | Not applicable | No experimentation | Medium |
| Fu et al. (2023) | Survey | Method trends | Various | Not applicable | Generalized view | Medium |
| Wei & Zhang (2023) | Hybrid DL | Transformer + GCN fusion | LIAR/Twitter | High | Integration complexity | High |
| Zhang & Li (2022) | CNN-RNN Hybrid | Sequential + local features | Not stated | Good | Limited generalizability | Medium |
| Li et al. (2022) | Transformer | Attention-focused transformer | LIAR | Strong results | Resource-heavy | High |
| Patel & Gupta (2022) | Graph + Text Fusion | Combined textual and graph data | GossipCop | Accurate | Feature-selection intensive | Medium |
| Xu et al. (2022) | Attention DL | Focused neural attention | Not stated | Good | Limited model interpretability | Medium |
| Jiang & Liu (2022) | Survey | DL techniques overview | Broad datasets | Not applicable | Conceptual only | Medium |
| Lee & Kim (2022) | GNN | Social media graph inference | Not stated | Accurate | Dependency on social structure | Medium |
| Yang & Lee (2022) | Hybrid DL | DNN + CNN integration | Not stated | Stable | Complex design | Medium |
| Zhang & Chen (2022) | Attention DL | Neural attention mechanisms | Not stated | Strong | Input dependency | Medium |
| Roumeliotis et al. (2025) | CNN vs. LLM | Comparative analysis | Multiple | Varies per model | Evaluation-focused only | Medium |
| Papageorgiou et al. (2025) | LLM + DNN | Hybrid DL | Fakeddit | Strong | High training costs | High |
| Singhania et al. (2023) | Hierarchical Attention | 3HAN deep attention levels | LIAR | Very high accuracy | Complexity of levels | High |
| Alzahrani & Aljuhani (2024) | Embedding + DL | Word embedding with DL | ISOT/LIAR | High | Vocabulary limitations | Medium |
| Harris et al. (2024) | Meta Review | Framework + dataset review | Various | Not applicable | Broad scope | Medium |
| Dixit et al. (2023) | Optimized CNN | Lévy flight + CNN | LIAR | High | Algorithm tuning required | High |
| Folino et al. (2024) | Active Learning + LLM | Pre-trained + AL pipeline | LIAR/Twitter | Energy-efficient | Limited to labeled samples | High |
| Abduljaleel & Ali (2024) | DL Review | Multimodal DL approaches | Multiple | Varies | Broad review | Medium |
| Kikon & Bania (2024) | ML + Sentiment | Classifier with sentiment scoring | Not stated | Good | Mixed feature signals | Medium |
| Zamani et al. (2023) | Rumor Detection DL | DL for rumor + fake classification | Twitter/News | Stable detection | Deployment complexity | Medium |
| GETE (proposed model) | Graph-Augmented Transformer Ensemble Framework | Graph-integrated transformer ensemble for robust, scalable fake news detection | LIAR/FakeNews | High accuracy & scalability; robust to emerging threats | Minimal (minor overhead) | High |
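Several entries in Table 1 (the voting-based classifiers of Sudhakar & Kaliyamurthie and the ensemble branches of the proposed GETE framework) rest on the same combination step: per-branch fake/real probabilities merged by weighted soft voting. The following is a minimal sketch of that step only; the function names, branch scores, and weights are illustrative placeholders, not values or code from any of the cited works.

```python
def soft_vote(branch_probs, weights=None):
    """Combine per-branch P(fake) scores into one ensemble score.

    branch_probs: list of floats in [0, 1], one per model branch
                  (e.g. a text-transformer branch and a graph branch).
    weights:      optional per-branch weights; defaults to uniform.
    """
    if weights is None:
        weights = [1.0] * len(branch_probs)
    if len(weights) != len(branch_probs):
        raise ValueError("one weight per branch is required")
    total = sum(weights)
    # Weighted average of branch probabilities (soft voting).
    return sum(p * w for p, w in zip(branch_probs, weights)) / total


def classify(branch_probs, weights=None, threshold=0.5):
    """Label an article 'fake' when the ensemble score crosses the threshold."""
    return "fake" if soft_vote(branch_probs, weights) >= threshold else "real"


if __name__ == "__main__":
    # Hypothetical scores: text branch gives 0.80, graph branch gives 0.60.
    score = soft_vote([0.80, 0.60])            # uniform weights
    label = classify([0.80, 0.60], [2.0, 1.0]) # text branch weighted higher
    print(score, label)
```

In practice the branch weights would be tuned on a validation split; hard (majority) voting is the discrete analogue, but soft voting preserves each branch's confidence, which matters when one modality (text vs. graph) is more reliable for a given article.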