Table 1 Comparison of previous studies.
| Ref. | Year | Dataset | Learning type | Proposed methodology | Accuracy score (%) |
|---|---|---|---|---|---|
| | 2020 | PAN competition-based dataset (English tweets) | Machine Learning, Deep Learning | BERT-based | 83 |
| | 2021 | TweepFake dataset | Deep Learning | RoBERTa | 90 |
| | 2021 | Scraped-Tweets dataset | Machine Learning, Deep Learning | SBi+LSTM | 92 |
| | 2022 | Twitter social bot dataset | Deep Learning | GANBOT | 95 |
| | 2022 | Publicly available Kaggle dataset | Deep Learning | VGG16 CNN | 94 |
| | 2023 | Self-made dataset | Deep Learning | CNN | 93 |
| | 2023 | FakeNewsNet dataset | Machine Learning, Deep Learning | SVM | 93 |
| | 2023 | ChatGPTquery and ChatGPTrephrase datasets | Machine Learning | DistilBERT (transformer-based model) | 79 |
| | 2023 | COVID-19 and vaccination datasets | Deep Learning | GRU | 93 |
| | 2023 | CACD, Caltech datasets | Deep Learning | DLBAL-MS | 95, 86 |
| | 2024 | RFF, RFFD datasets | Deep Learning | Shallow ViT | 92 |
| | 2024 | FF++, DFDC-p datasets | Deep Learning | HCiT | 96 |
| | 2024 | IMDB, Twitter US Airline, Sentiment140 datasets | Deep Learning | RoBERTa-BiLSTM | 92 |
| | 2025 | Sentiment140 and IMDb datasets | Transformer-based models | Hybrid (BERT, GPT-2, RoBERTa, XLNet, and DistilBERT) | 94, 95 |