Abstract
Generative Artificial Intelligence (AI) tools are increasingly deployed across social media platforms, yet their implications for user behavior and experience remain understudied, particularly regarding two critical dimensions: (1) how AI tools affect the behaviors of content producers in a social media context, and (2) how content generated with AI assistance is perceived by users. To fill this gap, we conduct a controlled experiment with a representative sample of 680 U.S. participants in a realistic social media environment. Participants are randomly assigned to small discussion groups of five individuals each, and each group is placed in one of five experimental conditions: a control condition or one of four treatment conditions, each employing a distinct AI intervention (chat assistance, conversation starters, feedback on comment drafts, or reply suggestions). Our findings highlight a complex duality: some AI tools increase user engagement and the volume of generated content, but at the same time decrease the perceived quality and authenticity of the discussion and introduce a negative spillover effect on conversations. Based on our findings, we propose four design principles and recommendations aimed at social media platforms, policymakers, and stakeholders: ensuring transparent disclosure of AI-generated content, designing tools with user-focused personalization, incorporating context sensitivity to account for both topic and user intent, and prioritizing intuitive user interfaces. These principles aim to guide the ethical and effective integration of generative AI into social media.
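To make the assignment scheme concrete, the following minimal Python sketch illustrates how 680 participants could be split into five-person groups allocated across the five conditions. This is our own illustration, not the authors' experimental platform code (which was built on Empirica); the condition labels and the round-robin allocation are assumptions for exposition only.

    import random

    # Condition labels are assumed for illustration; see the paper for the actual design.
    CONDITIONS = ["control", "chat_assistance", "conversation_starters",
                  "feedback", "reply_suggestions"]
    GROUP_SIZE = 5

    def assign(participant_ids, seed=0):
        """Shuffle participants, split them into groups of five, and
        allocate each group to a condition in round-robin order."""
        rng = random.Random(seed)
        ids = list(participant_ids)
        rng.shuffle(ids)
        groups = [ids[i:i + GROUP_SIZE] for i in range(0, len(ids), GROUP_SIZE)]
        return {i: {"condition": CONDITIONS[i % len(CONDITIONS)], "members": g}
                for i, g in enumerate(groups)}

    # Example: 680 participants yield 136 groups of five, roughly balanced
    # across the five conditions (27 or 28 groups per condition).
    assignment = assign(range(680))

Under this sketch, randomization happens at the group level, so all five members of a group experience the same intervention; any between-condition comparison is then a comparison between groups, not between individuals within a group.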
Data availability
The datasets used and/or analysed during the current study are available from the corresponding author on reasonable request.
Code availability
The code to run the experiment and fully reproduce the analyses described in this work is publicly available as archived releases. The experimental platform code is available at Zenodo (DOI: 10.5281/zenodo.18539373). The analysis code to reproduce figures and statistical tests is available at Zenodo (DOI: 10.5281/zenodo.18537773).
Acknowledgements
We thank the Network, Data, and Society (NERDS) group at IT University of Copenhagen, the Blablablab, and the Romero group at the School of Information, University of Michigan, for valuable feedback during internal testing.
Funding
We acknowledge the support from the Carlsberg Foundation through the COCOONS project (CF21-0432) and the National Science Foundation through Grant No. IIS-2143529.
Author information
Authors and Affiliations
Contributions
A.G.M., D.R., D.J., and L.M.A. designed the research. A.G.M. developed the platform and collected and analyzed the data. A.G.M., D.R., D.J., and L.M.A. wrote the paper.
Corresponding author
Ethics declarations
Competing Interests
The authors declare no competing interests.
Additional information
Publisher’s note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Supplementary Information
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License, which permits any non-commercial use, sharing, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if you modified the licensed material. You do not have permission under this licence to share adapted material derived from this article or parts of it. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by-nc-nd/4.0/.
About this article
Cite this article
Møller, A.G., Romero, D.M., Jurgens, D. et al. The impact of generative AI on social media: an experimental study. Sci Rep (2026). https://doi.org/10.1038/s41598-026-40110-8
Received:
Accepted:
Published:
DOI: https://doi.org/10.1038/s41598-026-40110-8