The impact of generative AI on social media: an experimental study
  • Article
  • Open access
  • Published: 17 February 2026


Anders Giovanni Møller1, Daniel M. Romero2,3,4, David Jurgens2,4 & Luca Maria Aiello1,5

Scientific Reports (2026)

We are providing an unedited version of this manuscript to give early access to its findings. Before final publication, the manuscript will undergo further editing. Please note there may be errors present which affect the content, and all legal disclaimers apply.

Subjects

  • Business and management
  • Cultural and media studies
  • Information systems and information technology
  • Mathematics and computing
  • Science, technology and society

Abstract

Generative Artificial Intelligence (AI) tools are increasingly deployed across social media platforms, yet their implications for user behavior and experience remain understudied, particularly along two critical dimensions: (1) how AI tools affect the behavior of content producers in a social media context, and (2) how content generated with AI assistance is perceived by users. To fill this gap, we conduct a controlled experiment with a representative sample of 680 U.S. participants in a realistic social media environment. Participants are randomly assigned to small discussion groups, each consisting of five individuals in one of five experimental conditions: a control group and four treatment groups, each employing a distinct AI intervention (Chat assistance, Conversation Starters, Feedback on comment drafts, or reply Suggestions). Our findings highlight a complex duality: some AI tools increase user engagement and the volume of generated content, but at the same time decrease the perceived quality and authenticity of discussion and introduce a negative spillover effect on conversations. Based on these findings, we propose four design principles and recommendations aimed at social media platforms, policymakers, and stakeholders: ensuring transparent disclosure of AI-generated content, designing tools with user-focused personalization, incorporating context sensitivity to account for both topic and user intent, and prioritizing intuitive user interfaces. These principles aim to guide an ethical and effective integration of generative AI into social media.
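The randomized design described above (680 participants, groups of five, one control and four treatment conditions) can be illustrated with a short sketch. This is a hypothetical illustration of such an assignment procedure, not the authors' actual implementation; their real platform code is archived on Zenodo, and the function and condition names here are ours.

```python
import random

# Illustrative condition names taken from the abstract.
CONDITIONS = ["Control", "Chat", "Conversation Starters", "Feedback", "Suggestions"]
GROUP_SIZE = 5

def assign_groups(participant_ids, seed=0):
    """Shuffle participants, partition them into groups of five,
    and assign each group to one of the five conditions round-robin."""
    rng = random.Random(seed)  # fixed seed only for reproducibility of the sketch
    ids = list(participant_ids)
    rng.shuffle(ids)
    groups = [ids[i:i + GROUP_SIZE] for i in range(0, len(ids), GROUP_SIZE)]
    return [(CONDITIONS[g % len(CONDITIONS)], members)
            for g, members in enumerate(groups)]

# 680 participants -> 136 groups of five spread across the five conditions.
assignments = assign_groups(range(680))
```

With 680 participants this yields 136 five-person groups; a real deployment would also need to balance demographics and handle dropouts, which this sketch omits.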

Data availability

The datasets used and/or analysed during the current study are available from the corresponding author on reasonable request.

Code availability

The code to run the experiment and fully reproduce the analyses described in this work is publicly available as archived releases. The experimental platform code is available at Zenodo (DOI: 10.5281/zenodo.18539373). The analysis code to reproduce figures and statistical tests is available at Zenodo (DOI: 10.5281/zenodo.18537773).


Acknowledgements

We thank the Network, Data, and Society (NERDS) group at IT University of Copenhagen, the Blablablab, and the Romero group at the School of Information, University of Michigan, for valuable feedback during internal testing.

Funding

We acknowledge the support from the Carlsberg Foundation through the COCOONS project (CF21-0432) and the National Science Foundation through Grant No. IIS-2143529.

Author information

Authors and Affiliations

  1. Data Science Section, IT University of Copenhagen, Rued Langgaards Vej 7, 2300, Copenhagen, Denmark

    Anders Giovanni Møller & Luca Maria Aiello

  2. School of Information, University of Michigan, 2200 Hayward Street, Ann Arbor, MI, 48109, USA

    Daniel M. Romero & David Jurgens

  3. Center for the Study of Complex Systems, University of Michigan, 500 Church Street, Ann Arbor, MI, 48109, USA

    Daniel M. Romero

  4. Computer Science and Engineering Division, University of Michigan, 2260 Hayward Street, Ann Arbor, MI, 48109, USA

    Daniel M. Romero & David Jurgens

  5. Pioneer Centre for AI, Øster Voldgade 3, 1350, Copenhagen, Denmark

    Luca Maria Aiello


Contributions

A.G.M., D.R., D.J., and L.M.A. designed the research. A.G.M. developed the platform and collected and analyzed the data. A.G.M., D.R., D.J., and L.M.A. wrote the paper.

Corresponding author

Correspondence to Anders Giovanni Møller.

Ethics declarations

Competing Interests

The authors declare no competing interests.

Additional information

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary Information

Supplementary Information.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License, which permits any non-commercial use, sharing, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if you modified the licensed material. You do not have permission under this licence to share adapted material derived from this article or parts of it. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by-nc-nd/4.0/.


About this article


Cite this article

Møller, A.G., Romero, D.M., Jurgens, D. et al. The impact of generative AI on social media: an experimental study. Sci Rep (2026). https://doi.org/10.1038/s41598-026-40110-8

Download citation

  • Received: 27 October 2025

  • Accepted: 10 February 2026

  • Published: 17 February 2026

  • DOI: https://doi.org/10.1038/s41598-026-40110-8


Keywords

  • Generative artificial intelligence
  • Human-computer interaction
  • Controlled experiment
  • Large language models