Perspective

The impact of advanced AI systems on democracy

Abstract

Advanced artificial intelligence (AI) systems capable of generating humanlike text and multimodal content are now widely available. Here we ask what impact this will have on the democratic process. We consider the consequences of AI for citizens’ ability to make educated and competent choices about political representatives and issues (epistemic impacts). We explore how AI might be used to destabilize or support the mechanisms, including elections, by which democracy is implemented (material impacts). Finally, we discuss whether AI will strengthen or weaken the principles on which democracy is based (foundational impacts). The arrival of new AI systems clearly poses substantial challenges for democracy. However, we argue that AI systems also offer new opportunities to educate and learn from citizens, strengthen public discourse, help people to find common ground, and reimagine how democracies might work better.

Fig. 1: Randomized controlled trial estimates of political persuasion with LLMs.

Author information

Contributions

All authors contributed to conceptualizing, writing, editing and revising this manuscript.

Corresponding authors

Correspondence to Christopher Summerfield or Matthew Botvinick.

Ethics declarations

Competing interests

The following authors are full- or part-time remunerated employees of commercial developers of AI technology: M. Bakker, I.G., N.M., M.H.T. and M. Botvinick (Google DeepMind); E.D. and D.G. (Anthropic); T.E. (OpenAI); and A.P. (Fundamental AI Research (FAIR), Meta). C.S. and K.H. are part-time remunerated government employees (at the UK AI Security Institute). D.S. and S.H. are employees of the non-profit organization Collective Intelligence Project. A.O. is an employee of the AI & Democracy Foundation. E.S. is an employee of Demos. None of these employers had any role in the preparation of the manuscript or the decision to publish. The remaining authors declare no competing interests.

Peer review

Peer review information

Nature Human Behaviour thanks the anonymous reviewers for their contribution to the peer review of this work.

Additional information

Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.

About this article

Cite this article

Summerfield, C., Argyle, L.P., Bakker, M. et al. The impact of advanced AI systems on democracy. Nat Hum Behav (2025). https://doi.org/10.1038/s41562-025-02309-z

