Scientific Reports
  • Article
  • Open access
  • Published: 03 March 2026
Democratic governance through DAO-based deliberation and voting for inclusive decision making in AI models

  • Tanusree Sharma1,
  • Yujin Potter3,
  • Jongwon Park2,
  • Yiren Liu2,
  • Yun Huang2,
  • Sunny Liu4,
  • Dawn Song3,
  • Jeff Hancock4 &
  • Yang Wang2

Scientific Reports (2026)


We are providing an unedited version of this manuscript to give early access to its findings. Before final publication, the manuscript will undergo further editing. Please note there may be errors present which affect the content, and all legal disclaimers apply.

Subjects

  • Computer science
  • Software

Abstract

A major criticism of AI development is its lack of transparency, such as inadequate documentation and traceability in its design and decision-making processes, leading to adverse outcomes including discrimination, lack of inclusivity and representation, and breaches of legal regulations. Underserved populations, in particular, are disproportionately affected by these design decisions. Furthermore, traditional social science techniques such as interviews, focus groups, and surveys struggle to adequately capture user needs and expectations in the digital era, owing to their inherent limitations in deliberation, consensus-building, and providing consistent insights. We developed a democratic decision framework that uses a Decentralized Autonomous Organization (DAO) to enable underserved groups to deliberate and reach consensus on key AI issues. To assess the proposed democratic decision mechanism, we conducted a case study on updating an AI model specification based on diverse stakeholders' input. We focus on reducing stereotypical biases in text-to-image systems, particularly gender bias in image generation from text prompts. We designed and experimented with various governance configurations, including decision aggregation schemes and decision power distributions, to examine how democratic processes could guide updates to an AI model. Through a 2 × 2 experimental design, we tested two aggregation schemes (ranked vs. quadratic) and two decision power distributions (equal vs. 20/80 differential) in a randomized online experiment (n = 177) with participants from the Global South and people with disabilities, to study how the varying governance mechanisms affect people's perceptions of the decision-making process and the resulting AI model specification.
Our results indicate that despite their diverse backgrounds, participants converged in deliberation on several aspects, including user control over image generation, multiple output options for user selection, and the social appropriateness and accuracy of generated images. Our study underscores the importance of appropriate governance in democratic decision-making for AI alignment. Notably, the combination of quadratic preference aggregation, which gives minorities more voice, with equal decision power distribution was perceived as the fairer and more democratic approach.
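The paper's exact ballot format and tally rules are not reproduced in this excerpt. As a rough illustration under common textbook definitions — quadratic voting counts the square root of credits a voter spends on an option, and a Borda-style count stands in here for the ranked scheme — the two aggregation schemes and a differential decision-power weighting might be sketched as follows (function names and the weighting interface are our assumptions, not the authors' implementation):

```python
import math

def quadratic_tally(ballots, weights=None):
    """Quadratic aggregation: each ballot maps option -> voice credits spent.
    Effective votes per option are sqrt(credits), optionally scaled by a
    per-voter decision-power weight (assumed interface, for illustration)."""
    weights = weights or [1.0] * len(ballots)
    totals = {}
    for ballot, w in zip(ballots, weights):
        for option, credits in ballot.items():
            totals[option] = totals.get(option, 0.0) + w * math.sqrt(credits)
    return totals

def borda_tally(rankings, weights=None):
    """Ranked (Borda-style) aggregation: with k options, first place on a
    ballot earns k-1 points, second place k-2, and so on."""
    weights = weights or [1.0] * len(rankings)
    totals = {}
    for ranking, w in zip(rankings, weights):
        k = len(ranking)
        for place, option in enumerate(ranking):
            totals[option] = totals.get(option, 0.0) + w * (k - 1 - place)
    return totals
```

A 20/80 differential decision-power condition could then be modeled by passing unequal `weights` — for example, giving the 20% of participants who hold elevated power 80% of the total weight — while the equal condition uses the default uniform weights.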


Data availability

Research data are available at https://osf.io/q6snh/files/osfstorage?view_only=fb9fd559c9d9472bad1ae7d81dfea3b8

Code availability

Code related to the democratic platform is available at https://github.com/TanusreeSharma/Inclusive.AI-DAO.


Acknowledgements

We thank OpenAI for supporting this research through the "Democratic Input to AI Grant 2023." We would also like to thank Teddy Lee, Tyna Eloundou, and Aviv Ovadya for their feedback during the experiment design.

Author information

Authors and Affiliations

  1. College of Information Science and Technology, Pennsylvania State University, University Park, 16802, USA

    Tanusree Sharma

  2. School of Information Sciences, University of Illinois at Urbana-Champaign, Champaign, 61820, USA

    Jongwon Park, Yiren Liu, Yun Huang & Yang Wang

  3. Department of Computer Science, University of California, Berkeley, 94720, USA

    Yujin Potter & Dawn Song

  4. College of Communication, Stanford University, Stanford, 94305, USA

    Sunny Liu & Jeff Hancock


Contributions

Tanusree Sharma designed and led the experiment and contributed to all phases of the project, including the study design protocol, running the experiment, analyzing participant data (user satisfaction with the overall process and factors considered by users), application design, and paper writing. Yujin Potter ran the statistical analysis for "Outcome of Democratic Governance Decision Process" and worked with the lead author on the data analysis plan. Jongwon Park contributed to the development of the application's blockchain tech stack. Yiren Liu ran the quantitative analysis of the text data presented in Fig. 9. Yun Huang contributed to the study protocol design and guided the lead author on it. Sunny Liu contributed to the study protocol design. Dawn Song contributed to the study protocol design. Jeff Hancock contributed to the study protocol design. Yang Wang contributed the main ideation of the project, guided the lead author on the study protocol design, and contributed to writing the paper.

Corresponding author

Correspondence to Tanusree Sharma.

Ethics declarations

Competing interests

The author(s) declare no competing interests.

Additional information

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary Information

Supplementary Information (PDF)

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

Reprints and permissions

About this article


Cite this article

Sharma, T., Potter, Y., Park, J. et al. Democratic governance through DAO-based deliberation and voting for inclusive decision making in AI models. Sci Rep (2026). https://doi.org/10.1038/s41598-026-40180-8


  • Received: 12 June 2024

  • Accepted: 10 February 2026

  • Published: 03 March 2026

  • DOI: https://doi.org/10.1038/s41598-026-40180-8


Keywords

  • Democratic
  • AI
  • Decentralized Autonomous Organizations (DAOs)
  • Voting schemes
  • Deliberation
