Abstract
A major criticism of AI development is its lack of transparency, such as inadequate documentation and traceability in design and decision-making processes, leading to adverse outcomes including discrimination, lack of inclusivity and representation, and breaches of legal regulations. Underserved populations, in particular, are disproportionately affected by these design decisions. Furthermore, traditional social science techniques such as interviews, focus groups, and surveys struggle to adequately capture user needs and expectations in the digital era, due to their inherent limitations in deliberation, consensus-building, and providing consistent insights. We developed a democratic decision framework utilizing a Decentralized Autonomous Organization (DAO) to enable underserved groups to deliberate and reach consensus on key AI issues. To assess our proposed democratic decision mechanism, we conducted a case study on updating an AI model specification based on diverse stakeholders' input. We focus on reducing stereotypical biases in text-to-image systems, particularly gender bias in image generation from text prompts. We designed and experimented with various governance configurations, including decision aggregation schemes and decision power, to examine how democratic processes could guide updates to an AI model. Through a 2 × 2 experimental design, we tested two aggregation schemes (ranked vs. quadratic) and two decision power distributions (equal vs. 20/80 differential) in a randomized online experiment (n=177) with participants from the Global South and people with disabilities, to study how the varying governance mechanisms impact people's perceptions of the decision-making processes and the resulting AI model specification.
Our results indicate that despite their diverse backgrounds, participants converged during deliberation on several aspects, including user control over image generation, multiple output options for user selection, and the social appropriateness and accuracy of generated images. Our study underscores the importance of appropriate governance mechanisms in democratic decision-making for AI alignment. Notably, the combination of the quadratic preference aggregation method, which gives minorities more voice, and equal decision power distribution was perceived as a fairer and more democratic approach.
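The paper does not publish the aggregation code itself; as an illustrative sketch of the two schemes compared in the 2 × 2 design, quadratic aggregation (where a voter's effective support for an option grows with the square root of the credits spent on it, so intense minority preferences register without dominating) can be contrasted with a ranked, Borda-style scheme. Function names and the credit-budget convention here are hypothetical, not taken from the study's platform.

```python
import math


def quadratic_aggregate(ballots):
    """Quadratic aggregation: each ballot maps options to voice credits spent;
    an option's effective votes from one voter = sqrt(credits spent on it)."""
    totals = {}
    for credits_by_option in ballots:
        for option, credits in credits_by_option.items():
            totals[option] = totals.get(option, 0.0) + math.sqrt(credits)
    return totals


def ranked_aggregate(rankings):
    """Ranked (Borda-count) aggregation: with n options, the top-ranked option
    earns n-1 points, the next n-2, and so on down to 0."""
    totals = {}
    for ranking in rankings:
        n = len(ranking)
        for position, option in enumerate(ranking):
            totals[option] = totals.get(option, 0.0) + (n - 1 - position)
    return totals


# A minority voter who feels strongly about option B can concentrate credits
# on it; under quadratic aggregation this registers as sqrt-scaled support.
quadratic_result = quadratic_aggregate([{"A": 16, "B": 9}, {"A": 4}])
ranked_result = ranked_aggregate([["A", "B", "C"], ["B", "A", "C"]])
```

A differential (e.g. 20/80) power distribution could be layered on top by multiplying each ballot's contribution by that voter's share of total decision power before summing.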
Data availability
Research data are available at https://osf.io/q6snh/files/osfstorage?view_only=fb9fd559c9d9472bad1ae7d81dfea3b8
Code availability
Code related to the democratic platform can be found at https://github.com/TanusreeSharma/Inclusive.AI-DAO.
References
Sambasivan, N. et al. Everyone wants to do the model work, not the data work: Data cascades in high-stakes AI. In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems, CHI '21 (Association for Computing Machinery, New York, NY, USA, 2021). https://doi.org/10.1145/3411764.3445518.
Brundage, M. et al. Toward Trustworthy AI Development: Mechanisms for Supporting Verifiable Claims. arXiv preprint arXiv:2004.07213 (2020).
Suresh, H. & Guttag, J. A framework for understanding sources of harm throughout the machine learning life cycle. In Equity and Access in Algorithms, Mechanisms, and Optimization, EAAMO '21 (Association for Computing Machinery, New York, NY, USA, 2021). https://doi.org/10.1145/3465416.3483305.
Crawford, K. & Paglen, T. The politics of training sets for machine learning. Excavating AI (2019).
Harvey, A. & LaPlace, J. Expo. AI. (2021).
Northcutt, C. G., Athalye, A. & Mueller, J. Pervasive label errors in test sets destabilize machine learning benchmarks. arXiv preprint arXiv:2103.14749 (2021).
Vincent, J. Transgender YouTubers had their videos grabbed to train facial recognition software. The Verge.
Luccioni, A. S. et al. A framework for deprecating datasets: Standardizing documentation, identification, and communication. In 2022 ACM Conference on Fairness, Accountability, and Transparency, FAccT '22, 199–212 (Association for Computing Machinery, New York, NY, USA, 2022). https://doi.org/10.1145/3531146.3533086.
Bigham, J. P. & Carrington, P. Learning from the front: People with disabilities as early adopters of AI. Proceedings of the 2018 HCIC Human-Computer Interaction Consortium (2018).
Morris, M. R. Ai and accessibility. Commun. ACM 63, 35–37. https://doi.org/10.1145/3356727 (2020).
Park, J. S., Bragg, D., Kamar, E. & Morris, M. R. Designing an online infrastructure for collecting ai data from people with disabilities. In Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, 52–63 (2021).
Zhou, C. et al. Lima: Less is more for alignment. Adv. Neural Inf. Process. Syst. 36, 55006 (2024).
Ouyang, L. et al. Training language models to follow instructions with human feedback. Adv. Neural Inf. Process. Syst. 35, 27730–27744 (2022).
Schulman, J., Wolski, F., Dhariwal, P., Radford, A. & Klimov, O. Proximal policy optimization algorithms. arXiv preprint arXiv:1707.06347 (2017).
Rafailov, R. et al. Direct preference optimization: Your language model is secretly a reward model. Adv. Neural Inf. Process. Syst. 36, 53728 (2024).
Aher, G. V., Arriaga, R. I. & Kalai, A. T. Using large language models to simulate multiple humans and replicate human subject studies. In International Conference on Machine Learning, 337–371 (PMLR, 2023).
Miotto, M., Rossberg, N. & Kleinberg, B. Who is gpt-3? an exploration of personality, values and demographics. arXiv preprint arXiv:2209.14338 (2022).
Kaddour, J. et al. Challenges and applications of large language models. arXiv preprint arXiv:2307.10169 (2023).
Stangl, A., Shiroma, K., Xie, B., Fleischmann, K. R. & Gurari, D. Visual content considered private by people who are blind. In The 22nd International ACM SIGACCESS Conference on Computers and Accessibility, 1–12 (2020).
Ostrom, E. Beyond markets and states: Polycentric governance of complex economic systems. Am. Econ. Rev. 100, 641–672 (2010).
Weber, R. H. Realizing a new global cyberspace framework. Normative Foundations and Guiding Principles (2015).
Sharma, T. et al. Unpacking how decentralized autonomous organizations (DAOS) work in practice. arXiv preprint arXiv:2304.09822 (2023).
Benkler, Y., Shaw, A. & Hill, B. M. Peer production: A form of collective intelligence. Handbook of Collective Intelligence. 175 (2015).
Lalley, S. P. & Weyl, E. G. Quadratic voting: How mechanism design can radicalize democracy. In AEA Papers and Proceedings, vol. 108, 33–37 (American Economic Association 2014 Broadway, Suite 305, Nashville, TN 37203, 2018).
Weyl, E. G., Ohlhaver, P. & Buterin, V. Decentralized society: Finding web3’s soul. Available at SSRN 4105763 (2022).
Zhang, B. & Zhou, H.-S. Brief announcement: Statement voting and liquid democracy. In Proceedings of the ACM Symposium on Principles of Distributed Computing, 359–361 (2017).
Young, M. et al. Participation versus scale: Tensions in the practical demands on participatory AI. First Monday https://doi.org/10.5210/fm.v29i4.13642 (2024).
Hall, P. A. & Taylor, R. C. Political science and the three new institutionalisms. Political Stud. 44, 936–957 (1996).
Gerber, M., Bächtiger, A., Fiket, I., Steenbergen, M. & Steiner, J. Deliberative and non-deliberative persuasion: Mechanisms of opinion formation in Europolis. Eur. Union Politics 15, 410–429 (2014).
Horowitz, D. L. Electoral systems: A primer for decision makers. J. Democracy 14, 115 (2003).
Posner, E. A. & Weyl, E. G. Quadratic voting and the public good: Introduction. Public Choice 172, 1–22 (2017).
Shen, H. et al. Towards bidirectional human-ai alignment: A systematic review for clarifications, framework, and future directions, (2024). arXiv:2406.09264.
Calo, R. Artificial intelligence policy: A primer and roadmap. UCDL Rev. 51, 399 (2017).
Gasser, U. & Almeida, V. A. A layered model for AI governance. IEEE Internet Comput. 21, 58–62 (2017).
Scherer, M. U. Regulating artificial intelligence systems: Risks, challenges, competencies, and strategies. Harv. JL & Tech. 29, 353 (2015).
Lee, M. K. et al. Webuildai: Participatory framework for algorithmic governance. Proc. ACM Hum. -Comput. Interact. 3, 1–35 (2019).
Fan, J. & Zhang, A. X. Digital juries: A civics-oriented approach to platform governance. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems, 1–14 (2020).
Lee, D., Goel, A., Aitamurto, T. & Landemore, H. Crowdsourcing for participatory democracies: Efficient elicitation of social choice functions. In Proceedings of the AAAI Conference on Human Computation and Crowdsourcing 2, 133–142 (2014).
Erdélyi, O. J. & Goldsmith, J. Regulating artificial intelligence: Proposal for a global solution. In Proceedings of the 2018 AAAI/ACM Conference on AI, Ethics, and Society, 95–101 (2018).
Cihon, P. Standards for AI Governance: International Standards to Enable Global Coordination in AI Research & Development. Future of Humanity Institute. University of Oxford. 340–342 (2019).
Maas, M. M. Aligning AI regulation to sociotechnical change. Oxford Handbook on AI Governance (Oxford University Press, 2022 forthcoming) (2021).
Wallach, W. & Marchant, G. E. An Agile Ethical/Legal Model for the International and National Governance of AI and Robotics. Association for the Advancement of Artificial Intelligence (2018).
Schmidt, F. A. Crowdsourced production of ai training data: How human workers teach self-driving cars how to see. Tech. Rep., Working Paper Forschungsförderung (2019).
Zheng, C. et al. Competent but Rigid: Identifying the Gap in Empowering AI to Participate Equally in Group Decision-Making. In Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems, 1–19, (2023), https://doi.org/10.1145/3544548.3581131.
Zaremba, W. Democratic inputs to AI. https://openai.com/index/democratic-inputs-to-ai/ (2023). Accessed 04 Jun 2024.
Rousseau, J.-J. The Social Contract (1762). Londres (1964).
Dahl, R. Democracy and Its Critics (Yale University Press, New Haven & London, 1989).
Landemore, H. Democratic Reason: Politics, Collective Intelligence, and the Rule of the Many (Princeton University Press, 2012).
De Montesquieu, C. Montesquieu. The Spirit of the Laws (Cambridge University Press, 1989).
Park, P. S., Goldstein, S., O’Gara, A., Chen, M. & Hendrycks, D. Ai deception: A survey of examples, risks, and potential solutions. arXiv preprint arXiv:2308.14752 (2023).
Artificial Intelligence in Society. OECD (2019).
West, S. M., Whittaker, M. & Crawford, K. Discriminating systems. AI Now 1–33 (2019).
Zhang, B. & Dafoe, A. Us public opinion on the governance of artificial intelligence. In Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 187–193 (2020).
Caughey, D. & Warshaw, C. Policy preferences and policy change: Dynamic responsiveness in the American states, 1936–2014. Am. Polit. Sci. Rev. 112, 249–266 (2018).
Gutmann, A. & Thompson, D. F. Why Deliberative Democracy? (Princeton University Press, 2004).
Faraj, S. & Xiao, Y. Coordination in fast-response organizations. Manag. Sci. 52, 1155–1169 (2006).
Fish, R. S., Kraut, R. E. & Leland, M. D. Quilt: A collaborative tool for cooperative writing. In Proceedings of the ACM SIGOIS and IEEECS TC-OA 1988 Conference on Office Information Systems, 30–37 (1988).
Stokols, D., Misra, S., Moser, R. P., Hall, K. L. & Taylor, B. K. The ecology of team science: Understanding contextual influences on transdisciplinary collaboration. Am. J. Prev. Med. 35, S96–S115 (2008).
Ovadya, A. et al. Toward democracy levels for AI. arXiv preprint arXiv:2411.09222 (2024).
Newberry, T. & Ord, T. The Parliamentary Approach to Moral Uncertainty (University of Oxford, Future of Humanity Institute, 2021).
Bakker, M. et al. Fine-tuning language models to find agreement among humans with diverse preferences. Adv. Neural Inf. Process. Syst. 35, 38176–38189 (2022).
OpenAI. Democratic inputs to AI grant program: Lessons learned and implementation plans. https://openai.com/index/democratic-inputs-to-ai-grant-program-update/ (2024). Accessed 07 May 2025.
Sharma, T. et al. Future of algorithmic organization: Large-scale analysis of Decentralized Autonomous Organizations (DAOs). arXiv preprint arXiv:2410.13095 (2024).
Ahmad, S. T. et al. VaxGuard: A multi-generator, multi-type, and multi-role dataset for detecting LLM-generated vaccine misinformation. arXiv preprint arXiv:2503.09103 (2025).
Xu, C. et al. MMDT: Decoding the trustworthiness and safety of multimodal foundation models. arXiv preprint arXiv:2503.14827 (2025).
Potter, Y., Lai, S., Kim, J., Evans, J. & Song, D. Hidden persuaders: LLMs’ political leaning and their influence on voters. arXiv preprint arXiv:2410.24190 (2024).
Fisher, J. et al. Political neutrality in AI is impossible-but here is how to approximate it. arXiv preprint arXiv:2503.05728 (2025).
Introducing BigLaw Bench – harvey.ai. https://www.harvey.ai/blog/introducing-biglaw-bench. Accessed 10 May 2025.
Edenberg, E. & Wood, A. Disambiguating algorithmic bias: From neutrality to justice. In Proceedings of the 2023 AAAI/ACM Conference on AI, Ethics, and Society, 691–704 (2023).
Norris, P. Public Sentinel: News media and governance reform (World Bank Publications, 2009).
Tsai, L. L. et al. Generative AI for Pro-democracy Platforms (MIT, 2024).
Seering, J. Reconsidering self-moderation: The role of research in supporting community-based models for online content moderation. Proc. ACM Hum. -Comput. Interact. 4, 1–28 (2020).
Fishkin, J. S. & Luskin, R. C. The quest for deliberative democracy. In Democratic Innovation, 31–42 (Routledge, 2003).
Costanza-Chock, S., Raji, I. D. & Buolamwini, J. Who audits the auditors? recommendations from a field scan of the algorithmic auditing ecosystem. In Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 1571–1583 (2022).
Cunningham, J. et al. The application of distributed autonomous organization governance mechanisms to civic medical data management. IET Blockchain 4, 507–525 (2024).
Ter-Minassian, L. Democratizing AI governance: Balancing expertise and public participation. arXiv preprint arXiv:2502.08651 (2025).
Snapshot documentation. https://docs.snapshot.org/. Accessed 2024.
Eberhardt, J. & Tai, S. On or off the blockchain? Insights on off-chaining computation and data. In European Conference on Service-Oriented and Cloud Computing, 3–15 (Springer, 2017).
Alawadi, A., Kakabadse, N., Kakabadse, A. & Zuckerbraun, S. Decentralized autonomous organizations (DAOS): Stewardship talks but agency walks. J. Bus. Res. 178, 114672 (2024).
Confidential DAO and governance systems using Fully Homomorphic Encryption - Zama – zama.ai. https://www.zama.ai/solutions/confidential-dao-using-fully-homomorphic-encryption (2025). Accessed 08 May 2025.
Haque, A. B., Islam, A. N., Hyrynsalmi, S., Naqvi, B. & Smolander, K. GDPR compliant blockchains-a systematic literature review. IEEE Access 9, 50593–50606 (2021).
Potter, Y., Corren, E., Garrido, G. M., Hoofnagle, C. & Song, D. The gap between data rights ideals and reality. arxiv (2025).
The hidden danger of re-centralization in blockchain platforms – brookings.edu. https://www.brookings.edu/articles/the-hidden-danger-of-re-centralization-in-blockchain-platforms. Accessed 08 May 2025.
Kelsey, J. Implementing a sortition-based DAO for policymaking and AI governance. Sat 5, 05 (2023).
Yang, Q., Steinfeld, A., Rosé, C. & Zimmerman, J. Re-examining whether, why, and how human-ai interaction is uniquely difficult to design. In Proceedings of the 2020 Chi Conference on Human Factors in Computing Systems, 1–13 (2020).
Kaufmann, N., Schulze, T. & Veit, D. More than fun and money: Worker motivation in crowdsourcing. A study on Mechanical Turk. AISEL (2011).
Theodorou, L. et al. Disability-first dataset creation: Lessons from constructing a dataset for teachable object recognition with blind and low vision data collectors. In The 23rd International ACM SIGACCESS Conference on Computers and Accessibility, 1–12 (2021).
Sharma, T. et al. Disability-first design and creation of a dataset showing private visual information collected with people who are blind. In Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems, 1–15 (2023).
Lindberg, S. I., Coppedge, M., Gerring, J. & Teorell, J. V-dem: A new way to measure democracy. J. Democracy 25, 159–169 (2014).
Fritsch, R., Müller, M. & Wattenhofer, R. Analyzing voting power in decentralized governance: Who controls DAOs?. Blockchain Res. Appl. 5, 100208 (2024).
Lalley, S. P., Weyl, E. G. et al. Quadratic voting. Available at SSRN (2016).
Arnold, B. C. Pareto Distribution. Wiley StatsRef: Statistics Reference Online 1–10 (2014).
Acknowledgements
We thank OpenAI for supporting this research through the “Democratic Input to AI Grant 2023.” We would also like to thank Teddy Lee, Tyna Eloundou, and Aviv Ovadya for their feedback during the experiment design.
Author information
Authors and Affiliations
Contributions
Tanusree Sharma designed and led the experiment and contributed to all phases of the project, including the study design protocol, running the experiment, analyzing participant data (user satisfaction with the overall process and factors considered by users), application design, and paper writing. Yujin Potter ran the statistical data analysis for “Outcome of Democratic Governance Decision Process” and worked with the lead author on the data analysis plan. Jongwon Park contributed to the development of the application's blockchain tech stack. Yiren Liu ran the quantitative analysis of the text data shown in Fig. 9. Yun Huang contributed to the study protocol design and guided the lead author on it. Sunny Liu contributed to the study protocol design. Dawn Song contributed to the study protocol design. Jeff Hancock contributed to the study protocol design. Yang Wang contributed the main ideation of this project, guided the lead author on the study protocol design, and contributed to paper writing.
Corresponding author
Ethics declarations
Competing interests
The author(s) declare no competing interests.
Additional information
Publisher’s note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Supplementary Information
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
About this article
Cite this article
Sharma, T., Potter, Y., Park, J. et al. Democratic governance through DAO-based deliberation and voting for inclusive decision making in AI models. Sci Rep (2026). https://doi.org/10.1038/s41598-026-40180-8
Received:
Accepted:
Published:
DOI: https://doi.org/10.1038/s41598-026-40180-8


