Against “Democratizing AI”

Abstract

This paper argues against the call to democratize artificial intelligence (AI). Several authors demand to reap purported benefits that rest in direct and broad participation: In the governance of AI, more people should be more involved in more decisions about AI—from development and design to deployment. Against this call, the paper presents five objections to broadening and deepening public participation in the governance of AI. The paper begins by reviewing the literature and carving out a set of claims associated with the call to “democratize AI”. It then argues that such a democratization of AI (1) rests on weak grounds, because it does not answer to a demand for legitimization, (2) is redundant in that it overlaps with existing governance structures, (3) is resource-intensive, which leads to injustices, (4) is morally myopic and thereby creates popular oversights and moral problems of its own, and, finally, (5) is neither theoretically nor practically the right kind of response to the injustices that animate the call. The paper concludes by suggesting that AI should be democratized not by broadening and deepening participation but by increasing the democratic quality of the administrative and executive elements of collective decision making. In a slogan: The question is not so much whether AI should be democratized but how.

Notes

  1. These three ideas—fairness, freedom and equality—mean different things. By “freedom” I understand the capacity to see one’s will carried out and, more generally, a robust congruence between one’s actions and the conditions of one’s life on the one hand and the authentic expression of one’s values on the other. By “equality” I understand, first, the tenet that each individual has the same moral worth and, second, that this tenet finds its expression in how individuals relate to each other—that they relate to each other as equals. By “fairness” I mean an impartial appraisal of the reasons that each individual could offer on matters of common concern.

  2. Let there be no doubt: Those who emphasize the participatory and populist elements of democracy are in no way to blame for, or even complicit in, this authoritarian misappropriation.

  3. I argue for this latter claim elsewhere: Proposals that nominally aim to improve democracy often hollow out its values (Himmelreich 2022).

  4. Strictly speaking, algorithms are abstract objects, like theorems and arithmetic operations. It is not obvious how this vast class of abstract objects—and not just their implementations—is supposed to give rise to ethical problems.

  5. A related set of claims is defended for the practice of scientific research across the board by Kitcher (2011).

  6. Because of this condition to increase or introduce direct democratic powers, the so-called Moral Machine experiment is not a form of democratizing AI. Some proponents of such surveys—and they are usually just surveys and not experiments—argue that public attitudes about ethics and technology must be studied, identified, and articulated to be “cognizant of public morality” (Awad et al. 2018). The idea is that the attitudes thus elicited are to constrain policymaking because, otherwise, “societal push-back will drastically slow down the adoption of intelligent machines” (Awad et al. 2020). This approach is flawed (Jaques 2019; Himmelreich 2020). The overall idea contrasts with the call to “democratize AI” because it (1) aims mainly to inform and (2) sees individuals as subjects in an investigation. By contrast, the demand to democratize AI seeks to empower individuals, to endow marginalized groups with novel ways to make their voices heard (although, as I argue here, in the wrong way), and to give citizens greater direct influence—collective power—over decisions.

  7. This proposal by Mills (2019) can be separated into a proposal about organizational function (the data trust) and a proposal about the trust’s mode of governance (deeper participation). My argument concerns only the latter.

  8. Kitcher writes (2011, 127): “Current scientific research neglects the interests of a vast number of people, except insofar as their interests coincide with those of people in the affluent world.”

  9. Kitcher also writes (2011, 126): “Privatization of scientific research will probably make matters worse.” Given that much research on AI is privatized and proprietary, the problems that animate Kitcher are amplified in the case of AI.

  10. Admittedly, some of the relevant experts in cases of AI injustice are those who suffer the injustice. It is their expertise that must find its way into our deliberation and collective decision-making. But I disagree that broadening and deepening participation is the right way of doing so.

  11. On the tension between participation and deliberation see Cohen (2009, sec. 5).

  12. The distinction between intrinsic and instrumental value conflates a distinction about values’ location (intrinsic vs. extrinsic) with a distinction about their relations (final vs. instrumental). See Korsgaard (1983).

  13. Questions about the legitimacy of the state: Why should you respect what the state asks you to do? Why can some demands of the state be enforced, even coercively? Questions about justifying democracy: Why should you value, and perhaps choose, democracy over alternative systems?

  14. The distinction between autonomy as sovereignty and autonomy as non-alienation in these terms is due to Enoch (2017, 2020).

  15. Many associations are governed democratically. Labor unions, recreational clubs, or church parish administrations are examples. In addition to exhibiting triggers of legitimatization requirements, these associations can also be seen as essential parts of a democratic society. In other words, they might be part of a state democracy and part of meeting legitimization requirements that are triggered by the state.

  16. This assumes, of course, that there are feasible alternatives that have fewer of the costs outlined above.

  17. I here argue against the second assumption of Sclove’s argument presented earlier.

  18. Structural injustices are systematic violations of particularly important moral claims or liberties, the maintenance of which is explained by non-individual entities such as cultures, norms, or practices.

  19. Some argue, however, that the distinction between procedure and substance collapses (Cohen 1993, 1997). If so, there might be no such thing as a purely proceduralist conception of democracy.

  20. The question, of course, is whether a rejection of material equality is reasonable.

  21. Some deliberative democrats want public reasons. That is, they demand that this justification should be based on reasons that everyone can accept. Roughly, the same justification should be offered to everyone. By contrast, others argue that each individual can be offered a different justification as long as each can be offered some reasons. Roughly, they contend that different reasons can be offered to different people.

  22. Free expression is the broader category; it includes, for example, artistic expression.

References

Acknowledgements

In writing this paper I benefited greatly from early discussions with Iason Gabriel, who suggested objections, their names, and relevant literature. I thank Tina Nabatchi for helping me with references to literature on participation and collaborative governance. I am also grateful for the discussions and the feedback I received at the exploratory seminar on The Ethics of Technology: beyond Privacy and Safety at the Radcliffe Institute for Advanced Study at Harvard University, the Embedding AI in Society Symposium at NC State University, the brownbag seminar of the Information Society Project at Yale Law School, and the workshop Business Ethics in the 6ix.

Author information

Corresponding author

Correspondence to Johannes Himmelreich.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

About this article

Cite this article

Himmelreich, J. Against “Democratizing AI”. AI & Soc 38, 1333–1346 (2023). https://doi.org/10.1007/s00146-021-01357-z
