Abstract
This paper argues against the call to democratize artificial intelligence (AI). Several authors demand broad and direct public participation in order to reap its purported benefits: in the governance of AI, more people should be more involved in more decisions about AI—from development and design to deployment. This paper opposes this call and presents five objections against broadening and deepening public participation in the governance of AI. It begins by reviewing the literature and carving out a set of claims associated with the call to “democratize AI”. It then argues that such a democratization of AI (1) rests on weak grounds, because it does not answer a demand for legitimization, (2) is redundant in that it overlaps with existing governance structures, (3) is resource-intensive, which leads to injustices, (4) is morally myopic and thereby creates popular oversights and moral problems of its own, and, finally, (5) is neither theoretically nor practically the right kind of response to the injustices that animate the call. The paper concludes by suggesting that AI should be democratized not by broadening and deepening participation but by increasing the democratic quality of the administrative and executive elements of collective decision making. In a slogan: the question is not so much whether AI should be democratized but how.
Notes
These three ideas—fairness, freedom and equality—mean different things. By “freedom” I understand the capacity to see one’s will carried out and, more generally, a robust congruence between one’s actions and the conditions of one’s life on the one hand and the authentic expression of one’s values on the other. By “equality” I understand, first, the tenet that each individual has the same moral worth and, second, that this tenet finds its expression in how individuals relate to each other—that they relate to each other as equals. By “fairness” I mean an impartial appraisal of the reasons that each individual could offer on matters of common concern.
Let there be no doubt: those who emphasize the participatory and populist elements of democracy are in no way to blame for, or even complicit in, this authoritarian misappropriation.
I argue for this latter claim elsewhere: proposals that nominally aim to improve democracy often hollow out its values (Himmelreich 2022).
Strictly speaking, algorithms are abstract objects, like theorems and arithmetic operations. It is not obvious how this vast class of abstract objects—and not just their implementations—are supposed to give rise to ethical problems.
A related set of claims is defended for the practice of scientific research across the board by Kitcher (2011).
Because of this condition to increase or introduce direct democratic powers, the so-called Moral Machine experiment is not a form of democratizing AI. Some proponents of such surveys—and they are usually just surveys and not experiments—argue that public attitudes about ethics and technology must be studied, identified, and articulated to be “cognizant of public morality” (Awad et al. 2018). The idea is that the public attitudes thus elicited are to constrain policymaking, because, otherwise, “societal push-back will drastically slow down the adoption of intelligent machines” (Awad et al. 2020). This approach is flawed (Jaques 2019; Himmelreich 2020). The overall idea contrasts with the call to “democratize AI” because it (1) aims mainly to inform and (2) sees individuals as subjects in an investigation. By contrast, the demand to democratize AI seeks to empower individuals, to endow marginalized groups with novel ways to make their voices heard (although in the wrong way, as I argue here), and to give citizens greater direct influence—collective power—over decisions.
This proposal by Mills (2019) can be distinguished into a proposal about organizational function (the data trust) and a proposal about the trust’s mode of governance (deeper participation). My argument is only about the latter.
Kitcher writes (2011, 127): “Current scientific research neglects the interests of a vast number of people, except insofar as their interests coincide with those of people in the affluent world.”
Kitcher also writes (2011, 126): “Privatization of scientific research will probably make matters worse.” Given that much research on AI is privatized and proprietary, the problems that animate Kitcher are amplified in the case of AI.
Admittedly, some of the relevant experts in cases of AI injustice are those who suffer the injustice. It is their expertise that must find its way into our deliberation and collective decision-making. But I disagree that broadening and deepening participation is the right way of doing so.
On the tension between participation and deliberation see Cohen (2009, sec. 5).
The distinction between intrinsic and instrumental value conflates a distinction about values’ location (intrinsic vs. extrinsic) with a distinction about their relations (final vs. instrumental). See Korsgaard (1983).
Questions about the legitimacy of the state: Why should you respect what the state asks you to do? Why can some demands of the state be enforced, even coercively? Questions about justifying democracy: Why should you value, and perhaps choose, democracy over alternative systems?
Many associations are governed democratically. Labor unions, recreational clubs, or church parish administrations are examples. In addition to exhibiting triggers of legitimatization requirements, these associations can also be seen as essential parts of a democratic society. In other words, they might be part of a state democracy and part of meeting legitimization requirements that are triggered by the state.
This assumes, of course, that there are feasible alternatives that have fewer of the costs outlined above.
I here argue against the second assumption of Sclove’s argument presented earlier.
Structural injustices are systematic violations of particularly important moral claims or liberties, violations whose maintenance is explained by non-individual entities such as cultures, norms, or practices.
The question, of course, is whether a rejection of material equality is reasonable.
Some deliberative democrats want public reasons. That is, they demand that this justification should be based on reasons that everyone can accept. Roughly, the same justification should be offered to everyone. By contrast, others argue that each individual can be offered a different justification as long as each can be offered some reasons. Roughly, they contend that different reasons can be offered to different people.
Free expression is the broader category; it includes, for example, artistic expression.
References
Abizadeh A (2007) Cooperation, pervasive impact, and coercion: on the scope (not site) of distributive justice. Philos Public Aff 35(4):318–358
Anderson E (2017) Private government: how employers rule our lives (and why we don’t talk about it). Princeton University Press, Princeton
Angwin J, Larson J, Mattu S and Kirchner L (2016) Machine bias. ProPublica. https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing. Accessed 24 July 2018
Awad E, Dsouza S, Kim R, Schulz J, Henrich J, Shariff A, Bonnefon J-F, Rahwan I (2018) The moral machine experiment. Nature. https://doi.org/10.1038/s41586-018-0637-6
Awad E, Dsouza S, Bonnefon J-F, Shariff A, Rahwan I (2020) Crowdsourcing moral machines. Commun ACM 63(3):48–55. https://doi.org/10.1145/3339904
Bartels LM (2002) Beyond the running tally: partisan bias in political perceptions. Polit Behav 24(2):117–150. https://doi.org/10.1023/A:1021226224601
Bell DA (2016) The China model: political meritocracy and the limits of democracy. Princeton University Press, Princeton
Blake M (2008) International justice. In: Zalta EN (ed) The Stanford encyclopedia of philosophy, Winter 2008 edn. http://plato.stanford.edu/archives/win2008/entries/international-justice/
Brennan J (2016) Against democracy. Princeton University Press, Princeton
Brock G (2021) Global justice. In: Zalta EN (ed) The Stanford encyclopedia of philosophy, Winter 2021 edn. Metaphysics Research Lab, Stanford University. https://plato.stanford.edu/archives/win2021/entries/justice-global/
Brooks T (2020) The Oxford handbook of global justice. Oxford University Press, Oxford
Broome J (2012) Climate matters: ethics in a warming world. W. W. Norton & Company, New York
Cammaerts B, Mansell R (2020) Digital platform policy and regulation: toward a radical democratic turn. Int J Commun 14(22):135–154
Carugati F (2020) A council of citizens should regulate algorithms. Wired, December 6, 2020. https://www.wired.com/story/opinion-a-council-of-citizens-should-regulate-algorithms/. Accessed 17 Aug 2020
Checkoway B (1981) The politics of public hearings. J Appl Behav Sci 17(4):566–582. https://doi.org/10.1177/002188638101700411
Cohen J (1989) Deliberation and democratic legitimacy. In: Hamlin AP, Pettit P (eds) The good polity: normative analysis of the state. Blackwell, Oxford, pp 17–34
Cohen J (1993) Pluralism and proceduralism. Chicago-Kent Law Rev 3(1994):589–618
Cohen J (1997) Procedure and substance in deliberative democracy. In: Bohman J, Rehg W (eds) Deliberative democracy: essays on reason and politics. The MIT Press, Cambridge Mass, pp 407–437
Cohen GL (2003) Party over policy: the dominating impact of group influence on political beliefs. J Pers Soc Psychol 85(5):808–822. https://doi.org/10.1037/0022-3514.85.5.808
Cohen J (2009) Reflections on deliberative democracy. In: Christiano T, Christman J (eds) Contemporary debates in political philosophy. Wiley, Chichester, pp 247–263. https://doi.org/10.1002/9781444310399
Crawford K (2021) The Atlas of AI: power, politics, and the planetary costs of artificial intelligence. Yale University Press, New Haven
Curato N, Dryzek JS, Ercan SA, Hendriks CM, Niemeyer S (2017) Twelve key findings in deliberative democracy research. Daedalus 146(3):28–38. https://doi.org/10.1162/DAED_a_00444
Dahl RA (1989) Democracy and its critics. Yale University Press, New Haven
Ditto PH, Pizarro DA, Tannenbaum D (2009) Motivated moral reasoning. In: Bartels DM, Bauman CW, Skitka LJ, Medin DL (eds) Psychology of learning and motivation: moral judgment and decision making. Academic Press, San Diego, pp 307–338
Dryzek JS, Bächtiger A, Chambers S, Cohen J, Druckman JN, Felicetti A, Fishkin JS et al (2019) The crisis of democracy and the science of deliberation. Science 363(6432):1144–1146. https://doi.org/10.1126/science.aaw2694
Enoch D (2017) Hypothetical consent and the value(s) of autonomy. Ethics 128(1):6–36. https://doi.org/10.1086/692939
Enoch D (2020) False consciousness for liberals, part I: consent, autonomy, and adaptive preferences. Philos Rev 129(2):159–210. https://doi.org/10.1215/00318108-8012836
Gabriel I (2018) The problem with yuppie ethics. Utilitas 30(1):32–53. https://doi.org/10.1017/S0953820817000024
Gilens M, Page BI (2014) Testing theories of American politics: elites, interest groups, and average citizens. Perspect Polit 12(3):564–581. https://doi.org/10.1017/S1537592714001595
Goldfarb A, Tucker C (2019) Digital economics. J Econ Lit 57(1):3–43. https://doi.org/10.1257/jel.20171452
Gould CC (2019) How democracy can inform consent: cases of the internet and bioethics. J Appl Philos 36(2):173–191. https://doi.org/10.1111/japp.12360
Haidt J (2012) The righteous mind: why good people are divided by politics and religion. Penguin, New York
Heap SH, Hollis M, Lyons B, Sugden R, Weale A (1992) The theory of choice: a critical guide. Blackwell, Oxford
Heath J (2020) The machinery of government: public administration and the liberal state. Oxford University Press, Oxford
Himmelreich J (2020) Ethics of technology needs more political philosophy. Commun ACM 63(1):33–35. https://doi.org/10.1145/3339905
Himmelreich J (2022) Should we automate democracy? In: Véliz C (ed) The Oxford handbook of digital ethics. Oxford University Press, Oxford
Hulme D (2016) Should rich nations help the poor? Polity, Cambridge
Irvin RA, Stansbury J (2004) Citizen participation in decision making: Is it worth the effort? Public Adm Rev 64(1):55–65. https://doi.org/10.1111/j.1540-6210.2004.00346.x
Jaques AE (2019) Why the moral machine is a monster. University of Miami School of Law. https://robots.law.miami.edu/2019/wp-content/uploads/2019/03/MoralMachineMonster.pdf. Accessed 22 Oct 2021
Jurowetzki R, Hain D, Mateos-Garcia J, Stathoulopoulos K (2021) The privatization of AI research(-ers): causes and potential consequences—from university-industry interaction to public research brain-drain? http://arxiv.org/abs/2102.01648
Kahan DM (2012) Ideology, motivated reasoning, and cognitive reflection: an experimental study. SSRN Scholarly Paper ID 2182588. Rochester: Social Science Research Network. https://doi.org/10.2139/ssrn.2182588
Kahneman D (2011) Thinking, fast and slow. Farrar, Straus and Giroux, New York
Kitcher P (2011) Science in a democratic society. Prometheus Books, Amherst
Korsgaard CM (1983) Two distinctions in goodness. Philos Rev 92(2):169–195. https://doi.org/10.2307/2184924
Lakoff G (2008) The political mind: a cognitive scientist’s guide to your brain and its politics. Penguin, New York
Lenz GS (2013) Follow the leader?: How voters respond to politicians’ policies and performance. University of Chicago Press, Chicago
Lodge M, Taber CS (2013) The rationalizing voter. Cambridge University Press, Cambridge
Lord CG, Ross L, Lepper MR (1979) Biased assimilation and attitude polarization: the effects of prior theories on subsequently considered evidence. J Pers Soc Psychol 37(11):2098–2109. https://doi.org/10.1037/0022-3514.37.11.2098
Mayer R (2001) Strategies of justification in authoritarian ideology. J Political Ideol 6(2):147–168. https://doi.org/10.1080/13569310120053830
McQuillan D (2018) People’s councils for ethical machine learning. Soc Media Soc 4(2):2056305118768303. https://doi.org/10.1177/2056305118768303
Miller D (2009) Democracy’s domain. Philos Public Aff 37(3):201–228. https://doi.org/10.1111/j.1088-4963.2009.01158.x
Mills CW (2013) Retrieving Rawls for racial justice?: A critique of Tommie Shelby. Crit Philos Race 1(1):1–27. https://doi.org/10.5325/critphilrace.1.1.0001
Mills CW (2015) Racial equality. In: Hull G (ed) The equal society: essays on equality in theory and practice. Lexington Books, Lanham, pp 43–72
Mills CW (2018) I—Racial justice. Aristot Soc Supplement 92(1):69–89. https://doi.org/10.1093/arisup/aky002
Mills S (2019) Who owns the future? Data trusts, data commons, and the future of data ownership. SSRN Electron J. https://doi.org/10.2139/ssrn.3437936
Moellendorf D (2015) Climate change justice: climate change justice. Philos Compass 10(3):173–186. https://doi.org/10.1111/phc3.12201
Nabatchi T, Leighninger M (2015) Public participation for 21st century democracy. Wiley, Hoboken
O’Neill O (2016) Justice across boundaries: Whose obligations? Cambridge University Press, Cambridge
Plato (2008) Republic (trans: Waterfield R). Oxford University Press, Oxford
Pogge T (2005) World poverty and human rights. Ethics Int Aff 19(1):1–7. https://doi.org/10.1111/j.1747-7093.2005.tb00484.x
Pogge T (2008) World poverty and human rights: cosmopolitan responsibilities and reforms. Polity, Cambridge
Rahwan I (2018) Society-in-the-loop: programming the algorithmic social contract. Ethics Inf Technol 20(1):5–14. https://doi.org/10.1007/s10676-017-9430-8
Rawls J (1971) A theory of justice, revised edn 1999. Harvard University Press, Cambridge
Rawls J (1993) Political liberalism. Columbia University Press, New York
Rawls J (2001) Justice as fairness: a restatement. Harvard University Press, Cambridge
Rejali D (2009) Torture and democracy. Princeton University Press, Princeton
Ronzoni M (2009) The global order: a case of background injustice? A practice-dependent account. Philos Public Aff 37(3):229–256. https://doi.org/10.1111/j.1088-4963.2009.01159.x
Sclove R (1995) Democracy and technology. Guilford Press, New York
Shelby T (2003) Race and social justice: Rawlsian considerations. Fordham Law Rev 72:1697
Shue H (1980) Basic rights: subsistence, affluence, and U.S. foreign policy. Princeton University Press, Princeton
Shue H (2020) Basic rights: subsistence, affluence, and U.S. foreign policy, 40th anniversary edn. Princeton University Press, Princeton
Shughart WF, Thomas DW (2019) Interest groups and regulatory capture. In: Congleton RD, Grofman B, Voigt S (eds) The Oxford handbook of public choice, vol 1. Oxford University Press, Oxford, pp 584–603. https://doi.org/10.1093/oxfordhb/9780190469733.013.29
Singer P (1972) Famine, affluence, and morality. Philos Public Aff 1(3):229–243
Sloane M, Moss E, Awomolo O and Forlano L (2020) Participation is not a design fix for machine learning. http://arxiv.org/abs/2007.02423
Stanley ML, Henne P, Yang BW, De Brigard F (2020) Resistance to position change, motivated reasoning, and polarization. Polit Behav 42(3):891–913. https://doi.org/10.1007/s11109-019-09526-z
Sunstein CR (2002) The law of group polarization. J Polit Philos 10(2):175–195
Sunstein CR (2006) Infotopia: how many minds produce knowledge. Oxford University Press, Oxford
Taylor RS (2009) Rawlsian affirmative action. Ethics 119(3):476–506. https://doi.org/10.1086/598170
Tutt A (2017) An FDA for algorithms. Admin Law Rev 69(1):83–124
Tversky A, Kahneman D (1974) Judgment under uncertainty: heuristics and biases. Sci New Ser 185(4157):1124–1131
Unger P (1996) Living high and letting die: our illusion of innocence. Oxford University Press, Oxford
Weaver VM, Prowse G (2020) Racial authoritarianism in U.S. democracy. Science 369(6508):1176–1178. https://doi.org/10.1126/science.abd7669
Westen D (2008) The political brain: the role of emotion in deciding the fate of the nation. PublicAffairs, New York
Wong P-H (2020) Democratizing algorithmic fairness. Philos Technol 33:225–244. https://doi.org/10.1007/s13347-019-00355-w
Young IM (1990) Justice and the politics of difference. Princeton University Press, Princeton
Zimmermann A, Di Rosa E and Kim H (2020) Technology can’t fix algorithmic injustice. Boston Rev. http://bostonreview.net/science-nature-politics/annette-zimmermann-elena-di-rosa-hochan-kim-technology-cant-fix-algorithmic. Accessed 09 Jan 2020
Acknowledgements
In writing this paper I benefited greatly from early discussions with Iason Gabriel, who suggested objections, their names, and relevant literature. I thank Tina Nabatchi for helping me with references to the literature on participation and collaborative governance. I am also grateful for the discussions and the feedback I received at the exploratory seminar on The Ethics of Technology: beyond Privacy and Safety at the Radcliffe Institute for Advanced Study at Harvard University, the Embedding AI in Society Symposium at NC State University, the brownbag seminar of the Information Society Project at Yale Law School, and the workshop Business Ethics in the 6ix.
Cite this article
Himmelreich, J. Against “Democratizing AI”. AI & Soc 38, 1333–1346 (2023). https://doi.org/10.1007/s00146-021-01357-z