“We live in a real world. Come back to it.”

Senator Padmé Amidala

Many leading intellectuals, technologists, commentators, and ordinary people have in recent weeks become embroiled in a fiery debate (yet to hit the pages of scholarly journals) over the alleged need to press pause on the development of generative artificial intelligence (AI). Spurred by an open letter from the Future of Life Institute (FLI) calling for just such a pause, the debate has occasioned, at lightning speed, a large number of responses from a variety of sources pursuing an assortment of argumentative strategies. Not all of the respondents resist the call for a pause. For example, the Distributed AI Research Institute (DAIRI) has issued a statement claiming that while a pause is indeed desirable, the FLI’s focus on predicted existential risks to the exclusion of present-day concerns is misguided. The discussion shows no signs of abating.

I wish to raise an objection to both the FLI’s open letter and the statement by DAIRI (I will refer to the two collectively as “letters” or “open letters”). While this objection is, to my knowledge, novel in the context of the issues brought up by the two Institutes, I pretend to little originality in voicing it. Rather, I offer a simple (yet unappreciated) application of some insights from political philosophy and economics to show that the recommendations contained in both missives are severely underargued.

Despite differences in emphasis, the FLI and the DAIRI letters do not merely insist on pausing the commercial development of generative AI. Both forcefully argue for laws and regulations to combat the (predicted and actual) risks of letting the work on ever more powerful systems continue unimpeded.

The FLI letter begins with a litany of potential dangers that advanced generative AI may precipitate, from widespread technological unemployment to existential risks of unaligned superintelligence.

The authors say that “AI systems with human-competitive intelligence can pose profound risks to society and humanity, as shown by extensive research and acknowledged by top AI labs” (FLI Staff 2023, references omitted) and further claim that these risks will not be adequately managed in the current competitive climate. Instead, “recent months have seen AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one—not even their creators—can understand, predict, or reliably control” (ibid.). Consequently, a six-month (at least) pause is not all that is called for. In addition, “AI developers must work with policymakers to dramatically accelerate development of robust AI governance systems. These should at a minimum include: new and capable regulatory authorities dedicated to AI; oversight and tracking of highly capable AI systems … a robust auditing and certification ecosystem; …; robust public funding for technical AI safety research; and well-resourced institutions for coping with the dramatic economic and political disruptions (especially to democracy) that AI will cause” (ibid., emphasis added).

The DAIRI statement, in contrast, points to present-day complaints about how the industry operates as the basis for a fundamentally similar conclusion. The authors say: “The harms from so-called [sic] AI are real and present and follow from the acts of people and corporations deploying automated systems” (Gebru et al. 2023). Specifically, AI (so-called?) companies are guilty of “(1) worker exploitation and massive data theft to create products that profit a handful of entities, (2) the explosion of synthetic media in the world, which both reproduces systems of oppression and endangers our information ecosystem, and (3) the concentration of power in the hands of a few people which exacerbates social inequities” (ibid., references omitted).

Regardless of what prompts their respective concerns, however, the solutions DAIRI demands are very similar to those the FLI exhorts us to adopt. To wit, “What we need is regulation that enforces transparency. Not only should it always be clear when we are encountering synthetic media, but organizations building these systems should also be required to document and disclose the training data and model architectures. The onus of creating tools that are safe to use should be on the companies that build and deploy generative systems, which means that builders of these systems should be made accountable for the outputs produced by their products. The actions and choices of corporations must be shaped by regulation which protects the rights and interests of people” (ibid.).

Ignoring many important details, I will summarize the arguments in schematic form as follows:

(1) The unimpeded development of generative AI for commercial purposes poses serious threats to safety or else promotes exploitation, inequality, and wealth concentration.

(2) Regulation of the unimpeded development of generative AI will mitigate these threats.

(3) Therefore, there should be regulation of the unimpeded development of generative AI.

Most of the critical responses to the FLI’s open letter focus on undermining the first premise. In contrast, I challenge the second premise below.

Imagine, for starters, that we live in an ideal world, where people are always moral and always do what justice requires. Suppose that morality and justice require that the development of generative AI be stopped (or paused). Then it will be stopped (or paused): developers of generative AI are people, and people in the ideal world are moral and do what justice requires. For the same reason, in this ideal world, there is no need for regulators.

I dare suggest that no one on any side of the debate thinks we live in the ideal world. Whatever we may disagree on, we all concede that not everyone is moral and does what justice requires. People are selfish, prefer shortsighted personal gains to ensuring global safety, and have no qualms about exploiting others or perpetuating harmful stereotypes if doing so is in their interest. Left to their own devices and driven by the inescapable commercial logic of the pursuit of profit, AI developers lead the world to potential doom (or actual iniquities). Presumably, this is why we are in the situation we find ourselves in today.

This is also, presumably, why we need regulators (guided by experts, or expert regulators themselves) to step in and take control of the direction in which the technology develops, to “enforce… transparency [and] protect… the rights and interests of people” (ibid.); we need regulators that are “capable,” “robust,” publicly funded, and “well-resourced” (FLI Staff 2023). Once we have these systems in place, we will move a long way toward solving the problem. Or so both sets of authors argue.

To summarize: we need regulators because AI companies, engaged in a shortsighted race toward ever more powerful models and driven by profit, cannot be expected to do what justice requires. State power (presumably guided by the experts from prestigious think tanks and institutes) is the solution to safety risks, exploitation, systemic oppression, and wealth inequality.

But herein lies the rub. In the ideal world, remember, we didn’t need regulators at all, because ideal people are moral and just. But in the non-ideal world, where people are driven to act recklessly or unjustly in pursuit of self-interest, we also have to contend with non-ideal regulators, experts, politicians, and voters. Why? Because regulators, experts, politicians, and voters are people. And in the non-ideal world, people would prefer shortsighted personal gains to ensuring global safety and have no qualms about exploitation or structural oppression if they can be used for personal advantage. Left to their own devices and driven by the inescapable political logic of the pursuit of power, politicians, regulators, and voters would lead the world to potential doom (or actual iniquities).

What is more, capable regulatory agencies that hold substantial power to affect the operations of the generative AI industry would become prime targets for capture, lobbying, or other types of influence. After all, the profit-hungry, shortsighted executives of AI companies would seek to influence the operations of the regulator in as beneficial a direction as possible, safety and equity be damned.

Both letters, instead of grappling with these concerns, simply assume regulator imperfection away and expect that, whatever form the regulatory bodies take, they will achieve their intended purposes.

It seemed hopelessly naïve just a few minutes ago to think that we live in an ideal world, so hopelessly naïve, in fact, that it would be ludicrous to accuse any serious thinker of making such an error. But it seems almost equally naïve to assume that, though we live in a non-ideal world, we have recourse to the ideal regulator: the regulator we can count on to do what is right, achieve what is intended, pursue justice, and protect our interests. And yet, this is essentially what we find in both letters.

This way of thinking is by no means unique, nor is pointing out its flaws. As Jason Brennan (2016) puts it (in an introductory textbook, no less), “many economists, political scientists, and philosophers make this… mistake when they judge institutions. They complain about how ugly some institutions are in practice and then say we should go with their favored alternatives instead. But they fail to examine whether their favored alternatives are even uglier… It’s one thing to argue that, in principle, a well-informed and well-motivated government could correct a market failure. It’s another thing to argue that a real-life government will actually correct a market failure. … In the real world, we don’t get to stipulate governments are [well-informed and well-intentioned]. That could make all the difference in what we want real-life governments to do.” (2016, pp. 147–148).

The error identified by Brennan is in clear view in both Institutes’ letters.

The argument for regulation, in both the FLI and DAIRI versions, stands in need of what economists have called “behavioral symmetry” (Freiman 2017). We need to assume, absent reasons to the contrary, that people are people; that they have the same types of motivations across the board, in market and political settings alike. As Christopher Freiman puts it, “The customer at Target and the voter at the ballot box are quite literally the same person. So why give her different personalities when she’s shopping for soup and when she’s shopping for senators?” (ibid., p. 26).

We cannot simply assume that markets, governed by market incentives and populated by imperfect people, will lead us to doom, while state institutions, populated by the same kinds of people, will effectively achieve the goals of safety and equity.

Arriving at such a conclusion requires, ideally, a serious analytic effort. We can’t, of course, expect that from open letters. But one would hope for at least a sketch of an argument for why we can trust that regulators will be justice-pursuing while commercial actors will not be. Why will political institutions incentivize the pursuit of justice when market institutions won’t? Not even the faintest attempt at an answer is to be found in either letter.

The closest we get to a like-for-like comparison is when both Institutes express serious concern that decisions about how to deal with the threats from generative AI are left to unelected entrepreneurs. Instead, of course, we should institute regulatory agencies to deal with these threats. Yet this claim is puzzling on its face. At least on a fairly common understanding of the term, regulatory agencies are also unelected (and are, moreover, shielded from electoral politics as much as possible, by design). Nor, of course, are the experts (academics, think-tank employees, etc.) elected in any sense of the term. The emphasis on being unelected as some sort of mark against the tech leaders of today thus seems to be very selectively applied. (Plus, in the non-ideal world, it is not at all clear that leaving things to elected representatives is a recipe for a more responsible tech policy.)

That there’s no thorough comparison between different regimes of AI governance is no fault of the two letters’ authors; the letters obviously serve a different purpose. But skirting the question of like-for-like comparison while making a case for radical policy change is an illegitimate intellectual shortcut.