1 AI and society: ethics, principles, and practices

After decades of alternating boom and bust for Artificial Intelligence (AI) research (Floridi 2020), surges in computing power and the widespread adoption of machine learning techniques made AI once again a high-priority topic for researchers and industries worldwide. While governments and regulators struggled to strike a balance between fostering innovation and exercising precaution (see also Russell et al. 2015a, b; Nowak et al. 2018; Winickoff and Pfotenhauer 2018), research institutions and the AI industry were the first to try to set standards for responsible AI research, drawing on ethical and social scientific expertise (Benkler 2019; Larsson 2020). This happened, among other ways, by establishing ethical expert and advisory panels in companies (Phan et al. 2022), by integrating ethical expertise into the research and development process (McLennan et al. 2020), and by employing researchers to work on relevant ethical aspects (Gebru et al. 2020; Hagendorff 2020; Schiff et al. 2020; Smit et al. 2020; Galindo et al. 2021; Ulnicane et al. 2021). We further included the High-Level Expert Group’s Ethics Guidelines for Trustworthy AI to look into the particular context of European AI research (Commission 2019).

We conducted a content analysis of both bodies of literature to identify and analyse salient themes. For the first body, we analysed discussions about the achievements and challenges of ELS programs and how they have developed since the first ELSI program in the HGP in the early 1990s. For the second body, we looked at how the sampled documents describe the relationship between AI and society, the challenges they highlight, and the approaches they suggest to address them. Our research question was what lessons the first body of literature on ELS programs holds for the current discussions and debates around AI’s challenges for wider society.

3 The ELS programs discourse

3.1 The genomics era: Asilomar, ELSI, and the HGP

The Human Genome Project (HGP) was an international research project that lasted from 1990 until 2003. Over its lifetime, it brought together over 1000 scientists in 40 countries to map and sequence the human genome. By doing so, the project’s initiators hoped to produce groundbreaking new knowledge that would help to understand the role of genetic factors in cancer, dementia, and other genetically influenced diseases; or, as the first Head of the HGP phrased it: “A more important set of instruction books will never be found by human beings” (Watson 1990). In 2003, the project finally presented the world with an “essentially complete genomic sequence” (NHGRI 2020) that accounted for 92% of the human genome.

Although this was a major success, the initiators of the HGP were also aware of the project’s challenges and risks early on. In particular, there was a fear that the newly gained genetic information could be harmfully misused. This was especially salient in light of the various eugenics movements of the twentieth century (Jasanoff et al. 2015). Many worried that this kind of thinking might return and revive the worldview of genetic determinism, which tries to explain complex social problems, such as poverty or public health, with the genetic information the HGP would make available.

In response to such concerns, the initial HGP devoted 5% of its research budget to studying the “ethical, legal and social implications” of the HGP’s research and genomic research more generally (Juengst 1991, 2021; Hanna 1995). The HGP founded an independent funding and review body, the ethical, legal, and social implications (ELSI) program, which managed its own share of the budget. The program operated under an agenda that included nine topic areas on “the possible impacts of disease-related genetic information” (Wolfe 2000). These included, amongst others, “fairness in insurance, employment, the criminal justice system, education, adoption, and the military,” “historical abuses of genetic information” (especially eugenics), and “commercialization” (Wolfe 2000). The agenda was continuously refined together with the HGP’s five-year plan.

Before the ELSI program, experts who researched the wider societal dimensions of genomics research were not integrated in such a systematic and structured manner. A good example of how societal dimensions were even excluded before the ELSI program is the Asilomar conference in 1975 (McLeod and Nerlich 2017). The Asilomar conference was organized by a group of life scientists in response to concerns about the safety of recombinant DNA (rDNA) research. At the time, rDNA researchers had just succeeded in cutting and splicing together the DNA of disparate species, which fueled concerns that this could lead to dangerous new organisms threatening public health. These concerns led the National Academy of Sciences to call for a voluntary moratorium on certain types of rDNA research “until the hazards could be evaluated” (Barinaga 2000).

Seven months after this moratorium, 140 participants, mostly scientists researching or making use of rDNA technologies, gathered in Asilomar (in Pacific Grove, California) for the International Congress on Recombinant DNA Molecules to “discuss appropriate ways to deal with the potential biohazards of this work” (Berg et al. 1975). The conferees focused on the safety issues they felt they could address as scientists themselves and defined “appropriate safeguards” (Berg et al. 1975) as risk-based strategies to contain rDNA molecules in laboratories. They concluded that, under these safeguards, the “work on construction of [rDNA] molecules should proceed” (Berg et al. 1975). However, the ethical, legal, and social issues of rDNA research were excluded from the congress agenda, and scholars from these fields and the wider public were not invited. This approach forestalled a wider deliberation on the ethical and social dimensions of rDNA research, since it was the scientists themselves who defined the risks associated with the different types of rDNA research as well as the safety measures adequate to contain rDNA molecules in the laboratory (Parthasarathy 2015).

The Asilomar conference provides an example of how scientists attempt to preserve their autonomy from society through self-regulatory approaches. The conference provided a stage for demonstrating that rDNA researchers would be able to identify and deal with the ethical issues around their work themselves. This approach also constructed a particular relationship between science and society, one that drew a clear boundary between the scientists on the one hand and society on the other. It confined the competencies, access to research, and political power required to discuss and work on the ethical issues around rDNA research to the rDNA researchers themselves.

The first ELSI program went beyond this focus on self-regulation by providing funding for experts from the ethical, legal, and social sciences to study the scientific work done in the HGP. At the same time, however, the relationship between science and society did not fundamentally change in comparison to Asilomar: the first ELSI program reproduced Asilomar’s exclusive reliance on academic expertise by hosting mostly expert panels and scientific meetings as the main formats for deliberating on the HGP’s ethical, legal, and social aspects (Felt et al. 2011; Kemper 2011; Jasanoff et al. 2015). Critical voices raised the concern that, in this way, the early ELSI program delegated societal aspects to a panel of professional ethicists, legal scholars, and social scientists without sufficiently involving the wider public and affected or vulnerable groups (see also Juengst 1996; Jasanoff 2011).

This was not the only criticism raised against the ELSI program. Some feared that the early ELSI program’s agenda focused mostly on societal implications that might occur downstream in the research process, i.e., once the outcomes of the HGP’s research had been produced. They argued that, in this way, the ELSI program had little to no leeway to meaningfully impact the trajectory of genomics research. They called instead for more access ‘upstream,’ i.e., to the HGP’s actual research practices and processes, to better align them with existing societal concerns.

Calls for a better alignment of research with wider society also extended to the policy level. Critics pointed to the unclear role of the ELSI program and the various insights it generated for policy-making (Lindee 1994). They lamented that it failed to provide actionable knowledge for policy-makers regarding, for example, health or educational policies. Although many actors in and around the HGP had emphasized the tremendous impact a fully mapped and sequenced human genome would have on science and wider society (Wolfe 2001; Fortun 2005), the project remained all too silent in terms of policy advice and guidance to ensure that these impacts would happen in responsible, just, or societally desirable ways.

These critiques did not go unheard. In its subsequent phases, the HGP adjusted its scientific agenda and institutional structures (see also Collins and Galas 1993; Kitcher 2003) to gear ELSI research more towards “stakeholder participation and democratization” (Hilgartner et al. 2017) upstream in the research process. These programmatic changes finally created the potential for other societal actors and the wider public to participate more strongly in shaping research trajectories. However, ELS programs still lacked concrete mechanisms through which participatory practices could exercise this potential, since it was still the scientists themselves who had the final say on which kinds of research to pursue and how.

Nevertheless, the shift towards upstream participation was a major achievement in the history of ELS programs, and the programs were even relabeled in Europe and the US to support it. From 1994 onwards, the European Commission chose “ELSA” to denote research on ethical, legal, and social aspects, while the US programs replaced “implications” with “issues” to show awareness of the widespread association of “implications” with the downstream aspects of research (Myskja et al. 2014). This, however, did not halt ongoing reflections on the kinds of research these labels should promote, which continue to this day (McCain 2002; Balmer et al. 2015; Parker et al. 2019).

In sum, the key characteristics of genomics-era ELS programs described in the literature are (a) the setup of independent funding bodies to support parallel ethical, legal, and social scientific research by scholars external to the original research, (b) an analytic focus on the outcomes and downstream implications of the original research, and (c) research structures that rested on the assumption that academic knowledge is the best resource for an effective assessment of the societal dimensions of scientific and technological research.

4 The nano era: applications, integration, and public engagement

Although ELS programs originated in the genomics domain, they soon spread to other fields. Important examples from around the turn of the century are the ELS programs in nanotechnology. With definitions differing slightly between funding contexts, nanotechnology research is widely understood to create “materials, devices, and systems with fundamentally new properties and functions by engineering their small structure” (Roco 2011). By 2004, around 60 countries had developed nanotechnology activities that built on this common vision, making nanotechnology a significant domain of scientific research and technology development. Larger nanotechnology research programs emerged around 2000, especially in the US and Europe, and included various scientific and non-scientific actors. They brought together scientists from biology, physics, and chemistry, medical researchers, experts from various engineering disciplines, and even companies or larger corporations from the food, cosmetics, and textile industries (Laurent 2017).

In its various instantiations, nanotechnology research has always been informed by the political and institutional context in which it took place. European policy, for instance, focuses on market regulation and on the safety of the products and services offered in its member states. This includes, e.g., the regulation of medical technologies that are marketed in the EU as medical products or devices (Laurent 2017). European research policy on nanotechnology initially built on this product-oriented logic of EU legislation and stipulated safety as an important design feature for the nanotechnology products that could result from ongoing research. This made concepts such as “safe(ty)-by-design” popular among nanotechnology researchers in the European context and beyond (Poel and Robaey 2017; Kelty 2009).

In the US, nanotechnology research was institutionalized through the National Nanotechnology Initiative in 2000, which coordinates nanotechnology research in the US to this day. By 2008, nanotechnology had attracted the interest of over a dozen federal agencies, each with its own scientific agenda around, e.g., health, labor, justice, or commerce. Since 2002, however, the Department of Defense (DOD) has been the agency with the largest share of funding for nanotechnology research, which also points to the strategic significance this domain holds for military affairs in the US (Roco 2007). This stands in stark contrast to the HGP, which was mainly driven by civilian concerns, with the Department of Energy (DOE) and the National Institutes of Health (NIH) as the only two contributing agencies before the project became increasingly international (Cantor 1990).

Whereas the HGP had focused on mastering a “basic understanding” of the human genome through mapping and sequencing it, in nanotechnology research the “context of application” had become much more important (Myskja et al. 2014). Nanotechnology programs not only promoted a better “understanding and control of matter and processes at the nanoscale range” but also aimed to produce “improved materials, devices, and systems that exploit these new properties” in manufacturing and medicine (ISO/TC229 2005). Examples include using smart biomaterials as small-size components added to human tissue, or nanoscale implants that exert precise electric stimulation in the human brain. The envisioned benefits of this research were no less than “curing Parkinson’s and Alzheimer’s diseases” or “revolutioniz[ing] cancer treatment” (Laurent 2017).

This focus on the applications of nanotechnologies impacted how nanotechnology research engaged with the wider public. Nanotechnology researchers often developed concrete scenarios that linked research on fundamental nano-scale materials or devices to their implementation in a particular context (e.g., medical treatment) or to the marketization of nanotechnology-based products. Nanotechnologists involved in the research and development of such products had clear incentives to contact the wider public early in the research process to test how it would respond to nanotechnology-based products and ideas. The resulting engagement formats between these nanotechnologists and the wider public also provided ELS researchers with ample opportunities for their own engagements with the wider public (Felt et al. 2015).

Generally, nanotechnology research further embedded ELS programs and researchers in many different ways and framings. In the EU, the Sixth Framework Programme (2002–2006) relied on an “ELSA board” as “the first attempt in institutionalizing ethical reflections in Europe in the field of nanobiotechnology” (Laurent 2017). In the US, Congress passed the 21st Century Nanotechnology Research and Development Act in 2003, which mandated “the ‘integration’ of ‘social and ethical issues’ research within scientific and technical agendas and practices” (Viseu and Maguire 2012). In the UK, around the same time, the Nanotechnology Engagement Group was established as “the primary means for public engagement” in nanotechnology research (Fisher and Maricle 2014).

An important change compared to the genomics era was that these second-generation ELS programs required applicants to integrate ELS research into their research proposals. ELS scholars were no longer left to their own devices to find access to relevant research fields. They were invited by colleagues from the natural or engineering sciences, who were sometimes overwhelmed by the requirement to include research on the ethical and social aspects of their work, to form part of smaller research projects or larger research consortia. In the best case, this granted researchers from the ethical, legal, and social sciences access and proximity to the field of interest and put them closer to relevant societal concerns upstream in the research process, together with representatives of the wider public (see also Fisher 2005; Rip and Kulve 2008).

In this way, nano-era ELS programs also offered a different approach to shaping the relationship between science and society. While genomics-era research programs initially sought to solve ELS issues through academic expertise, nano-era research programs focused much more on anticipating potential controversies and engaging potentially affected publics early in the research process. This also led to the mobilization of additional societal actors and institutions capable of establishing a closer relationship between science and society, such as science museums, risk experts, and public engagement facilitators. It was no longer just the researchers themselves who had the power and responsibility to think about and engage with ELS issues concerning their work. Instead, much larger bodies were set up to deliberate on the societal dimensions of nanotechnology, and ELS researchers became much more integrated into the research process itself.

However, being integrated also threatened the epistemic authority of ELS researchers, something the ELSI program’s funding structures had preserved (Bjornstad and Wolfe 2011). In the initial ELSI program, a separate funding body granted ELS researchers a certain independence from the domains they critically investigated. In nano-era ELS programs, these researchers had to negotiate or align their research agenda with colleagues from the natural or engineering sciences. They often had to justify their role and prove useful in achieving the overall research goals and outcomes stipulated in a project’s outline (Viseu 2015). This also fueled debates on the necessity of dedicated “nanoethicists,” whose critical independence some scholars severely doubted (Fisher and Mahajan 2006).

Another concern around this potential lack of independence was that ELS researchers might buy into nanotechnology’s overly excited and often exaggerated narratives, since they could not assess whether these were far-fetched or premised on solid scientific evidence. Such narratives were employed by some nanotechnology researchers, companies, and policy-makers to advertise potential applications or benefits of nanotechnology research (Simakova and Coenen 2013; Shumpert et al. 2014; Marris and Calvert 2020). The use of such narratives had already been criticized in genomics research, but critics perceived the “hype” (Berube 2006) around nanotechnology as intentionally misleading, with many application scenarios too unrealistic to merit serious assessment and deployed mainly to foster acceptance of nanotechnology research and products. Critics feared that the limited resources of ELS programs would be wasted on such speculative promises or on lending them additional credibility (Nordmann 2007).

At the same time, the nanotechnology researchers in pursuit of acceptance and credibility for their envisioned products only reluctantly accepted the involvement of the wider public and ELS researchers and hoped it would contribute to the research process without subverting existing research agendas (see also Lezaun and Soneryd 2007; Delgado et al. 2011; Bogner 2012). This led to questionable expectations about the role of ELS researchers in research collaborations (Stegmaier 2009). They were viewed as “mediator[s] between nanotechnology and society” (Rip 2009) who should take over the time-consuming task of facilitating public engagement or adequately “construct” the right public needed to accommodate nanotechnology applications (Levidow 2007; Braun and Schultz 2010).

Viewing ELS researchers in such roles troubled the democratic ideas behind public engagement activities, which were supposed to facilitate an upstream dialogue between researchers and the wider public in which saying ‘no’ to nanotechnology research and products remained possible (Kemper 2011). The dominance of the acceptance frame, however, called into question whether and how the public’s engagement in ELS programs could influence the trajectory of research or seriously inform policy-making. This turned the shift to public engagement activities, an achievement relative to the initial ELSI program, back into a challenge. Consequently, concerns around how to engage the wider public persisted, especially in the next era of ELS programs, which widened their scope from particular research domains to a more general framework for responsible research and innovation.

In summary, the key characteristics of ELS programs in the nano era are framed as (a) a shift to shared funding applications, with the social sciences and humanities embedded in nanotechnology research projects from the beginning, (b) an increasing involvement of the wider public through upstream public engagement activities, (c) ongoing concerns that integrated ELS programs might merely be enrolled to facilitate the wider public’s acceptance of scientific and technological research amidst an overhyped research environment, and (d) an ongoing lack of mechanisms for ELS programs to inform research and policy-making.

5 The RRI era: the shift to users and innovation

The origin of the term “Responsible Research and Innovation” (RRI) can be traced back to a nanotechnology workshop in the Netherlands in 2007 (de Saille 2015). As we have seen above, nanotechnology research programs had adopted the idea of public engagement since the early 2000s and increasingly promoted interactions between nanotechnology researchers and the wider public. By 2010, the idea of public engagement had extended to other scientific domains and scaled up substantially, moving from an exclusive focus on fostering interactions in particular research domains to implementation in scientific and technological research more generally.

In Europe, this upscaling of public engagement in scientific research coincided with political desires for a “dynamic and competitive knowledge-based economy” (Commission 2005) and a re-orientation of research towards a set of “Grand Societal Challenges” (Ulnicane 2016). In an attempt to produce research agendas that address these challenges, the European Commission no longer resorted to scientific research alone. Instead, it increasingly relied on the term innovation to emphasize its desire for profitability and economic utility and to contrast it with the term research, which could still be associated with producing knowledge for its own sake.

Since the 1990s, European research programs had promoted the translation of research output into innovations in the form of services and products to accomplish economic competitiveness and growth, as we have already seen in nanotechnology’s push toward applications. From 2010 onwards, however, innovation became the new guiding concept for a much wider, transdisciplinary approach to producing economically relevant research output. Alongside this shift in the EU, concepts of responsible (research and) innovation were gradually taken up in the funding calls of ELS-type programs and eventually enshrined in a variety of research frameworks we today refer to as RRI (Randles et al. 2022).

These ELS-type programs operated differently from former ELS programs. While ELS programs in genomics or nanotechnology had mostly been established in response to a particular set of emerging challenges and technologies in a given research domain, RRI came to encompass innovation and research more generally (Owen et al. 2012; Stilgoe et al. 2013). Consequently, research programs and institutions viewed RRI as a framework to be applied in almost any research domain, including synthetic biology, climate change science, microbiology, biochemistry, computer science, nanotechnology, and digitalization (Taylor and Woods 2020; Tabarés et al. 2022).

What these various implementations of RRI had in common was the attempt to continue the integration of ELS researchers and to intensify the science-society interactions of the genomics and nano eras. Research and innovation should become a “site of politics” (Hartley et al. 2017) in which researchers and innovators consider the societal dimensions of their work from the outset in a mutually reflexive process. This was also emphasized by a shift in terminology, especially in European policy, where terms familiar from previous ELS programs (e.g., public engagement) were increasingly superseded by the terminology of “co-producing” or “co-creating” responsible research practices, processes, and outcomes by scientific and societal actors alike (Ruess et al. 2023).

RRI also continued the integration of ethical, legal, and social sciences researchers in larger research consortia, a feature familiar from ELS programs in the nano era (Forsberg 2015; Broks 2017). On the one hand, this continuity led some scholars to call RRI “ELSI by another name” or a signifier for “add-on” research, for example, in synthetic biology (Taylor and Woods 2020), although the initial academic literature around RRI largely omitted explicit connections to former ELS programs (see also Bensaude Vincent 2014; Ribeiro et al. 2017).

On the other hand, the uses of the RRI framework remained diverse and heterogeneous (Yaghmaei and Poel 2021; Blok 2023) and brought about important differences in how public engagement activities and integrated ELS research were framed. Previously, the genomics and nanotechnology eras had emphasized the importance of addressing the public as citizens of democratic societies who should be given a voice in scientific and technological decision-making. In the European context especially, a major shift in framing occurred as policy-makers emphasized innovation’s socio-economic role “as the driver of jobs” (de Saille 2015) and grew especially concerned about the absence of “public support for research and innovation” (Felt 2017). This new era of science policies expected the relationship between science and society to revolve around ideas of creativity, production, and innovation.

As a result, it became increasingly common for European ELS programs in the RRI era to address citizens as users or consumers of innovations. Critics feared that this shift aimed to safeguard support for technological innovations, privileging the concerns of innovators and producers of novel technologies over those of European citizens. Against this backdrop, public engagement risked being reframed from a democratic necessity into a means of preventing the wider public from becoming an obstacle to innovating actors. Consequently, although RRI provided continuity for the public engagement activities of earlier ELS programs, many worried that public engagement had irrevocably become subjugated to economic imperatives and a focus on translating research outcomes into the private sector (Zwart et al. 2014; Macq et al. 2020).

Actors from the private sector were indeed very present in European research programs operating under RRI’s pro-innovation premises. By boosting the involvement of industry, these programs tried to produce economic benefits in the form of patents, startups, and other entrepreneurial efforts. ELS researchers, however, often had access only to the academic side of these science-industry collaborations, while private sector research remained largely unscrutinized. This resulted in asymmetrical accountability relationships, in which only the academic side of these collaborations bore the responsibilities envisioned by RRI, while how industrial actors used and exploited research findings remained outside this framework and went largely unnoticed.

In summary, ELS programs in the RRI era (a) shifted from a focus on particular sciences and research domains (e.g., genomics, nanotechnology) to a focus on scientific and technological innovation more generally that also encompasses private actors, (b) increasingly addressed the wider public as users or consumers to be included in innovation and research to produce better research outcomes and products, marking a shift from public engagement to co-creation, and (c) became increasingly concerned with how to hold not only academic research but also the private sector and industry accountable for responsible research and innovation practices.

6 Summary: ELS programs’ achievements & challenges

Achievements (A) and ongoing challenges (C) across the three ELS eras:

Genomics era (Asilomar, ELSI, and the HGP)
A: the setup of independent funding bodies to support parallel ethical, legal, and social scientific research by scholars external to the original research context
C: a too-narrow analytic focus on the outcomes and downstream implications of the original research
C: the assumption that academic expertise is the best resource for effectively assessing the societal dimensions of scientific and technological research, without including the wider public

Nano era (applications, integration, and public engagement)
A/C: a shift to shared funding applications, with scholars from the social sciences and humanities embedded in nanotechnology research projects from the beginning
A: an increasing involvement of the wider public through public engagement activities upstream in the research process
C: concerns that ELS programs might merely be enrolled to facilitate the wider public’s acceptance of scientific and technological research amidst overhyped research environments

RRI era (the shift to users and innovation)
A: a shift from a focus on particular sciences and research domains to a focus on scientific and technological innovation more generally that also encompasses private actors
C: addressing the wider public increasingly as users or consumers who should be included in innovation and research to produce better research outcomes and products
C: finding answers for how to hold not only academic research but also the private sector and industry accountable for responsible research and innovation practices

All eras
C: an ongoing lack of mechanisms for ELS programs to inform research and policy-making

7 All ethics, all good? ELS programs and AI

Looking into the literature on ELS programs, we identified three distinct eras and highlighted their key challenges and achievements. In the first era, we saw how the initial ELSI program in the HGP tried to overcome the focus on scientific self-regulation of the 1975 Asilomar conference, which had excluded the wider public from influencing the trajectory of rDNA research. The ELSI program programmatically included the societal dimensions of genomics research, yet still favored academic experts over a more substantial involvement of the wider public. In the nano era, we then saw how this focus on academic expertise was complemented by the large-scale introduction of public engagement activities in ELS programs, which, however, remained ambiguous regarding how much influence they were willing to grant ELS researchers and the wider public upstream in research practices and processes. We ended with the RRI era and its more recent turn to innovation as a comprehensive policy concept that mainstreamed the integration of ELS scholars and public engagement practices in the hope that they could contribute to economic growth and competitiveness.

In all of these eras, we have seen how changes in ELS programs’ practices built on the challenges and achievements of prior ELS programs and how participation and public engagement became more important with each iteration. However, we also ended with a set of challenges that remained unaddressed after the RRI era, for example, finding concrete mechanisms for ELS programs to effectively influence not only research trajectories but also policy. Building on this brief history of ELS programs and their remaining challenges, we now bring them into conversation with the documents mentioned in the second section, which raise similar programmatic concerns about AI research and its challenges for the wider society.

In these documents, ethics has become a shorthand for summarizing the societal challenges of AI research, which have come to be framed mostly as regulatory concerns (see also Floridi et al. 2018; Whittlestone et al. 2019; Smit et al. 2020). A common way of addressing these ethical concerns became principle-based approaches, particularly the publication of guideline documents meant to prove an organization’s (voluntary) commitment to producing and using AI systems in beneficial, non-harmful, or trustworthy ways. Although many scholars criticized the gap between principle-based approaches and research practice (Mittelstadt 2019; Morley et al. 2020; Ibáñez and Olmeda 2021), the publication of guideline documents long remained the most prominent practice for addressing the societal concerns around AI; a widely cited review from 2019 counted 84 such documents (Jobin et al. 2019). Actual AI research and development practice, however, has remained remarkably understudied, a challenge familiar from earlier ELS programs that sought to move their research from downstream outcomes to upstream research practice.

Instead of more ELS research emerging upstream in AI research practices and processes, we witnessed a regress to the Asilomar model of self-regulation from the pre-genomics era through the 2017 “Asilomar Conference on Beneficial AI” (Tegmark 2017). Similar to the 1975 Asilomar conference on rDNA, it invited AI researchers and influential actors from the private sector to evaluate and define the challenges and risks of AI research while excluding the wider public, suggesting that the public should not have a say in decision-making processes related to AI research and development. This standpoint has been successively confirmed through various letters signed by influential experts from science and technology that called for caution and safety but not for public engagement (see also Russell et al. 2015a, b; Institute 2023a).

These documents also show that there has been a convergence in AI between those proclaiming the bright future and exponential economic growth that novel AI technologies and products will bring about and those warning about the societal or even existential risks these technologies pose. This raises an important power issue, since only a few (private) actors are currently capable of developing such technologies. Exaggerating the risks these technologies pose makes these companies even more powerful, as they are increasingly viewed as the only ones with the knowledge, the means, and the products to deal with these risks. This makes the risks of AI a concern to be regulated by the market and a question of investment, rendering other stakeholders, especially those adversely affected by AI technologies and the wider public, hapless bystanders without any means to intervene, as in the first Asilomar conference on rDNA.

Although large-scale public engagement had already been established by the RRI framework in the European context, European policymakers produced a self-regulatory discourse around AI that continued the pro-innovation stance of the RRI era but not its programmatic calls for more public engagement. The influential High-Level Expert Group’s (HLEG) Guidelines for Trustworthy AI (Commission 2019) mentioned certain requirements that AI developers should consider when building “trustworthy” AI, but it did not go substantially further than demanding that “[e]nd-users and the broader society should be informed about these requirements” (Commission 2019), without proposing a more active role for them in AI research and development itself. In this way, the HLEG missed the opportunity to develop a broader vocabulary for dealing with the societal dimensions of AI research and development.

Instead, the HLEG promoted a principle-based approach around the notion of trust, making ethics once again the central notion for discussing the societal challenges of AI research. It thereby contributed to the formation of a discourse around “AI ethics” that became increasingly institutionalized in the form of expert and consulting bodies (Knight 2019; Catapult 2020; Hallensleben and Hustedt 2020), academic journals (Hui 2020; MacIntyre et al. 2020), policy initiatives (Bird et al. 2020), and research organizations (Floridi et al. 2018). This made it increasingly difficult to pursue inquiries beyond the ethics perspective towards a richer understanding of how AI can change our society, one in which ethics remains a fundamental perspective but is complemented and enriched by a broader engagement with other disciplines, such as the social and political sciences, and with the wider public.

Nevertheless, there is work in AI research that addresses earlier or ongoing challenges of former ELS programs, but it has yet to be taken up programmatically in a way that deals with the structural limitations known from prior ELS program eras. Contemporary modalities of ELS-type programs in AI aim at “integrating” or “embedding” ethicists and other scholars from the humanities and social sciences in(to) AI research contexts (McLennan et al. 2022). This likely helps facilitate access to the practices of AI research, but tying funding to this particular mode of collaboration also creates new dependencies that might compromise embedded researchers’ independence and ability to be critical, as we have seen in the nano era (Viseu 2015). Fostering more independent funding and requiring AI researchers to make their research accessible to independent, non-collaborating researchers can help resolve these issues.

We also miss a wider call for public engagement in AI research of the kind we have seen in the nano and RRI eras of ELS programs. The absence of a “participatory turn” in AI research, as had occurred in genomics and nanotechnology before (Jasanoff 2005; Powell and Lee Kleinman 2008), is especially striking in light of the many harmful effects AI can have on underrepresented, marginalized, and minority groups (Larson et al. 2016; O'Neil 2016; Buolamwini and Gebru 2018). While debates around the discriminatory behavior of AI systems have found uptake in the form of technical research on ethical principles, such as “fairness” or “explainability” (see also Bird et al. 2019; Selbst et al. 2019), they have not served as an entry point for the wider public to significantly influence existing academic debates, which still focus on ethics and self-regulation.

Consequently, the wider public is not provided with adequate means and mechanisms to influence AI research trajectories or to reject certain kinds of AI research, development, or products. AI research has yet to benefit from the introduction of public engagement seen in the nano era and its mainstreaming in the RRI era. While recent conceptualizations of public engagement, such as the concept of “co-creation” (Ruess et al. 2023) in Europe, have contributed to the ongoing dissemination of participatory practices in European research, they have not helped to counteract discourses of “inevitability” (Bareis and Katzenbach 2021) or of a global “race to AI” (Nordstrom 2022) that prioritize pace over participation, precaution, and public deliberation.

This current focus on pace also points to the elephant in the room: how to deal with the dominance of industrial actors and market interests in AI research. Since the “rise of deep learning techniques” in 2012, we have seen a concentration of “research activities in resource-rich elite universities and large technology firms,” with AI researchers often receiving resources from, and holding hybrid affiliations with, both academia and industry (Williams et al. 2023). However, ELS programs have so far mostly been implemented and enforced in public research agendas, which can exert only limited influence on the private sector on which they rely for technical expertise and resources. Furthermore, conflicts of interest are rarely disclosed in private sector research, and particular challenges of AI research, such as considering diversity in research practice, remain hard to assess or enforce in corporate environments (Hagendorff and Meding 2021).

This lack of transparency is a major barrier to change, and recent attempts to carry over practices known from prior ELS programs, such as the integration of ELS researchers into AI research and development practice, have yet to prove successful, as some prominent examples show (Mitchell et al. 2019; Crawford 2021; Whittaker 2021). The abrupt endings of the company careers of integrated researchers in AI point to the fundamental challenges of embedding such scholarship into corporate environments (Bergen and Brustein 2019; Criddle 2020). While this parallels former ELS programs, in which integrated scholars were seen as “adverse critical observers” (Smolka 2020), the current structural limitations for ELS programs in the private sector still make integration more of a voluntary commitment with almost no consequences in practice.

We need concrete mechanisms for ELS programs in AI research to inform practices in the private sector. These mechanisms have to address the structural limitations of ELS programs and go beyond the problematic employment relationships that many interpret as a tactical move to prevent stronger regulation of AI (Floridi 2019; Rességuier and Rodrigues 2020; Whittaker 2021). While some works have started to shed light on the tensions that play out in such integrative corporate spaces (Metcalf et al. 2019; Moss and Metcalf 2020), we need to find ways that allow integrated researchers in the private sector to effectively contribute to the practices, processes, and products of a vast and rapidly growing AI industry while preserving their independence and authority.

Once this is achieved, integrated research can also provide the empirical work needed to understand the “co-production” (Jasanoff 2004) of scientific, political, economic, and social orders as well as the power dynamics in the different local environments in which AI research is conducted. As our analysis has shown, how ELS research is conducted has always depended on the relationship between science and society that the research programs of its time deemed desirable and sought to realize: from scientific autonomy and independence in the genomics era to much more integration and engagement in the nano and RRI eras. In AI, we seem to be witnessing a return to autonomy driven by the influence of big tech companies, which also impacts the type of ELS knowledge to be produced. We now need to better understand how.

Furthermore, we need a participatory turn for AI research and concrete mechanisms through which public engagement in AI can inform research practices and policy-making. This can start with a shift in funding calls and research agendas towards more programmatic attempts to reflect, besides the ethical and regulatory questions, the social and political aspects of AI, including the wider public on a larger scale. Although many self-regulatory approaches to AI try to include elements of participation (Institute 2023b), they will not be successful unless the systemic limitations for public engagement and ELS research outlined in our summary of past ELS programs are overcome. This includes providing ELS programs and public engagement in AI with the power and ability to inform, shape, or reject certain research trajectories, regardless of whether the research is carried out in public or private research organizations, and without making AI’s economic potential the only criterion for successful AI research policy.