1 Introduction

The current AI boom is accompanied by constant calls for applied ethics, which are meant to harness the “disruptive” potential of new AI technologies. As a result, a whole body of ethical guidelines has been developed in recent years, collecting principles that technology developers should adhere to as far as possible. However, the critical question arises: do these ethical guidelines have an actual impact on human decision-making in the field of AI and machine learning? The short answer is: no, most often not. This paper analyzes 22 of the major AI ethics guidelines and issues recommendations on how to overcome their relative ineffectiveness.

AI ethics—or ethics in general—lacks mechanisms to reinforce its own normative claims. Of course, the enforcement of ethical principles may involve reputational losses in the case of misconduct, or restrictions on membership in certain professional bodies. Yet altogether, these mechanisms are rather weak and pose no imminent threat. Researchers, politicians, consultants, managers, and activists have to deal with this essential weakness of ethics. However, it is also a reason why ethics is so appealing to many AI companies and institutions. When companies or research institutes formulate their own ethical guidelines, regularly incorporate ethical considerations into their public relations work, or adopt ethically motivated “self-commitments”, efforts to create a truly binding legal framework are continuously discouraged. Ethics guidelines of the AI industry serve to suggest to legislators that internal self-governance in science and industry is sufficient, and that no specific laws are necessary to mitigate possible technological risks and to eliminate scenarios of abuse (Calo 2017). And even when more concrete laws concerning AI systems are demanded, as recently done by Google (2019), these demands remain relatively vague and superficial.

Science- or industry-led ethics guidelines, as well as other concepts of self-governance, may serve to pretend that accountability can be devolved from state authorities and democratic institutions to the respective sectors of science or industry. Moreover, ethics can also simply serve the purpose of calming critical voices from the public, while the criticized practices are simultaneously maintained within the organization. The association “Partnership on AI” (2018), which brings together companies such as Amazon, Apple, Baidu, Facebook, Google, IBM, and Intel, is exemplary in this context. Companies can point to their membership in such associations whenever demands for binding legal regulation of business activities need to be fended off.

This prompts the question as to what extent ethical objectives are actually implemented and embedded in the development and application of AI, or whether merely good intentions are professed. So far, some papers have been published on the subject of teaching ethics to data scientists (Garzcarek and Steuer 2019).

For Table 1, I only inserted markers if the corresponding issues were explicitly discussed in one or more paragraphs; isolated mentions without further explanation were not considered, unless the analyzed guideline is so short that it consists entirely of such brief mentions.

Table 1 Overview of AI ethics guidelines and the different issues they cover

2.2 Multiple Entries

As shown in Table 1, several issues unsurprisingly recur across the various guidelines. The aspects of accountability, privacy, and fairness in particular appear in about 80% of all guidelines and seem to provide the minimal requirements for building and using an “ethically sound” AI system. What is striking here is the fact that the most frequently mentioned aspects are also those for which technical fixes can be or have already been developed: enormous technical efforts are undertaken to meet ethical targets in the fields of accountability and explainable AI (Mittelstadt et al. 2019) as well as fairness and discrimination-aware data mining (Gebru et al. 2018). Far less attention is paid to the links between AI ethics and robot ethics, whose guidelines were intentionally excluded from the analysis. Nonetheless, advances in AI research contribute, for instance, to increasingly anthropomorphized technical devices. The ethical question that arises in this context echoes Immanuel Kant’s “brutalization argument” and states that the abuse of anthropomorphized agents—as, for example, is the case with language assistants (Brahnam 2006)—also increases the likelihood of violent actions between people (Darling 2016). Apart from that, the examined ethics guidelines pay little attention to the rather popular trolley problems (Awad et al. 2018) and their alleged relation to ethical questions surrounding self-driving cars or other autonomous vehicles. In connection to this, no guideline deals in detail with the obvious question of where systems of algorithmic decision-making are superior or inferior to human decision routines. And finally, virtually no guideline deals with the “hidden” social and ecological costs of AI systems. At the same time, at several points in the guidelines the importance of AI systems for approaching a sustainable society is emphasized (Rolnick et al. 2018). The same holds true for the idea of an “AI for Global Good”, as proposed at the ITU’s 2017 summit, or the large number of leading AI researchers who signed the open letter of the “Future of Life Institute”, embracing the norm that AI should be used for prosocial purposes.
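To make the notion of such technical fixes more concrete, the following minimal sketch, written in Python purely for illustration, computes the statistical parity difference, a simple metric used in discrimination-aware data mining; the decisions, group labels, and the 0.1 threshold are hypothetical assumptions and are not taken from any of the analyzed guidelines.

```python
# Minimal, purely illustrative sketch of a "technical fix" for fairness:
# the statistical parity difference between a protected group and everyone else.
# Decisions, group labels, and the 0.1 threshold are hypothetical assumptions.

def statistical_parity_difference(decisions, groups, protected="B"):
    """Positive-decision rate of the protected group minus that of all others.

    A value of 0.0 corresponds to demographic parity.
    """
    protected_decisions = [d for d, g in zip(decisions, groups) if g == protected]
    other_decisions = [d for d, g in zip(decisions, groups) if g != protected]

    def rate(xs):
        return sum(xs) / len(xs) if xs else 0.0

    return rate(protected_decisions) - rate(other_decisions)


if __name__ == "__main__":
    # 1 = favorable decision (e.g., loan granted), 0 = unfavorable decision.
    decisions = [1, 0, 1, 1, 0, 1, 0, 0]
    groups = ["A", "A", "A", "A", "B", "B", "B", "B"]

    spd = statistical_parity_difference(decisions, groups, protected="B")
    print(f"statistical parity difference: {spd:+.2f}")

    # A common, but contestable, rule of thumb flags |SPD| > 0.1 as problematic.
    if abs(spd) > 0.1:
        print("flagged as potentially discriminatory")
    else:
        print("within the chosen threshold")
```

Metrics of this kind help explain why fairness and accountability lend themselves to quantifiable, checkable targets far more readily than, say, the hidden social and ecological costs of AI mentioned above.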

Notwithstanding such initiatives, in less public discourses and in concrete practice an AI race has long since established itself. Along with that development, in- and outgroup thinking has intensified. Competitors are seen more or less as enemies, or at least as threats against which one has to defend oneself. Ethics, on the other hand, in its considerations and theories always stresses the danger of an artificial differentiation between in- and outgroups (Derrida 1997). Constructed outgroups are subject to devaluation, are perceived as de-individuated, and in the worst case can become victims of violence simply because of their status as “others” (Mullen and Hu 1989; Vaes et al. 2014). I argue that only by abandoning such in- and outgroup thinking can the AI race be reframed as a global cooperation for beneficial and safe AI.

3.3 Ethics in Practice

Do ethical guidelines bring about a change in individual decision-making, regardless of the larger social context? In a recent controlled study, researchers critically examined the idea that ethical guidelines serve as a basis for ethical decision-making by software engineers (McNamara et al. 2018). In brief, their main finding was that the effectiveness of guidelines or ethical codes is almost zero and that they do not change the behavior of professionals in the tech community. The survey covered 63 software engineering students and 105 professional software developers. They were presented with eleven software-related ethical decision scenarios, testing whether the ethics code of the Association for Computing Machinery (ACM) (Gotterbarn et al. 2018) in fact influences ethical decision-making in six vignettes, ranging from responsibility to report, user data collection, intellectual property, and code quality to honesty to customers and time and personnel management. The results are disillusioning: “No statistically significant difference in the responses for any vignette were found across individuals who did and did not see the code of ethics, either for students or for professionals.” (McNamara et al. 2018, 4).

Irrespective of such considerations at the microsociological level, the relative ineffectiveness of ethics can also be explained at the macrosociological level. Countless companies are eager to monetize AI in a huge variety of applications. This striving for a profitable use of machine learning systems is not primarily framed by value- or principle-based ethics, but by an economic logic. Engineers and developers are neither systematically educated about ethical issues nor empowered, for example by organizational structures, to raise ethical concerns. In business contexts, speed is everything in many cases, and skipping ethical considerations amounts to the path of least resistance. Thus, the practice of developing, implementing, and using AI applications very often has little to do with the values and principles postulated by ethics. The German sociologist Ulrich Beck once stated that ethics nowadays “plays the role of a bicycle brake on an intercontinental airplane” (Beck 1988, 194). This metaphor proves particularly apt in the context of AI, where huge sums of money are invested in the development and commercial utilization of systems based on machine learning (Rosenberg 2017), while ethical considerations are mainly used for public relations purposes (Boddington 2017, 56).

In their AI Now 2017 Report, Kate Crawford and her team state that ethics and forms of soft governance “face real challenges” (Campolo et al. 2017, 5). This is mainly due to the fact that ethics has no enforcement mechanisms reaching beyond voluntary and non-binding cooperation between ethicists and individuals working in research and industry. AI research and development thus takes place in “closed-door industry settings”, where “user consent, privacy and transparency are often overlooked in favor of frictionless functionality that supports profit-driven business models” (Campolo et al. 2017, 31 f.). Despite this disregard for ethical principles, AI systems are used in areas of high societal significance such as health, policing, mobility, or education. In the AI Now Report 2018, it is therefore repeated that the AI industry “urgently needs new approaches to governance”, since “internal governance structures at most technology companies are failing to ensure accountability for AI systems” (Whittaker et al. 2018, 4). Ethics guidelines thus often fall into the category of a “’trust us’ form of [non-binding] corporate self-governance” (Whittaker et al. 2018, 30), and people should “be wary of relying on companies to implement ethical practices voluntarily” (Whittaker et al. 2018, 32).

The tension between ethical principles and wider societal interests on the one hand, and research, industry, and business objectives on the other can be explained with recourse to sociological theories. On the basis of systems theory in particular, it can be shown that modern societies are differentiated into social systems, each working with its own codes and communication media (Luhmann 1984, 1988, 1997). Structural couplings can lead decisions in one social system to influence other social systems. Such couplings, however, are limited and do not change the overall autonomy of social systems. This autonomy, which must be understood as an exclusive, functionalist orientation towards the system’s own codes, is also manifested in the AI industry, business, and science. All these systems have their own codes, their own target values, and their own types of economic or symbolic capital via which they are structured and on the basis of which decisions are made (Bourdieu 1984). Ethical intervention in those systems is only possible to a very limited extent (Hagendorff 2016). A certain hesitance exists towards every kind of intervention as long as it lies beyond the functional laws of the respective system. That said, unethical behavior or unethical intentions are not solely caused by economic incentives. Rather, individual character traits such as cognitive moral development, idealism, or job satisfaction play a role, as do organizational environment characteristics such as an egoistic work climate or (non-existent) mechanisms for the enforcement of ethical codes (Kish-Gephart et al. 2010). Nevertheless, many of these factors are heavily influenced by the overall economic system logic. Ethics is then, so to speak, “operationally effectless” (Luhmann 2008).

And yet, such system-theoretical considerations apply only at a macro level of observation and must not be overgeneralized. Deviations from purely economic behavioral logics do occur in the tech industry, for example when Google withdrew from the military project “Maven” after protests from employees (Statt 2018), or when people at Microsoft protested against the company’s cooperation with Immigration and Customs Enforcement (ICE) (Lecher 2018). Nevertheless, it must be kept in mind that, in addition to genuine ethical motives, the significance of economically relevant reputation losses should not be underestimated. Hence, the protest against unethical AI projects can in turn be interpreted within an economic logic, too.

3.4 Loyalty to Guidelines

As indicated in the previous sections, the practice of using AI systems is poor in terms of compliance with the principles set out in the various ethical guidelines. Nevertheless, progress has been made in the areas of privacy, fairness, and explainability. Many privacy-friendly techniques for the use of data sets and learning algorithms have been developed, for instance methods in which an AI system’s “sight” is “darkened” via cryptography, differential privacy, or stochastic privacy (Ekstrand et al. 2018; Baron and Musolesi 2017). Further research focuses on the ways algorithms and code are designed (Kitchin 2017; Kitchin and Dodge 2011), or on the ways training data sets are selected and documented (Gebru et al. 2018; Cowls and Floridi 2018; Eaton et al. 2017; Goldsmith and Burton 2017). In widespread practice, however, hardly any of the guidelines’ demands have yet been met.
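To make the mentioned privacy techniques somewhat more tangible, the following minimal sketch, again purely illustrative, shows the basic idea of the Laplace mechanism used in differential privacy: calibrated noise is added to an aggregate query so that the contribution of any individual record is obscured. The records, the epsilon parameter, and the sensitivity assumption are hypothetical and not drawn from the cited works.

```python
# Minimal, purely illustrative sketch of the Laplace mechanism behind
# differential privacy. Records, epsilon, and the sensitivity assumption
# are hypothetical and not taken from the cited works.
import math
import random


def laplace_noise(scale):
    """Sample from a Laplace(0, scale) distribution via inverse transform."""
    u = random.random()
    while u == 0.0:  # avoid log(0) at the distribution's edge
        u = random.random()
    u -= 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))


def dp_count(records, predicate, epsilon=0.5):
    """Differentially private count of records satisfying a predicate.

    A counting query changes by at most 1 if a single record is added or
    removed, so its sensitivity is 1 and the noise scale is 1 / epsilon.
    """
    true_count = sum(1 for r in records if predicate(r))
    return true_count + laplace_noise(1.0 / epsilon)


if __name__ == "__main__":
    # Hypothetical records: ages of individuals in a data set.
    ages = [23, 35, 41, 29, 62, 54, 38, 47]
    noisy = dp_count(ages, lambda age: age >= 40, epsilon=0.5)
    print(f"noisy count of people aged 40 or older: {noisy:.1f}")
```

The sketch merely illustrates that privacy protection can be operationalized as a tunable technical parameter (epsilon), which is precisely what makes such techniques attractive as “fixes” in comparison to the less tractable demands formulated elsewhere in the guidelines.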

5 Conclusion

Currently, AI ethics is failing in many cases. Ethics lacks a reinforcement mechanism. Deviations from the various codes of ethics have no consequences. And in cases where ethics is integrated into institutions, it mainly serves as a marketing strategy. Furthermore, empirical experiments show that reading ethics guidelines has no significant influence on the decision-making of software developers. In practice, AI ethics is often considered extraneous, a surplus or some kind of “add-on” to technical concerns, a non-binding framework imposed by institutions “outside” the technical community. Distributed responsibility, in conjunction with a lack of knowledge about long-term or broader societal technological consequences, causes software developers to lack a feeling of accountability or a view of the moral significance of their work. Economic incentives in particular easily override commitments to ethical principles and values. This implies that the purposes for which AI systems are developed and applied are often not in accordance with societal values or fundamental principles such as beneficence, non-maleficence, justice, and explicability (Taddeo and Floridi 2018; Pekka et al. 2018).

Nevertheless, in several areas ethically motivated efforts are undertaken to improve AI systems. This is particularly the case in fields where technical “fixes” can be found for specific problems, such as accountability, privacy protection, anti-discrimination, safety, or explainability. However, there is also a wide range of ethical aspects that are significantly related to the research, development, and application of AI systems but are mentioned in the guidelines only very seldom, if at all. Those omissions range from the danger of a malevolent artificial general intelligence, machine consciousness, the reduction of social cohesion by AI ranking and filtering systems on social networking sites, the political abuse of AI systems, the lack of diversity in the AI community, links to robot ethics, the handling of trolley problems, the weighing of algorithmic against human decision routines, and the “hidden” social and ecological costs of AI, to the problem of public–private partnerships and industry-funded research. Again, as mentioned earlier, the list of omissions is not exhaustive and not all omissions can be justified equally. Some, like deliberations on artificial general intelligence, can be justified by pointing to their purely speculative nature, while others are less defensible and should be a reason to update or improve existing and upcoming guidelines.

Checkbox guidelines must not be the only “instruments” of AI ethics. A transition is required from a more deontologically oriented, action-restricting ethic based on universal abidance by principles and rules to a situation-sensitive ethical approach based on virtues and personality dispositions, expansions of knowledge, responsible autonomy, and freedom of action. Such an AI ethics does not seek to subsume as many cases as possible under individual principles in an overgeneralizing way, but remains sensitive to individual situations and specific technical assemblages. Further, AI ethics should not try to discipline moral actors into adhering to normative principles, but should emancipate them from potential inabilities to act self-responsibly on the basis of comprehensive knowledge, as well as empathy, in situations where morally relevant decisions have to be made.

These considerations have two consequences for AI ethics. On the one hand, a stronger focus on the technological details of the various methods and technologies in the field of AI and machine learning is required. This should ultimately serve to close the gap between ethical and technical discourses. It is necessary to build tangible bridges between abstract values and technical implementations, as long as these bridges can be reasonably constructed. On the other hand, the presented considerations also imply that AI ethics should, conversely, turn away from the description of purely technological phenomena in order to focus more strongly on genuinely social and personality-related aspects. AI ethics then deals less with AI as such than with ways of deviating or distancing oneself from problematic routines of action, with uncovering blind spots in knowledge, and with gaining individual self-responsibility. Future AI ethics faces the challenge of achieving this balancing act between the two approaches.