1 Introduction

Algorithmic systems that use artificial intelligence (AI) promise both significant benefits [1] and equally momentous risks related to biases, discrimination, opacity, and the dissipation of human accountability [2,3,4]. To reap the benefits and manage the risks, there is widespread consensus that AI systems need to be governed so that they operate in line with human and societal values [5, 6]. However, current AI governance work faces the challenge of translating abstract ethical principles, such as fairness, into practicable AI governance processes [7, 8]. In a global overview of AI governance, Butcher and Beridze [9] conclude that “AI governance is an unorganized area.” While this statement refers to the number of stakeholders seeking to influence global AI governance, we suggest a different sense in which AI governance scholarship and practice are currently unorganized. Specifically, there is a lack of understanding of the position of AI governance within the organizational governance structure. Established scholarship on corporate, IT, and data governance understandably could not anticipate the more recent AI governance [10, 11]. However, the emerging organizational AI governance literature has also devoted little attention to other governance areas, such as IT governance [12, 13].

AI governance efforts do not take place in a vacuum. On the contrary, AI governance is entering an increasingly complex organizational governance landscape, where corporate governance [10], information technology (IT) governance [11], and data governance [14] already require management attention [15]. Thus, the current unorganized state of AI governance literature is unfortunate because organizations deploying AI in their operations play a key role in implementing AI governance in practice [13, 16, 17].

We bring increased conceptual clarity to the AI governance literature through two contributions. First, we draw on previous scholarly work on AI ethics and governance [9, 12, 18,19,20,21,22] and propose a synthesizing definition of AI governance at the organizational level. Second, we position AI governance as part of an organization’s governance structure, together with corporate, IT, and data governance. In doing so, we advance the body of knowledge on implementing AI ethics (e.g., [7, 8, 13, 23]) through AI governance (e.g., [12, 18]). Our contributions clarify the significance of AI governance as part of organizational governance that helps align the use of AI technologies with organizational strategies and legal and ethical requirements coming from the operating environment.

2 Defining AI governance

There is a growing body of research acknowledging the importance of governed AI. Georgieva and her colleagues [8] call this the “third wave of scholarship on ethical AI,” which focuses on turning AI principles into actionable practice and governance. The third wave aims at promoting practical accountability mechanisms [24]. To structure this complex domain, researchers have presented layered AI governance structures, which include, for example, ethical and legal layers and levels ranging from AI developers to regulation and oversight [18, 23]. At the societal level, AI regulation and policy [25], and particularly human rights law [19], have also been raised as critical considerations.

Despite this scholarly attention, there have been few explicit attempts to define AI governance. In their global overview, Butcher and Beridze [9] characterize AI governance as “a variety of tools, solutions, and levers that influence AI development and applications.” In its broad scope, this definition comes close to Floridi’s [20] concept of digital governance, defined as “the practice of establishing and implementing policies, procedures and standards for the proper development, use and management of the infosphere.” In a similar vein, Gahnberg [22] operationalizes governance of AI as “intersubjectively recognized rules that define, constrain, and shape expectations about the fundamental properties of an artificial agent.” The focus on rules is helpful, but Gahnberg’s definition focuses on drafting societal rules, such as standards and legislation, rather than organizational AI governance. Overall, these macro-level conceptions remain silent on how organizations should govern their AI systems.

Schneider et al. [12] define AI governance for businesses as “the structure of rules, practices, and processes used to ensure that the organisation’s AI technology sustains and extends the organisation’s strategies and objectives.” They conceptualize the scope of AI governance for businesses as including the machine learning (ML) model, the data used by the model, the AI system that contains the ML model, and other components and functionalities (depending on the use and context of the system). Although AI governance for businesses is a promising starting point, the concept largely omits ethical and regulatory questions present in previous AI governance literature. In doing so, the concept stands in contrast to the AI ethics literature and downplays established AI-specific ethical and regulatory issues stemming from the organization’s environment.

In contrast, Winfield and Jirotka [21] highlight ethical governance, which goes beyond good governance by instilling ethical behaviors in designers and organizations. They define ethical governance as “a set of processes, procedures, cultures and values designed to ensure the highest standards of behavior” [21]. The list of governance elements is instructive, but the objective, ensuring “the highest standards of behavior,” remains too underspecified to clarify organizational AI governance.

Cihon et al. [26], investigating corporate governance of AI, come close to our focus area and provide actor-specific means of improving AI governance. However, they do not explicitly define AI governance. Moreover, their study focuses on large corporations at the forefront of AI development, such as Alphabet and Amazon, and how they can better govern AI to serve the public interest [26]. In our effort to define organizational AI governance, we also aim to include smaller organizations that use AI systems but do not exercise such leverage over global AI technology development.

In addition, none of the previously mentioned AI governance conceptualizations explicate the role of technologies used to manage and govern AI systems. These include, for example, tools for data governance [27], explainable AI (XAI) [28], and bias detection [29]. Bringing together the ethical, organizational, and technological aspects, and considering the definitions of related governance fields, we propose the following definition of AI governance at the organization level:

AI governance is a system of rules, practices, processes, and technological tools that are employed to ensure an organization’s use of AI technologies aligns with the organization’s strategies, objectives, and values; fulfills legal requirements; and meets principles of ethical AI followed by the organization.

Our definition of AI governance is essentially normative: it is intended to be action-oriented and to guide organizations in implementing effective AI governance [cf. 30]. In particular, the definition draws on that of AI governance for businesses [12]. Intra-organizational strategic alignment is a necessary condition for AI governance, but it is not a sufficient one, because environmental and technical layers must also be included.

In what follows, we explain the key elements of the definition. First, AI governance is a system whose constituent elements should be interlinked to form a functional entity (cf. [31]). The systemic perspective highlights how AI governance unifies heterogeneous tools to articulate and attain a central objective, which is the purpose of the system [33, 34]. The AI governance system can also include structural arrangements such as ethical review boards [35]. When AI governance is understood as a system, synergies between different tools, such as bias testing methods and participatory design, can be identified.
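To make the systemic view concrete, the sketch below shows how one such technological tool, a bias testing method, might be expressed as a governance check that other elements of the system (e.g., review processes) can invoke. The function names, the choice of demographic parity as the fairness metric, and the 0.1 tolerance are illustrative assumptions, not prescriptions from the literature cited above.

```python
# Illustrative sketch only: a bias testing tool as one component of an
# AI governance system. Metric choice and tolerance are assumptions.

def demographic_parity_gap(outcomes, groups):
    """Return the largest difference in positive-outcome rates between groups."""
    counts = {}
    for y, g in zip(outcomes, groups):
        n_pos, n = counts.get(g, (0, 0))
        counts[g] = (n_pos + (1 if y == 1 else 0), n + 1)
    rates = {g: n_pos / n for g, (n_pos, n) in counts.items()}
    return max(rates.values()) - min(rates.values())

def bias_check(outcomes, groups, tolerance=0.1):
    """A governance rule: the model passes only if the parity gap is within tolerance."""
    return demographic_parity_gap(outcomes, groups) <= tolerance
```

Such a check could, for instance, gate deployment decisions made by an ethical review board, illustrating the synergies between technological and organizational governance elements noted above.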

Second, the key elements of an AI governance system are rules, practices, processes, and technological tools. Essentially, these are all mechanisms for keeping behavior within acceptable boundaries and enabling desirable behavior. We have included technological tools in the definition to highlight that AI governance involves both human and technological components. Third, these elements are in place to govern an organization’s use of AI technologies. Here, the term “use” is broadly understood to mean all engagement with AI technologies in the organization’s operations throughout the system’s life cycle, ranging from use case definition and design to maintenance and disposal. In other words, AI governance needs to address the entire AI system life cycle [13].
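As a minimal illustration of life-cycle coverage, an organization could represent governance activities per life-cycle stage and verify that no stage is left ungoverned. The stage names and activities below are hypothetical examples chosen for illustration, not a framework proposed in the text.

```python
# Illustrative sketch only: mapping AI life-cycle stages to governance
# activities. Stage names and activities are assumed examples.

AI_LIFECYCLE_GOVERNANCE = {
    "use case definition": ["assess alignment with strategy and values"],
    "design":              ["ethical review", "data governance checks"],
    "development":         ["bias testing", "documentation"],
    "deployment":          ["legal compliance sign-off"],
    "maintenance":         ["monitoring", "periodic re-audit"],
    "disposal":            ["data retention and deletion review"],
}

def coverage_gaps(lifecycle):
    """Return the life-cycle stages that have no governance activity attached."""
    return [stage for stage, activities in lifecycle.items() if not activities]
```

A check such as `coverage_gaps(AI_LIFECYCLE_GOVERNANCE)` returning an empty list would indicate that every stage, from use case definition to disposal, has at least one governance activity assigned.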

Fourth, the use of AI technologies is governed to ensure multiple alignments, both in internal operations and with external requirements. The use of AI should align with organizational strategies, objectives, and values. In addition, the use of AI technologies should comply with relevant legal requirements. Finally, AI use should align with ethical AI principles followed by the organization. These alignments may set differing requirements for AI technology; consequently, any possible trade-offs should be carefully considered [36].

3 AI governance as part of an organization’s governance structure

Having defined organizational AI governance, we position AI governance (understood as organizational practices) within an organization's governance structure. In particular, we highlight the relationship of AI governance with three relevant areas of governance: corporate, IT, and data governance (see Fig. 1). To the best of our knowledge, AI governance has not been explicitly connected with corporate, IT, and data governance beyond adapting definitions from these established fields to cover AI [53]. In addition, AI auditing is needed to ensure that appropriate AI governance mechanisms are in place and to communicate AI governance to stakeholders. Scholars can study AI auditing literature, practices, and tools in the same way AI governance has been studied, continuously building bridges to the AI governance literature. These research streams can advance in parallel; however, they should not become separate silos but should instead contribute to a growing shared understanding of AI governance in academia and practice.