Abstract
Europe has recently taken the path of regulating artificial intelligence (AI). This is a complex task, in which it is crucial to understand what the purposes of regulation are. From this perspective, it is not enough to identify and set ethical guidelines and legal norms. It is also important to envisage the legal principles that might steer the regulation of AI, which aims to reconcile technological innovation, economic development and user trust. Therefore, it may be useful to consider whether some principles already emerge from existing legislation in the AI sector. To this end, we review the work of the Expert Group on Product Liability in the field of AI and Emerging Technologies (2019) as a case study. We show how its work has started to lay the basis for a set of legal principles for an AI liability regime. An initial and open list of legal principles can serve as a benchmark for future work on a principles-based AI regulation.
Notes
- 1.
The interplay between trust and distrust is referred to and largely analysed in the literature on trust. See, for instance, Feldman (2018).
- 2.
In this regard, it is worth noting the work of the Ad hoc Committee on Artificial Intelligence (Towards AI Regulation, 2021), which assesses the impact of AI on ethical principles, human rights, the rule of law and democracy, and examines key values, rights and principles deriving – in a bottom-up perspective – from sectoral approaches and ethical guidelines; and – in a top-down perspective – from human rights, democracy and the rule of law requirements. It is also worth mentioning the European Declaration on Digital Rights and Principles for the Digital Decade (2022). See also Hacker (2020); Bernitz et al. (2020), notably, Chap. 18; Fjeld et al. (2020). On constitutional and human rights considerations regarding AI, see recently Barfield and Pagallo (2020).
- 3.
An approach to the development of EU financial services regulation https://ec.europa.eu/info/business-economy-euro/banking-and-finance/regulatory-process-financial-services/regulatory-process-financial-services_en
- 4.
- 5.
See in this respect Mazzini (2019), who stated significantly (note 7, p. 3): “Another very relevant and perhaps more complicated question to address for policymakers in the context of AI governance is the future oriented question of identifying the direction we want to move towards as a society.” In this regard, see also crucially Floridi (2020). In the field of digital technologies, as has been remarked, “the ‘race to AI’ is also bringing forth a ‘race to AI regulation’” (Smuha, 2019, 4).
- 6.
See in this regard remarks in the Expert Group Report (2019, 56): “It is also more efficient to hold all potential injurers liable in such cases, as the different providers are in the best position to control risks of interaction and interoperability and to agree upfront on the distribution of the costs of accidents”.
- 7.
Control is a key term for the Expert Group, which defined it as follows: “‘Control’ is a variable concept, though, ranging from merely activating the technology, thus exposing third parties to its potential risks, to determining the output or result (…), and may include further steps in between, which affect the details of the operation from start to stop” (Report, 2019, 41).
- 8.
References
Abbott, R. (2018). The reasonable computer: Disrupting the paradigm of tort liability (November 29, 2016). George Washington Law Review, 86(1) Available at SSRN: https://ssrn.com/abstract=2877380
Ad hoc Committee on Artificial Intelligence (CAHAI). (2020a). Feasibility Study, CAHAI, 23.
Ad hoc Committee on Artificial Intelligence (CAHAI). (2020b). Towards Regulation of AI Systems. Global perspectives on the development of a legal framework on Artificial Intelligence systems based on the Council of Europe’s standards on human rights, democracy and the rule of law, Council of Europe, DGI, 16.
Barfield, W., & Pagallo, U. (2020). Advanced introduction to law and artificial intelligence. Edward Elgar.
Bernitz, U., Groussot, X., Paju, J., & De Vries, S. (2020). General principles of EU law and the EU digital order. Wolters Kluwer.
Bertolini, A., & Episcopo, F. (2021). The Expert Group’s report on liability for artificial intelligence and other emerging digital technologies: A critical assessment. European Journal of Risk Regulation, 1–16.
Black, J., Hopper, M., & Band, C. (2007, March). Making a success of principles-based regulation. Law and Financial Markets Review, 191–206.
Durante, M. (2015). The democratic governance of information societies. A critique to the theory of stakeholders. Philosophy and Technology, 28, 11–32.
Durante, M. (2021). Computational power. The Impact of ICT on Law. In Society and knowledge. Routledge.
Dworkin, R. (1967). The model of rules. University of Chicago Law Review, 35(1), 14–46.
Dworkin, R. (1986). Law’s empire. Fontana Press.
European Commission. (2020a). On artificial intelligence – A European approach to excellence and trust. White Paper, COM, 65 final.
European Commission. (2020b). Proposal for a regulation of the European Parliament and of the council on European data governance, COM, 767 final.
European Commission. (2021). Proposal for a regulation of the European Parliament and the Council laying down harmonised rules on Artificial Intelligence (AI Act), COM, 206 final.
Expert Group On Liability And New Technologies. (2019). Liability for artificial intelligence and other emerging digital technologies.
Feldman, R.-C. (2018). Artificial intelligence. The importance of trust and distrust. Green Bag, 21(3), 1–13. Available at: https://ssrn.com/abstract=3118523
Fjeld, J., Achten, N., Hilligoss, H., Nagy, A., & Srikumar, M. (2020, January 15). Principled artificial intelligence: Mapping consensus in ethical and rights-based approaches to principles for AI. Berkman Klein Center Research Publication No. 2020–1. Available at SSRN: https://ssrn.com/abstract=3518482
Floridi, L. (2008). The method of levels of abstraction. Minds and Machines, 18(3), 303–329.
Floridi, L. (2015). The politics of uncertainty. Philosophy and Technology, 28, 1–4.
Floridi, L. (2020). The fight for digital sovereignty: What it is, and why it matters, especially for the EU. Philosophy and Technology, 33, 369–378.
Floridi, L. (2021). The European legislation on AI: A brief analysis of its philosophical approach. Philosophy and Technology, 34, 215–222.
Floridi, L., & Cowls, J. (2019). A unified framework of five principles for AI in society. Harvard Data Science Review, 1(1), 1–15.
Hacker, P. (2020, May 7). AI regulation in Europe. Available at SSRN: https://ssrn.com/abstract=3556532
Hagendorff, T. (2020). The ethics of AI ethics — An evaluation of guidelines. Minds and Machines, 30, 99–120.
Hart, H. (1961). The concept of law. OUP.
High Level Expert Group on AI. (2019, April 8). Ethics guidelines for trustworthy AI.
Ivanova, Y. (2020). Data controller, processor or a joint controller: Towards reaching GDPR compliance in the data and technology driven world. In M. Tzanou (Ed.), Personal data protection and legal developments in the European Union. IGI Global.
Jobin, A., Ienca, M., & Vayena, E. (2019). The global landscape of AI ethics guidelines. Nature Machine Intelligence, 1, 389–399.
Mazzini, G. (2019). A system of governance for artificial intelligence through the lens of emerging intersections between AI and EU law. In A. De Franceschi & R. Schulze (Eds.), Digital revolution – New challenges for law. Available at SSRN: https://ssrn.com/abstract=3369266
Morley, J., Floridi, L., Kinsey, L., et al. (2020). From what to how: An initial review of publicly available AI ethics tools, methods and research to translate principles into practices. Science and Engineering Ethics, 26, 2141–2168.
Pagallo, U., Casanovas, P., & Madelin, R. (2019). The middle-out approach: Assessing models of legal governance in data protection, artificial intelligence, and the Web of Data. The Theory and Practice of Legislation, 7(1), 1–25.
Quelle, C. (2017, April 7). Privacy, proceduralism and self-regulation in data protection law. In Teoria Critica della Regolazione Sociale.
Raz, J. (1972). Legal principles and the limits of law. Yale Law Journal, 81, 823.
Smuha, N.-A. (2019, November). From a ‘Race to AI’ to a ‘Race to AI Regulation’ − Regulatory competition for artificial intelligence. Available at SSRN: https://ssrn.com/abstract=3501410
Taddeo, M., McCutcheon, T., & Floridi, L. (2019, December 01). Trusting artificial intelligence in cybersecurity is a double-edged sword. Available at SSRN: https://ssrn.com/abstract=3831285
Winn, J. (2019, July 11). The governance turn in information privacy law. Available at SSRN: https://ssrn.com/abstract=3418286
Funding
Massimo Durante’s research was supported by a fellowship funded by Google EU.
Copyright information
© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG
About this chapter
Cite this chapter
Durante, M., Floridi, L. (2022). A Legal Principles-Based Framework for AI Liability Regulation. In: Mökander, J., Ziosi, M. (eds) The 2021 Yearbook of the Digital Ethics Lab. Digital Ethics Lab Yearbook. Springer, Cham. https://doi.org/10.1007/978-3-031-09846-8_7
DOI: https://doi.org/10.1007/978-3-031-09846-8_7
Publisher Name: Springer, Cham
Print ISBN: 978-3-031-09845-1
Online ISBN: 978-3-031-09846-8
eBook Packages: Religion and Philosophy (R0)