A Legal Principles-Based Framework for AI Liability Regulation

Chapter in The 2021 Yearbook of the Digital Ethics Lab

Part of the book series: Digital Ethics Lab Yearbook (DELY)

Abstract

Europe has recently taken the path of regulating artificial intelligence (AI). This is a complex task, in which it is crucial to understand what the purposes of regulation are. From this perspective, it is not enough to identify and set ethical guidelines and legal norms. It is also important to envisage the legal principles that might steer the regulation of AI, which aims to reconcile technological innovation, economic development and user trust. It may therefore be useful to consider whether some principles emerge from existing legislation in the AI sector. To this end, we review the work of the Expert Group on Product Liability in the field of AI and Emerging Technologies (2019) as a case study. We show how its work has begun to lay the basis for a set of legal principles for an AI liability regime. An initial and open list of legal principles can serve as a benchmark for future work on a principles-based AI regulation.


Notes

  1. The interplay between trust and distrust is referred to and largely analysed in the literature on trust. See, for instance, Feldman (2018).

  2. In this regard, it is worth noting the work of the Ad hoc Committee on Artificial Intelligence (Towards AI Regulation, 2021), which assesses the impact of AI on ethical principles, human rights, the rule of law and democracy, and examines key values, rights and principles deriving, in a bottom-up perspective, from sectorial approaches and ethical guidelines, and, in a top-down perspective, from human rights, democracy and rule of law requirements. It is also worth mentioning the European Declaration on Digital Rights and Principles for the Digital Decade (2022). See also Hacker (2020); Bernitz et al. (2020), notably Chap. 18; Fjeld et al. (2020). On constitutional and human rights considerations regarding AI, see recently Barfield and Pagallo (2020).

  3. An approach to the development of EU financial services regulation: https://ec.europa.eu/info/business-economy-euro/banking-and-finance/regulatory-process-financial-services/regulatory-process-financial-services_en

  4. Fjeld et al. (2020); Hagendorff (2020); Jobin et al. (2019).

  5. See in this respect Mazzini (2019), who significantly stated (note 7, p. 3): “Another very relevant and perhaps more complicated question to address for policymakers in the context of AI governance is the future oriented question of identifying the direction we want to move towards as a society.” In this regard, see also, crucially, Floridi (2020). In the field of digital technologies, as has been remarked, “the ‘race to AI’ is also bringing forth a ‘race to AI regulation’” (Smuha, 2019, 4).

  6. See in this regard the remarks in the Expert Group Report (2019, 56): “It is also more efficient to hold all potential injurers liable in such cases, as the different providers are in the best position to control risks of interaction and interoperability and to agree upfront on the distribution of the costs of accidents”.

  7. Control is a key term for the Expert Group and has been defined as follows: “‘Control’ is a variable concept, though, ranging from merely activating the technology, thus exposing third parties to its potential risks, to determining the output or result (…), and may include further steps in between, which affect the details of the operation from start to stop” (Report, 2019, 41).

  8. See recently, for instance, Quelle (2017) and Ivanova (2020). For a comparative analysis between the US and Europe, see also Winn (2019).

References

  • Abbott, R. (2018). The reasonable computer: Disrupting the paradigm of tort liability (November 29, 2016). George Washington Law Review, 86(1). Available at SSRN: https://ssrn.com/abstract=2877380

  • Ad hoc Committee On Artificial Intelligence (CAHAI). (2020a). Feasibility Study, CAHAI, 23.

  • Ad hoc Committee on Artificial Intelligence (CAHAI). (2020b). Towards Regulation of AI Systems. Global perspectives on the development of a legal framework on Artificial Intelligence systems based on the Council of Europe’s standards on human rights, democracy and the rule of law, Council of Europe, DGI, 16.

  • Barfield, W., & Pagallo, U. (2020). Advanced introduction to law and artificial intelligence. Edward Elgar.

  • Bernitz, U., Groussot, X., Paiu, J., & De Vries, S. (2020). General principles of EU law and the EU digital order. Wolters Kluwer.

  • Bertolini, A., & Episcopo, F. (2021). The Expert Group’s report on liability for artificial intelligence and other emerging digital technologies: A critical assessment. European Journal of Risk Regulation, 1–16.

  • Black, J., Hopper, M., & Band, C. (2007, March). Making a success of principles-based regulation. Law and Financial Markets Review, 191–206.

  • Durante, M. (2015). The democratic governance of information societies. A critique to the theory of stakeholders. Philosophy and Technology, 28, 11–32.

  • Durante, M. (2021). Computational power: The impact of ICT on law, society and knowledge. Routledge.

  • Dworkin, R. (1967). The model of rules. University of Chicago Law Review, 35(1), 14–46.

  • Dworkin, R. (1986). Law’s empire. Fontana Press.

  • European Commission. (2020a). On artificial intelligence – A European approach to excellence and trust. White Paper, COM(2020) 65 final.

  • European Commission. (2020b). Proposal for a regulation of the European Parliament and of the Council on European data governance, COM(2020) 767 final.

  • European Commission. (2021). Proposal for a regulation of the European Parliament and of the Council laying down harmonised rules on Artificial Intelligence (AI Act), COM(2021) 206 final.

  • Expert Group On Liability And New Technologies. (2019). Liability for artificial intelligence and other emerging digital technologies.

  • Feldman, R. C. (2018). Artificial intelligence. The importance of trust and distrust. Green Bag, 21(3), 1–13. Available at: https://ssrn.com/abstract=3118523

  • Fjeld, J., Nele, A., Hilligoss, H., Nagy, A., & Srikumar, M. (2020, January 15). Principled artificial intelligence: Mapping consensus in ethical and rights-based approaches to principles for AI. Berkman Klein Center Research Publication No. 2020–1. Available at SSRN: https://ssrn.com/abstract=3518482

  • Floridi, L. (2008). The method of levels of abstraction. Minds and Machines, 18(3), 303–329.

  • Floridi, L. (2015). The politics of uncertainty. Philosophy and Technology, 28, 1–4.

  • Floridi, L. (2020). The fight for digital sovereignty: What it is, and why it matters, especially for the EU. Philosophy and Technology, 33, 369–378.

  • Floridi, L. (2021). The European legislation on AI: A brief analysis of its philosophical approach. Philosophy and Technology, 34, 215–222.

  • Floridi, L., & Cowls, J. (2019). A unified framework of five principles for AI in society. Harvard Data Science Review, 1(1), 1–15.

  • Hacker, P. (2020, May 7). AI regulation in Europe. Available at SSRN: https://ssrn.com/abstract=3556532

  • Hagendorff, T. (2020). The ethics of AI ethics — An evaluation of guidelines. Minds & Machines, 30, 99–120.

  • Hart, H. (1961). The concept of law. OUP.

  • High Level Expert Group on AI. (2019, April 8). Ethics guidelines for trustworthy AI.

  • Ivanova, Y. (2020). Data controller, processor or a joint controller: Towards reaching GDPR compliance in the data and technology driven world. In M. Tzanou (Ed.), Personal data protection and legal developments in the European Union. IGI Global.

  • Jobin, A., Ienca, M., & Vayena, E. (2019). The global landscape of AI ethics guidelines. Nature Machine Intelligence, 1, 389–399.

  • Mazzini, G. (2019). A system of governance for artificial intelligence through the lens of emerging intersections between AI and EU law. In A. De Franceschi & R. Schulze (Eds.), Digital revolution – New challenges for law. Available at SSRN: https://ssrn.com/abstract=3369266

  • Morley, J., Floridi, L., Kinsey, L., et al. (2020). From what to how: An initial review of publicly available AI ethics tools, methods and research to translate principles into practices. Science and Engineering Ethics, 26, 2141–2168.

  • Pagallo, U., Casanovas, P., & Madelin, R. (2019). The middle-out approach: Assessing models of legal governance in data protection, artificial intelligence, and the Web of Data. The Theory and Practice of Legislation, 7(1), 1–25.

  • Quelle, C. (2017, April 7). Privacy, proceduralism and self-regulation in data protection law. In Teoria Critica della Regolazione Sociale.

  • Raz, J. (1972). Legal principles and the limits of law. Yale Law Journal, 81, 823.

  • Smuha, N. A. (2019, November). From a ‘Race to AI’ to a ‘Race to AI Regulation’: Regulatory competition for artificial intelligence. Available at SSRN: https://ssrn.com/abstract=3501410

  • Taddeo, M., McCutcheon, T., & Floridi, L. (2019, December 01). Trusting artificial intelligence in cybersecurity is a double-edged sword. Available at SSRN: https://ssrn.com/abstract=3831285

  • Winn, J. (2019, July 11). The governance turn in information privacy law. Available at SSRN: https://ssrn.com/abstract=3418286

Funding

Massimo Durante’s research was supported by a fellowship funded by Google EU.

Author information

Correspondence to Massimo Durante.

Copyright information

© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this chapter

Cite this chapter

Durante, M., Floridi, L. (2022). A Legal Principles-Based Framework for AI Liability Regulation. In: Mökander, J., Ziosi, M. (eds) The 2021 Yearbook of the Digital Ethics Lab. Digital Ethics Lab Yearbook. Springer, Cham. https://doi.org/10.1007/978-3-031-09846-8_7
