Abstract
Asimov’s Laws have often been taken as a starting point for reflection on liability for AI, one example being the EP Resolution of 16 February 2017 on Civil Law Rules on Robotics. However, the analysis made herein shows that this way of proceeding is not effective. Furthermore, the most commonly recommended model for liability of the producer of AI, of another person responsible for AI, or of the AI itself, is a strict liability regime. In addition to reflecting on the issue of legally relevant damage caused by AI, the chapter examines the problem of establishing a causal link between an action of AI and damage, and the questions of negligence and the standard of conduct for AI. It argues that there are some nontrivial situations involving AI in which fault liability could be useful. For this reason, it contains a proposal for a new understanding of legal culpability when this concept is applied to AI. The proposal is consistent with the concepts put forward in the previous chapters, such as the legal subjectivity of AI, the capability of AI for juridical actions, and the equivalents of free will and discernment in AI.
Notes
- 1.
There are different meanings of “responsibility” and different kinds of “liability”—see Hage (2017).
- 2.
It is proposed in Resolution 2020 motive 8, that these problems should be faced. The European Parliament: “Considers that the Product Liability Directive (PLD) has, for over 30 years, proven to be an effective means of getting compensation […] but should nevertheless be revised to adapt it to the digital world […] urges the Commission to assess whether the PLD should be transformed into a regulation, to clarify the definition of ‘products’ by determining whether digital content and digital services fall under its scope and to consider adapting concepts such as ‘damage’, ‘defect’ and ‘producer’ […] the concept of ‘producer’ should incorporate manufacturers, developers, programmers, service providers as well as backend operators; calls on the Commission to consider reversing the rules governing the burden of proof for harm caused by emerging digital technologies […].”
- 3.
- 4.
Resolution 2017, point A.
- 5.
Barbrook (2007).
- 6.
Asimov (1985).
- 7.
Asimov (1942), pp. 94–103.
- 8.
See e.g. McCauley (2007).
- 9.
The criterion of context may be useless in cases where situations that are externally very much alike are being qualified: e.g. removal of a kidney as a result of conscious consent and under legally valid rules vs. removal of a kidney without conscious consent and illegally.
- 10.
Remember the trolley dilemma first introduced by Foot (1967).
- 11.
Murphy and Woods (2009).
- 12.
- 13.
EC High-Level Expert Group on AI (2019), A Definition of AI: Main Capabilities and Scientific Disciplines. Brussels.
- 14.
Bostrom (2014).
- 15.
- 16.
Murphy and Woods (2009).
- 17.
Clarke (1994).
- 18.
Williams (1973).
- 19.
Lucas (1993).
- 20.
See Open Letter to the European Commission. Artificial Intelligence and Robotics, 2019. http://www.robotics-openletter.eu, last access on the 4th of August 2022.
- 21.
This way of thinking is presented in Proposal 2021.
- 22.
The above ideas about Asimov’s Laws were presented in Księżak and Wojtczak (2020).
- 23.
However, point (17) of Annex B includes an additional condition: “unless stricter national laws and consumer protection legislation is in force. The national laws of the Member States, including any relevant jurisprudence, with regard to the amount and extent of the compensation, as well as the limitation period, should continue to apply.”
- 24.
A very clear juxtaposition of the current national liability frameworks in the EU is included in Evas (2020).
- 25.
The problems of causation that appear when the perpetrator of harm is an AI, together with some proposals for solving these problems, are presented in Wojtczak and Księżak (2021).
- 26.
- 27.
Cf. Blanco-Justicia and Domingo-Ferrer (2019).
- 28.
Cf. Wojtczak and Księżak (2021).
- 29.
Before reporting, a provider must establish a causal link. So whoever reports simultaneously admits that a causal link exists. Usually, the other party would have to prove the causal link in court.
- 30.
Stanicki et al. (2021).
- 31.
O’Sullivan et al. (2019), p. 9.
- 32.
Although there is a certain difference: while it cannot be argued that medicine or equipment (e.g. a respirator or dialyzer) could be substituted by a human doctor, it can rationally be said that a human doctor could substitute for an AI or robot in cases where, historically, it was the AI or robot that substituted for human doctors. So the argument may sound like this: “The hospital is not liable for damage caused by not having the surgical robot, because it had the best surgeon among its staff and he did what he could.”
- 33.
According to Article 2 point 1 of Regulation (EU) 2017/745 on medical devices, which applies from 26.05.2020, ‘medical device’ means any instrument, apparatus, appliance, software, implant, reagent, material or other article intended by the manufacturer to be used, alone or in combination, for human beings for one or more of the following specific medical purposes: “— diagnosis, prevention, monitoring, prediction, prognosis, treatment or alleviation of disease, — diagnosis, monitoring, treatment, alleviation of, or compensation for, an injury or disability, — investigation, replacement or modification of the anatomy or of a physiological or pathological process or state, — providing information by means of in vitro examination of specimens derived from the human body, including organ, blood and tissue donations, and which does not achieve its principal intended action by pharmacological, immunological or metabolic means, in or on the human body, but which may be assisted in its function by such means.”
The following products shall also be deemed to be medical devices: “— devices for the control or support of conception; — products specifically intended for the cleaning, disinfection or sterilisation of devices as referred to in Article 1(4) and of those referred to in the first paragraph of this point.”
- 34.
Regulation (EU) 2017/745 of the European Parliament and of the Council of 5 April 2017 on medical devices, amending Directive 2001/83/EC, Regulation (EC) No 178/2002 and Regulation (EC) No 1223/2009 and repealing Council Directives 90/385/EEC and 93/42/EEC, L 117/1, https://eur-lex.europa.eu/legal-content/EN/TXT/HTML/?uri=CELEX:32017R0745&from=PL, last access on the 4th of August 2022.
- 35.
Article 22 section 1 GDPR: The data subject shall have the right not to be subject to a decision based solely on automated processing, including profiling, which produces legal effects concerning him or her or similarly significantly affects him or her.
- 36.
The same will happen in many other areas, such as road traffic. At the beginning, autonomous vehicles will operate under additional control demanded by the law, but over time the possibility of human driving will be restricted because it will obviously increase the danger of accidents.
- 37.
Hoeren and Niehoff (2018), pp. 308 and further.
- 38.
- 39.
https://www.europarl.europa.eu/doceo/document/TA-8-2019-0081_EN.html, access on the 4th of August 2022.
- 40.
https://airly.org/pl/alert-smogowy-kiedy-i-przy-jakich-wartosciach-oglaszany-jest-alarm-z-powodu-smogu/, last access on the 4th of August 2022.
- 41.
The problem is similar to that relating to so-called conscience clauses, when the view of one person, e.g., a doctor or pharmacist, may prevail over the claim of another person to receive a certain service.
- 42.
Of course, this problem is connected to political decisions and the concept of democracy accepted in a given time and a given place, but it does not mean that it should not be discussed or at least noticed.
- 43.
https://www.jeder-mensch.eu/informationen/?lang=en, last access on the 4th of August 2022.
- 44.
The rules proposed by EU bodies insist on risk management and give many criteria which should be applied to assess the risk of harm posed by AI. An instance here may be Article 7 of Proposal 2021, which establishes directives for the Commission regarding the method of updating the list in Annex III by adding high-risk AI systems, and Article 9, which details a risk management system.
- 45.
Among others: Directorate-General for Internal Policies, Policy Department for Citizens’ Rights and Constitutional Affairs, Artificial Intelligence and Civil Liability. Study requested by the JURI Committee, PE 621.926—July 2020; Resolution 2017; Commission Staff Working Document: Liability for emerging digital technologies accompanying the document Communication from the Commission to the European Parliament, the European Council, the Council, the European Economic and Social Committee and the Committee of the Regions: Artificial Intelligence for Europe, Brussels, 25.4.2018, COM (2018) 237 final; Report from the Expert Group on Liability and New Technologies—New Technologies Formation, Liability for Artificial Intelligence and Other Emerging Digital Technologies, European Union 2019; Report from the Commission to the European Parliament, the Council and the European Economic and Social Committee: Report on the safety and liability implications of Artificial Intelligence, the Internet of Things and Robotics, Brussels, 19.2.2020, COM (2020) 64 final; Resolution 2020; Hamon et al. (2020), EUR 30040.
- 46.
Resolution 2020, Annex, B. (8): […] whoever creates, maintains, controls or interferes with the AI-system, should be accountable for the harm or damage that the activity, device or process causes. This follows from general and widely accepted liability concepts of justice, according to which the person who creates or maintains a risk for the public is liable if that risk causes harm or damage, and thus should ex-ante minimise or ex-post compensate that risk.
- 47.
- 48.
de-Wit et al. (2016).
- 49.
Gazzaniga (2011), p. 222, cites R. Sapolsky, professor of neurology who said: “It is boggling that the legal system’s gold standard for an insanity defense – M’Naughten – is based on 166-year-old science. Our growing knowledge about the brain makes notions of volition, culpability, and, ultimately the very premise of the criminal justice system, deeply suspect”.
- 50.
- 51.
Gazzaniga (2011), pp. 228, 252–253.
- 52.
- 53.
- 54.
- 55.
Neemeh (2018), p. 1.
- 56.
Evans (2009).
- 57.
Monterossi (2020), p. 5.
- 58.
For the definitions of radial category, idealized cognitive models and composite prototype see Evans (2007), pp. 29, 104, 177–179.
- 59.
Gobert (1994), p. 409.
- 60.
The second of “General principles concerning the development of robotics and artificial intelligence for civil use” says that European Parliament: “2. Considers that a comprehensive Union system of registration of advanced robots should be introduced within the Union’s internal market where relevant and necessary for specific categories of robots, and calls on the Commission to establish criteria for the classification of robots that would need to be registered; in this context, calls on the Commission to investigate whether it would be desirable for the registration system and the register to be managed by a designated EU Agency for Robotics and Artificial Intelligence”.
- 61.
Proving the improper or ineffective realization of a function is a matter of civil procedure. It may depend on the opinion of experts or the report of an entity controlling registration or certification.
- 62.
References
Books and Articles
Anderson SL (2008) Asimov’s “Three Laws of Robotics” and machine metaethics. AI Soc 22(4):477–493. https://doi.org/10.1007/s00146-007-0094-5
Asimov I (1942) Runaround. Astounding Science Fiction
Asimov I (1976) Bicentennial man. Ballantine Books, New York
Asimov I (1985) Robots and empire. Doubleday, New York
Barbrook R (2007) Imaginary futures: from thinking machines to the global village. Pluto Press, London
Beckers A, Teubner G (2021) The three liability regimes for artificial intelligence: algorithmic actants, hybrids, crowds. Hart, Oxford
Blanco-Justicia A, Domingo-Ferrer J (2019) Machine learning explainability through comprehensible decision trees. In: Holzinger A, Kieseberg P, Min Tjoa A, Weippl E (eds) Machine learning and knowledge extraction. Springer, Cham. ISBN 978-3-030-29726-8
Bostrom N (2014) Superintelligence: paths, dangers, strategies. Oxford University Press, Oxford
Brkan M, Bonnet G (2020) Legal and technical feasibility of the GDPR’s quest for explanation of algorithmic decisions: of black boxes, white boxes and fata morganas. Eur J Risk Regul. 11(18):II.2–II.3. ISSN 2190-8249
Clarke R (1994) Asimov’s laws of robotics: implications for information technology. IEEE Comput 27(1):57–66
Dafni L (2018) Could AI agents be held criminally liable: artificial intelligence and the challenges for criminal law. South Carolina Law Rev 69:677. https://scholarcommons.sc.edu/cgi/viewcontent.cgi?article=4253&context=sclr, last access on the 4th of August 2022
De Maglie C (2005) Models of corporate criminal liability in comparative study. Washington Univ Global Stud Law Rev 4:547
DeGrazia D (2006) On the question of Personhood beyond Homo sapiens. In: Singer P (ed) The defense of animals. Second Wave. Blackwell Publishing, Malden, pp 40–53
de-Wit L, Alexander D, Ekroll V et al (2016) Is neuroimaging measuring information in the brain? Psychon Bull Rev 23:1415–1428. https://doi.org/10.3758/s13423-016-1002-0
Edersheim JG, Weintraub Brendel R, Price B (2012) Neuroimaging, diminished capacity and mitigation. In: Simpson JR (ed) Neuroimaging in forensic psychiatry. From clinic to the courtroom. Wiley and Sons
Evans V (2007) A glossary of cognitive linguistics. Edinburgh University Press, Edinburgh
Evans EP (2009) The criminal prosecution and capital punishment of animals. The Lawbook Exchange Ltd., Clark, NJ
Floridi L, Cowls J, Beltrametti M, Chatila R, Chazerand P, Dignum V, Luetge C, Madelin R, Pagallo U, Rossi F, Schafer B, Valcke P, Vayena E (2018) AI4People—an ethical framework for a good AI society: opportunities, risks, principles and recommendations. Minds and Machines, December 2018. https://ssrn.com/abstract=3284141, last access on the 4th of August 2022
Foot P (1967) The problem of abortion and the doctrine of the double effect. Oxford Rev 5
Gazzaniga M (2011) Who’s in charge? Free will and the science of the brain. HarperCollins Publishers, Pymble (Australia)
Gless S, Silverman E, Weigend T (2016) If robots cause harm, who is to blame? Self-driving cars and criminal liability. New Crim Law Rev Int Interdiscip J 19(3):412–436
Gobert J (1994) Corporate criminality: four models of fault. Legal Stud 14(03). https://doi.org/10.1111/j.1748-121x.1994.tb0510.x
Greely HT (2011) Neuroscience and criminal responsibility: proving ‘Can’t Help Himself’ as a narrow bar to criminal liability. In: Freeman M (ed) Law and neuroscience. Current Legal Issues 2010 vol. 13. Oxford University Press, Oxford
Hacker P, Krestel R, Grundmann S, Naumann F (2020) Explainable AI under contract and tort law: legal incentives and technical challenges. Artif Intell Law 228:415–439
Hage J (2017) Theoretical foundations for the responsibility of autonomous agents. Artif Intell Law 3(25)
Hamon R, Junklewitz H, Sanchez I (2020) Research Centre Technical Report. Robustness and explainability of artificial intelligence – from technical to policy solutions. Publications Office of the European Union, Luxembourg. https://doi.org/10.2760/57493. (online), JRC119336
Hoeren T, Niehoff M (2018) Artificial intelligence in medical diagnoses and the right to explanation. Eur Data Prot Law Rev 4:3. https://doi.org/10.21552/edpl/2018/3/9
Jordaan L (2003) New perspective on the criminal liability of corporate bodies. Acta Juridica 48
Kaplan J (2016) Artificial intelligence – what everyone needs to know. Oxford University Press, Oxford
Księżak P, Wojtczak S (2020) Prawa Asimova, czyli science fiction jako fundament nowego prawa cywilnego [Asimov’s Laws, or science fiction as the foundation of a new civil law]. Forum Prawnicze 4(60). https://doi.org/10.32082/fp.v0i4(60).378
Lucas J (1993) Responsibility. A Clarendon Press Publication, Oxford
McCauley L (2007) AI Armageddon and the Three Laws of Robotics. Ethics Inf Technol 9(2):153–164. https://doi.org/10.1007/s10676-007-9138-2
Monterossi MW (2020) Liability for the fact of autonomous artificial intelligence agents. Things, agencies and legal actors. Global Jurist 20190054, eISSN 1934-2640. https://doi.org/10.1515/gj-2019-0054
Mueller GOW (1957–1958) Mens Rea and the corporation: a study of the Model Penal Code position on corporate criminal liability. Univ Pittsburgh Law Rev 19:21–50
Murphy R, Woods D (2009) Beyond Asimov: the three laws of responsible robotics. IEEE Intell Syst 24(4):14–20. https://doi.org/10.1109/MIS.2009.69
Nagel T (1979) Moral luck. In: Nagel T (ed) Mortal questions. Cambridge University Press, Cambridge
Nathan MJ (2021) Black Boxes: how science turns ignorance into knowledge. Oxford University Press
Neemeh ZA (2018) Husserlian empathy and simulationism. Memphis: Organization of Phenomenological Organizations VI: Phenomenology and Practical Life 2018. https://www.memphis.edu/philosophy/opo2019/pdfs/neemeh-zach.pdf, last access on the 4th of August 2022
Nilsson N (2009) The quest for artificial intelligence: a history of ideas and achievements. Cambridge University Press, New York
O’Sullivan S, Nevejans N, Allen C, Blyth A, Leonard S, Pagallo U, Holzinger K, Holzinger A, Sajid MI, Ashrafian H (2019) Legal, regulatory, and ethical frameworks for development of standards in artificial intelligence (AI) and autonomous robotic surgery. Int J Med Robot Comp Assisted Surg 15(1):1–12. https://doi.org/10.1002/rcs.1968
Schoenberger D (2019) Artificial intelligence in healthcare: a critical analysis of the legal and ethical implications. Int J Law Inf Technol 27:2
Searle JR (1984) Minds, brains and science. Harvard University Press, Cambridge
Shapiro P (2006) Moral agency in other animals. Theoret Med Bioeth 27(4):357–373
Skinner BF (1969) Contingencies of reinforcement: a theoretical analysis. Appleton-Century-Crofts, New York
Stanicki P, Nowakowska K, Piwoński M, Żak K, Niedobylski S, Zaremba B, Oszczędłowski P (2021) The role of artificial intelligence in cancer diagnostics - a review. J Educ Health Sport 11(9):113–122. https://doi.org/10.12775/JEHS.2021.11.09.016
Vinge V (2003) Technological Singularity. http://cmm.cenart.gob.mx/delanda/textos/tech_sing.pdf, last access on the 4th of August 2022
Weiss KJ, Watson C (eds) (2015) Psychiatric expert testimony. Emerging applications. Oxford University Press, Oxford
Williams B (1973) A critique of utilitarianism. In: Smart J, Williams B (eds) Utilitarianism for and against. Cambridge University Press, Cambridge
Williams B (1982) Moral luck. In: Moral luck. Philosophical papers 1973–1980. Cambridge University Press, Cambridge
Wojtczak S, Księżak P (2021) Causation in civil law and the problems of transparency in AI. Eur Rev Priv Law 29(4):561–582
Documents
Proposal for a directive of the European Parliament and of the Council on adapting non-contractual civil liability rules to artificial intelligence (AI Liability Directive), COM (2022) 496 final, https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX%3A52022PC0496
Proposal for a directive of the European Parliament and of the Council on liability for defective products, COM (2022)495 final, https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=COM%3A2022%3A495%3AFIN&qid=1664465004344
Commission Staff Working Document: Liability for emerging digital technologies accompanying the document Communication from the Commission to the European Parliament, the European Council, the Council, the European Economic and Social Committee and the Committee of the Regions: Artificial Intelligence for Europe, Brussels, 25.4.2018, COM (2018) 237 final, https://eur-lex.europa.eu/legal-content/en/ALL/?uri=CELEX%3A52018SC0137, last access on the 4th of August 2022
European Parliament resolution of 12 February 2019 on a comprehensive European industrial policy on artificial intelligence and robotics (2018/2088 (INI)), P8_TA (2019) 0081. https://www.europarl.europa.eu/doceo/document/TA-8-2019-0081_EN.html, last access on the 4th of August 2022
Evas T (2020) Civil liability regime for artificial intelligence. European added value assessment. Study. European Parliamentary Research Service. September 2020. Brussels. https://www.europarl.europa.eu/RegData/etudes/STUD/2020/654178/EPRS_STU(2020)654178_EN.pdf, last access on the 4th of August 2022
Expert Group on Liability and New Technologies – New Technologies Formation. Liability for artificial intelligence and other emerging technologies. 2019. https://doi.org/10.2838/573689. https://www.europarl.europa.eu/meetdocs/2014_2019/plmrep/COMMITTEES/JURI/DV/2020/01-09/AI-report_EN.pdf, last access on the 4th of August 2022
Open Letter to the European Commission: Artificial Intelligence and Robotics, http://www.robotics-openletter.eu, last access on the 4th of August 2022
Regulation (EU) 2017/745 of the European Parliament and of the Council of 5 April 2017 on medical devices, amending Directive 2001/83/EC, Regulation (EC) No 178/2002 and Regulation (EC) No 1223/2009 and repealing Council Directives 90/385/EEC and 93/42/EEC, L 117/1, https://eur-lex.europa.eu/legal-content/EN/TXT/HTML/?uri=CELEX:32017R0745&from=PL, last access on the 4th of August 2022
Report from the Commission to the European Parliament, the Council and the European Economic and Social Committee. Report on the safety and liability implications of Artificial Intelligence, the Internet of Things and robotics. 19.2.2020. Brussels. https://eur-lex.europa.eu/legal-content/en/TXT/?qid=1593079180383&uri=CELEX%3A52020DC0064, last access on the 4th of August 2022
Copyright information
© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG
Cite this chapter
Księżak, P., Wojtczak, S. (2023). Liability of AI. In: Toward a Conceptual Network for the Private Law of Artificial Intelligence. Law, Governance and Technology Series(), vol 51. Springer, Cham. https://doi.org/10.1007/978-3-031-19447-4_11
Publisher Name: Springer, Cham
Print ISBN: 978-3-031-19446-7
Online ISBN: 978-3-031-19447-4