Part of the book series: Law, Governance and Technology Series ((LGTS,volume 53))

Abstract

This chapter covers the guiding ethical principles, which are grounded in the EU’s ‘human-centric’ approach to AI and are respectful of European values and principles. The chapter discusses five ethical principles (“ethical imperatives”) and their correlated values that must be respected in the development, deployment and use of AI systems. These ethical principles are: (i) respect for human autonomy; (ii) prevention of harm (non-maleficence); (iii) fairness/justice; (iv) explicability; (v) the principle of beneficence (‘do only good’), i.e., the principle of creating AI technology that is beneficial to humanity. It is explained that EU policy-makers have chosen to remain faithful to the EU’s cultural preferences and higher standard of protection against the risks posed by AI, building on the existing regulatory framework and ensuring that European values are at the heart of creating the right environment of trust for the successful development and use of AI. The overall aim of the ethics guidelines is—apart from establishing an ethical level playing field across all Member States and offering guidance on how to foster and secure the development of ethical AI systems—to bring a European ethical approach to the global stage, i.e. to stimulate discussion of ethical frameworks for AI “at a global level” and thereby to build an international consensus on AI ethics guidelines.


Notes

  1. 1.

    Report of the Committee on Legal Affairs of the European Parliament with recommendations to the Commission on Civil Law Rules on Robotics—Motion for a European Parliament Resolution (2015/2103(INL)), 27.1.2017, A8-0005/2017.

  2. 2.

    European Parliament Resolution of 16 February 2017 with recommendations to the Commission on Civil Law Rules on Robotics (2015/2103(INL)), (2018/C 252/25), OJ C 252, 18.7.2018, pp. 239–257 (hereafter: “European Parliament Resolution of 16 February 2017”).

  3. 3.

    European Parliament Resolution of 16 February 2017, Annex to the Resolution: Recommendations as to the content of the proposal requested, p. 253. See also points 13 and 14 of the Resolution: “(13) … the guiding ethical framework should be based on the principles of beneficence, non-maleficence, autonomy and justice, on the principles and values enshrined in Article 2 of the Treaty on European Union and in the Charter of Fundamental Rights, such as human dignity, equality, justice and equity, non-discrimination, informed consent, private and family life and data protection, as well as on other underlying principles and values of the Union law, such as non-stigmatisation, transparency, autonomy, individual responsibility and social responsibility, and on existing ethical practices and codes”; “(14) … special attention should be paid to robots that represent a significant threat to confidentiality owing to their placement in traditionally protected and private spheres and because they are able to extract and send personal and sensitive data”.

  4. 4.

    European Parliament Resolution of 16 February 2017, point 10, p. 244: “the potential for empowerment through the use of robotics is nuanced by a set of tensions or risks and should be seriously assessed from the point of view of human safety, health and security; freedom, privacy, integrity and dignity; self-determination and non-discrimination, and personal data protection”. See also: recital O; point 13; Annex, at p. 253 and p. 254: “Robotics research activities should respect fundamental rights and be conducted in the interests of the well-being and self-determination of the individual and society at large in their design, implementation, dissemination and use. Human dignity and autonomy – both physical and psychological – is always to be respected”.

  5. 5.

    Ibid., point 3, at p. 243.

  6. 6.

    European Parliament Resolution of 16 February 2017, Annex to the Resolution: Recommendations as to the content of the proposal requested, at pp. 244 and 255. See also recital M (p. 240): “the trend towards automation requires that those involved in the development and commercialisation of AI applications build in security and ethics at the outset, thereby recognizing that they must be prepared to accept legal liability for the quality of the technology they produce”.

  7. 7.

    “Inclusiveness allows for participation in decision-making processes by all stakeholders involved in or concerned by robotics research activities”. European Parliament Resolution of 16 February 2017, Annex to the Resolution: Recommendations as to the content of the proposal requested, p. 254.

  8. 8.

    “Robot designers should consider and respect people’s physical wellbeing, safety, health and rights. A robotics engineer must preserve human wellbeing, while also respecting human rights, and disclose promptly factors that might endanger the public or the environment”; “Precaution: Robotics research activities should be conducted in accordance with the precautionary principle, anticipating potential safety impacts of outcomes and taking due precautions, proportional to the level of protection, while encouraging progress for the benefit of society and the environment”. European Parliament Resolution of 16 February 2017, Annex to the Resolution: Recommendations as to the content of the proposal requested, p. 254.

  9. 9.

    General Secretariat of the Council, Subject: European Council meeting (19 October 2017) – Conclusions, Brussels, 19 October 2017 (OR. en), EUCO 14/17, at Section II – Digital Europe.

  10. 10.

    Commission Communication “Artificial Intelligence for Europe”, COM(2018) 237 final, Brussels, 25.4.2018.

  11. 11.

    European Council conclusions, 28 June 2018: https://www.consilium.europa.eu/en/press/press-releases/2018/06/29/20180628-euco-conclusions-final/.

  12. 12.

    Commission Communication “Coordinated Plan on Artificial Intelligence”, COM(2018) 795 final, Brussels, 7.12.2018.

  13. 13.

    Annex to the Commission Communication ‘Coordinated Plan on Artificial Intelligence’, COM(2018) 795 final, Brussels, 7.12.2018 - Coordinated Plan on the Development and Use of Artificial Intelligence Made in Europe – 2018.

  14. 14.

    “Draft Ethics Guidelines for Trustworthy AI”, High-Level Expert Group on Artificial Intelligence, 18 December 2018.

  15. 15.

    “Ethics Guidelines for Trustworthy AI”, High-Level Expert Group on Artificial Intelligence, 8 April 2019 (hereafter: “Ethics Guidelines for Trustworthy AI, 2019”).

  16. 16.

    “A definition of AI: Main capabilities and scientific disciplines – Definition developed for the purpose of the deliverables of the High-Level Expert Group on AI”, High-Level Expert Group on Artificial Intelligence, 18 December 2018.

  17. 17.

    Commission Communication, “Building Trust in Human-Centric Artificial Intelligence”, COM(2019) 168 final, Brussels, 8.4.2019.

  18. 18.

    Ibid., at p. 9.

  19. 19.

    Ibid., at pp. 4 and 9.

  20. 20.

    Ibid., at p. 3. “In their feedback so far, stakeholders overall have welcomed the practical nature of the guidelines and the concrete guidance they offer to developers, suppliers and users of AI on how to ensure trustworthiness”.

  21. 21.

    Ibid., at pp. 4 and 9: “The Commission supports the following key requirements for trustworthy AI, which are based on European values. It encourages stakeholders to apply the requirements and to test the assessment list that operationalises them in order to create the right environment of trust for the successful development and use of AI” (at p. 4); “based on the key requirements for AI to be considered trustworthy, the Commission will now launch a targeted piloting phase to ensure that the resulting ethical guidelines for AI development and use can be implemented in practice” (at p. 9).

  22. 22.

    “Ethics Guidelines for Trustworthy AI”, High-Level Expert Group on Artificial Intelligence, 8 April 2019. See the definition provided in the glossary section of the Ethics Guidelines, at p. 37.

  23. 23.

    Ibid., p. 4.

  24. 24.

    Ibid., at pp. 9 and 10.

  25. 25.

    Commission Communication, “Building Trust in Human-Centric Artificial Intelligence”, COM(2019) 168 final, Brussels, 8.4.2019, at pp. 1 and 2.

  26. 26.

    Ibid., at p. 2.

  27. 27.

    Ibid., at p. 9.

  28. 28.

    Ibid., at p. 2.

  29. 29.

    “Ethics Guidelines for Trustworthy AI”, High-Level Expert Group on Artificial Intelligence, 8 April 2019, at pp. 2 and 5–7.

  30. 30.

    In practice, however, there may be tensions between these elements (e.g. at times the scope and content of existing law might be out of step with ethical norms). It is our individual and collective responsibility as a society to work towards ensuring that all three components help to secure Trustworthy AI. This also means that policy-makers may need to review the adequacy of existing laws where these might be out of step with ethical principles. See Ethics Guidelines for Trustworthy AI, 2019, at p. 5.

  31. 31.

    Ibid., at p. 6.

  32. 32.

    Although the Guidelines set out a framework for achieving Trustworthy AI, they do not explicitly deal with Trustworthy AI’s first component (i.e. lawful AI). Instead, the main aim of the Guidelines is to offer guidance on the second and third components of trustworthy AI, i.e. fostering and securing ethical and robust AI. See Ethics Guidelines for Trustworthy AI, 2019, at pp. 2 and 6.

  33. 33.

    Ibid., at p. 6.

  34. 34.

    Ibid., at p. 9.

  35. 35.

    Ibid.

  36. 36.

    Ibid., at p. 10.

  37. 37.

    Ibid., at pp. 10 and 11.

  38. 38.

    Ibid., at p. 10.

    C. McCrudden acknowledges the feasibility of distilling a minimum content of human dignity that has been applied universally and borrows three elements from Neuman (2000, pp. 249–271) to outline what he calls the ‘minimum core’ of human dignity. These elements are: (i) every human being possesses an intrinsic worth, merely by being human; (ii) intrinsic worth should be recognized and respected by others, and some forms of treatment are inconsistent with respect for this intrinsic worth; (iii) the state exists for individual human beings (not vice versa). See McCrudden (2008), pp. 655–724.

    See also: Hilgendorf (2018), pp. 325 ff; Neuman (2000, pp. 249–271); Zardiashvili and Fosch-Villaronga (2020), pp. 30:121–143; Schroeder and Bani-Sadr (2017); Weisstub (2002), pp. 263–294; Floridi (2016), pp. 307–312; Marchant et al. (2011); Sharkey (2014), pp. 63–75; Floridi (2018), pp. 1–8; Cath (2018), p. 2133; Chopra and White (2011); Dautenhahn et al. (2002); Di Nucci (2017); Dicke (2002); Ebers and Navas Navarro (2020); Fiske et al. (2019); Floridi (2013); Floridi (2014); Floridi (2018); Fosch-Villaronga (2019); Fosch-Villaronga and Albo-Canals (2019); Fosch-Villaronga and Golia (2019); Fosch-Villaronga and Heldeweg (2018); Fosch-Villaronga and Millard (2019); Graeme et al. (2012); Grodzinsky et al. (2008); Guihot et al. (2017); Koops and Leenes (2014); Kritikos (2016); O’Mahony (2012); Tzimas (2021); Veale et al. (2018); Wagner (2018); Winfield and Jirotka (2018).

  39. 39.

    “Ethics Guidelines for Trustworthy AI”, High-Level Expert Group on Artificial Intelligence, 8 April 2019, at pp. 10–11.

  40. 40.

    The ability of AI to identify, classify, and discriminate magnifies the potential for human rights abuses in both scale and scope. The role of AI and how it could violate or risk violating human rights—covering also risks posed by prospective future developments in AI—is discussed in “Human Rights in the Age of Artificial Intelligence”, Report by Access Now, November 2018, especially at pp. 18–30.

  41. 41.

    “Ethics Guidelines for Trustworthy AI”, High-Level Expert Group on Artificial Intelligence, 8 April 2019, at p. 11.

  42. 42.

    Ibid.

  43. 43.

    For instance, a threat could come from automated hiring, where applicants are judged by AI that has learnt from historical data sets. “Discrimination that comes out of systems trained on data from people … reflects the behaviour of people who previously carried out the job,” explained Yoshua Bengio, a Professor in the University of Montreal’s department of computer science. In response to these concerns, ethical frameworks for AI are being written around the world. The threat of artificial intelligence is not that robots could be like us. The problem, according to scientists, is their inhumanity: we cannot make them care about justice or equality. See “EU backs AI regulation while China and US favour technology”, Siddharth Venkataramakrishnan, Financial Times, 25.04.2019, https://www.ft.com/content/4fd088a4-021b-11e9-bf0f-53b8511afd73.

  44. 44.

    “Human Rights in the Age of Artificial Intelligence”, Report by Access Now, November 2018, at p. 24. AI models are designed to sort and filter, whether by ranking search results or categorizing people into buckets. This discrimination can interfere with human rights when it treats different groups of people differently. Sometimes such discrimination has positive social aims, for example, when it is used in programs to promote diversity. In criminal justice, this discrimination is often the result of forms of bias. Use of AI in some systems can perpetuate historical injustice in everything from prison sentencing to loan applications. In 2013, researcher Latanya Sweeney found that a Google search for stereotypically African American-sounding names yielded ads that suggested an arrest record (such as “Trevon Jones, Arrested?”) in the vast majority of cases. In 2015, researchers at Carnegie Mellon found Google displayed far fewer ads for high-paying executive jobs to women. Google’s personalized ad algorithms are powered by AI, and they are taught to learn from user behavior. The more people click, search, and use the internet in racist or sexist ways, the more the algorithm translates that into ads. This is compounded by discriminatory advertiser preferences and becomes part of a cycle. How people perceive things affects the search results, which affect how people perceive things.

    See also: Sweeney (2013); Carpenter (2015).

    Given that facial recognition software has higher error rates for darker-skinned faces, it is likely that misidentification will disproportionately affect people of color. The gravity of the problem was demonstrated by the ACLU’s test of Amazon’s Rekognition facial recognition software. The ACLU scanned the faces of all 535 U.S. members of Congress against 25,000 public criminal mugshots using Rekognition’s API with the default 80% confidence level. No one in the U.S. Congress was actually in the mugshot database, yet there were 28 false matches. Of these matches, 38% were people of color, even though only 20% of members of Congress are people of color. This showed that, as with many facial recognition systems, Rekognition disproportionately impacted people of color. See: Brandom (2018).
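    The scale of the disparity reported by the ACLU can be made concrete with a back-of-the-envelope calculation. The following Python sketch is illustrative only: the figures are the ones cited above, and the variable names are invented for this example rather than drawn from the ACLU study.

```python
# Illustrative sketch of the disparity in the ACLU Rekognition test described above.
# Figures are taken from the note; variable names are invented for this example.

false_matches = 28             # false matches at the default 80% confidence level
share_poc_in_matches = 0.38    # share of the false matches affecting people of color
share_poc_in_congress = 0.20   # share of members of Congress who are people of color

# Approximate number of members of color among the false matches (~11 of 28).
poc_false_matches = round(false_matches * share_poc_in_matches)

# A simple "disparity ratio": how over-represented the group is among errors
# relative to its share of the population that was scanned (here ~1.9x).
disparity_ratio = share_poc_in_matches / share_poc_in_congress

print(f"False matches affecting people of color: ~{poc_false_matches} of {false_matches}")
print(f"Over-representation among false matches: {disparity_ratio:.1f}x")
```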

  45. 45.

    The European Group on Ethics in Science and New Technologies (EGE), founded in 1991, is an independent advisory body of the President of the European Commission, which advises on all aspects of Commission policies where ethical, societal and fundamental rights issues intersect with the development of science and new technologies.

  46. 46.

    European Group on Ethics in Science and New Technologies (EGE), “Statement on Artificial Intelligence, Robotics and ‘Autonomous’ Systems”, Brussels, 9 March 2018, European Commission, Directorate-General for Research and Innovation (“EGE – Statement on AI, 2018”).

  47. 47.

    Ibid., p. 14.

  48. 48.

    Ibid., p. 20.

  49. 49.

    Ibid.

  50. 50.

    Ibid., pp. 16–19.

  51. 51.

    Ibid., pp. 5 and 11.

  52. 52.

    The EGE’s Statement on AI mentioned some of the most prominent initiatives towards the formulation of ethical principles regarding AI and autonomous systems, such as the IEEE’s (Institute of Electrical and Electronics Engineers) policy paper on ‘Ethically Aligned Design’ (http://standards.ieee.org/news/2016/ethically_aligned_design.html); ITU’s (International Telecommunication Union) Global Summit ‘AI for Good’ in 2017 (https://www.itu.int/en/ITU-T/AI/Pages/201706-default.aspx); the ACM’s (Association for Computing Machinery) work on the issue, including a major AAAI/ACM ‘Conference on AI, Ethics, and Society’ in February 2018 (http://www.aies-conference.com). The EGE’s Statement noted that, within the private sector, companies such as IBM, Microsoft and Google’s DeepMind established their own ethic codes on AI and joined forces in creating broad initiatives such as the ‘Partnership on AI’ (https://www.partnershiponai.org/) or ‘OpenAI’ (https://openai.com/), which brought together industry, non-profit and academic organisations. One of the leading initiatives calling for a responsible development of AI was launched by the Future of Life Institute and culminated in the creation of the ‘Asilomar AI Principles’. This list of 23 fundamental principles to guide AI research and application was signed by hundreds of stakeholders, with signatories representing predominantly scientists, AI researchers and industry (https://futureoflife.org/ai-principles/). A similar participatory process was launched upon the initiative of the Forum on the Socially Responsible Development of Artificial Intelligence held by the University of Montreal in November 2017, in reaction to which a first draft of a potential ‘Declaration for a Responsible Development of Artificial Intelligence’ was developed. It is publicly accessible on an online platform where all sectors of society are invited to comment on the text (http://nouvelles.umontreal.ca/en/article/2017/11/03/montreal-declaration-for-aresponsible-development-of-artificial-intelligence/). The EGE’s Statement also mentioned that the UN has established a special research institute in The Hague to study the governance of Robotics and AI (UNICRI). It also noted, under the aegis of UNESCO, the COMEST Report on robotics ethics (see: UNESCO, ‘Report of COMEST on Robotics Ethics’, World Commission on the Ethics of Scientific Knowledge and Technology, 2017) and the IBC Report on big data and health, both adopted in September 2017. This was the reason—the lack of a harmonised European approach—which prompted the European Parliament in 2017 to call for a range of measures to prepare for the regulation of advanced robotics, including the development of a guiding ethical framework for the design, production and use of robots (European Parliament, Committee on Legal Affairs 2015/2103 (INL) Report with Recommendations to the Commission on Civil Law Rules on Robotics, Rapporteur Mady Delvaux).

  53. 53.

    European Group on Ethics in Science and New Technologies (EGE), “Statement on Artificial Intelligence, Robotics and ‘Autonomous’ Systems”, Brussels, 9 March 2018, European Commission, Directorate-General for Research and Innovation, p. 20.

  54. 54.

    Ibid., pp. 15 and 20.

  55. 55.

    Ibid., p. 14.

  56. 56.

    “A Unified Framework of Five Principles for AI in Society”, by Luciano Floridi and Josh Cowls, Harvard Data Science Review, July 2019 (updated in November 2019), at p. 2.

  57. 57.

    Ibid., at p. 4.

  58. 58.

    Floridi et al. (2018), pp. 689–707.

  59. 59.

    The AI4People’s Scientific Committee assessed the following six documents, which, taken together, produced forty-seven (47) ethics principles:

    1. (i)

      The Asilomar AI Principles, developed under the auspices of the Future of Life Institute, in collaboration with attendees of the high-level Asilomar conference of January 2017 (hereafter “Asilomar”; Asilomar AI Principles 2017), “Principles developed in conjunction with the 2017 Asilomar conference” [Benevolent AI 2017]. See at: https://futureoflife.org/ai-principles/ [“Asilomar AI Principles”].

    2. (ii)

      The Montreal Declaration for a Responsible Development of Artificial Intelligence, developed under the auspices of the University of Montreal, following the Forum on the Socially Responsible Development of AI of November 2017. Final text, December 2018. The principles mentioned are those which were announced as of 1st May 2018. See at: https://www.montrealdeclaration-responsibleai.com/the-declaration [hereafter “Montreal AI Declaration”]. The Montreal Declaration for responsible AI development has three main objectives: (1) Develop an ethical framework for the development and deployment of AI; (2) Guide the digital transition so everyone benefits from this technological revolution; (3) Open a national and international forum for discussion to collectively achieve equitable, inclusive, and ecologically sustainable AI development.

    3. (iii)

      The IEEE Initiative on Ethics of Autonomous and Intelligent Systems - Ethically Aligned Design (EAD), 2019. This crowd-sourced global treatise received contributions from 250 global thought leaders to develop principles and recommendations for the ethical development and design of autonomous and intelligent systems. See at: https://ethicsinaction.ieee.org. [hereafter “IEEE Initiative – Ethically Aligned Design”].

    4. (iv)

      European Group on Ethics in Science and New Technologies, “Statement on Artificial Intelligence, Robotics and ‘Autonomous’ Systems”, March 2018. See at: https://ec.europa.eu/research/ege/pdf/ege_ai_statement_2018.pdf [hereafter “EGE Statement on AI”].

    5. (v)

      The “five overarching principles for an AI code” offered in paragraph 417 of the UK House of Lords Artificial Intelligence Committee’s Report “AI in the UK: ready, willing and able?”, published on 16 April 2018 (hereafter “AIUK”; House of Lords 2018). Paragraph 417 of the Report suggested the following “five overarching principles for an AI Code”: (1) Artificial intelligence should be developed for the common good and benefit of humanity. (2) Artificial intelligence should operate on principles of intelligibility and fairness. (3) Artificial intelligence should not be used to diminish the data rights or privacy of individuals, families or communities. (4) All citizens have the right to be educated to enable them to flourish mentally, emotionally and economically alongside artificial intelligence. (5) The autonomous power to hurt, destroy or deceive human beings should never be vested in artificial intelligence. See at: https://publications.parliament.uk/pa/ld201719/ldselect/ldai/100/100.pdf [hereafter “AI House of Lords, 2018”].

    6. (vi)

      “Partnership on AI – Tenets”, 2018. Partnership on AI is a multi-stakeholder organisation consisting of academics, researchers, civil society organisations, companies building and utilising AI technology, and other groups. See at: https://partnershiponai.org/about/#tenets [hereafter “Partnership on AI – Tenets, 2018”].

  60. 60.

    Floridi et al. (2018), pp. 689–707, at pp. 700–705.

  61. 61.

    As explained, if the aim is to create a Good AI Society, the proposed five ethical principles should be embedded in the default practices of AI. To this end, AI should be designed and developed in ways that decrease inequality and further social empowerment, with respect for human autonomy, and increase benefits that are shared by all. Also, AI should be explicable, as explicability is considered to be a critical tool for building public trust in emerging technologies. Furthermore, a multi-stakeholder approach is required, ensuring that AI will serve the needs of society by enabling developers, users and rule-makers to collaborate from the outset. Different cultural frameworks inform attitudes to new technology. The AI4People’s Scientific Committee stated that this report represents a European approach, which is meant to be complementary to other approaches, and underlined its commitment to the development of AI technology in a way that secures people’s trust, serves the public interest and strengthens shared social responsibility. Ibid., at p. 701.

  62. 62.

    Ibid., at p. 696.

  63. 63.

    Ibid.

  64. 64.

    Ibid., at pp. 696–700.

  65. 65.

    “Ethics Guidelines for Trustworthy AI”, High-Level Expert Group on Artificial Intelligence, 8 April 2019, at pp. 11 and 12.

    It should be noted that these ethical principles resemble the four ethical principles in robotics engineering which were proposed by the European Parliament Resolution of 16 February 2017 with recommendations to the Commission on Civil Law Rules on Robotics (2015/2103(INL)), (2018/C 252/25). In particular, the Draft Code of Ethical Conduct for Robotics Engineers, as an Annex to the European Parliament Resolution, put forward four ethical principles in robotics engineering: (1) beneficence (robots should act in the best interests of humans); (2) non-maleficence (robots should not harm humans); (3) autonomy (human interaction with robots should be voluntary); (4) justice (the benefits of robotics should be distributed fairly). See European Parliament Resolution of 16 February 2017, Annex to the Resolution: Recommendations as to the content of the proposal requested, p. 253. See also point 13 of the EP Resolution: “(13) … the guiding ethical framework should be based on the principles of beneficence, non-maleficence, autonomy and justice, on the principles and values enshrined in Article 2 of the Treaty on European Union and in the Charter of Fundamental Rights, such as human dignity, equality, justice and equity, non-discrimination, informed consent, private and family life and data protection, as well as on other underlying principles and values of the Union law, such as non-stigmatisation, transparency, autonomy, individual responsibility and social responsibility, and on existing ethical practices and codes”.

  66. 66.

    EU Charter of Fundamental Rights: Article 1—Human dignity: Human dignity is inviolable. It must be respected and protected. Article 6—Right to liberty and security: Everyone has the right to liberty and security of person.

  67. 67.

    “Ethics Guidelines for Trustworthy AI”, High-Level Expert Group on Artificial Intelligence, 8 April 2019, at p. 12.

  68. 68.

    See, for instance, Article 22 of Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation—GDPR), which gives individuals the right not to be subject to a decision based solely on automated processing when this produces legal effects on users or similarly significantly affects them. Article 22 (Automated individual decision-making, including profiling) provides: “1. The data subject shall have the right not to be subject to a decision based solely on automated processing, including profiling, which produces legal effects concerning him or her or similarly significantly affects him or her. 2. Paragraph 1 shall not apply if the decision: (a) is necessary for entering into, or performance of, a contract between the data subject and a data controller; (b) is authorised by Union or Member State law to which the controller is subject and which also lays down suitable measures to safeguard the data subject’s rights and freedoms and legitimate interests; or (c) is based on the data subject’s explicit consent. 3. In the cases referred to in points (a) and (c) of paragraph 2, the data controller shall implement suitable measures to safeguard the data subject’s rights and freedoms and legitimate interests, at least the right to obtain human intervention on the part of the controller, to express his or her point of view and to contest the decision. 4. Decisions referred to in paragraph 2 shall not be based on special categories of personal data referred to in Article 9(1), unless point (a) or (g) of Article 9(2) applies and suitable measures to safeguard the data subject’s rights and freedoms and legitimate interests are in place.”
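    To illustrate how the logic of Article 22 might be operationalised inside an automated decision pipeline, the sketch below encodes a simplified reading of the provision. It is illustrative only and is not legal advice; the class, field and function names are invented for this example and are not drawn from the GDPR or the Ethics Guidelines.

```python
from dataclasses import dataclass

@dataclass
class DecisionContext:
    """Invented container for the facts relevant to a simplified Art. 22 check."""
    produces_legal_or_similar_effects: bool  # Art. 22(1)
    necessary_for_contract: bool             # Art. 22(2)(a)
    authorised_by_law: bool                  # Art. 22(2)(b), with safeguards laid down by law
    explicit_consent: bool                   # Art. 22(2)(c)

def solely_automated_decision_allowed(ctx: DecisionContext) -> tuple[bool, str]:
    """Return whether a solely automated decision may proceed under this simplified
    reading, and which safeguard regime applies. Not a legal determination."""
    if not ctx.produces_legal_or_similar_effects:
        return True, "Outside Art. 22(1): no legal or similarly significant effects."
    if ctx.necessary_for_contract or ctx.explicit_consent:
        # Art. 22(3): human intervention, right to express a view, right to contest.
        return True, "Exception applies; Art. 22(3) safeguards (human intervention on request) required."
    if ctx.authorised_by_law:
        return True, "Exception under Art. 22(2)(b); the authorising law must provide safeguards."
    return False, "No exception applies: route the case to a human decision-maker."

# Example: a fully automated refusal with legal effects, no consent, no authorisation.
allowed, reason = solely_automated_decision_allowed(
    DecisionContext(True, False, False, False))
print(allowed, "-", reason)   # False - route the case to a human decision-maker
```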

  69. 69.

    “Ethics Guidelines for Trustworthy AI”, High-Level Expert Group on Artificial Intelligence, 8 April 2019, at p. 12.

  70. 70.

    European Group on Ethics in Science and New Technologies (EGE), “Statement on Artificial Intelligence, Robotics and ‘Autonomous’ Systems”, Brussels, 9 March 2018, European Commission, Directorate-General for Research and Innovation (“EGE - Statement on AI, 2018”), at p. 16.

    Furthermore, according to EGE’s Statement on AI, “the term ‘autonomy’ stems from philosophy and refers to the capacity of human persons to legislate for themselves, to formulate, think and choose norms, rules and laws for themselves to follow. It encompasses the right to be free to set one’s own standards and choose one’s own goals and purposes in life. The cognitive processes that support and facilitate this are among the ones most closely identified with the dignity of human persons and human agency and activity par excellence. They typically entail the features of self-awareness, self-consciousness and self-authorship according to reasons and values. Autonomy in the ethically relevant sense of the word can therefore only be attributed to human beings. … autonomy in its original sense is an important aspect of human dignity that ought not to be relativized. Since no smart artefact or system - however advanced and sophisticated - can in and by itself be called ‘autonomous’ in the original ethical sense, they cannot be accorded the moral standing of the human person and inherit human dignity. Human dignity as the foundation of human rights implies that meaningful human intervention and participation must be possible in matters that concern human beings and their environment. Therefore, in contrast to the automation of production, it is not appropriate to manage and decide about humans in the way we manage and decide about objects or data, even if this is technically conceivable. Such an ‘autonomous’ management of human beings would be unethical, and it would undermine the deeply entrenched European core values. Human beings ought to be able to determine which values are served by technology, what is morally relevant and which final goals and conceptions of the good are worthy to be pursued. This cannot be left to machines, no matter how powerful they are”. See EGE—Statement on AI, 2018, at pp. 9–10.

  71. 71.

    Ibid., at p. 10. Also: “A second field of contestation and controversy are ‘autonomous’ weapon systems. These military systems can carry lethal weapons as their payload, but as far as the software is concerned they are not very different from ‘autonomous’ systems that we could find in a range of civilian domains close to home. A large part of the debate takes place at the Conference on Certain Conventional Weapons in Geneva concerning the moral acceptability of ‘autonomous’ weapons and legal and moral responsibility for the deployment of these systems. Now attention needs to turn to questions as to what the nature and meaning of ‘meaningful human control’ over these systems is and how to institute morally desirable forms of control” (at p. 11).

  72. 72.

    The “Asilomar AI Principles” document, 2017, which comprises a list of 23 fundamental principles to guide AI research and application, was developed under the auspices of the Future of Life Institute, in collaboration with attendees of the high-level Asilomar conference of January 2017 (“Asilomar AI Principles”). See Principle number 16 (“Human Control”), at: https://futureoflife.org/ai-principles/.

  73. 73.

    The UK House of Lords Artificial Intelligence Committee’s Report “AI in the UK: ready, willing and able?”, published on 16 April 2018. See the “five overarching principles for an AI code” offered in paragraph 417 of the Report, especially Principle number (5).

    See at: https://publications.parliament.uk/pa/ld201719/ldselect/ldai/100/100.pdf.

  74. 74.

    “The Montreal Declaration for a Responsible Development of Artificial Intelligence”, developed under the auspices of the University of Montreal, following the Forum on the Socially Responsible Development of AI of November 2017. Final text, December 2018 (“Montreal Declaration for AI”).

    See Principle number 2 (“Respect for Autonomy Principle”), at p. 9, at: https://www.montrealdeclaration-responsibleai.com/the-declaration: “2 – Respect for Autonomy Principle: AIS (artificial intelligent systems) must be developed and used while respecting people’s autonomy, and with the goal of increasing people’s control over their lives and their surroundings: (1) AIS must allow individuals to fulfill their own moral objectives and their conception of a life worth living. (2) AIS must not be developed or used to impose a particular lifestyle on individuals, whether directly or indirectly, by implementing oppressive surveillance and evaluation or incentive mechanisms. (3) Public institutions must not use AIS to promote or discredit a particular conception of the good life. (4) It is crucial to empower citizens regarding digital technologies by ensuring access to the relevant forms of knowledge, promoting the learning of fundamental skills (digital and media literacy), and fostering the development of critical thinking. (5) AIS must not be developed to spread untrustworthy information, lies, or propaganda, and should be designed with a view to containing their dissemination. (6) The development of AIS must avoid creating dependencies through attention-capturing techniques or the imitation of human characteristics (appearance, voice, etc.) in ways that could cause confusion between AIS and humans”.

  75. 75.

    OECD, Recommendation of the Council on Artificial Intelligence, 2020, OECD/LEGAL/0449, adopted on 22.05.2019. The OECD Recommendation on AI provided the first intergovernmental standard for AI policies and a foundation on which to conduct further analysis and develop tools to support governments in their implementation efforts. The OECD’s Recommendation on AI was adopted by the OECD Council at Ministerial level on 22 May 2019 on the proposal of the Committee on Digital Economy Policy (CDEP). The Recommendation aims to foster innovation and trust in AI by promoting the responsible stewardship of trustworthy AI while ensuring respect for human rights and democratic values. Complementing existing OECD standards in areas such as privacy, digital security risk management, and responsible business conduct, the Recommendation focuses on AI-specific issues and sets a standard that is implementable and sufficiently flexible to stand the test of time in this rapidly evolving field. In June 2019, at the Osaka Summit, G20 Leaders welcomed G20 AI Principles, drawn from the OECD Recommendation. The OECD Recommendation on AI identified five complementary values-based principles for the responsible stewardship of trustworthy AI and called on AI actors to promote and implement them. These five ethical principles are the following: (i) inclusive growth, sustainable development and well-being; (ii) human-centred values and fairness; (iii) transparency and explainability; (iv) robustness, security and safety; (v) accountability (at pp. 7–8).

  76. 76.

    Ibid., at p. 7.

  77. 77.

    The UNI Global Union Top 10 Principles for Ethical Artificial Intelligence, 2017.

  78. 78.

    Ibid., at p. 8.

  79. 79.

    The IEEE (Institute of Electrical and Electronics Engineers) Global Initiative on Ethics of Autonomous and Intelligent Systems (A/IS) - Ethically Aligned Design (EAD) Report: A Vision for Prioritizing Human Well-being with Autonomous and Intelligent Systems (A/IS), 2019 (“IEEE – EAD Report”), at p. 102.

    See also the following Recommendations, at p. 102: “1. It is important that human workers’ interaction with other workers not always be intermediated by affective systems (or other technology) which may filter out autonomy, innovation, and communication. 2. Human points of contact should remain available to customers and other organizations when using A/IS. 3. Affective systems should be designed to support human autonomy, sense of competence, and meaningful relationships as these are necessary to support a flourishing life. 4. Even where A/IS are less expensive, more predictable, and easier to control than human employees, a core network of human employees should be maintained at every level of decision-making in order to ensure preservation of human autonomy, communication, and innovation. 5. Management and organizational theorists should consider appropriate use of affective and autonomous systems to enhance their business models and the efficacy of their workforce within the limits of the preservation of human autonomy”.

  80. 80.

    Ibid., at p. 100.

  81. 81.

    Ibid., at p. 32 (see Principle 8 – Competence). According to the IEEE Report, “creators of A/IS should integrate safeguards against the incompetent operation of their systems. Safeguards could include issuing notifications/warnings to operators in certain conditions, limiting functionalities for different levels of operators (e.g., novice vs. advanced), system shut-down in potentially risky conditions, etc.”, at pp. 32–33.

  82. 82.

    Ibid., at p. 60.

    Also, according to the IEEE Report, “ultimately, the behavior of algorithms rests solely in their design, and that design rests solely in the hands of those who designed them. Perhaps more importantly, however, is the matter of choice in terms of how the user chooses to interact with the algorithm. Users often don’t know when an algorithm is interacting with them directly or their data which acts as a proxy for their identity. Should there be a precedent for the A/IS user to know when they are interacting with an algorithm? What about consent? The responsibility for the behavior of algorithms remains with the designer, the user, and a set of well-designed guidelines that guarantee the importance of human autonomy in any interaction. As machine functions become more autonomous and begin to operate in a wider range of situations, any notion of those machines working for or against human beings becomes contested. Does the machine work for someone in particular, or for particular groups but not others? Who decides on the parameters? Is it the machine itself? Such questions become key factors in conversations around ethical standards” (at p. 60).

  83. 83.

    Floridi et al. (2018), p. 698.

  84. 84.

    Ibid.

  85. 85.

    Ibid.

  86. 86.

    European Group on Ethics in Science and New Technologies (EGE), “Statement on Artificial Intelligence, Robotics and ‘Autonomous’ Systems”, Brussels, 9 March 2018, European Commission, Directorate-General for Research and Innovation, at p. 8.

  87. 87.

    The prevention of harm principle is strongly associated with the protection of physical or mental integrity, as reflected in Article 3 of the EU Charter of Fundamental Rights, which reads as follows: Article 3—Right to integrity of the person: 1. Everyone has the right to respect for his or her physical and mental integrity. 2. In the fields of medicine and biology, the following must be respected in particular: (a) the free and informed consent of the person concerned, according to the procedures laid down by law; (b) the prohibition of eugenic practices, in particular those aiming at the selection of persons; (c) the prohibition on making the human body and its parts as such a source of financial gain; (d) the prohibition of the reproductive cloning of human beings.

  88. 88.

    “Ethics Guidelines for Trustworthy AI”, High-Level Expert Group on Artificial Intelligence, 8 April 2019, at p. 12.

  89. 89.

    Ibid.

  90. 90.

    Ibid.

  91. 91.

    Ibid.

  92. 92.

    Ibid., at p. 16.

  93. 93.

    European Group on Ethics in Science and New Technologies (EGE), “Statement on Artificial Intelligence, Robotics and ‘Autonomous’ Systems”, Brussels, 9 March 2018, European Commission, Directorate-General for Research and Innovation, at pp. 18–19. Principle (g) – Security, safety, bodily and mental integrity.

  94. 94.

    Ibid., at p. 19: “Principle (h) – Data protection and privacy: In an age of ubiquitous and massive collection of data through digital communication technologies, the right to protection of personal information and the right to respect for privacy are crucially challenged. Both physical AI robots as part of the Internet of Things, as well as AI softbots that operate via the World Wide Web must comply with data protection regulations and not collect and spread data or be run on sets of data for whose use and dissemination no informed consent has been given. ‘Autonomous’ systems must not interfere with the right to private life which comprises the right to be free from technologies that influence personal development and opinions, the right to establish and develop relationships with other human beings, and the right to be free from surveillance. Also in this regard, exact criteria should be defined and mechanisms established that ensure ethical development and ethically correct application of ‘autonomous’ systems. In light of concerns with regard to the implications of ‘autonomous’ systems on private life and privacy, consideration may be given to the ongoing debate about the introduction of two new rights: the right to meaningful human contact and the right to not be profiled, measured, analysed, coached or nudged”.

  95. 95.

    Ibid.

  96. 96.

    Ibid.

  97. 97.

    “The Montreal Declaration for a Responsible Development of Artificial Intelligence”, 2018 (“Montreal Declaration for AI”). See Principle number 8 (“Prudence Principle”), at p. 15.

  98. 98.

    Ibid. See Principle number 8 (“Prudence Principle”), at p. 15: “8 – Prudence Principle: Every person involved in AI development must exercise caution by anticipating, as far as possible, the adverse consequences of AIS use and by taking the appropriate measures to avoid them. (1) It is necessary to develop mechanisms that consider the potential for the double use — beneficial and harmful —of AI research and AIS development (whether public or private) in order to limit harmful uses. (2) When the misuse of an AIS endangers public health or safety and has a high probability of occurrence, it is prudent to restrict open access and public dissemination to its algorithm. (3) Before being placed on the market and whether they are offered for charge or for free, AIS must meet strict reliability, security, and integrity requirements and be subjected to tests that do not put people’s lives in danger, harm their quality of life, or negatively impact their reputation or psychological integrity. These tests must be open to the relevant public authorities and stakeholders. (4) The development of AIS must preempt the risks of user data misuse and protect the integrity and confidentiality of personal data. (5) The errors and flaws discovered in AIS and SAAD should be publicly shared, on a global scale, by public institutions and businesses in sectors that pose a significant danger to personal integrity and social organization”.

  99. 99.

    The UK House of Lords Artificial Intelligence Committee’s Report “AI in the UK: ready, willing and able?”, 16 April 2018. See the “five overarching principles for an AI code” offered in paragraph 417 of the Report, especially Principle number (3). The UK House of Lords Artificial Intelligence Committee’s Report argued that several issues associated with the use of AI require careful thought, including determining legal liability in cases where a decision taken by an algorithm has an adverse impact on someone’s life, the potential criminal misuse of AI and data, and the use of AI in autonomous weapons systems. See Chap. 9 of the Committee’s Report, Mitigating the risk of Artificial Intelligence, paras 304–347, pp. 95–105.

  100. 100.

    Ibid., at para. 300, p. 93.

  101. 101.

    “Partnership on AI – Tenets”, 2018. “Tenet number 6: We will work to maximize the benefits and address the potential challenges of AI technologies, by: a) Working to protect the privacy and security of individuals. b) Striving to understand and respect the interests of all parties that may be impacted by AI advances. c) Working to ensure that AI research and engineering communities remain socially responsible, sensitive, and engaged directly with the potential influences of AI technologies on wider society. d) Ensuring that AI research and technology is robust, reliable, trustworthy, and operates within secure constraints. e) Opposing development and use of AI technologies that would violate international conventions or human rights, and promoting safeguards and technologies that do no harm”. See at: https://partnershiponai.org/about/#tenets.

  102. 102.

    The “Asilomar AI Principles”, 2017, developed under the auspices of the Future of Life Institute. See Principles number 6 (“Safety”), number 12 (“Personal Privacy”), number 13 (“Liberty and Privacy”), number 18 (“AI Arms Race”), number 19 (“Capability Caution”).

  103. 103.

    OECD, Recommendation of the Council on Artificial Intelligence, 2020, OECD/LEGAL/0449, adopted on 22.05.2019, at p. 8. See Principle 1.4—Robustness, security and safety: (a) AI systems should be robust, secure and safe throughout their entire lifecycle so that, in conditions of normal use, foreseeable use or misuse, or other adverse conditions, they function appropriately and do not pose unreasonable safety risk. (b) To this end, AI actors should ensure traceability, including in relation to datasets, processes and decisions made during the AI system lifecycle, to enable analysis of the AI system’s outcomes and responses to inquiry, appropriate to the context and consistent with the state of art. (c) AI actors should, based on their roles, the context, and their ability to act, apply a systematic risk management approach to each phase of the AI system lifecycle on a continuous basis to address risks related to AI systems, including privacy, digital security, safety and bias.

  104. 104.

    The UNI Global Union Top 10 Principles for Ethical Artificial Intelligence, 2017, at p. 9.

  105. 105.

    Ibid., at p. 8.

  106. 106.

    The IEEE (Institute of Electrical and Electronics Engineers) Global Initiative on Ethics of Autonomous and Intelligent Systems (A/IS)—Ethically Aligned Design (EAD) Report: A Vision for Prioritizing Human Well-being with Autonomous and Intelligent Systems (A/IS), 2019 (“IEEE – EAD Report”).

    See Principle 1 – Human Rights, at p. 19. “A/IS shall be created and operated to respect, promote, and protect internationally recognized human rights. … “human rights”, as defined by international law, provide a unilateral basis for creating any A/IS [autonomous and intelligent systems], as these systems affect humans, their emotions, data, or agency”.

    Principle 3 – Data Agency, at p. 23: “A/IS creators shall empower individuals with the ability to access and securely share their data, to maintain people’s capacity to have control over their identity”. “In an era where A/IS are already pervasive in society, governments must recognize that limiting the misuse of personal data is not enough”.

    Principle 7 – Awareness of Misuse, at p. 31: “A/IS creators shall guard against all potential misuses and risks of A/IS in operation”. “New technologies give rise to greater risk of deliberate or accidental misuse, and this is especially true for A/IS. A/IS increases the impact of risks such as hacking, misuse of personal data, system manipulation, or exploitation of vulnerable users by unscrupulous parties. Cases of A/IS hacking have already been widely reported, with driverless cars, for example. The Microsoft Tay AI chatbot was famously manipulated when it mimicked deliberately offensive users. In an age where these powerful tools are easily available, there is a need for a new kind of education for citizens to be sensitized to risks associated with the misuse of A/IS. … Responsible innovation requires A/IS creators to anticipate, reflect, and engage with users of A/IS”.

  107. 107.

    Ibid. See Principle 8 – Competence, at p. 32: “Creators shall specify and operators shall adhere to the knowledge and skill required for safe and effective operation”. According to the IEEE Report, “while standards for operator competence are necessary to ensure the effective, safe, and ethical application of A/IS, these standards are not the same for all forms of A/IS. The level of competence required for the safe and effective operation of A/IS will range from elementary, such as “intuitive” use guided by design, to advanced, such as fluency in statistics”.

    The IEEE – EAD Report provided the following recommendations (at pp. 32–33): “1. Creators of A/IS should specify the types and levels of knowledge necessary to understand and operate any given application of A/IS. In specifying the requisite types and levels of expertise, creators should do so for the individual components of A/IS and for the entire systems. 2. Creators of A/IS should integrate safeguards against the incompetent operation of their systems. Safeguards could include issuing notifications/warnings to operators in certain conditions, limiting functionalities for different levels of operators (e.g., novice vs. advanced), system shut-down in potentially risky conditions, etc. 3. Creators of A/IS should provide the parties affected by the output of A/IS with information on the role of the operator, the competencies required, and the implications of operator error. Such documentation should be accessible and understandable to both experts and the general public. 4. Entities that operate A/IS should create documented policies to govern how A/IS should be operated. These policies should include the real-world applications for such A/IS, any preconditions for their effective use, who is qualified to operate them, what training is required for operators, how to measure the performance of the A/IS, and what should be expected from the A/IS. The policies should also include specification of circumstances in which it might be necessary for the operator to override the A/IS. 5. Operators of A/IS should, before operating a system, make sure that they have access to the requisite competencies. The operator need not be an expert in all the pertinent domains but should have access to individuals with the requisite kinds of expertise”.
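    The safeguards listed in these recommendations (warnings to operators, functionality limits for different levels of operators, shut-down in potentially risky conditions) can be sketched as a simple gating function. The Python example below is a minimal illustration of that idea under invented assumptions; the function names, operator levels and feature list are not taken from the IEEE document.

```python
from enum import Enum

class OperatorLevel(Enum):
    NOVICE = 1
    ADVANCED = 2

# Invented mapping of system functions to the minimum operator competence required.
REQUIRED_LEVEL = {
    "view_output": OperatorLevel.NOVICE,
    "override_decision": OperatorLevel.ADVANCED,
    "disable_safety_checks": OperatorLevel.ADVANCED,
}

def authorise(function_name: str, operator: OperatorLevel, risky_conditions: bool) -> bool:
    """Gate functionality by operator competence and block operation in risky conditions."""
    if risky_conditions:
        print("Warning: potentially risky conditions detected - operation blocked.")
        return False
    required = REQUIRED_LEVEL.get(function_name)
    if required is None:
        return False  # unknown functions are denied by default
    if operator.value < required.value:
        print(f"Notification: '{function_name}' requires {required.name} competence.")
        return False
    return True

print(authorise("override_decision", OperatorLevel.NOVICE, risky_conditions=False))  # False
print(authorise("view_output", OperatorLevel.NOVICE, risky_conditions=False))        # True
```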

  108. 108.

    Floridi et al. (2018), p. 697.

  109. 109.

    “Ethics Guidelines for Trustworthy AI”, High-Level Expert Group on Artificial Intelligence, 8 April 2019, at p. 12.

  110. 110.

    Ibid.

    Also, according to the AI4People’s Scientific Committee, the principle of justice (which is one of the four classic bioethics principles and includes the aims of promoting prosperity and preserving solidarity) “is typically invoked in relation to the distribution of resources, such as new and experimental treatment options or simply the general availability of conventional healthcare”. See Floridi et al. (2018), p. 698.

  111. 111.

    The principle of proportionality is one of the basic principles of European law and described as follows: “In accordance with the principle of proportionality, which is one of the general principles of Community law, the lawfulness of the prohibition of an economic activity is subject to the condition that the prohibitory measures are appropriate and necessary in order to achieve the objectives legitimately pursued by the legislation in question, it being understood that when there is a choice between several appropriate measures recourse must be had to the least onerous, and the disadvantages caused must not be disproportionate to the aims pursued”, see Case C-331/88, Fedesa and others [1990] ECR I-4023. See also Case C-210/03, judgment of the Court of 14.12.2004, at para. 47: “… the principle of proportionality, which is one of the general principles of Community law, requires that measures implemented through Community provisions are appropriate for attaining the objective pursued and must not go beyond what is necessary to achieve it”. See also Case 137/85 Maizena [1987] ECR 4587, at para. 15; Case C-339/92 ADM Olmuhlen [1993] ECR I-6473, at para. 15; Case C-210/00 Kaserei Champignon Hofmeister [2002] ECR I-6453, at para. 59.

  112. 112.

    “Ethics Guidelines for Trustworthy AI”, High-Level Expert Group on Artificial Intelligence, 8 April 2019, at pp. 12–13.

  113. 113.

    Ibid. See also at p. 13: “Measures taken to achieve an end (e.g. the data extraction measures implemented to realise the AI optimisation function) should be limited to what is strictly necessary. [The principle of proportionality] also entails that when several measures compete for the satisfaction of an end, preference should be given to the one that is least adverse to fundamental rights and ethical norms (e.g. AI developers should always prefer public sector data to personal data). Reference can also be made to the proportionality between user and deployer, considering the rights of companies (including intellectual property and confidentiality) on the one hand, and the rights of the user on the other”.

  114. 114.

    Ibid., at p. 13.

  115. 115.

    Ibid. See also at p. 19: “The requirement of accountability complements the above requirements, and is closely linked to the principle of fairness. It necessitates that mechanisms be put in place to ensure responsibility and accountability for AI systems and their outcomes, both before and after their development, deployment and use”.

  116. 116.

    European Group on Ethics in Science and New Technologies (EGE), “Statement on Artificial Intelligence, Robotics and ‘Autonomous’ Systems”, Brussels, 9 March 2018, European Commission, Directorate-General for Research and Innovation, at p. 17.

  117. 117.

    Ibid.

  118. 118.

    Ibid.

  119. 119.

    “The Montreal Declaration for a Responsible Development of Artificial Intelligence”, 2018. See Principle number 6 (“Equity Principle”), at p. 13: “6 – Equity Principle: The development and use of AIS must contribute to the creation of a just and equitable society. 1. AIS must be designed and trained so as not to create, reinforce, or reproduce discrimination based on — among other things — social, sexual, ethnic, cultural, or religious differences. 2. AIS development must help eliminate relationships of domination between groups and people based on differences of power, wealth, or knowledge. 3. AIS development must produce social and economic benefits for all by reducing social inequalities and vulnerabilities. 4. Industrial AIS development must be compatible with acceptable working conditions at every step of their life cycle, from natural resources extraction to recycling, and including data processing. 5. The digital activity of users of AIS and digital services should be recognized as labor that contributes to the functioning of algorithms and creates value. 6. Access to fundamental resources, knowledge and digital tools must be guaranteed for all. 7. We should support the development of commons algorithms — and of open data needed to train them — and expand their use, as a socially equitable objective”.

  120. 120.

    Ibid., Principle number 6 (“Equity Principle”), at p. 13, at 6.1.

  121. 121.

    Ibid., Principle number 6 (“Equity Principle”), at p. 13, at 6.2.

  122. 122.

    Ibid., Principle number 6 (“Equity Principle”), at p. 13, at 6.3.

  123. 123.

    Ibid., Principle number 7 (“Diversity - Inclusion Principle”), at p. 14, at 7.3.

  124. 124.

    The UK House of Lords Artificial Intelligence Committee’s Report “AI in the UK: ready, willing and able?”, 16 April 2018, at para. 275, p. 86. Also: “The UK must be ready for the disruption that AI will have on the way in which we work. We support the Government’s interest in developing adult retraining schemes, as we believe that AI will disrupt a wide range of jobs over the coming decades, and both blue- and white-collar jobs which exist today will be put at risk. It will therefore be important to encourage and support workers as they move into the new jobs and professions we believe will be created as a result of new technologies, including AI. The National Retraining Scheme could play an important role here, and must ensure that the recipients of retraining schemes are representative of the wider population. Industry should assist in the financing of the National Retraining Scheme by matching Government funding. This partnership would help improve the number of people who can access the scheme and better identify the skills required” (at para. 236, p. 76 of the Report).

  125. 125.

    Ibid. See, for instance, at para. 238, p. 77 of the Report: “Education and artificial intelligence – Artificial intelligence, regardless of the pace of its development, will have an impact on future generations. The education system needs to ensure that it reflects the needs of the future, and prepares children for life with AI and for a labour market whose needs may well be unpredictable. Education in this context is important for two reasons. First, to improve technological understanding, enabling people to navigate an increasingly digital world, and inform the debate around how AI should, and should not, be used. Second, to ensure that the UK can capitalise on its position as a world leader in the development of AI, and grow this potential”.

  126. 126.

    Ibid., at para. 276, p. 86 of the Report. “Everyone must have access to the opportunities provided by AI. The Government must outline its plans to tackle any potential societal or regional inequality caused by AI, and this must be explicitly addressed as part of the implementation of the Industrial Strategy”.

  127. 127.

    Ibid., see the “five overarching principles for an AI code” offered in paragraph 417 of the Report, especially Principle number (4).

  128. 128.

Amongst the immediate solutions offered by witnesses to tackle the issue of bias was the creation of more diverse datasets, which could fairly reflect the societies and communities that AI systems increasingly affect. Other witnesses highlighted the need to remove bias from training data and from the AI systems developed using them. However, it was also argued that the problem is more complicated than that. As Dr Ing. Konstantinos Karachalios, Managing Director of the IEEE Standards Association, stated, “you can never be neutral; it is us. This is projected in what we do. It is projected in our engineering systems and algorithms and the data that we are producing. The question is how these preferences can become explicit, because if it can become explicit it is accountable and you can deal with it. If it is presented as a fact, it is dangerous; it is a bias and it is hidden under the table and you do not see it. It is the difficulty of making implicit things explicit”. Ibid., at para. 110, p. 41 of the Report.

  129. 129.

On the other hand, the UK House of Lords Artificial Intelligence Committee Report pointed out that, according to several witnesses, AI could help tackle long-standing prejudices and social inequalities. For instance, as Kriti Sharma, Vice-President of Artificial Intelligence and Bots at Sage, argued: “AI can help us fix some of the bias as well. Humans are biased; machines are not, unless we train them to be. AI can do a good job at detecting unconscious bias as well. For example, if feedback is given in performance reviews where different categories of people are treated differently, the machine will say, ‘That looks weird. Would you like to reconsider that?’”. Ibid., at para. 118, p. 44.

  130. 130.

    Ibid., at paras 119–120, p. 44. See also paras 107–118 (“Addressing prejudice”) of the UK House of Lords Artificial Intelligence Committee’s Report: “The current generation of AI systems, which have machine learning at their core, need to be taught how to spot patterns in data, and this is normally done by feeding them large bodies of data, commonly known as training datasets. These systems are designed to spot patterns, and if the data is unrepresentative, or the patterns reflect historical patterns of prejudice, then the decisions which they make may be unrepresentative or discriminatory as well. This can present problems when these systems are relied upon to make real world decisions. Within the AI community, this is commonly known as ‘bias’. While the term ‘bias’ might at first glance appear straightforward, there are in fact a variety of subtle ways in which bias can creep into a system. Much of the data we deem to be useful is about human beings, and is collected by human beings, with all of the subjectivity that entails. As LexisNexis UK put it, “biases may originate in the data used to train the system, in data that the system processes during its period of operation, or in the person or organisation that created it. There are additional risks that the system may produce unexpected results when based on inaccurate or incomplete data, or due to any errors in the algorithm itself”. It is also important to be aware that bias can emerge when datasets inaccurately reflect society, but it can also emerge when datasets accurately reflect unfair aspects of society. For example, an AI system trained to screen job applications will typically use datasets of previously successful and unsuccessful candidates, and will then attempt to determine particular shared characteristics between the two groups to determine who should be selected in future job searches. While the intention may be to ascertain those who will be capable of doing the job well, and fit within the company culture, past interviewers may have consciously or unconsciously weeded out candidates based on protected characteristics (such as age, sexual orientation, gender, or ethnicity) and socio-economic background, in a way which would be deeply unacceptable today”. [paras 107–109 of the Report].

Several witnesses pointed out that the issue could not be easily confined to the datasets themselves, and in some cases “bias and discrimination may emerge only when an algorithm processes particular data”. The Centre for the Future of Intelligence emphasised that “identifying and correcting such biases poses significant technical challenges that involve not only the data itself, but also what the algorithms are doing with it (for example, they might exacerbate certain biases, or hide them, or even create them)”. The difficulty of fixing these issues was illustrated recently when it emerged that Google had still not fixed its visual identification algorithms, which could not distinguish between gorillas and black people, nearly three years after the problem was first identified. Instead, Google has simply disabled the ability to search for gorillas in products such as Google Photos which use this feature. The consequences of this are already starting to be felt. Several witnesses highlighted the growing use of AI within the US justice system, in particular the Correctional Offender Management Profiling for Alternative Sanctions (COMPAS) system, developed by Northpointe, and used across several US states to assign risk ratings to defendants, which help to assist judges in sentencing and parole decisions. Will Crosthwait, co-founder of AI start-up Kensai, highlighted investigations which found that the system commonly overestimated the recidivism risk of black defendants, and underestimated that of white defendants. Big Brother Watch observed that here in the UK, Durham Police have started to investigate the use of similar AI systems for determining whether suspects should be kept in custody, and described this and other developments as a “very worrying trend, particularly when the technology is being trialled when its abilities are far from accurate”. Evidence from Sheena Urwin, Head of Criminal Justice at Durham Constabulary, emphasised the considerable lengths that Durham Constabulary have taken to ensure their use of these tools is open, fair and ethical, in particular the development of their ‘ALGO-CARE’ framework for the ethical use of algorithms in policing. [paras 112–113 of the Report].
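    To make the mechanism described in these paragraphs concrete, the following is a minimal, hypothetical sketch (not drawn from the Report) of how historically biased training data can be surfaced before a model is built on it: selection rates are compared across groups and a simple disparate-impact ratio is computed. The records and group labels are invented for illustration only.

```python
# Hypothetical illustration: surfacing unequal outcomes in historical data
# before it is used to train a screening model. Not from the Report.
from collections import defaultdict

past_applicants = [
    {"group": "A", "hired": 1}, {"group": "A", "hired": 1}, {"group": "A", "hired": 0},
    {"group": "B", "hired": 0}, {"group": "B", "hired": 0}, {"group": "B", "hired": 1},
]

def selection_rates(records):
    """Hiring rate per group in the historical dataset."""
    totals, hired = defaultdict(int), defaultdict(int)
    for r in records:
        totals[r["group"]] += 1
        hired[r["group"]] += r["hired"]
    return {g: hired[g] / totals[g] for g in totals}

rates = selection_rates(past_applicants)
# Disparate-impact ratio: rate of the least-favoured group over the most-favoured group.
ratio = min(rates.values()) / max(rates.values())
print(rates, ratio)
```

    A ratio well below 1.0 signals that the historical data already encodes unequal outcomes, which a system trained on that data would be likely to reproduce.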

  131. 131.

    Ibid., at para. 121, p. 44.

  132. 132.

    The “Asilomar AI Principles” document, 2017, developed under the auspices of the Future of Life Institute. See Principle number 14 (“Shared Benefit”).

  133. 133.

    Ibid., Principle number 15 (“Shared Prosperity”).

  134. 134.

    “Partnership on AI – Tenets”, 2018. Tenet number 1.

  135. 135.

    Ibid., Tenet number 6(b).

  136. 136.

    OECD, Recommendation of the Council on Artificial Intelligence, 2020, OECD/LEGAL/0449, adopted on 22.05.2019, at p. 7.

  137. 137.

    The UNI Global Union Top 10 Principles for Ethical Artificial Intelligence, 2017. Principle 5 - Ensure a Genderless, Unbiased AI, at p. 8.

  138. 138.

    Ibid., Principle 6—Share the Benefits of AI Systems, at p. 8.

  139. 139.

    The IEEE (Institute of Electrical and Electronics Engineers) Global Initiative on Ethics of Autonomous and Intelligent Systems (A/IS) - Ethically Aligned Design (EAD) Report: A Vision for Prioritizing Human Well-being with Autonomous and Intelligent Systems (A/IS), 2019 (“IEEE – EAD Report”), at p. 140.

  140. 140.

    Ibid.

  141. 141.

    Ibid., at p. 141.

  142. 142.

    Ibid.

  143. 143.

    Ibid.

  144. 144.

    Floridi et al. (2018), especially at pp. 699–700.

  145. 145.

    “Ethics Guidelines for Trustworthy AI”, High-Level Expert Group on Artificial Intelligence, 8 April 2019, at p. 13.

  146. 146.

    Ibid.

  147. 147.

    Floridi et al. (2018), p. 700.

  148. 148.

    “Ethics Guidelines for Trustworthy AI”, High-Level Expert Group on Artificial Intelligence, 8 April 2019, at p. 13.

  149. 149.

    Ibid., at pp. 13 and 18. See also Commission Communication, “Building Trust in Human-Centric Artificial Intelligence”, COM(2019) 168 final, Brussels, 8.4.2019, at pp. 5–6.

  150. 150.

    “Ethics Guidelines for Trustworthy AI”, High-Level Expert Group on Artificial Intelligence, 8 April 2019, at pp. 13 and 18. See also Commission Communication, “Building Trust in Human-Centric Artificial Intelligence”, COM(2019) 168 final, Brussels, 8.4.2019, at pp. 5–6.

  151. 151.

    Floridi et al. (2018), p. 700.

  152. 152.

    “Ethics Guidelines for Trustworthy AI”, High-Level Expert Group on Artificial Intelligence, 8 April 2019, at pp. 13 and 18. See also Commission Communication, “Building Trust in Human-Centric Artificial Intelligence”, COM(2019) 168 final, Brussels, 8.4.2019, at pp. 5–6.

  153. 153.

    “Ethics Guidelines for Trustworthy AI”, High-Level Expert Group on Artificial Intelligence, 8 April 2019, at p. 13.

  154. 154.

    European Group on Ethics in Science and New Technologies (EGE), “Statement on Artificial Intelligence, Robotics and ‘Autonomous’ Systems”, Brussels, 9 March 2018, European Commission, Directorate-General for Research and Innovation (“EGE - Statement on AI, 2018”), at p. 6.

  155. 155.

    Ibid., at p. 7.

  156. 156.

    Ibid., at p. 8.

  157. 157.

    Ibid.

  158. 158.

    Ibid., at pp. 16–17.

  159. 159.

    The “Asilomar AI Principles” document, 2017, developed under the auspices of the Future of Life Institute. See Principle number 7 (“Failure Transparency”).

  160. 160.

    Ibid., Principle number 8 (“Judicial Transparency”).

  161. 161.

    Ibid., Principle number 9 (“Responsibility”).

  162. 162.

    “The Montreal Declaration for a Responsible Development of Artificial Intelligence”, December 2018.

Principle number 5 (“Democratic Participation Principle”), at p. 12: “5 – Democratic Participation Principle: AIS must meet intelligibility, justifiability, and accessibility criteria, and must be subjected to democratic scrutiny, debate, and control. 1. AIS processes that make decisions affecting a person’s life, quality of life, or reputation must be intelligible to their creators. 2. The decisions made by AIS affecting a person’s life, quality of life, or reputation should always be justifiable in a language that is understood by the people who use them or who are subjected to the consequences of their use. Justification consists in making transparent the most important factors and parameters shaping the decision, and should take the same form as the justification we would demand of a human making the same kind of decision. 3. The code for algorithms, whether public or private, must always be accessible to the relevant public authorities and stakeholders for verification and control purposes. 4. The discovery of AIS operating errors, unexpected or undesirable effects, security breaches, and data leaks must imperatively be reported to the relevant public authorities, stakeholders, and those affected by the situation. 5. In accordance with the transparency requirement for public decisions, the code for decision-making algorithms used by public authorities must be accessible to all, with the exception of algorithms that present a high risk of serious danger if misused. 6. For public AIS that has a significant impact on the life of citizens, citizens should have the opportunity and skills to deliberate on the social parameters of these AIS, their objectives, and the limits of their use. 7. We must at all times be able to verify that AIS are doing what they were programed for and what they are used for. 8. Any person using a service should know if a decision concerning them or affecting them was made by an AIS. 9. Any user of a service employing chatbots should be able to easily identify whether they are interacting with an AIS or a real person. 10. Artificial intelligence research should remain open and accessible to all”.

  163. 163.

    Ibid. Principle number 9 (“Responsibility Principle”), especially paras 1 and 5, at p. 16: “9 – Responsibility Principle: The development and use of AIS must not contribute to lessening the responsibility of human beings when decisions must be made. 1. Only human beings can be held responsible for decisions stemming from recommendations made by AIS, and the actions that proceed therefrom. 2. In all areas where a decision that affects a person’s life, quality of life, or reputation must be made, where time and circumstance permit, the final decision must be taken by a human being and that decision should be free and informed. 3. The decision to kill must always be made by human beings, and responsibility for this decision must not be transferred to an AIS. 4. People who authorize AIS to commit a crime or an offense, or demonstrate negligence by allowing AIS to commit them, are responsible for this crime or offense. 5. When damage or harm has been inflicted by an AIS, and the AIS is proven to be reliable and to have been used as intended, it is not reasonable to place blame on the people involved in its development or use”.

  164. 164.

    “Partnership on AI – Tenets”, 2018, see Tenet number 7.

  165. 165.

    OECD, Recommendation of the Council on Artificial Intelligence, 2020, OECD/LEGAL/0449, adopted on 22.05.2019, at p. 8.

  166. 166.

    Ibid.

  167. 167.

    The IEEE (Institute of Electrical and Electronics Engineers) Global Initiative on Ethics of Autonomous and Intelligent Systems (A/IS) - Ethically Aligned Design (EAD) Report: A Vision for Prioritizing Human Well-being with Autonomous and Intelligent Systems (A/IS), 2019, Principles 5 (Transparency) and 6 (Accountability).

  168. 168.

    Ibid., Principle 5 (Transparency), at p. 27.

  169. 169.

    Ibid.

  170. 170.

    Ibid.

  171. 171.

    Ibid. “Principle 5 – 1. For users, what the system is doing and why. 2. For creators, including those undertaking the validation and certification of A/IS, the systems’ processes and input data. 3. For an accident investigator, if accidents occur. 4. For those in the legal process, to inform evidence and decision-making. 5. For the public, to build confidence in the technology”.

  172. 172.

    Ibid.

  173. 173.

    Ibid., Principle 6 (Accountability), at p. 29.

  174. 174.

    Ibid.

  175. 175.

    Ibid.

  176. 176.

Ibid. The IEEE Report underlined that, in order to best address issues of responsibility and accountability, systems for registration and record-keeping should be established so that it is always possible to find out who is legally responsible for a particular A/IS. Creators, including manufacturers, along with operators, of A/IS should register key, high-level parameters, including: intended use; training data and training environment, if applicable; sensors and real world data sources; algorithms; process graphs; model features, at various levels; user interfaces; actuators and outputs; optimization goals, loss functions, and reward functions. See Principle 6 (Accountability), at p. 30.
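    As a purely illustrative sketch of the kind of registration record the IEEE Report calls for, the following assumes a simple data structure whose field names loosely mirror the parameters listed above; it is not an IEEE-specified schema, and the example values are invented.

```python
# Hypothetical sketch of a registration record for an A/IS, loosely following
# the parameters suggested by the IEEE Report (Principle 6, p. 30).
from dataclasses import dataclass, field
from typing import List

@dataclass
class AISRegistrationRecord:
    system_name: str
    legally_responsible_party: str                  # who can be held accountable
    intended_use: str
    training_data_sources: List[str] = field(default_factory=list)
    real_world_data_sources: List[str] = field(default_factory=list)  # sensors, feeds
    algorithms: List[str] = field(default_factory=list)
    optimization_goals: List[str] = field(default_factory=list)       # loss/reward functions
    actuators_and_outputs: List[str] = field(default_factory=list)
    user_interfaces: List[str] = field(default_factory=list)

record = AISRegistrationRecord(
    system_name="triage-assistant-v1",
    legally_responsible_party="Example Hospital Trust",
    intended_use="decision support for clinical triage",
    training_data_sources=["anonymised_admissions_2015_2018"],
    algorithms=["gradient-boosted decision trees"],
    optimization_goals=["cross-entropy loss on triage category"],
)
print(record)
```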

  177. 177.

    The UK House of Lords Artificial Intelligence Committee’s Report “AI in the UK: ready, willing and able?”, 16 April 2018, at para. 90, p. 36.

  178. 178.

    Ibid. See the “five overarching principles for an AI code” offered in paragraph 417 of the Report, Principle number (2).

  179. 179.

    Ibid., at para. 318, p. 98.

  180. 180.

    Ibid., at para. 91, p. 36.

  181. 181.

Ibid., at para. 92, p. 36. See also at para. 94, p. 37. Examples of contexts in which a high degree of intelligibility was thought to be necessary included: (i) Judicial and legal affairs (see written evidence from Joanna Goodman, Dr Paresh Kathrani, Dr Steven Cranfield, Chrissie Lightfoot and Michael Butterworth, Future Intelligence, Ocado Group plc); (ii) Healthcare (see written evidence from Medicines and Healthcare products Regulatory Agency, PHG Foundation, SCAMPI Research Consortium, City, University of London, Professor John Fox); (iii) Certain kinds of financial products and services, for example, personal loans and insurance (see written evidence from Professor Michael Wooldridge); (iv) Autonomous vehicles (see written evidence from Five AI Ltd and UK Computing Research Committee); (v) Weapons systems (see written evidence from Amnesty International and Big Brother Watch).

  182. 182.

    The UK House of Lords Artificial Intelligence Committee’s Report “AI in the UK: ready, willing and able?”, 16 April 2018, at para. 94, p. 37. See written evidence from Professor Robert Fisher, Professor Alan Bundy, Professor Simon King, Professor David Robertson, Dr Michael Rovatsos, Professor Austin Tate and Professor Chris Williams.

    According to Amnesty International (Amnesty International United Kingdom Section) [Written evidence (AIC0180), at points 24–27], a lack of transparency and accountability in current systems denies those who are harmed by AI-informed decisions adequate access to justice or remedy. The inability to scrutinise the workings of all current deep learning systems (the ‘black box phenomenon’) creates a huge problem with trusting algorithmically-generated decisions. Where AI systems deny someone their rights, understanding the steps taken to deliver that decision is crucial to deliver remedy and justice. Provisions for accountability need to be considered before AI systems become widespread—practically, this may occur at multiple points, including in developing software, using training data responsibly, and executing decisions. To what extent will any automated decision be able to be ‘overridden’, and by whom? Restricting the use of deep learning systems in some cases may be required, where such systems make decisions that directly impact individual rights. The UK government should encourage the development of explainable AI systems, which would be more transparent and allow for effective remedies. Systems need transparency, good governance (including scrutiny of systems and data for potential bias), and accountability measures in place before they are rolled out into public use—especially where AI systems play a decisive and influential role in public services (policing, social care, welfare, state healthcare).

    Also, in relation to the question of the conditions under which a relative lack of transparency in artificial intelligence systems (so-called ‘black boxing’) can be considered acceptable (or when it should not be permissible), Amnesty International argued (points 39–41) that the black box problem is most acute in applications of AI that have the potential to impact on individual rights. Many commercial and non-commercial AI applications may not require the same threshold of accountability, as their impact on individual rights is either remote or negligible. It is vital that AI systems are not rolled out in areas of public life where they could discriminate or generate otherwise unfair decisions without the ability for interrogation and accountability. Where there are potential adverse consequences for individual rights, higher transparency standards must be applied, with obligations both on the developers of the AI and on the institutions using the AI system. This includes: (i) detecting and correcting for bias in the design of the AI and in the data used; (ii) effective mechanisms to guarantee transparency and accountability in use, including regular audits to check for discriminatory decisions and access to remedy when individuals are harmed; (iii) not using AI where there is a risk of harm and no effective means of accountability.

  183. 183.

    Ibid., at para. 92, pp. 36–37. Nvidia, for example, made the point that the assertion that neural networks lack transparency is false. The nature of neural networks is different from that of hand-written computer programs; therefore, new methods for validation must be, and indeed are being, developed. Machine learning algorithms are often much shorter and simpler than conventional software code, and are therefore in some respects easier to understand and inspect. In practice, other code-validation methods are used (for example unit or coverage tests) to ensure that source code is functioning correctly and is secure (security breaches are simply lapses in that test methodology). The fact that code is written in a programming language provides only an illusion of transparency, as it is not practically possible for humans to read it in full, and validation can only be achieved through thorough testing. From this perspective, neural networks offer a significant advantage over hand-written code. Not only do the standard test procedures still apply (it is common practice to test them against a test dataset), but a neural network’s code complexity is negligible (to the extent that the code can easily be read and understood by a single human) and, more importantly, networks can be tested more formally and their performance proven mathematically (since they are just complex functions).

    Also, according to Dr Timothy Lanfear, Director of the EMEA Solution Architecture and Engineering team at Nvidia, “We are using systems every day that are of a level of complexity that we cannot absorb. Artificial intelligence is no different from that. It is also at a level of complexity that cannot be grasped as a whole. Nevertheless, what you can do is to break this down into pieces, find ways of testing it to check that it is doing the things you expect it to do and, if it is not, take some action”.
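    The dataset-based validation described in this evidence can be illustrated with a minimal, hypothetical sketch: the trained system is treated as a plain function and checked against a held-out test set with an explicit acceptance threshold. The model, test set and threshold below are invented for illustration only.

```python
# Hypothetical sketch of validating a trained model against a held-out test set.
def model(x: float) -> int:
    """Stand-in for a trained network: classifies inputs as 0 or 1."""
    return 1 if x > 0.5 else 0

test_set = [(0.9, 1), (0.1, 0), (0.7, 1), (0.4, 0), (0.8, 1)]  # (input, expected label)

correct = sum(1 for x, y in test_set if model(x) == y)
accuracy = correct / len(test_set)
print(f"accuracy on held-out test set: {accuracy:.2f}")

# In a safety-critical setting the acceptance criterion would be made explicit,
# e.g. requiring accuracy above an agreed threshold before deployment.
assert accuracy >= 0.8, "model fails the agreed validation threshold"
```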

  184. 184.

    Ibid., at para. 93, p. 37. See written evidence from Deep Science Ventures; Dr Dan O’Hara, Professor Shaun Lawson, Dr Ben Kirman and Dr Conor Linehan; Professor Robert Fisher, Professor Alan Bundy, Professor Simon King, Professor David Robertson, Dr Michael Rovatsos, Professor Austin Tate and Professor Chris Williams.

  185. 185.

    Ibid. See written evidence (AIC0029) from Professor Robert Fisher, Professor Alan Bundy, Professor Simon King, Professor David Robertson, Dr Michael Rovatsos, Professor Austin Tate, Professor Chris Williams.

  186. 186.

    Ibid. See written evidence from Professor Robert Fisher, Professor Alan Bundy, Professor Simon King, Professor David Robertson, Dr Michael Rovatsos, Professor Austin Tate and Professor Chris Williams; Michael Veale; Five AI Ltd; University College London (UCL) and Electronic Frontier Foundation.

  187. 187.

    Ibid., at para. 95, p. 38.

  188. 188.

    Ibid., at para. 99, p. 38.

  189. 189.

    Ibid.

    For a distinction between transparency and explainability, see below the responses of Professor Alan Winfield and Dr Konstantinos Karachalios to Question 22: Select Committee on Artificial Intelligence, Corrected oral evidence: Artificial Intelligence, Tuesday 17 October 2017, Members present: Lord Clement-Jones (The Chairman); Baroness Bakewell; Lord Giddens; Baroness Grender; Lord Hollick; Lord Levene of Portsoken; Viscount Ridley; Baroness Rock; Lord St John of Bletso; Lord Swinfen. Evidence Session No. 3, Heard in Public, Questions 18–28. Witnesses: Professor Alan Winfield, Professor of Robot Ethics, University of the West of England, Bristol; Dr Ing Konstantinos Karachalios, Managing Director, IEEE Standards Association:

    The Chairman: You seem to be making a distinction between transparency and explainability. Would either of you like to unpack that?

    Dr Ing Konstantinos Karachalios: Full transparency is having the source code to see how it is written. It is fully transparent but it does not explain anything. You do not understand it.

    Professor Alan Winfield: An example might help to clarify. If you have a driverless car and it is involved in an accident, the accident investigators need to understand what happened to cause the accident. That requires transparency, not necessarily explainability. Explainability for the user of a care robot—say it is an elderly person with a care robot in their home—means that that elderly person should be able to ask the robot in some fashion, “Why did you just do that?”.

  190. 190.

    Ibid., at para. 100, p. 39.

    It was pointed out that many companies and organisations are working on explanation systems, which would help to consolidate and translate the processes and decisions made by machine learning algorithms into forms that are comprehensible to human operators. Furthermore, some of the largest technology companies, including Google, IBM and Microsoft, mentioned their commitment to developing interpretable machine learning systems.

    See, for instance, written evidence from IBM (AIC0160), especially points 28 and 29: “the algorithms that underpin AI systems need to be as transparent, or at least as interpretable as possible. In other words, they need to be able to explain their behaviour in terms that humans can understand — from how they interpreted their input to why they recommended a particular output. To do this, we recommend all AI systems should include “explanation-based collateral systems”. The provided explanations should be meaningful to the targeted users. For example, in AI decision support systems whose aim is to help doctors identify the best therapy for a patient, such AI systems need to provide useful explanations to doctors, patients, nurses, relatives, etc. More generally, existing AI systems support many advanced analytical applications for industries like healthcare, financial services and law. In these scenarios, data-centric compliance monitoring and auditing systems can visually explain various decision paths and their associated risks, complete with the reasoning and motivations behind the recommendation. And the parameters for these solutions are defined by existing regulatory requirements specific to that industry”.

    See also written evidence from Google (AIC0225), point 6.9, where it was mentioned that Glassbox is a machine learning framework optimised for interpretability. It involves creating mathematical models to smooth out the influence of outliers in a data set, thus helping to make results more predictable and decipherable.

    Moreover, with regard to Microsoft’s development of best practices for intelligible AI systems, see written evidence from Microsoft (AIC0149), at point 27: As AI-based systems are increasingly used to make decisions that affect people’s lives in important ways, people naturally want to understand how these systems operate, and why they make the recommendations that they do. Enabling transparency of AI systems can be challenging due to their complexity and the fact that recommendations are largely a function of an understanding of massive amounts of data, which computers excel at, but people do not. (And access to data about people often cannot be provided, given privacy considerations.) Researchers are developing a number of promising techniques to help provide transparency, such as developing simpler systems that closely mimic the recommendations of more accurate systems yet are easier to understand, and systems that enable people to vary various inputs to see the effect on system recommendations. Microsoft is working with PAI to develop best practices to enable useful transparency. Such best practices will likely include an explanation of the system objectives, the data sets used to train the system during development and in deployment, selection criteria for the algorithms, the system components and their interactions, testing and validation of the system, and risk mitigation considerations.
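    One of the techniques mentioned in this evidence, building a simpler system that closely mimics a more complex one so that its behaviour can be read directly, can be illustrated with a minimal, hypothetical sketch; the opaque scoring function and the one-rule surrogate below are invented and are not vendor code.

```python
# Hypothetical sketch of a surrogate-model explanation: probe an opaque model,
# then fit a single human-readable rule that best reproduces its decisions.
def complex_model(income: float, debt: float) -> int:
    """Stand-in for an opaque system (e.g. a neural network) approving loans."""
    return 1 if (0.7 * income - 1.3 * debt) > 10 else 0

# Probe the opaque model on a grid of sample cases.
samples = [(i, d) for i in range(0, 60, 5) for d in range(0, 40, 5)]
labels = [complex_model(i, d) for i, d in samples]

def agreement(threshold: int) -> float:
    """How often the rule 'approve if income - debt > threshold' matches the opaque model."""
    preds = [1 if (i - d) > threshold else 0 for i, d in samples]
    return sum(p == y for p, y in zip(preds, labels)) / len(labels)

best_t = max(range(-40, 60), key=agreement)
print(f"surrogate rule: approve if income - debt > {best_t} "
      f"(matches the opaque model on {agreement(best_t):.0%} of probe cases)")
```

    The surrogate is only an approximation, but it exposes, in plain terms, the kind of factors driving the opaque model’s recommendations.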

  191. 191.

    Ibid., at para. 101, p. 39. See written evidence from Future Intelligence (AIC0216); PricewaterhouseCoopers LLP (AIC0162); IBM (AIC0160); CognitionX (AIC0170); Big Brother Watch (AIC0154); Will Crosthwait (AIC0094) and Dr Maria Ioannidou (AIC0082). Also: written evidence from Article 19 (AIC0129); Information Commissioner’s Office (AIC0132); CBI (AIC0114) and Ocado Group plc (AIC0050).

  192. 192.

    Ibid., at para. 101, p. 39.

    Article 22 of Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation – GDPR) gives individuals the right not to be subject to a decision based solely on automated processing when this produces legal effects on users or similarly significantly affects them. “Article 22: Automated individual decision-making, including profiling 1. The data subject shall have the right not to be subject to a decision based solely on automated processing, including profiling, which produces legal effects concerning him or her or similarly significantly affects him or her. 2. Paragraph 1 shall not apply if the decision: (a) is necessary for entering into, or performance of, a contract between the data subject and a data controller; (b) is authorised by Union or Member State law to which the controller is subject and which also lays down suitable measures to safeguard the data subject’s rights and freedoms and legitimate interests; or (c) is based on the data subject’s explicit consent. 3. In the cases referred to in points (a) and (c) of paragraph 2, the data controller shall implement suitable measures to safeguard the data subject’s rights and freedoms and legitimate interests, at least the right to obtain human intervention on the part of the controller, to express his or her point of view and to contest the decision. 4. Decisions referred to in paragraph 2 shall not be based on special categories of personal data referred to in Article 9(1), unless point (a) or (g) of Article 9(2) applies and suitable measures to safeguard the data subject’s rights and freedoms and legitimate interests are in place”.

  193. 193.

    The UK House of Lords Artificial Intelligence Committee’s Report “AI in the UK: ready, willing and able?”, 16 April 2018, at p. 5.

    See below the responses of Professor Alan Winfield and Dr Konstantinos Karachalios to Question 22: Select Committee on Artificial Intelligence, Corrected oral evidence: Artificial Intelligence, Tuesday 17 October 2017, Members present: Lord Clement-Jones (The Chairman); Baroness Bakewell; Lord Giddens; Baroness Grender; Lord Hollick; Lord Levene of Portsoken; Viscount Ridley; Baroness Rock; Lord St John of Bletso; Lord Swinfen. Evidence Session No. 3, Heard in Public, Questions 18–28. Witnesses: Professor Alan Winfield, Professor of Robot Ethics, University of the West of England, Bristol; Dr Ing Konstantinos Karachalios, Managing Director, IEEE Standards Association:

    Viscount Ridley: My question is about transparency. We cannot be far from the point when artificial intelligence diagnoses a disease or offers a legal opinion without being able to explain how it reached that conclusion. That is already true of human beings in some sense, but is it a particular problem with AI, with robots, and how transparent should artificial intelligence systems be?

    Professor Alan Winfield: I take a very hard line on this. I think it should always—always—be possible to find out why an AI made a particular decision. That is very easy to say, of course, but we know that in practice it can be very difficult. It seems to me absolutely unacceptable that one might accept the decision of a medical-diagnosis AI or a mortgage application recommender system without understanding why it made that decision. Many members of the AI community will get very cross with me—I am sure they are right now if they are watching—because, of course, what I am effectively saying is that we should not be using systems that are not transparent, such as deep learning systems. I would simply apply an engineering approach whereby we need to be able to understand why the system makes the decisions it does; otherwise, if a system goes wrong and causes harm, we simply cannot find out what went wrong. I would like to offer the analogy of aircraft autopilots. We all understand very well the standards that we set for the engineering of those systems, and an AI autopilot and driverless cars, for instance, should be held to no less a standard of safety, and explainability or transparency.

    Viscount Ridley: Is not AlphaGo already failing that threshold?

    Professor Alan Winfield: It is, yes.

    The Chairman: By which you mean deep learning?

    Professor Alan Winfield: My challenge to the AI community—and they are smart guys, smart men and women—is to invent AI systems that are explainable. I do not believe that it is technically impossible.

    The Chairman: Has the horse not already bolted, though?

    Professor Alan Winfield: It may well have done, except that we can still regulate—and I believe that we should—and say that it is simply not acceptable to have, for instance, a medical-diagnosis AI that cannot be explained.

    Viscount Ridley: I have a supplementary question on that that I was supposed to ask. Does the degree of acceptable transparency differ depending on the situation? In other words, some of our evidence has suggested that it will be fine in some circumstances but not in others, so in a game it might be all right and in a medical diagnosis it might not be. Would you accept that?

    Professor Alan Winfield: I would indeed. I am focused primarily, as you can probably tell, on safety-critical systems. That is where if the AI goes wrong, physical, financial or psychological harm could result—in other words, where harms result from a failing AI.

    The Chairman: Dr Karachalios, do you agree with Professor Winfield’s hard line?

    Dr Ing Konstantinos Karachalios: Yes, and I believe that there are different aspects of it. Transparency is not explainability. It is different. A system that can explain what it does is different from a fully transparent system. There are efforts to make decisions more transparent. There are elements of the European Commission’s GDPR that are forcing this transparency. This is the right way to go—and it can be done—to give direction to the researchers and scientists. In addition, it is very difficult to understand why dataset A has been transformed to dataset B, because we do not remember the way computers remember. We do not have this capacity. Even if we see it, we do not understand how it came about. This is a problem that we need to understand, because if you do not get a job or you are refused medical treatment, nobody can tell you why. We need systems that can explain this, I agree with you.

  194. 194.

    Ibid., at para. 105, p. 40.

  195. 195.

    The UNI Global Union Top 10 Principles for Ethical Artificial Intelligence, 2017. See “Principle 1 – Demand That AI Systems Are Transparent”, at pp. 6–7.

  196. 196.

    Ibid., “Principle 1 – Demand That AI Systems Are Transparent”, (1A), at p. 6.

  197. 197.

    Ibid., “Principle 1 – Demand That AI Systems Are Transparent”, (1B), at p. 6.

  198. 198.

    Ibid., “Principle 1 – Demand That AI Systems Are Transparent”, (1C) to (1G), at pp. 6–7.

    See also Principle 4 – ‘Adopt a Human-In-Command Approach’, where it is stated that “workers must also have the ‘right of explanation’ when AI systems are used in human-resource procedures, such as recruitment, promotion or dismissal”, at p. 8.

  199. 199.

    Ibid., “Principle 2 – Equip AI Systems With an “Ethical Black Box””, at p. 7.

  200. 200.

    Floridi et al. (2018), p. 699.

  201. 201.

    Ibid., at p. 700.

  202. 202.

    Ibid.

  203. 203.

    Ibid.

  204. 204.

    Ibid., at pp. 696–697. See also: “A Unified Framework of Five Principles for AI in Society”, by Luciano Floridi and Josh Cowls, Harvard Data Science Review, July 2019 (updated in November 2019), at p. 6.

  205. 205.

    See, for instance, the Document “Ethics Guidelines for Trustworthy AI – Overview of the main comments received through the Open Consultation”, at p. 4: “The inclusion of the principle “do good” received a number of critical comments, as this was not found to be a principle that could be a moral imperative in each and every case (e.g. when pursuing fundamental research) and not well suited in the context of AI. A few contributors suggested the addition of other principles. The revised Guidelines no longer contain a reference to the “do good” principle. However, they stipulate clearly that one of the goals of Trustworthy AI is to improve individual and collective wellbeing, and provide guidance concerning how ethics can help to achieve this objective”.

  206. 206.

    “Ethics Guidelines for Trustworthy AI”, High-Level Expert Group on Artificial Intelligence, 8 April 2019, at p. 12.

  207. 207.

    Ibid.

  208. 208.

    Ibid., at p. 19.

  209. 209.

    Commission Communication, “Building Trust in Human-Centric Artificial Intelligence”, COM(2019) 168 final, Brussels, 8.4.2019, at p. 6.

  210. 210.

    European Group on Ethics in Science and New Technologies (EGE), “Statement on Artificial Intelligence, Robotics and ‘Autonomous’ Systems”, Brussels, 9 March 2018, European Commission, Directorate-General for Research and Innovation, at p. 16.

  211. 211.

    Ibid., at p. 19.

  212. 212.

    Ibid., at p. 17.

  213. 213.

    One of the objectives of the Montreal Declaration for a Responsible Development of AI, alongside the development of an ethical framework for the deployment of AI so that everyone benefits from this technological revolution, is “to collectively achieve equitable, inclusive, and ecologically sustainable AI development”. See “The Montreal Declaration for a Responsible Development of Artificial Intelligence”, December 2018, at p. 5.

  214. 214.

    Ibid., at p. 7.

  215. 215.

    Ibid., Principle number 1 (“Well-Being Principle”), at p. 8.

  216. 216.

    Ibid., Principle number 1 (“Well-Being Principle”), 1.1, at p. 8.

    1 – Well-Being Principle: The development and use of artificial intelligence systems (AIS) must permit the growth of the well-being of all sentient beings. 1. AIS must help individuals improve their living conditions, their health, and their working conditions. 2. AIS must allow individuals to pursue their preferences, so long as they do not cause harm to other sentient beings. 3. AIS must allow people to exercise their mental and physical capacities. 4. AIS must not become a source of ill-being, unless it allows us to achieve a superior well-being than what one could attain otherwise. 5. AIS use should not contribute to increasing stress, anxiety, or a sense of being harassed by one’s digital environment.

  217. 217.

    Ibid., Principle number 2 (“Respect for Autonomy Principle”), 2.1, at p. 9.

  218. 218.

    Ibid., Principle number 10 (“Sustainable Development Principle”), at p. 17.

    10 – Sustainable Development Principle: The development and use of AIS must be carried out so as to ensure a strong environmental sustainability of the planet. 1. AIS hardware, its digital infrastructure and the relevant objects on which it relies such as data centers, must aim for the greatest energy efficiency and to mitigate greenhouse gas emissions over its entire life cycle. 2. AIS hardware, its digital infrastructure and the relevant objects on which it relies, must aim to generate the least amount of electric and electronic waste and to provide for maintenance, repair, and recycling procedures according to the principles of circular economy. 3. AIS hardware, its digital infrastructure and the relevant objects on which it relies, must minimize our impact on ecosystems and biodiversity at every stage of its life cycle, notably with respect to the extraction of resources and the ultimate disposal of the equipment when it has reached the end of its useful life. 4. Public and private actors must support the environmentally responsible development of AIS in order to combat the waste of natural resources and produced goods, build sustainable supply chains and trade, and reduce global pollution.

    According to the Montreal Declaration for a Responsible Development of AI, the terms strong environmental sustainability and sustainable development are explained as follows: (i) Strong Environmental Sustainability: the notion of strong environmental sustainability goes back to the idea that in order to be sustainable, the rate of natural resource consumption and polluting emissions must be compatible with planetary environmental limits, the rate of resources and ecosystem renewal, and climate stability. Unlike weak sustainability, which requires less effort, strong sustainability does not allow the substitution of the loss of natural resources with artificial capital. (ii) Sustainable Development: sustainable development refers to the development of human society that is compatible with the capacity of natural systems to offer the necessary resources and services to this society. It is economic and social development that fulfills current needs without compromising the existence of future generations (at p. 20 of the Declaration).

  219. 219.

    “Partnership on AI (PAI) – Tenets”, 2018, Preamble.

  220. 220.

    Ibid., Thematic Pillar 6.

  221. 221.

    Ibid., Our Goals.

  222. 222.

    Ibid., Tenet number 1.

  223. 223.

    Ibid., Tenet number 6(b) and 6(c).

  224. 224.

    The “Asilomar AI Principles” document, 2017, developed under the auspices of the Future of Life Institute. See Principle number 1 (“Research Goal”).

  225. 225.

    Ibid., Principle number 2 (“Research Funding”).

  226. 226.

    Ibid., Principle number 2 (“Research Funding”).

  227. 227.

    Ibid., See Principle number 14 (“Shared Benefit”).

  228. 228.

    Ibid., Principle number 15 (“Shared Prosperity”).

  229. 229.

    Ibid., Principle number 23 (“Common Good”).

  230. 230.

    The UK House of Lords Artificial Intelligence Committee’s Report “AI in the UK: ready, willing and able?”, published on 16 April 2018. See the “five overarching principles for an AI code” offered in paragraph 417 of the Report, p. 125, Principle number (1).

  231. 231.

    Ibid., see the “five overarching principles for an AI code” offered in paragraph 417 of the Report, p. 125, Principle number (4).

  232. 232.

    The UNI Global Union Top 10 Principles for Ethical Artificial Intelligence, 2017. See Principle 6 - Share the Benefits of AI Systems, at p. 8.

  233. 233.

    Ibid., at p. 7.

  234. 234.

    OECD, Recommendation of the Council on Artificial Intelligence, 2020, OECD/LEGAL/0449, adopted on 22.05.2019, at p. 6.

  235. 235.

    Ibid., at p. 7. Principle 1.1. – Inclusive growth, sustainable development and well-being.

  236. 236.

    The IEEE (Institute of Electrical and Electronics Engineers) Global Initiative on Ethics of Autonomous and Intelligent Systems (A/IS) - Ethically Aligned Design (EAD) Report: A Vision for Prioritizing Human Well-being with Autonomous and Intelligent Systems (A/IS), 2019, at p. 17.

  237. 237.

    Ibid., at p. 2.

  238. 238.

    Ibid.

  239. 239.

    Ibid.

  240. 240.

    Ibid., at pp. 70–71. According to the IEEE Report, and for the purposes of the Ethically Aligned Design, “the term ‘well-being’ refers to an evaluation of the general quality of life of an individual and the state of external circumstances. The conception of well-being encompasses the full spectrum of personal, social, and environmental factors that enhance human life and on which human life depend. The concept of well-being shall be considered distinct from moral or legal evaluation” (at p. 70). The IEEE Study (at p. 71) argued that the most recognized aspects of well-being are (in alphabetical order) the following: • Community: Belonging, Crime & Safety, Discrimination & Inclusion, Participation, Social Support • Culture: Identity, Values • Economy: Economic Policy, Equality & Environment, Innovation, Jobs, Sustainable Natural Resources & Consumption & Production, Standard of Living • Education: Formal Education, Lifelong Learning, Teacher Training • Environment: Air, Biodiversity, Climate Change, Soil, Water • Government: Confidence, Engagement, Human Rights, Institutions • Human Settlements: Energy, Food, Housing, Information & Communication Technology, Transportation • Physical Health: Health Status, Risk Factors, Service Coverage • Psychological Health: Affect (feelings), Flourishing, Mental Illness & Health, Satisfaction with Life • Work: Governance, Time Balance, Workplace Environment.

  241. 241.

    Ibid., at pp. 11, 18, 21–22.

  242. 242.

    Ibid., at p. 21.

    For further reading on this topic, see: IEEE P7010™, Well-being Metric for Autonomous and Intelligent Systems; OECD Guidelines on Measuring Subjective Well-being, 2013; OECD Better Life Index, 2017; United Nations Sustainable Development Goal (SDG) Indicators, 2018.

  243. 243.

    Ibid., at p. 21.

  244. 244.

    Ibid., at p. 140.

  245. 245.

    Ibid.

  246. 246.

    Ibid., at p. 141.

  247. 247.

    Ibid.

  248. 248.

    “Ethics Guidelines for Trustworthy AI”, High-Level Expert Group on Artificial Intelligence, 8 April 2019, at p. 13.

  249. 249.

    Ibid. See also at p. 14: “Acknowledge that, while bringing substantial benefits to individuals and society, AI systems also pose certain risks and may have a negative impact, including impacts which may be difficult to anticipate, identify or measure (e.g. on democracy, the rule of law and distributive justice, or on the human mind itself.) Adopt adequate measures to mitigate these risks when appropriate, and proportionately to the magnitude of the risk”.

  250. 250.

    “Ethics Guidelines for Trustworthy AI”, High-Level Expert Group on Artificial Intelligence, 8 April 2019, at p. 13.

  251. 251.

    Ibid.

  252. 252.

    Ibid.

    Some of the fundamental rights of the ‘Charter of Fundamental Rights of the European Union’ (‘the Charter’ - OJ C 83, 30.03.2010, pp. 389–403) are guaranteed in absolute terms, which means that they cannot be subject to ‘limitations’ or ‘restrictions’. Measures taken by public authorities that interfere with a right protected in absolute terms amount to a violation (an infringement) of this fundamental right. There are only a small number of fundamental rights which are guaranteed in absolute terms. The Charter itself does not explicitly list those rights, but in the light of the case law of the European Courts it is argued that the prohibition of torture and inhuman or degrading treatment or punishment (Article 4 of the Charter) and the prohibition of slavery or servitude (Article 5 of the Charter) are protected in absolute terms. For instance, the prohibition of torture and inhuman or degrading treatment or punishment as enshrined in Article 4 of the Charter is absolute. It is, therefore, not possible to ‘balance’ this prohibition against interests of national security. See settled case law of the European Court of Human Rights, e.g. ECtHR (Grand Chamber), Saadi v. Italy, Application no. 37201/06, Judgment of 28 February 2008, at para. 140.

    On the other hand, other fundamental rights could be limited and, therefore, measures taken by public authorities that interfere with such rights could be justified under certain conditions. The interference will only amount to a violation of such a right when the justifying conditions cannot be fulfilled. The requirements for a justified limitation are set out in Article 52 of the Charter, which reads as follows: “Any limitation on the exercise of the rights and freedoms recognised by this Charter must be provided for by law and respect the essence of those rights and freedoms. Subject to the principle of proportionality, limitations may be made only if they are necessary and genuinely meet objectives of general interest recognised by the Union or the need to protect the rights and freedoms of others”. When applying Article 52 of the Charter, the following questions should be addressed: (a) Are the limitations provided for by law and formulated in a clear and predictable manner? (b) Are the limitations necessary to achieve an objective of general interest recognised by the Union or to protect the rights and freedoms of others (which)? (c) Are the limitations proportionate, i.e. appropriate for attaining the objective pursued and not going beyond what is necessary to achieve it? Is there an equally effective but less intrusive measure available? (d) Do the limitations preserve the essence of the fundamental rights concerned?

    The Charter does not explicitly define the term ‘limitation’ in the sense of Article 52. In general, however, any measure or conduct by public authorities, such as legislative acts, administrative decisions or state practice, which directly or indirectly affects in a negative way the exercise or enjoyment of the rights and freedoms guaranteed by the Charter, can be considered a ‘limitation’. As an example, the collection, use, or even mere storing [see the European Court of Human Rights, S. and Marper v. UK, Applications nos. 30562/04 and 30566/04, Judgment of 4 December 2008, at para. 67] of information by public authorities about an individual is a limitation of the right to protection of personal data, guaranteed under Article 8 of the Charter, which requires justification. Examples include: an official census which includes compulsory questions relating to the individual’s sex, marital status, place of birth and other personal details; the recording of fingerprints, photographs and other personal information by the police (even if the police register is secret); the collection of medical data and the maintenance of medical records; the compulsion by state authorities to reveal details of personal expenditure; records relating to past criminal cases; information relating to terrorist activity; or the collection of personal information in order to protect national security.

    See also Commission’s Staff Working Paper “Operational Guidance on taking account of Fundamental Rights in Commission Impact Assessments”, Brussels, 6.5.2011, SEC(2011) 567 final, at pp. 9–10.

References

  • Brandom R (2018) Amazon’s facial recognition matched 28 members of Congress to criminal mugshots. The Verge, July 26, 2018

    Google Scholar 

  • Carpenter J (2015) Google’s algorithm shows prestigious job ads to men, but not to women. Here’s why that should worry you. The Washington Post, July 6, 2015

    Google Scholar 

  • Cath C (2018) Governing artificial intelligence: ethical, legal and technical opportunities and challenges. Philos Transact R Soc A 376:2133

    Google Scholar 

  • Chopra S, White LF (2011) A legal theory for autonomous artificial agents. The University of Michigan Press, Ann Arbor

    Book  Google Scholar 

  • Dautenhahn K, Ogden B, Quick T (2002) From embodied to socially embedded agents—implications for interaction-aware robots. Cogn Syst Res 3(3):397–428

    Article  Google Scholar 

  • Di Nucci E (2017) Sexual rights, disability, and sex robots. In: Danaher J, McArthur N (eds) Robot sex: social and ethical implications. MIT Press, Cambridge, pp 73–88

    Google Scholar 

  • Dicke K (2002) The founding function of human dignity in the universal declaration of human rights. In: Kretzmer D, Klein E (eds) Concept of human dignity in human rights discourse. Columbia University Press, New York, pp 111–120

    Google Scholar 

  • Ebers M, Navas Navarro S (eds) (2020) Regulating AI and robotics. In: Algorithms and law. Cambridge University Press, Cambridge, pp 37–99

    Google Scholar 

  • Fiske A, Henningsen P, Buyx A (2019) Your robot therapist will see you now: Ethical implications of embodied artificial intelligence in psychiatry, psychology, and psychotherapy. J Med Int Res 21(5):e13216

    Google Scholar 

  • Floridi L (2013) The ethics of information. Oxford University Press, Oxford

    Book  Google Scholar 

  • L. Floridi (2014) The fourth revolution. How the infosphere is resha** human reality. Oxford University Press, Oxford

    Google Scholar 

  • Floridi L (2016) On human dignity as a foundation for the right to privacy. Philos Technol 29(4):307–312

    Article  Google Scholar 

  • Floridi L (2018) Soft ethics and the governance of the digital. Philos Technol 31(1):1–8

    Article  Google Scholar 

  • Floridi L, Cowls J, Beltrametti M, Chatila R, Chazerand P, Dignum V, Luetge C, Madelin R, Pagallo U, Rossi F, Schafer B, Valcke P, Vayena EJM (2018) AI4People – an ethical framework for a good AI society: opportunities, risks, principles, and recommendations. Minds Mach 28(4):689–707

    Article  Google Scholar 

  • Fosch-Villaronga E (2019) Robots, healthcare, and the law: regulating automation in personal care. Routledge

    Book  Google Scholar 

  • Fosch-Villaronga E, Albo-Canals J (2019) ‘I’ll take care of you’, said the robot. Paladyn. J Behav Robot 10(1):77–93

    Article  Google Scholar 

  • Fosch-Villaronga E, Golia A (2019) Robots, standards and the law: Rivalries between private standards and public policymaking for robot governance. Comput Law Secur Rev 35(2):129–144

    Article  Google Scholar 

  • Fosch-Villaronga E, Heldeweg M (2018) “‘Regulation, I presume?” said the robot—Towards an iterative regulatory process for robot governance. Comput Law Secur Rev 34(6):1258–1277

    Article  Google Scholar 

  • Fosch-Villaronga E, Millard C (2019) Cloud robotics law and regulation: challenges in the Governance of complex and dynamic cyber-physical ecosystems. Robot Auton Syst 119:77–91

    Article  Google Scholar 

  • Graeme L, Harmon SHE, Arzuaga F (2012) Foresighting futures: law, new technologies and the challenges of regulating for uncertainty. Law Innov Technol 4(1):1–33

    Article  Google Scholar 

  • Grodzinsky F, Miller KW, Wolf MJ (2008) The ethics of designing artificial agents. Ethics Inf Technol 10(2-3):115–121

    Article  Google Scholar 

  • Guihot M, Matthew AF, Suzor NP (2017) Nudging robots: innovative solutions to regulate artificial intelligence. Vanderbilt J Entertain Technol Law 20:385

    Google Scholar 

  • Hilgendorf E (2018) Problem areas in the dignity debate and the ensemble theory of human dignity. In: Grimm D, Kemmerer A, Möllers C (eds) Human dignity in context. Explorations of a contested concept. Nomos Verlagsgesellschaft mbH & Co. KG, Baden-Baden, p. 325

    Chapter  Google Scholar 

  • Koops BJ, Leenes R (2014) Privacy regulation cannot be hardcoded. A critical comment on the ‘privacy by design’ provision in data-protection law. Int Rev Law Comput Technol 28(2):159–171

    Article  Google Scholar 

  • Kritikos M (2016) Legal and ethical reflections concerning robotics. STOA Policy Briefing, June 2016 – PE 563.501, Brussels, European Parliament Research Service

    Google Scholar 

  • Marchant GE, Allenby BR, Herkert JR (2011) The growing gap between emerging technologies and legal-ethical oversight: the pacing problem, vol 7. Springer Science & Business Media, Berlin

    Google Scholar 

  • McCrudden C (2008) Human dignity and judicial interpretation of human rights. Eur J Int Law (EJIL) 19(4):655–724

    Article  Google Scholar 

  • Neuman GL (2000) Human dignity in United States constitutional law. In: Simon D, Weiss M (eds) Zur Autonomie des Individuums. Liber Amicorum Spiros Simitis, Baden-Baden, pp 249–271

    Google Scholar 

  • O’Mahony C (2012) There is no such thing as a right to dignity. Int J Const Law 10(2):551–574

    Google Scholar 

  • Schroeder D, Bani-Sadr AH (2017) Dignity in the 21st century: middle east and west. Springer, Berlin

    Book  Google Scholar 

  • Sharkey A (2014) Robots and human dignity: a consideration of the effects of robot care on the dignity of older people. Ethics Inf Technol 16(1):63–75

  • Sweeney L (2013) Discrimination in online ad delivery. Harvard University, January 28, 2013

  • Tzimas T (2021) Legal and ethical challenges of artificial intelligence from an international law perspective. Springer, Cham, Switzerland, pp 9–32

  • Veale M, Binns R, Edwards L (2018) Algorithms that remember: model inversion attacks and data protection law. Philos Transact R Soc A Math Phys Eng Sci 376(2133):20180083

  • Wagner B (2018) Ethics as an escape from regulation: from ethics-washing to ethics-shopping? In: Hildebrandt M (ed) Being profiled: Cogitas ergo sum. Amsterdam University Press, Amsterdam, pp 84–90

  • Weisstub DN (2002) Honor, dignity and the framing of multiculturalist values. In: Kretzmer D, Klein E (eds) Concept of human dignity in human rights discourse. Columbia University Press, pp 263–294

  • Winfield AF, Jirotka M (2018) Ethical governance is essential to building trust in robotics and artificial intelligence systems. Philos Transact R Soc A Math Phys Eng Sci 376(2133):20180085

  • Zardiashvili L, Fosch-Villaronga E (2020) “Oh, Dignity too?” said the robot: human dignity as the basis for the governance of robotics. Minds Mach 30:121–143

Legal, Policy Documents and Other Sources

  • Report of the Committee on Legal Affairs of the European Parliament with recommendations to the Commission on Civil Law Rules on Robotics – Motion for a European Parliament Resolution (2015/2103(INL)), 27.1.2017, A8-0005/2017.

  • European Parliament Resolution of 16 February 2017 with recommendations to the Commission on Civil Law Rules on Robotics (2015/2103(INL)), (2018/C 252/25), OJ C 252, 18.7.2018, pp. 239–257 (the “European Parliament Resolution of 16 February 2017”).

  • General Secretariat of the Council, Subject: European Council meeting (19 October 2017) – Conclusions, Brussels, 19 October 2017 (OR. en), EUCO 14/17.

  • Commission Communication “Artificial Intelligence for Europe”, COM(2018) 237 final, Brussels, 25.4.2018.

  • Commission Communication “Coordinated Plan on Artificial Intelligence”, COM(2018) 795 final, Brussels, 7.12.2018.

  • Annex to the Commission Communication ‘Coordinated Plan on Artificial Intelligence’, COM(2018) 795 final, Brussels, 7.12.2018 - Coordinated Plan on the Development and Use of Artificial Intelligence Made in Europe – 2018.

  • “Draft Ethics Guidelines for Trustworthy AI”, High-Level Expert Group on Artificial Intelligence, 18 December 2018.

  • “Ethics Guidelines for Trustworthy AI”, High-Level Expert Group on Artificial Intelligence, 8 April 2019 (the “Ethics Guidelines for Trustworthy AI, 2019”).

  • “A definition of AI: Main capabilities and scientific disciplines – Definition developed for the purpose of the deliverables of the High-Level Expert Group on AI”, High-Level Expert Group on Artificial Intelligence, 18 December 2018.

  • Commission Communication, “Building Trust in Human-Centric Artificial Intelligence”, COM(2019) 168 final, Brussels, 8.4.2019.

  • Commission Communication on “Enabling the digital transformation of health and care in the Digital Single Market; empowering citizens and building a healthier society”, Brussels, 25.4.2018, COM(2018) 233 final.

  • Opinion of the Data Ethics Commission (Datenethikkommission), Publisher: Data Ethics Commission of the Federal Government, Berlin, December 2019, pp. 12–227.

  • “Human Rights in the Age of Artificial Intelligence”, Report by Access Now, November 2018.

  • “EU backs AI regulation while China and US favour technology”, Siddharth Venkataramakrishnan, Financial Times, 25.04.2019.

  • European Group on Ethics in Science and New Technologies (EGE), “Statement on Artificial Intelligence, Robotics and ‘Autonomous’ Systems”, Brussels, 9 March 2018, European Commission, Directorate-General for Research and Innovation (“EGE – Statement on AI, 2018”).

  • IEEE’s (Institute of Electrical and Electronics Engineers) policy paper on ‘Ethically Aligned Design’, 2016.

  • ITU’s (International Telecommunication Union) Global Summit ‘AI for Good’, 2017.

  • ACM’s (Association for Computing Machinery) – AAAI/ACM ‘Conference on AI, Ethics, and Society’, February 2018.

  • European Parliament, Committee on Legal Affairs 2015/2103 (INL) Report with Recommendations to the Commission on Civil Law Rules on Robotics, Rapporteur Mady Delvaux.

  • “A Unified Framework of Five Principles for AI in Society”, by Luciano Floridi and Josh Cowls, Harvard Data Science Review, July 2019 (updated in November 2019).

  • “The Montreal Declaration for a Responsible Development of Artificial Intelligence”, developed under the auspices of the University of Montreal, following the Forum on the Socially Responsible Development of AI of November 2017. Final text, December 2018.

  • The IEEE Initiative on Ethics of Autonomous and Intelligent Systems - Ethically Aligned Design (EAD), 2019.

  • The UK House of Lords Artificial Intelligence Committee’s Report “AI in the UK: ready, willing and able?”, 16 April 2018.

  • “Partnership on AI (PAI) – Tenets of the Partnership on AI to Benefit People and Society”, 2018. Available at: https://partnershiponai.org/about/#tenets

  • “Asilomar AI Principles”, developed under the auspices of the Future of Life Institute, 2017. Available at: https://futureoflife.org/ai-principles/

  • UNESCO, ‘Report of COMEST on Robotics Ethics’, World Commission on the Ethics of Scientific Knowledge and Technology, 2017.

  • Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation – GDPR).

  • OECD, Recommendation of the Council on Artificial Intelligence, OECD/LEGAL/0449, adopted on 22.05.2019.

  • The UNI Global Union Top 10 Principles for Ethical Artificial Intelligence, 2017.

  • The IEEE (Institute of Electrical and Electronics Engineers) Global Initiative on Ethics of Autonomous and Intelligent Systems (A/IS) - Ethically Aligned Design (EAD) Report: A Vision for Prioritizing Human Well-being with Autonomous and Intelligent Systems (A/IS), 2019 (“IEEE – EAD Report”).

  • Case C-331/88, Fedesa and others [1990] ECR I-4023.

  • Case C-210/03, judgment of the Court of 14.12.2004.

  • Case 137/85 Maizena [1987] ECR 4587.

  • Case C-339/92 ADM Ölmühlen [1993] ECR I-6473.

  • Case C-210/00 Käserei Champignon Hofmeister [2002] ECR I-6453.

  • OECD Guidelines on Measuring Subjective Well-being, 2013.

  • OECD Better Life Index, 2017.

  • United Nations Sustainable Development Goal (SDG) Indicators, 2018.

  • ECtHR (Grand Chamber), Saadi v. Italy, Application no. 37201/06, Judgment of 28 February 2008 at para. 140.

  • European Court of Human Rights, S. and Marper v. UK, Applications nos. 30562/04 and 30566/04, Judgment of 4 December 2008.

  • Commission’s Staff Working Paper “Operational Guidance on taking account of Fundamental Rights in Commission Impact Assessments”, Brussels, 6.5.2011, SEC(2011) 567 final.

Copyright information

© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this chapter

Cite this chapter

Nikolinakos, N.T. (2023). Ethical Principles for Trustworthy AI. In: EU Policy and Legal Framework for Artificial Intelligence, Robotics and Related Technologies - The AI Act. Law, Governance and Technology Series, vol 53. Springer, Cham. https://doi.org/10.1007/978-3-031-27953-9_3

  • DOI: https://doi.org/10.1007/978-3-031-27953-9_3

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-27952-2

  • Online ISBN: 978-3-031-27953-9

  • eBook Packages: Law and Criminology, Law and Criminology (R0)
