Ethical Requirements for AI Systems

  • Conference paper
  • First Online:
Advances in Artificial Intelligence (Canadian AI 2020)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 12109)

  • 2849 Accesses

Abstract

AI systems that offer social services, such as healthcare services for patients, driving for travellers, and war services for the military, need to abide by the ethical and professional principles and codes that apply to the services being offered. We propose to adopt Requirements Engineering (RE) techniques, developed over decades for software systems, in order to elicit and analyze ethical requirements and derive from them functional and quality requirements that together make the system-to-be compliant with ethical principles and codes. We illustrate our proposal by sketching the process of requirements elicitation and analysis for driverless cars.
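
A minimal sketch of the kind of artefact such a requirements elicitation and analysis process could produce is given below: an ethical requirement for a driverless car refined into functional and quality requirements. The Python class, the requirement statements, and the refinement structure are illustrative assumptions, not the paper's actual technique or output.

```python
from dataclasses import dataclass, field
from typing import List


@dataclass
class Requirement:
    """A node in a goal-style refinement tree of requirements."""
    identifier: str
    statement: str
    kind: str  # "ethical", "functional", or "quality"
    refined_by: List["Requirement"] = field(default_factory=list)

    def refine(self, child: "Requirement") -> "Requirement":
        """Record that `child` contributes to satisfying this requirement."""
        self.refined_by.append(child)
        return child

    def leaves(self) -> List["Requirement"]:
        """Return the leaf requirements that operationalize this node."""
        if not self.refined_by:
            return [self]
        result: List["Requirement"] = []
        for child in self.refined_by:
            result.extend(child.leaves())
        return result


# Hypothetical elicitation result for a driverless car.
non_maleficence = Requirement(
    "E1", "The car shall not harm pedestrians or passengers", "ethical")
non_maleficence.refine(Requirement(
    "F1", "Detect pedestrians within braking distance", "functional"))
non_maleficence.refine(Requirement(
    "Q1", "Complete emergency braking decisions within 100 ms", "quality"))

for req in non_maleficence.leaves():
    print(f"{req.identifier} [{req.kind}]: {req.statement}")
```

Running the sketch lists the leaf requirements (F1 and Q1) that operationalize the ethical requirement E1; in the paper's terms, these are the functional and quality requirements that together make the system-to-be compliant with the ethical principle.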

Notes

  1. But note that there are ethical requirements that are not legal, and legal requirements that are not ethical.

  2. In [9], the focus is on use value as opposed to ethical value. However, we believe the analysis still holds, in particular regarding the connection between value and risk.

  3. Notice that transparency about the capabilities, intentions, vulnerabilities, and goals of the entities that compose an ecosystem also connects strongly to the notion of trust. In a nutshell, trust amounts to a set of relations connecting the beliefs of a (trustor) agent regarding the capabilities, vulnerabilities and intentions of a trustee, insofar as they can affect that agent’s goals [3]. From this it follows directly that: (1) trustworthiness assessment can and should be grounded in the explicit assessment of these aspects; and (2) trustworthiness is not an absolute property of a system, but one that depends on all of them. To put it bluntly, it is meaningless to speak of trustworthy systems in an unqualified manner (see the sketch below).
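
A minimal Python sketch of this relational view of trust is given below, under stated assumptions: a trust relation ties a trustor's beliefs about a trustee's capabilities, intentions, and vulnerabilities to one specific goal, so any trustworthiness score is relative to that goal. The class name, belief scores, and aggregation formula are illustrative placeholders, not the reference ontology of [3].

```python
from dataclasses import dataclass


@dataclass
class TrustRelation:
    """Trust of a trustor in a trustee, relative to ONE goal of the trustor.

    The belief scores in [0.0, 1.0] are illustrative placeholders for the
    trustor's beliefs about the trustee, insofar as they affect that goal.
    """
    trustor: str
    trustee: str
    goal: str
    belief_in_capability: float     # "the trustee can do it"
    belief_in_intention: float      # "the trustee intends to do it"
    belief_in_vulnerability: float  # "the trustee is exposed to failure"

    def trustworthiness(self) -> float:
        """Toy aggregate; the point is that it is defined per goal."""
        return (self.belief_in_capability
                * self.belief_in_intention
                * (1.0 - self.belief_in_vulnerability))


# The same trustee can be trustworthy for one goal and not for another.
parking = TrustRelation("passenger", "autopilot", "park the car",
                        0.95, 0.99, 0.05)
blizzard = TrustRelation("passenger", "autopilot", "drive in a blizzard",
                         0.40, 0.99, 0.60)
print(f"park the car: {parking.trustworthiness():.2f}")
print(f"drive in a blizzard: {blizzard.trustworthiness():.2f}")
```

The same trustee (the autopilot) comes out as relatively trustworthy for parking but not for driving in a blizzard, which illustrates point (2): trustworthiness cannot be asserted in an unqualified manner.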

References

  1. Autonomy in weapon system, DoD directive. Technical report, Department of Defence (2012). https://tinyurl.com/vh2qhej

  2. Artificial intelligence: Mankind’s last invention (2019). https://tinyurl.com/y9moo26c

  3. Amaral, G., Sales, T.P., Guizzardi, G., Porello, D.: Towards a reference ontology of trust. In: Panetto, H., Debruyne, C., Hepp, M., Lewis, D., Ardagna, C.A., Meersman, R. (eds.) OTM 2019. LNCS, vol. 11877, pp. 3–21. Springer, Cham (2019). https://doi.org/10.1007/978-3-030-33246-4_1

  4. Arkin, R.: Lethal autonomous systems and the plight of the non-combatant. AISB Q. 137, 1–9 (2013)

  5. High-Level Expert Group on Artificial Intelligence, European Commission: Draft ethics guidelines for trustworthy AI. Draft document (2018)

  6. Etzioni, A., Etzioni, O.: Incorporating ethics into artificial intelligence. J. Ethics 21(4), 403–418 (2017). https://doi.org/10.1007/s10892-017-9252-2

  7. O’Connell, M.E.: Banning autonomous killing. In: Evangelista, M., Shue, H. (eds.) The American Way of Bombing: How Legal and Ethical Norms Change. Cornell University Press, Ithaca (2013)

  8. Otto, P.N., Antón, A.I.: Addressing legal requirements in requirements engineering. In: Proceedings of the 15th IEEE RE 2007, New Delhi, 15–19 October 2007, pp. 5–14 (2007)

  9. Sales, T.P., Baião, F., Guizzardi, G., Almeida, J.P.A., Guarino, N., Mylopoulos, J.: The common ontology of value and risk. In: Trujillo, J.C., et al. (eds.) ER 2018. LNCS, vol. 11157, pp. 121–135. Springer, Cham (2018). https://doi.org/10.1007/978-3-030-00847-5_11

  10. Tuffley, D.: At last! The world’s first ethical guidelines for driverless cars. The Conversation, September 2017. https://tinyurl.com/u4gbskh

Acknowledgments

This research is supported by the Strategic Partnership Grant “Middleware Framework and Programming Infrastructure for IoT Services”.

Author information

Corresponding author

Correspondence to Renata Guizzardi.

Copyright information

© 2020 Springer Nature Switzerland AG

About this paper

Cite this paper

Guizzardi, R., Amaral, G., Guizzardi, G., Mylopoulos, J. (2020). Ethical Requirements for AI Systems. In: Goutte, C., Zhu, X. (eds.) Advances in Artificial Intelligence. Canadian AI 2020. Lecture Notes in Computer Science, vol. 12109. Springer, Cham. https://doi.org/10.1007/978-3-030-47358-7_24

  • DOI: https://doi.org/10.1007/978-3-030-47358-7_24

  • Published:

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-47357-0

  • Online ISBN: 978-3-030-47358-7

  • eBook Packages: Computer Science, Computer Science (R0)
