Abstract
Autonomy is often considered a core value of Western society that is deeply entrenched in moral, legal, and political practices. The development and deployment of artificial intelligence (AI) systems to perform a wide variety of tasks has raised new questions about how AI may affect human autonomy. Numerous guidelines on the responsible development of AI now emphasise the need for human autonomy to be protected. In some cases, this need is linked to the emergence of increasingly ‘autonomous’ AI systems that can perform tasks without human control or supervision. Do such ‘autonomous’ systems pose a risk to our own human autonomy? In this article, I address the question of a trade-off between human autonomy and system ‘autonomy’.
Notes
- 1.
This is not to claim that machines will never be able to obtain the capacity for autonomy.
- 2.
The relationship between autonomy and informed consent is also frequently discussed in the context of biomedical ethics, e.g. Beauchamp et al. (2001).
- 3.
In a similar vein, Floridi & Cowls (2019) emphasise the importance of humans being able to freely choose which decisions are delegated to AI systems, and of being able to reverse this choice if needed. The authors call this the ‘decide-to-delegate’ model. Notably, there exists some ambiguity regarding who these ‘humans’ are, that is, whether it refers to any or all humans, users, operators, citizens, etc. Depending on the answer, the ‘decide-to-delegate’ model will result in radically different demands on system design or governance mechanisms.
- 4.
This has been emphasised many times in the literature on relational autonomy. See e.g. Hutchison et al. (2018) and references therein.
References
Bayouth, M., Nourbakhsh, I., & Thorpe, C. (1997). A hybrid human-computer autonomous vehicle architecture. In Proceedings, Third ECPD International Conference on Advanced Robotics, Intelligent Automation and Control. Citeseer.
Beauchamp, T. L., & Childress, J. F. (2001). Principles of biomedical ethics. Oxford University Press.
Christman, J. (2009). The politics of persons: Individual autonomy and socio-historical selves. Cambridge University Press.
Christman, J. (2018). Autonomy in moral and political philosophy. In E.N. Zalta (Ed.), The Stanford encyclopedia of philosophy (Spring 2018 ed.). Metaphysics Research Lab., Stanford University.
Dworkin, G. (1988). The theory and practice of autonomy. Cambridge University Press.
Floridi, L. & Cowls, J. (2019). A unified framework of five principles for AI in society. Harvard Data Science Review, 1(1).
Frankfurt, H. G. (1971). Freedom of the will and the concept of a person. The Journal of Philosophy, 68(1), 5–20.
Franklin, S., & Graesser, A. (1996). Is it an agent, or just a program? A Taxonomy for autonomous agents. In International Workshop on Agent Theories, Architectures, and Languages (pp. 21–35). Springer.
HLEG. (2019). Ethics guidelines for trustworthy AI. High-Level Expert Group on Artificial Intelligence, European Commission, Brussels.
Hutchison, K., Mackenzie, C., & Oshana, M. (2018). Social dimensions of moral responsibility. Oxford University Press.
Mackenzie, C. (2014). Three dimensions of autonomy: A relational analysis. Oxford University Press.
Mackenzie, C., & Stoljar, N. (2000). Relational autonomy: Feminist perspectives on autonomy, agency, and the social self. Oxford University Press.
Montreal. (2017). Montreal declaration for responsible development of AI. Forum on the Socially Responsible Development of AI.
Prunkl, C. (2022). Human autonomy in the age of artificial intelligence. Nature Machine Intelligence, 4(2), 99–101.
Russell, S., & Norvig, P. (1998). Artificial intelligence: A modern approach (2nd ed.). Upper Saddle River, NJ: Pearson.
Susser, D., Roessler, B., & Nissenbaum, H. (2019). Technology, autonomy, and manipulation (Tech. Rep.). Social Science Research Network.
Wooldridge, M., & Jennings, N. R. (1995). Agent theories, architectures, and languages: A survey. In M. J. Wooldridge & N. R. Jennings (Eds.), Intelligent agents (pp. 1–39). Springer.
© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG
Cite this paper
Prunkl, C. (2022). Is There a Trade-Off Between Human Autonomy and the ‘Autonomy’ of AI Systems?. In: Müller, V.C. (eds) Philosophy and Theory of Artificial Intelligence 2021. PTAI 2021. Studies in Applied Philosophy, Epistemology and Rational Ethics, vol 63. Springer, Cham. https://doi.org/10.1007/978-3-031-09153-7_6
Print ISBN: 978-3-031-09152-0
Online ISBN: 978-3-031-09153-7
eBook Packages: Religion and Philosophy; Philosophy and Religion (R0)