Abstract
With machines and AIs taking on increasingly significant roles, the need to implement ethics in AI is pressing. In this paper, we first survey current approaches to moral AIs and their inherent limitations. We then propose a pluralist hybrid approach and show how it can partly alleviate these limitations. The core ethical decision-making capacity of an AI based on the pluralist hybrid approach consists of two systems. The first is a deterministic algorithmic system that embraces different moral rules for making explicit moral decisions. The second is a machine learning system that calculates the values of the variables required for applying those moral principles. The pluralist hybrid system improves on existing proposals: it better addresses the moral disagreement problem of the top-down approach by including distinct moral principles, and it reduces the opacity of ethical decision-making by implementing explicit moral principles for moral decision-making.
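The two-system architecture sketched in the abstract can be illustrated in code. The sketch below is hypothetical (all names, thresholds, and variables are ours, not the paper's): a stand-in for the machine learning system estimates the morally relevant variables of a candidate action, and a deterministic system then applies several explicit moral principles, permitting the action only if every principle is satisfied.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class Action:
    name: str
    features: Dict[str, float]  # raw inputs to the learned estimator

def learned_estimator(action: Action) -> Dict[str, float]:
    """Stand-in for the machine learning system: maps an action to the
    variable values the moral principles need (e.g. expected harm)."""
    return {
        "expected_harm": action.features.get("expected_harm", 0.0),
        "rights_violation": action.features.get("rights_violation", 0.0),
    }

# Each principle is a deterministic rule over the estimated variables.
Principle = Callable[[Dict[str, float]], bool]

def no_serious_harm(v: Dict[str, float]) -> bool:
    return v["expected_harm"] < 0.5       # consequentialist threshold

def respect_rights(v: Dict[str, float]) -> bool:
    return v["rights_violation"] == 0.0   # deontological side constraint

def permissible(action: Action, principles: List[Principle]) -> bool:
    """Pluralist check: the action must satisfy every principle."""
    values = learned_estimator(action)
    return all(p(values) for p in principles)

principles = [no_serious_harm, respect_rights]
safe = Action("warn user", {"expected_harm": 0.1, "rights_violation": 0.0})
risky = Action("coerce user", {"expected_harm": 0.2, "rights_violation": 1.0})
print(permissible(safe, principles))   # True
print(permissible(risky, principles))  # False
```

The conjunction of principles here is one simple way of combining distinct moral rules; how conflicts among principles are adjudicated is exactly the design question the pluralist approach addresses.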
Availability of data and materials
Not applicable.
Notes
For the details, please see Bringsjord (2009).
Different versions of satisficing consequentialism diverge in terms of how to interpret the concept of “good enough”. See Bradley (2006).
Sunstein (2005) and Gigerenzer (2010) adopt Simon’s bounded-rationality perspective to explore which rules or heuristics boundedly rational individuals use when confronted with moral choice situations. Both scholars identify a number of heuristics, such as “do not knowingly cause a death” and “doing is morally worse than allowing” (Sunstein 2005), and “choose the default option”, “imitate your peers”, and “tit-for-tat” (Gigerenzer 2010).
In arguing for the pluralist approach, we are operating under the assumption that our solution for moral de-risking can be formulated by a pluralist inclusion of different moral decision-making principles. This, of course, would exclude proponents of radical non-codifiability and moral particularism. To this, we respond (1) that our goal is not to construct an accurate AI model of human morality, but only to keep machines in check to prevent moral disasters; and (2) that we have yet to find viable ways of approximating reliable particularist solutions to moral decision-making that are at the same time executable by machines.
References
Anderson M, Anderson SL (2011) Machine ethics. Cambridge University Press, Cambridge
Bradley B (2006) Against satisficing consequentialism. Utilitas 18(2):97–108. https://doi.org/10.1017/S0953820806001877
Bringsjord S (2009) Unethical but rule-bound robots would kill us all. Retrieved December 10, 2021, from http://www.kryten.mm.rpi.edu/PRES/AGI09/SB_agi09_ethicalrobots.pdf
Brundage M (2014) Limitations and risks of machine ethics. J Exp Theor Artif Intell 26(3):355–372. https://doi.org/10.1080/0952813X.2014.895108
Chakraborty S (2018) Can humanoid robots be moral? Ethics Sci Environ Polit 18:49–60
Dancy J (2004) Ethics without principles. Oxford University Press, Oxford
Gert B (1998) Morality: its nature and justification. Oxford University Press, Oxford
Gigerenzer G (2010) Moral satisficing: rethinking moral behavior as bounded rationality. Top Cogn Sci 2(3):528–554. https://doi.org/10.1111/j.1756-8765.2010.01094.x
Hagendorff T (2020) The ethics of AI ethics: an evaluation of guidelines. Mind Mach 30(1):99–120. https://doi.org/10.1007/s11023-020-09517-8
Harsanyi JC (1975) Can the maximin principle serve as a basis for morality? A critique of John Rawls’s theory. Am Polit Sci Rev 69(2):594–606. https://doi.org/10.2307/1959090
Harsanyi JC (1980) Rule utilitarianism, rights, obligations and the theory of rational behavior. Theor Decis 12(2):115–133. https://doi.org/10.1007/BF00154357
Hooker B (2000) Ideal code, real world: a rule-consequentialist theory of morality. Oxford University Press, Oxford
Kant I (1993) Grounding for the metaphysics of morals (third edition): with on a supposed right to lie because of philanthropic concerns. Ellington, J. W. (trans.) Hackett Publishing, New York
Lazari-Radek KD, Singer P (2010) Secrecy in consequentialism: a defence of esoteric morality. Ratio 23(1):34–58. https://doi.org/10.1111/j.1467-9329.2009.00449.x
Mitchell M (2019) Artificial intelligence: a guide for thinking humans. Penguin, London
Nozick R (2013) Anarchy, state, and utopia, 2nd edn. Basic Books, London
Parfit D (1986) Reasons and persons. Oxford University Press, Oxford
Rawls J (2005) Political liberalism: expanded edition. Columbia University Press, New York, p 576
Scanlon T (1998) What we owe to each other. Harvard University Press, London
Sen A (1975) Informational analysis of moral principles. In: Harrison R (ed) Rational action. Cambridge University Press, Cambridge
Simon HA (1955) A behavioral model of rational choice. Quart J Econ 69(1):99–118. https://doi.org/10.2307/1884852
Slote M, Pettit P (1984) Satisficing consequentialism. Aristot Soc Suppl 58(1):139–176. https://doi.org/10.1093/aristoteliansupp/58.1.139
Sunstein CR (2005) Moral heuristics. Behav Brain Sci 28(4):531–542. https://doi.org/10.1017/S0140525X05000099
Wallach W, Allen C (2020) Moral machines: teaching robots right from wrong. Oxford University Press, Oxford. https://doi.org/10.1093/acprof:oso/9780195374049.001.0001
Acknowledgements
We are grateful to the audience who gave us inspiring feedback at the CEPE/IACAP Joint Conference 2021: Philosophy and Ethics of Artificial Intelligence.
Funding
Not applicable.
Author information
Authors and Affiliations
Contributions
FS: conception and design of the work; drafting the article. YSHF: critical revision of the article.
Corresponding author
Ethics declarations
Conflict of interest
All authors declare that they have no conflict of interest.
Consent for publication
The publisher has the authors’ permission to publish the research findings.
Ethical approval and consent to participate
Not applicable; no human participants were involved.
Additional information
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.
About this article
Cite this article
Song, F., Yeung, S.H.F. A pluralist hybrid model for moral AIs. AI & Soc 39, 891–900 (2024). https://doi.org/10.1007/s00146-022-01601-0