
A pluralist hybrid model for moral AIs

  • Original Paper
  • Published in AI & SOCIETY

Abstract

With the increasing autonomy of AIs and machines, the need for implementing ethics in AIs is pressing. In this paper, we first survey current approaches to moral AIs and their inherent limitations. We then propose the pluralist hybrid approach and show how it can partly alleviate these limitations. The core ethical decision-making capacity of an AI based on the pluralist hybrid approach consists of two systems. The first is a deterministic algorithmic system that embraces different moral rules for making explicit moral decisions. The second is a machine learning system responsible for computing the values of the variables required by the application of the moral principles. The pluralist hybrid system improves on existing proposals in that it better addresses the moral disagreement problem of the top-down approach by including distinct moral principles. In addition, it reduces the opacity of ethical decision-making by implementing explicit moral principles for moral decision-making.
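The two-system architecture described above can be sketched in code. The following is a minimal, hypothetical illustration, not the authors' implementation: `estimate_variables` stands in for the machine learning system (here stubbed to pass values through), and each principle is a deterministic rule over the estimated variables; all function names, variable names, and thresholds are assumptions introduced for illustration only.

```python
from typing import Callable, Dict, List

# Stand-in for the machine learning system that, in the paper's proposal,
# estimates the variables (e.g. expected harm, expected utility) that the
# moral principles need. Here it simply returns the inputs unchanged.
def estimate_variables(situation: Dict[str, float]) -> Dict[str, float]:
    return situation

# Each moral principle is a deterministic rule mapping the estimated
# variables to a verdict on a candidate action.
def no_knowing_harm(v: Dict[str, float]) -> bool:
    """A deontological-style constraint: forbid actions with known harm."""
    return v.get("expected_harm", 0.0) == 0.0

def satisficing_utility(v: Dict[str, float], threshold: float = 0.5) -> bool:
    """A satisficing-consequentialist rule: utility must be 'good enough'."""
    return v.get("expected_utility", 0.0) >= threshold

# The pluralist element: several distinct principles are included at once.
PRINCIPLES: List[Callable[[Dict[str, float]], bool]] = [
    no_knowing_harm,
    satisficing_utility,
]

def permissible(situation: Dict[str, float]) -> bool:
    """An action is permitted only if every included principle approves --
    a conservative way to combine plural rules and avoid moral disasters."""
    variables = estimate_variables(situation)
    return all(rule(variables) for rule in PRINCIPLES)
```

The unanimity rule in `permissible` is one possible aggregation choice among many; requiring every principle's approval matches the paper's stated goal of keeping machines in check rather than modeling human morality exactly.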


Fig. 1


Availability of data and materials

Not applicable.

Notes

  1. For details, see Bringsjord (2009).

  2. Different versions of satisficing consequentialism diverge in terms of how to interpret the concept of “good enough”. See Bradley (2006).

  3. Sunstein (2005) and Gigerenzer (2010) adopt Simon’s bounded rationality perspective to explore which rules or heuristics boundedly rational individuals use when confronted with moral choice situations. Both scholars identify a number of such heuristics: do not knowingly cause a death, and doing is morally worse than allowing (Sunstein 2005); choose the default option, imitate your peers, and tit-for-tat (Gigerenzer 2010).

  4. In arguing for the pluralist approach, we are operating under the assumption that our solution for moral de-risking can be formulated by a pluralist inclusion of different moral decision-making principles. This, of course, would exclude proponents of radical non-codifiability and moral particularism. To this, we respond (1) that our goal is not to construct an accurate AI model of human morality, but only to keep machines in check to prevent moral disasters; and (2) that we have yet to find viable ways of approximating reliable particularist solutions to moral decision-making that are at the same time executable by machines.

References


Acknowledgements

We are grateful to all the audiences who gave us inspiring feedback at the CEPE/IACAP Joint Conference 2021: Philosophy and Ethics of Artificial Intelligence.

Funding

Not applicable.

Author information

Authors and Affiliations

Authors

Contributions

FS: conception and design of the work; drafting the article. YSHF: critical revision of the article.

Corresponding author

Correspondence to Fei Song.

Ethics declarations

Conflicts of interest

All authors declare that they have no conflict of interest.

Consent for publication

The publisher has the authors’ permission to publish the research findings.

Ethical approval and consent to participate

Not applicable; no human participants were involved.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.

Reprints and permissions

About this article


Cite this article

Song, F., Yeung, S.H.F. A pluralist hybrid model for moral AIs. AI & Soc 39, 891–900 (2024). https://doi.org/10.1007/s00146-022-01601-0


