Ethical Conditions of the Use of Artificial Intelligence in the Modern Battlefield—Towards the “Modern Culture of Killing”

Chapter in: Artificial Intelligence and Its Contexts

Abstract

The use of artificial intelligence (AI) on the modern battlefield raises justified semantic and ethical controversies. The semantic controversies, i.e., those concerning meaning, relate to drawing the line between AI employed by a human during military operations, e.g., precision weapons capable of deciding independently on the choice of target, and completely autonomous machines, i.e., systems that think and decide independently of the human being. In this broad and still nascent field, several legal and ethical challenges emerge. This chapter sheds light on the emerging ethical considerations pertinent to the use of robots and other machines relying on unsupervised learning in resolving conflicts and crises of strategic importance for security.
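
To make the semantic distinction drawn in the abstract concrete, the sketch below contrasts a human-in-the-loop engagement decision with a fully autonomous one. It is a minimal, purely illustrative Python sketch, not code from the chapter; the names Authority, Target, and engage, and the 0.95 confidence threshold, are assumptions introduced only for illustration.

```python
from dataclasses import dataclass
from enum import Enum, auto
from typing import Optional


class Authority(Enum):
    HUMAN_IN_THE_LOOP = auto()   # a human must confirm every engagement
    FULLY_AUTONOMOUS = auto()    # the machine selects and engages on its own


@dataclass
class Target:
    identifier: str
    confidence: float  # classifier confidence that the object is a lawful target


def engage(target: Target, authority: Authority,
           human_approval: Optional[bool] = None) -> bool:
    """Return True if the system may engage the target.

    Under HUMAN_IN_THE_LOOP the machine only recommends; engagement requires an
    explicit human decision. Under FULLY_AUTONOMOUS the machine decides alone,
    here on the basis of a (hypothetical) confidence threshold.
    """
    if authority is Authority.HUMAN_IN_THE_LOOP:
        # Responsibility for the decision stays with the human operator.
        return bool(human_approval)
    # No human judgement enters the decision path at all.
    return target.confidence >= 0.95


if __name__ == "__main__":
    t = Target(identifier="contact-07", confidence=0.97)
    print(engage(t, Authority.HUMAN_IN_THE_LOOP, human_approval=False))  # -> False
    print(engage(t, Authority.FULLY_AUTONOMOUS))                         # -> True
```

The ethically salient point is visible in the control flow: in the first branch a human judgement is a necessary condition for engagement, whereas in the second the same lethal decision reduces to a numerical threshold.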

Author information

Corresponding author

Correspondence to Marek Bodziany.

Copyright information

© 2021 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this chapter

Cite this chapter

Bodziany, M. (2021). Ethical Conditions of the Use of Artificial Intelligence in the Modern Battlefield—Towards the “Modern Culture of Killing”. In: Visvizi, A., Bodziany, M. (eds) Artificial Intelligence and Its Contexts. Advanced Sciences and Technologies for Security Applications. Springer, Cham. https://doi.org/10.1007/978-3-030-88972-2_4
