Safeguarding the Future of Artificial Intelligence: An AI Blueprint

Chapter in: Artificial Intelligence for Security

Abstract

Current developments in artificial intelligence, such as ChatGPT, Stable Diffusion, and deepfakes, pose new challenges to our society. It is becoming increasingly difficult to distinguish whether we are dealing with real, human-generated content or fabricated works. At the same time, these developments demonstrate the possibilities that artificial intelligence offers and open up new applications for companies and society. The challenges of our time, such as climate change, the energy crisis, and wars, require people to rely on technology and to be able to deploy it successfully. The path to a safe world with AI encompasses several topics. It starts with the experts who work in this field: in addition to sound training in the technologies, they must also adhere to ethical and moral standards. Diversity among experts is a basic prerequisite for the development of algorithms and models that are energy efficient. When experts work toward the ethical and sustainable implementation of their work, they align with the Sustainable Development Goals. The time of AI black boxes is over; only explainable, trustworthy, and transparent solutions have a chance to prevail. Since this may not yet be the most important concern for large parts of the field, efforts must be made to change that. Developing models alone is no longer sufficient: rolling out AI safely in companies or in everyday life requires many aspects to be taken into account. To this end, this chapter provides a construction plan that covers all important aspects of building sustainably feasible AI.



Author information


Correspondence to Alexander Adrowitzer.


Copyright information

© 2024 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this chapter


Cite this chapter

Adrowitzer, A., Temper, M., Buchelt, A., Kieseberg, P., Eigner, O. (2024). Safeguarding the Future of Artificial Intelligence: An AI Blueprint. In: Sipola, T., Alatalo, J., Wolfmayr, M., Kokkonen, T. (eds) Artificial Intelligence for Security. Springer, Cham. https://doi.org/10.1007/978-3-031-57452-8_1


  • DOI: https://doi.org/10.1007/978-3-031-57452-8_1

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-57451-1

  • Online ISBN: 978-3-031-57452-8

