
Conformity assessment under the EU AI Act general approach

Original Research | AI and Ethics

Abstract

The European Commission proposed harmonised rules on artificial intelligence (AI) on 21 April 2021 (the EU AI Act). Following a consultative process with the Council of the European Union and many amendments, a General Approach on the EU AI Act was published on 25 November 2022. The EU Parliament approved the initial draft in May 2023. Trilogue meetings took place in June, July, September and October 2023, with the aim for the European Parliament, the Council of the European Union and the European Commission to adopt a final version in early 2024. This is the first attempt to build a legally binding instrument on artificial intelligence in the European Union (EU). Like the General Data Protection Regulation (GDPR), the EU AI Act has an extraterritorial effect. It therefore has the potential to become a global gold standard for AI regulation. It may also contribute to developing a global consensus on AI trustworthiness, because AI providers must conduct conformity assessments for high-risk AI systems before they enter the EU market. As the AI Act contains limited guidance on how to conduct conformity assessments and ex-post monitoring in practice, there is a need for consensus building on this topic. This paper studies the governance structure proposed by the EU AI Act, as approved by the Council of the European Union in November 2022, and proposes tools to conduct conformity assessments of AI systems.
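The Act itself does not prescribe a concrete format for such assessments. Purely as an illustration, and not as the tool proposed in this paper, the sketch below encodes the requirement areas that the proposal attaches to high-risk AI systems (risk management, data and data governance, technical documentation, record-keeping, transparency, human oversight, and accuracy, robustness and cybersecurity, roughly Articles 9 to 15) as a minimal evidence checklist; the class, field names and example documents are assumptions made for this sketch.

    # Illustrative sketch only: an internal evidence checklist covering the
    # requirement areas the EU AI Act imposes on high-risk AI systems.
    # Names and structure are assumptions; this is not the paper's proposed tool.
    from dataclasses import dataclass, field

    REQUIREMENT_AREAS = [
        "Risk management system",                   # cf. Art. 9
        "Data and data governance",                 # cf. Art. 10
        "Technical documentation",                  # cf. Art. 11
        "Record-keeping (logging)",                 # cf. Art. 12
        "Transparency and information to users",    # cf. Art. 13
        "Human oversight",                          # cf. Art. 14
        "Accuracy, robustness and cybersecurity",   # cf. Art. 15
    ]

    @dataclass
    class ConformityChecklist:
        """Tracks the evidence gathered for each requirement area."""
        system_name: str
        evidence: dict = field(
            default_factory=lambda: {area: [] for area in REQUIREMENT_AREAS}
        )

        def add_evidence(self, area: str, document: str) -> None:
            # Attach a supporting document to one of the requirement areas.
            if area not in self.evidence:
                raise ValueError(f"Unknown requirement area: {area}")
            self.evidence[area].append(document)

        def gaps(self) -> list:
            """Requirement areas with no supporting evidence yet."""
            return [area for area, docs in self.evidence.items() if not docs]

    if __name__ == "__main__":
        checklist = ConformityChecklist("hypothetical credit-scoring model")
        checklist.add_evidence("Risk management system", "risk_register_v3.pdf")
        checklist.add_evidence("Human oversight", "override_procedure.md")
        print("Areas still lacking evidence:", checklist.gaps())

A real conformity assessment would of course attach structured evidence and verification steps to each area rather than free-text document names; the sketch only illustrates the scope of the requirements.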


Fig. 1
Fig. 2 (Source: The Future of Life Institute)


Notes

  1. AI Transparency Institute’s website: https://aitransparencyinstitute.com/

  2. The OECD AI Principles, adopted in 2019, identified key principles for evaluating AI trustworthiness; the main ones are fairness, transparency, contestability, and accountability.


Author information

Corresponding author

Correspondence to Eva Thelisson.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Thelisson, E., Verma, H.: Conformity assessment under the EU AI Act general approach. AI Ethics 4, 113–121 (2024). https://doi.org/10.1007/s43681-023-00402-5

