1 Introduction

The EU Artificial Intelligence Act (AI Act) is the world's first comprehensive framework regulating AI systems.Footnote 1 While the purpose of the AI Act is not to ensure gender equality and non-discrimination as such, a flavour of gender equality can be felt throughout the text of the future EU AI Regulation. For example, the term "biases" makes seventeen appearancesFootnote 2 and "gender equality" appears four times. Also, the composition of the AI Office responsible for implementing the AI Act, which will be advised by a scientific panel of independent experts, shall be determined respecting gender balance (Art. 68(2)). The AI Act also acknowledges that AI development teams shall be gender balanced (Recital 165). More importantly, the use cases classified as "high-risk" and falling under the scope of the AI Act with mandatory requirements to be respected, the enforcement tools and the mission of the AI OfficeFootnote 3 together ensure that some form of enforcement framework will exist that could be beneficial in addressing gender equality and non-discrimination.

The “Age of Algorithms”Footnote 4 arrived some time ago but since Large Language Models (LLMs) and Generative AI entered the public discussion and legislative proposals, the topic of AI regulation has received increased public attention which has translated into more attempts to create policy and regulation.Footnote 5

The AI Act acknowledges to some extent the need for gender balance in AI and the so-called “gender make-up of the AI community”Footnote 6 as one origin of potential discriminatory outcomes, alongside the design of the algorithm and underlying biased datasets.

This article will outline how the AI Act is designed to address fundamental rights risks caused by AI systems regarding gender equality and non-discrimination. It will do so by scrutinising the substantive provisions that deal with the risks of biases and discrimination, presenting some examples of relevance, and discussing the tools proposed in the AI Act to mitigate these risks to fundamental rights. Based on the substantive provisions and the legal framework on AI regulation, the procedural question of the role of the European AI Office will be assessed from the point of view of gender equality law, focusing on collaboration within the European Union and with external actors at Member State and international level. Finally, European Union regulatory efforts will be positioned within the global context of AI regulation, and potential ways to adequately address gender equality and algorithmic discrimination at Council of Europe or United Nations level will be explored.

2 The AI Act and gender equality

One of the stated goals of the AI Act is protection against fundamental rights violations by certain AI systems considered as high-risk (2.1). The AI Act foresees some categories of use cases that are considered high-risk and that could be updated in line with future developments by the European Commission via delegated acts (2.2). The European legislative framework on AI foresees some obligations and tools to achieve and support fundamental rights protection (2.3). While the system of protection can be of added value, it is not sufficient to ensure adequate protection against algorithmic discrimination (2.4).

2.1 Risks for gender equality and “high-risk” AI systems

While having some elements of an ex ante regulation, the AI Act does not foresee a strict regulatory regime of the kind that is for example used for pharmaceuticals for which the European Medicines Agency is responsible at European Union level.Footnote 7 Increasingly, scholars are calling for such a strong ex ante regulatory regime for high-risk AI systems that would include prior authorisations or licences, notably to avoid discriminatory outcomes.Footnote 8 Considering that the AI Act is of a horizontal nature and its aim is not solely the reduction of biases and non-discrimination, the only way some kind of adequate protection against gender biases and discrimination can be achieved is in cases where AI systems are classified as “high risk”.Footnote 9

The classification of AI systems as high-risk is conducted in accordance with Art. 6 AI Act. While there are several ways for an AI system to be classified as high-risk, the most relevant to gender equality and non-discrimination is Art. 6(2), which classifies the AI systems listed in Annex III as high-risk. These include use cases concerning the labour market (e.g., AI recruitment systems) or education (e.g., AI test assessments). Some of these use cases coincide with the scope of the relevant gender equality directives, such as Directive 2006/54, and are therefore of particular interest here.Footnote 10

While falling within the scope of high-risk systems is certainly relevant for the purposes of coming within the fundamental rights protection of the AI Act, there are some important derogations that could become relevant for the purposes of our analysis. Art. 6(2a) foresees derogations for the use cases considered as high-risk under Art.6(2), if one of the following criteria is fulfilled: (a) the AI system is intended to perform a “narrow procedural task”; (b) the AI system is intended to improve the result of a previously completed human activity; (c) the AI system is intended to detect decision-making patterns or deviations from prior decision-making patterns and is not meant to replace or influence the previously completed human assessment without prior human review; or (d) the AI system is intended to perform a preparatory task to an assessment relevant for the purpose of the use cases listed in Annex III.

If one of these criteria is met, the AI system is not considered high-risk and the safeguards of the AI Act do not apply. This could be a particularly sensitive point as regards gender equality because some of the use cases risk falling out of scope for precisely these reasons.

First, the concept of a “narrow procedural task” leaves room for interpretation. Art. 6 (3)(a) and Recital 53 try to define and specify what is to be understood by a “narrow procedural task”. However, it merely explains that

"the first criterion should be that the AI system is intended to perform a narrow procedural task, such as an AI system that transforms unstructured data into structured data, an AI system that classifies incoming documents into categories or an AI system that is used to detect duplicates among a large number of applications. These tasks are of such narrow and limited nature that they pose only limited risks which are not increased through the use in a context listed in Annex III."

Secondly, while at first sight the mere improvement of a previously completed human task seems benign, it could nevertheless influence the final human decision (through automation bias) to the point that the decision is reversed or altered to the detriment of fundamental rights, such as non-discrimination. People not only have "high expectations of AI's consistent performance" but also often rely on and trust automated systems due to their seeming objectivity.Footnote 11

Thirdly, the same could be true for the detection of patterns or deviations, because humans are not able to spot the same patterns or deviations that algorithms can, which could equally alter the decision-making process.

Fourthly, determining what constitutes a “preparatory task” could be a delicate issue as well, particularly considering the jurisprudence of the European Court of Justice in the field of gender equality that, for example, viewed preparatory steps to hiring a replacement for a woman on maternity leave as coming within protection against discrimination.Footnote 12

Therefore, the limited guidance contained in the recitals of the AI Act requires further interpretation by scholars and by the European Court of Justice in order to provide a better understanding of what falls and what does not fall under the scope of high-risk AI systems.

2.2 Use cases and regulation by delegated Act

The category of high-risk AI systems relies on and refers to Annex III, which lists use cases. Of particular relevance for the present analysis are (3) education and vocational training; (4) employment, workers management and access to self-employment; and (5) access to and enjoyment of essential private services and essential public services and benefits, because most examples of gender bias and discrimination have occurred in one of these categories and the likelihood of harm in relation to gender equality seems very high in these areas.

In the area of employment, which is covered by Directive 2006/54 when it comes to the principle of equality between women and men, the use cases foresee (a) “AI systems intended to be used for recruitment or selection of natural persons, notably to place targeted job advertisements, to analyse and filter job applications, and to evaluate candidates”. An often-cited example concerns a recruitment algorithm used by a tech company which potentially had discriminatory effects on women.Footnote 13 The second category of use cases concerns (b) “AI intended to be used to make decisions affecting terms of the work-related relationships, promotion and termination of work-related contractual relationships, to allocate tasks based on individual behaviour or personal traits or characteristics and to monitor and evaluate performance and behaviour of persons in such relationships.” AI systems are increasingly used in human resources management which entails the risk of gender-based discrimination in relation to promotion, performance evaluation, pay and benefits.Footnote 14

Given the fast developments in AI, the European Commission is empowered to adopt, by delegated actFootnote 15 in line with Article 290 TFEU, changes to Annex III of the AI Act and to include new use cases as high-risk AI systems.Footnote 16 The conditions for such amendments, whether adding or modifying use cases of high-risk AI systems in Annex III via delegated act, are laid down in Articles 7 and 97 and require the fulfilment of the following two conditions:

“(a) the AI systems are intended to be used in any of the areas listed in points 1 to 8 of Annex III;

(b) the AI systems pose a risk of harm to health and safety, or an adverse impact on fundamental rights, and that risk is equivalent to or greater than the risk of harm or of adverse impact posed by the high-risk AI systems already referred to in Annex III.”Footnote 17

The reference in Art. 7(1)(b) shows the importance of adverse impacts on fundamental rights, which enables the Commission to add new AI systems to Annex III, for example if the principle of equality between women and men is negatively affected. This opportunity for the Commission to review Annex III by delegated act, as well as the general obligation of the Commission to review the AI Act after three years and every four years thereafter, is aimed at ensuring the adequacy of the Regulation in the light of new AI developments.Footnote 18

2.3 The tools of the AI Act relevant to gender equality

If an AI system has been classified as high-risk and needs to fulfil the above-mentioned obligations in line with Art. 6(2), the AI Act foresees several tools to ensure that biases are detected and that harm to fundamental rights and negative impacts of AI systems are prevented.

A particularly useful tool in principle consists of fundamental rights impact assessments (FRIA) under Art. 27 of the AI Act. Fundamental rights impact assessments are an obligation for deployers of AI systems to conduct an analysis of the possible impacts of the AI system on fundamental rights prior to its deployment.Footnote 19 Art. 27 imposes this obligation on a number of deployers, notably public bodies and those who provide public services. A fundamental rights impact assessment contains inter alia an assessment of an AI system's processes, affected groups, specific risks of harm, human oversight measures, risk mitigation and complaint mechanisms (Art. 27(1)). However, while the obligation to conduct such assessments is to be welcomed as contributing to the detection and mitigation of risks for gender equality, it does not apply to all AI systems. If a private company deploys an AI system for recruitment, this will not be covered by such an obligation.

Bias auditsFootnote 20 are another tool, foreseen in Art. 10(2)(f) and (g) of the AI Act, to identify possible biases that are likely to "negatively impact fundamental rights or lead to discrimination prohibited by Union law". The AI Act requires "appropriate measures to detect, prevent and mitigate possible biases identified" (Art. 10(2)(g)). When it comes to audits, one of the key elements is that they should be as independent as possible, preferably conducted by third parties that have no contractual links.Footnote 21
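To make the idea of a bias audit more concrete, the following sketch computes one commonly used fairness metric, the disparate impact ratio, for the outcomes of a hypothetical AI recruitment system. The data, the metric and the 0.8 benchmark (borrowed from US employment practice, not from the AI Act) are illustrative assumptions only; Art. 10 does not prescribe any particular metric or threshold.

```python
def selection_rate(outcomes):
    """Share of positive decisions (e.g. 1 = 'shortlisted')."""
    return sum(outcomes) / len(outcomes)

def disparate_impact(group_a, group_b):
    """Ratio of the lower selection rate to the higher one (1.0 = parity)."""
    rate_a, rate_b = selection_rate(group_a), selection_rate(group_b)
    lo, hi = sorted((rate_a, rate_b))
    return lo / hi if hi > 0 else 1.0

# Hypothetical outcomes of an AI recruitment screen, split by gender.
women = [1, 0, 0, 1, 0, 0, 0, 0, 1, 0]   # selection rate 0.3
men   = [1, 1, 0, 1, 0, 1, 0, 1, 0, 1]   # selection rate 0.6

ratio = disparate_impact(women, men)
print(f"disparate impact ratio: {ratio:.2f}")

# A ratio below the (US-derived) "four-fifths" benchmark of 0.8 would
# typically flag the system for closer scrutiny and mitigation measures.
print("flag for review" if ratio < 0.8 else "no flag")
```

In practice an audit would cover many more metrics (error rates per group, intersectional breakdowns) and, as noted above, would ideally be run by an independent third party.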

Transparency, explainability as well as knowledge of AI decisions are intricately linked together as a first step in building a non-discrimination claim. High-risk systems need to comply with specific requirements (Art. 8), such as establishing a risk management system (Art. 9), ensuring appropriate data and data governance structures (Art. 10), drawing up technical documentation (Art. 11), keeping records of the AI system’s processes and decisions (Art. 12), complying with certain transparency requirements (Art. 13) and providing human oversight (Art. 14).

Finally, the individual complaint in Art. 85 is an important complement to public enforcement, the application of which is ensured via fines (Art. 99). This is further reinforced by a right to explanation of individual decision-making (Art. 86(1)) given to "any affected person subject to a decision which is taken by the deployer on the basis of the output from a high-risk AI system listed in Annex III". The decision needs to produce legal effects or similarly significantly affect the individual who considers their fundamental rights to have been adversely impacted. Art. 86(1) ensures the right to request from the deployer clear and meaningful explanations of the role of the AI system in the decision-making procedure and the main elements of the decision taken.Footnote 22

However, if AI systems are not high-risk, then voluntary rules could be a way to ensure compliance with the help of codes of conduct. The AI Act suggests the following:

"Providers of non-high-risk AI systems should be encouraged to create codes of conduct, including related governance mechanisms, intended to foster the voluntary application of some or all of the mandatory requirements applicable to high-risk AI systems, adapted in light of the intended purpose of the systems and the lower risk involved and taking into account the available technical solutions and industry best practices such as model and data cards." Footnote 23

For such low-risk AI systems, technical solutions and industry best practices are suggested, notably modelFootnote 24 and data cardsFootnote 25 which give an insight into the functioning of the AI system. Looking at examples of model or data cards reveals, however, that they often provide little information in relation to biases and discrimination risks.
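By way of illustration, a minimal model card can be sketched as a simple structured object. The field names and content below are hypothetical assumptions, since neither the AI Act nor industry practice prescribes a single schema; the "bias_and_fairness" section illustrates the kind of information that, as noted above, existing cards often leave thin.

```python
# Illustrative sketch of a model card as a plain Python dictionary.
# All names, figures and findings are hypothetical.
model_card = {
    "model_details": {
        "name": "cv-screening-model",   # hypothetical model name
        "version": "1.0",
        "intended_use": "Ranking job applications for human review.",
    },
    "training_data": {
        "source": "Historical hiring decisions, 2015-2023 (hypothetical).",
        "known_gaps": "Women under-represented in technical roles.",
    },
    # The section that existing cards often leave thin:
    "bias_and_fairness": {
        "metrics_evaluated": ["selection rate by gender", "disparate impact"],
        "findings": "Disparate impact ratio 0.78 on held-out data (hypothetical).",
        "mitigations": "Re-weighting of training data; ongoing monitoring.",
    },
    "limitations": "Not evaluated for intersectional bias.",
}

# A reviewer (or regulator) can quickly check whether bias documentation
# exists at all before assessing its substance:
print("bias documented:", "bias_and_fairness" in model_card)
```

The point of such a structure is that the absence of a bias section becomes immediately visible, which voluntary codes of conduct could exploit.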

The AI Act further recommends:

"Providers and, as appropriate, deployers of all AI systems, high-risk or not, and models should also be encouraged to apply on a voluntary basis additional requirements related, for example, to the elements of the European ethic guidelines for trustworthy AI, environmental sustainability, AI literacy measures, inclusive and diverse design and development of AI systems, including attention to vulnerable persons and accessibility to persons with disability, stakeholders’ participation with the involvement as appropriate, of relevant stakeholders such as business and civil society organisations, academia and research organisations, trade unions and consumer protection organisation in the design and development of AI systems, and diversity of the development teams, including gender balance."Footnote 26

Unlike for high-risk AI systems, where obligations are imposed under the AI Act, for low-risk AI systems a number of voluntary requirements are suggested that are particularly relevant from a gender equality perspective. AI literacy, for example, can be an aim that empowers women and girls and leads to a more inclusive and diverse workforce designing and developing AI systems, narrowing a gender employment gap in the AI profession of around 21%.Footnote 27 Calling for inclusive and diverse design is equally a way to encourage AI developers and deployers to achieve gender equality-friendly AI systems. Following the European Ethics guidelines for trustworthy AI could be a good guide in this respect, but including the mandatory requirements of high-risk AI systems would be even better.

2.4 Conclusion

The AI Act addresses some of the issues of relevance to gender equality and non-discrimination and foresees specific obligations for high-risk AI systems and specific tools that enable victims of (algorithmic) discrimination to understand and enforce their rights. While provisions aimed at transparency, information and explainability will put European Union citizens in a better position to understand the decisions of AI systems, other provisions granting concrete rights, such as the right to explanations and the right to complain, equip victims of discrimination with some tools. The combination of private and public enforcement, as well as general provisions on fundamental rights impact assessments and bias audits, could ensure that the principles of gender equality and non-discrimination are protected at both an individual and societal level. The AI Office has a clear role in ensuring respect for these norms but could also serve as a voice to draw attention to gaps, shortcomings, and reform needs when it comes to AI systems and their impact on gender equality. However, while high-risk AI systems benefit from the highest protection against discrimination, the many exceptions and the reduced scope will exclude some AI systems with potentially negative impacts for gender equality, including some that are in fact high-risk, from the safeguards and mandatory compliance obligations of the AI Act.

3 The new AI governance structure, the European AI office and gender equality

In this section, the mission and main tasks of the AI Office of relevance for gender equality will be outlined (see 3.1 below), together with the working methods foreseen for cooperation within the European Commission and other European Union bodies regarding the enforcement of gender equality law (see 3.2) and enforcement at European Union and Member State level (see 3.3).

3.1 The mission and main tasks

The creation of the European Artificial Intelligence Office (AI Office) by Commission Decision of 24 January 2024, which entered into force on 21 February 2024, can be considered as an attempt to give a face to the European AI enforcer.Footnote 28 The Decision highlights in its first recital that while AI has positive social benefits, "AI can generate risks and cause harm to public interests and fundamental rights that are protected by Union law." In other words, it sets out from the outset the terrain on which the mission of the AI Office is to be conducted.

The AI Office will be part of the administrative structure of the Directorate-General for Communication Networks, Content and Technology (Art. 1 of the Decision). Art. 3(47) AI Act also defines the Artificial Intelligence Office as "the Commission’s function of contributing to the implementation, monitoring and supervision of AI systems, general purpose AI models and AI governance". This shows that the AI Office could be seen as a Commission service rather than a completely independent body within the European Union. Its mission and tasks, which consist of enforcing and implementing the AI Act, also include contributing to AI policies within the European Union and at international level, supporting the development of trustworthy AI and monitoring the evolution of AI markets and technologies (Art. 2).

One of the key tasks entrusted to the Office is the specification and drafting of guidance documents. The AI Office is to assist the Commission in the preparation of guidance and guidelines to support the practical implementation of the AI Act and to develop supportive tools, such as standardised protocols and best practices.Footnote 29 Furthermore, it will play a key role in the preparation of Commission decisions and of implementing and delegated acts, and therefore has a substantial influence on the extent to which gender equality and non-discrimination considerations will be reflected. Notably, as new use cases are included by delegated act, the AI Office can closely monitor the situation in relation to gender and the discriminatory impacts of certain AI systems that need to be included, and thereby expand the scope of the AI Act. The same is true for current and future standardisation requests, which play a fundamental role in compliance with the AI Act and which need to incorporate gender equality and non-discrimination principles.Footnote 30

In addition, the AI Office has specific enforcement powers in terms of monitoring and supervision when it comes to general-purpose AI (GPAI) models (Art. 3(63) AI Act). A GPAI model is defined as

"an AI model, including when trained with a large amount of data using self-supervision at scale, that displays significant generality and is capable to competently perform a wide range of distinct tasks regardless of the way the model is placed on the market and that can be integrated into a variety of downstream systems or applications." Footnote 31

The AI Act foresees in Section 2 and Art. 53 specific obligations, notably transparency obligations for providers of GPAI models, while Art. 51 lays down the rules for when a GPAI model is classified as a GPAI model with systemic risk.

3.2 Cooperation on gender equality with the Commission and European Union bodies

One open question concerns cooperation on matters of gender equality considering the competencies of the different Directorates-General (DGs) within the European Commission. While it seems that the AI Office can deliver expertise on AI systems to relevant Commission services, the question is whether Directorates-General will rely on the AI Office for potential cases involving AI systems. In addition, the question of who will take the lead on cases of algorithmic discrimination is not clear. Guidelines on this collaboration would provide some clarity on the competencies and roles of the respective actors in relation to AI systems and issues of non-discrimination.

Cooperation with stakeholders is central to the AI Office and notably Art. 4 requires “conducting regular consultation of stakeholders, including experts from the scientific community and the educational sector, citizens, civil society and social partners, where relevant, to collect input for the performance of its tasks under Article 3(2)”.Footnote 32

Art. 5 establishes cross-sectoral cooperation within the Commission, which includes two main tasks. First, the AI Office is to "work with other relevant Directorate-Generals and services of the Commission in the performance of its tasks pursuant to Article 2, notably with the European Centre for Algorithmic Transparency as regards the evaluation and testing of general-purpose AI models and systems".Footnote 33 Secondly, the AI Office is to "support other relevant Directorate-Generals and services of the Commission with a view to facilitate the use of AI models and systems as transformative tools in the relevant domains of Union policies, as well as to raise awareness about emerging risks". Such risks can include risks to fundamental rights and non-discrimination.

Finally, Art. 6 governs inter-institutional cooperation, enabling the AI Office to cooperate with bodies, offices, and agencies of the Union. One can imagine here for example close collaboration with the Fundamental Rights Agency in relation to fundamental rights, such as gender equality and non-discrimination, and with the European Data Protection Supervisor (EDPS) regarding questions of data protection.

3.3 Enforcing gender equality and non-discrimination in AI at European Union and Member State level

The Artificial Intelligence Act has established new rules on AI governance and foresees a more centralised system of oversight for some aspects of the AI Act, notably for general-purpose AI models. The governance structure is composed of the AI Office (Art. 64); the AI Board (Art. 65), the tasks of which are laid down in Art. 66; and two new advisory bodies, namely the scientific panel of independent experts (Art. 68) and an advisory forum (Art. 67).

The scientific panel of independent experts provides technical advice and input to the AI Office, national market surveillance authorities and the advisory forum (Art. 58a). One of its additional tasks is to provide qualified alerts of systemic risks posed by GPAI models (Art. 68(3)(a)). For its work, the scientific panel can also request, in the framework of a European Commission request for information, any documentation and information necessary for the fulfilment of its tasks in relation to GPAI models. Experts of the scientific panel can also be called upon to conduct evaluations of AI systems (Art. 68(a)).

The advisory forum (Art. 67) provides stakeholder input to the European Commission, the AI Office and the AI Board. In addition, national competent authorities need to be designated (Art. 74).

Finally, public oversight and market surveillance are complemented by the individual right to lodge a complaint (Art. 85) if a natural or legal person has "grounds to consider that there has been an infringement of the provisions of this Regulation". The right to an explanation (Art. 86) complements these rights for persons affected by the decision of an AI system.

In conclusion, it can be observed that while some elements of this system (such as GPAI models) will be subject to European Union supervision, the role of Member States remains central. When it comes to gender equality issues, given the lack of a European Union agency with enforcement powers, enforcement and supervision traditionally lie mainly at Member State level, and European Union enforcement is focused on infringement procedures that address issues of implementation.Footnote 34 When it comes to AI systems and algorithmic discrimination, the cooperation of relevant non-discrimination and AI supervisory bodies will become crucial in order to ensure adequate protection against algorithmic discrimination and negative impacts from AI systems.

4 The international dimension of the AI Act

This section describes the substantive provisions on international cooperation that are relevant in the light of current negotiations at Council of Europe and United Nations level. The question is to what extent gender equality will be reflected in those proposals. Sub-section 4.2 below will shed light on the Council of Europe proposals, whereas sub-section 4.3 will briefly analyse United Nations work on AI and the newly established United Nations AI body and its first interim report.

4.1 Scope and cooperation

The mandate of the AI Office to contribute to international cooperation on AI in relation to innovation and policy is laid down in Art. 7. The contributions of the AI Office are seen in "advocating the responsible stewardship of AI and promoting the Union approach to trustworthy AI".Footnote 35 The Union approach to AI includes the protection of fundamental rights such as gender equality and non-discrimination. The European Union should therefore support this inclusion as regards all international AI proposals. Furthermore, AI regulation and governance, the implementation of international agreements on rules on AI, and the support of Member States are listed as goals for international cooperation.Footnote 36 The inclusion of dedicated goals in relation to international cooperation with third countries and international organisations shows the importance the European Union attaches to a fundamental rights-based approach to AI. The support of Member States regarding the implementation of international agreements shows that the European Union aims for coherence and broad inclusion of its approach to AI regulation across different regulatory proposals. Once the AI Office is operational and has established expertise in AI regulation, the European Union will be able to speak with one voice and share best practices and knowledge of the European Union approach to AI regulation.

4.2 The Council of Europe, AI and gender

The Council of Europe adopted a Framework Convention on AI.Footnote 37 While earlier drafts of the legal text included some substantial wording on gender equality and the principle of non-discrimination, more recent drafts seem to have watered down the content and language on discrimination. In addition, the scope has been reduced to exclude the applicability of the Convention to private parties. The European Union has been trying to align the AI Convention with the AI Act and has argued for the inclusion of private actors in the former’s scope.Footnote 38

In 2023, a study was published that assessed the feasibility of a gender equality and non-discrimination specific legal instrument at Council of Europe level.Footnote 39

The latest available version of the final draft now recalls in its preamble the importance of gender equality and the empowerment of all women. In addition, besides the risks, it also underlines in general the opportunities of AI to protect human rights.Footnote 40 The preamble is also mindful of the principle of equality and non-discrimination including gender equality.Footnote 41 It also expresses

"deep concern that discrimination in digital contexts, particularly those involving artificial intelligence systems, prevent women, [girls/children], and members of other groups from fully enjoying their human rights and fundamental freedoms, which hinders their full, equal and effective participation in economic, social, cultural and political affairs".Footnote 42

The Framework Convention now includes again more gender equality friendly wording in Article 10, entitled “Equality and non-discrimination”, which reads:

"1. Each Party shall adopt or maintain measures with a view to ensuring that activities within the lifecycle of artificial intelligence systems respect equality, including gender equality, and the prohibition of discrimination, as provided under applicable international and domestic law.

2. Each Party undertakes to adopt or maintain measures aimed at overcoming inequalities to achieve fair, just and equitable outcomes, in line with its applicable domestic and international human rights obligations, in relation to activities within the lifecycle of artificial intelligence systems.”

While, from an equality and non-discrimination perspective, it is regrettable that the scope of the Convention does not include private actors (Art. 3(1)), the suggested wording on gender equality and non-discrimination of the Council of Europe is at least in principle slightly more ambitious than that of the European Union AI Act.

4.3 The United Nations, AI and gender

A trend towards international regulation of AI systems has been observed for some time, including issues of gender equality and non-discrimination.Footnote 43 One major development at UN level is the creation of the High-Level Body on AI, which was set up in October 2023 and is composed of 32 experts in AI.Footnote 44

The United Nations AI Body delivered its first interim report, "Governing AI for Humanity", in December 2023, which also included aspects of gender equality. In terms of categorising risks that could be caused by AI, the report clearly identifies discrimination and unfair treatment on the basis of gender as one group of risks. In its preliminary recommendations, more specifically Guiding Principle 1 (AI should be governed inclusively, by and for the benefit of all), the report states that

“affirmative and corrective steps, including access and capacity building, will be needed to address the historical and structural exclusion of certain communities, for instance women and gender diverse actors, from the development, deployment, use, and governance of technology, and to turn digital divides into inclusive digital opportunities.” Footnote 45

Furthermore, a working group on cross-cutting issues was set up to deal with questions of gender.

The United Nations AI Body involves stakeholders and civil society and launched an open consultation on the interim report to gather views and feedback. Within the process of the Global Digital Compact and the Summit of the Future, it remains to be seen what role gender equality and non-discrimination will play in any future AI governance structure at United Nations (UN) level.

More broadly, the United Nations AI Body has given an overview of proposed AI governance functions, ranging from norm elaboration, compliance and accountability to horizon scanning and the building of scientific consensus. The work of the UN AI Body also needs to be seen in the context of the United Nations Pact for the Future, which will be adopted at the Summit of the Future in September 2024.Footnote 46 The Zero Draft embraces gender equality, highlighting that the Pact is steered by the principles of human rights and gender equality, and recalls that human rights cannot be defended if gender equality is not guaranteed.Footnote 47 The Committee on the Elimination of Discrimination against Women (CEDAW) will publish the draft General Recommendation No 40 on the equal and inclusive representation of women in decision-making systems on 8 March 2024. This also addresses the impacts of AI systems on gender equality and is scheduled for adoption at the 89th session of the CEDAW Committee in October 2024.Footnote 48

5 Conclusion

The AI Act is clearly an ambitious legislative framework to regulate AI systems in Europe. Given that the European Union’s approach to AI regulation is based on fundamental rights, the AI Act could have addressed more concretely some of the underlying gender and non-discrimination issues. While it is true that a general AI regulation with specific obligations for AI systems will improve regulatory oversight and might reduce the possibility of biases and discriminatory risks arising within AI systems, the shortcomings of the AI Act in relation to gender will have to be addressed via new instruments specifically designed for gender equality and non-discrimination or by revising existing instruments in the light of new technological developments.Footnote 49 In any revised or newly proposed framework, besides establishing the mandate and tasks of oversight bodies, the role of national equality bodies would need to be defined so as to best assist potential victims of algorithmic discrimination.

The increasing attention being paid to addressing risks of gender inequalities not only at European Union level, but also at Council of Europe and United Nations level as well as in many individual countries, shows the need to develop frameworks that ensure adequate protection against algorithmic discrimination and other negative effects on gender equality caused by AI systems.

Regarding the role of the AI Office in ensuring gender equality and non-discrimination, the Decision establishing the AI Office remains silent, except for mentioning the risks that can be generated by AI and the harm caused to fundamental rights. One can expect close collaboration between European Union bodies and the relevant Directorates-General of the European Commission that are competent for ensuring gender equality and non-discrimination. Shared competences with national enforcement bodies - which could also involve national equality and non-discrimination bodies (notably in light of Art. 77 and the powers contained therein) - seem to be the most pertinent way to foster protection against algorithmic discrimination and other gender inequalities caused by AI systems. A bigger role for the AI Office could arise if specific AI systems, such as GPAI models, are found to be systematically posing risks to fundamental rights, such as gender equality and non-discrimination, on a Europe-wide scale, because this would fall within the competence of the AI Office, which will have at its disposal a range of enforcement powers. Nevertheless, neither the existing substantive nor the procedural enforcement architecture at European Union level is sufficient to detect, protect against and remedy gender-based algorithmic discrimination. Targeting AI systems directly, with specific requirements for high-risk AI systems and an obligation to conduct bias audits or fundamental rights impact assessments, can be a step towards detecting or mitigating some of the risks of gender biases and algorithmic discrimination, but it is not sufficient.
But pending an adequate legislative framework and specific enforcement tools for algorithmic discrimination in European Union non-discrimination law, using the available tools and institutions, and bringing cases of algorithmic discrimination before national courts, is currently the best available option to ensure gender equality in the algorithmic age.