1 Introduction

In March 2023, the Italian Data Protection Authority (Garante per la protezione dei dati personali) suspended the 'triumphal procession' of ChatGPT, the first such attempt among the EU countries, on the ground that this OpenAI tool did not meet the requirements for lawful personal data collection and had no proper age verification system in place for children.Footnote 1 Less than a month later, ChatGPT was unblocked in Italy, but the episode once again stirred up the debate about the proportionality of bans on potentially disruptive innovations, as well as the effectiveness and possible mis- or over-regulation of AI.

Recently, the concept of the 'European digital legal order' seems to have gained more prominence than the overarching concept of the European legal order, of which the former is arguably a modern manifestation. The European legal order traditionally entails a set of fundamental human rights, Rule of Law principles and Democratic values as enshrined in the UN Charter,Footnote 2 the Council of Europe Statute,Footnote 3 the European Convention for the Protection of Human Rights and Fundamental Freedoms (ECHR),Footnote 4 as well as the EU TreatiesFootnote 5 and the Charter of Fundamental Rights of the European Union (EU Charter).Footnote 6 The sustainability of Democratic values, and of the freedoms under the law enshrined in fundamental human rights, derives from maintaining the Rule of Law.Footnote 7 To the extent that the European digital legal order is the manifestation of the European legal order in the modern digital world, the fundamental question of the nature, scope and upholding of fundamental human rights, Rule of Law principles and Democratic values remains. Without disputing the need for digital transformation and its proper regulation, this paper turns its attention to the current status of fundamental principles in the modern setting of democratic societies. This includes a review, in the digital legal order, of fundamental human rights as enshrined in the ECHR and interpreted by the European Court of Human Rights (ECtHR) and, at the same time, as may be developed in the EU Treaties, the EU Charter and the case law of the Court of Justice of the European Union (CJEU), within the framework of Rule of Law principles and the values of European democracy as enshrined in Article 2 TEU.Footnote 8 It is important to emphasise the convergence of the two European fundamental human rights instruments, the ECHR and the EU Charter, as, jointly and severally, they constitute the foundations of the European legal order as far as fundamental human rights are concerned. Across their jurisprudence, both European courts interpreting and preserving fundamental human rights have used similar and/or complementary mechanisms to uphold those rights in Europe, providing prima facie equivalent protection,Footnote 9 whereas these very same rights are the most likely to be affected by AI in a modern setting.Footnote 10 Strengthening the mutual cooperation of the CJEU and the ECtHR could only reinforce the protection afforded to fundamental human rights in AI cases.

While there is no uniform definition of Artificial Intelligence (AI) or Artificial Intelligence Systems (AIS) in the European legal order at large – several attempts have been made to provide 'all-encompassing but change-resistant' definitionsFootnote 11 – the serious impact of AIS on fundamental human rights is no longer in doubt. For this reason, the European Declaration on Digital Rights and Principles for the Digital DecadeFootnote 12 proposes an anthropocentric interaction with such systems. As will be discussed in this paper, remaining human-centred in the field of AI and AIS can become more and more difficult as we move along the path of digitalisation and algorithmisation.

Taking this into account, this paper reviews the regulatory framework of AI and proposes potential new/renewed/modernised rights that should enhance and/or supplement the current catalogue of fundamental human rights, as contained inter alia in the EU Charter and the ECHR. This paper also argues that regulatory standards, especially in relation to AI, should be clearer and not be based on a half-hearted approach or on 'muddling through'.Footnote 13 Suggested wordings for some of these rights and standards are offered below.

2 Technological determinism and the legal order

In the EU, the incredibly detailed, cumbersome and extraterritorial regulations of the last decade are designed to strengthen the foundation of the European legal order so that it can withstand the challenges of the digital age. The core framework of this approach is already formed by the General Data Protection Regulation (GDPR)Footnote 14 and, more recently, by the Digital Markets Act (DMA)Footnote 15 and the Digital Services Act (DSA).Footnote 16 It remains to be seen whether and, if so, how this framework will be supplemented by the widely discussed proposal for an Artificial Intelligence Act (AIA).Footnote 17 With these regulatory tools, the EU and its Member States are trying to achieve the goal of developing and implementing legislation that is thoughtful, effective and progressive, while respecting fundamental human rights and the well-being of societies. These acts represent an overall compromise. First of all, it is a compromise between the requirements of legal principles and norms and the freedom to innovate. On the one hand, any proper regulation should be aimed at protecting fundamental human rights and be consistent with legal certainty. On the other hand, it should not multiply gaps and contradictions in which technologies are allowed to proliferate uncontrolled and could significantly impinge on human rights, fundamental freedoms and legitimate interests. In addition, the final versions of these acts seem to be a compromise not only between the European Parliament, the Council and the European Commission, but also between legislators representing the interests of states and their citizens vis-à-vis businesses representing the industry. Technologies pushed by business with the help inter alia of lobbyingFootnote 18 and innovation may be an almost invisible component in this trade-off, spurring action and contributing to some of the regulation becoming obsolete before it even goes to print. This is especially evident in the legal framework regarding AI. While fierce discussions have been going on about whether a model based on assigning various levels of risk to AIS is good enough, and whether it is right to place technological details in annexes to the act, fresh problems surface, including technologies based on large language models, bringing us closer to generative AI. The development of AI systems probably also brings us closer to turning to technological determinism in its, if not hard, then at least soft version.

Technological determinism claims that technology determines the development of society and, in some extreme manifestations, this concept treats technology as an independent agent. In general, the term refers to the belief that technology is 'a key governing force in society'.Footnote 19 This kind of determinism includes, among other things, the notion that people can – only – adapt to the development of technology, which has its own internal logic.Footnote 20 It is also a view that can be valuable when we consider the social-shaping tendencies of technology.Footnote 21

Besides, technological determinism draws attention to the impact of technology at both the macro and micro levels and suggests that caution about over-determination be taken seriously. One of the reasons for this is 'the fact that many modern technological artefacts and systems are so complicated that no single person, or group of persons, has an overall grasp of them or knows the design in full, which means that the risk of unforeseen consequences of technology increases'.Footnote 22 In the light of the digital dimension now added to almost all human activity – and, as a result, arguably also to human rights – and of the widespread deployment of increasingly sophisticated algorithms, this may be an especially useful approach.

For the purposes of this paper, we propose to consider technological determinism as a trend in which technology largely determines modern society in general and the European legal order in particular. We argue that technologies have already begun to shape the European legal order at large, towards a renewed digital legal order.Footnote 23 As such, the breakthrough technologies of AIS may shift the fundamental pillars of this order, if not alienate them altogether, unless these technologies are integrated 'by design', i.e. at the conception phase and in their subsequent use/refinement/upgrades. Targeted yet all-encompassing influence, profiling and manipulation with the assistance of AI can undermine democracy. Decision-making based on algorithmic recommendations, on a lack of clarity and on the erosion of public debate can be detrimental to the Rule of Law and democratic values. But perhaps the most immediate and visibly devastating effect of AI is on fundamental human rights.Footnote 24

3 AI impact on fundamental human rights

The impact AI has on fundamental human rights can be seen primarily along two lines. Firstly, AI may affect the ideal of human rights in general, through the erosion of value bases and the recourse to technological determinism and a more utilitarian approach to regulation and practice. Secondly, AIS can attack individual rights in overt and covert manners, as will be shown in this paper. Such attacks may affect primarily, but not only, rights enshrined in the EU Charter and the ECHR, such as the rights to respect for private and family life,Footnote 25 to protection of personal data,Footnote 26 to freedom of expression and information,Footnote 27 to freedom of thought, conscience and religion,Footnote 28 to liberty and security,Footnote 29 to a fair trial,Footnote 30 to non-discrimination,Footnote 31 to equality between men and women,Footnote 32 the rights of the childFootnote 33 and/or the principle of no punishment without law.Footnote 34 These must also be seen in the global socio-political context of external factors, crisis situations and shocks involving the use of AI.Footnote 35 A particular feature of the impact of AIS on human rights is what could be referred to as cross-cutting impact, where not one but a number of rights can be affected by the deployment of a particular technology. For example, content moderation algorithms may affect not only freedom of expression, but also freedom of thought, conscience and religion, the right to non-discrimination, equality between men and women, and the rights of the child, since these algorithms in their design and/or use may be invasive and selective, promote the polarisation of opinions and dilute discussions, as well as generally contribute to forming a certain picture of the world among users of digital content. Therefore, when describing the impact of AIS on fundamental human rights, it is not always possible to single out the specific rights affected by these technologies. Thus, the question arises as to how best to prepare and protect them.

The ability of AIS to track users in both the public and the private spheres of life is outstanding. This is so particularly because it is not necessary to use technological artefacts directly to become the object of tracking. Bits of information put into the digital space by others can make it easier to track even non-users, because AI can search, process, combine and analyse those bits with astonishing accuracy, as well as keep track of what people have been interested in and weave it into their online searches, intrusively or more subtly.Footnote 36 For example, algorithms can match a photo to a person who neither took nor posted it online – and who may not even have known that it was taken – and then determine that person's location at a certain time.
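To make the mechanism concrete, the following sketch shows how an embedding-based matcher could link a face in someone else's photo to a known identity and, via the photo's metadata, to a place and time. This is a minimal illustration in Python: the 128-dimensional embeddings, the gallery, the 0.8 threshold and the metadata are all invented for the example, and real systems operate at a vastly larger scale.

```python
# Illustrative sketch only: how an embedding-based matcher could link a
# bystander in someone else's photo to a known identity. All names,
# embeddings and thresholds here are hypothetical.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def match_identity(photo_embedding, gallery, threshold=0.8):
    """Return the best-matching known identity for a face embedding,
    or None if no gallery entry clears the similarity threshold."""
    best_name, best_score = None, threshold
    for name, ref_embedding in gallery.items():
        score = cosine_similarity(photo_embedding, ref_embedding)
        if score > best_score:
            best_name, best_score = name, score
    return best_name

# A photo uploaded by a third party carries metadata its subject never
# chose to share; a match places a non-user at a time and place.
gallery = {"person_a": np.random.rand(128), "person_b": np.random.rand(128)}
face_in_strangers_photo = np.random.rand(128)
photo_metadata = {"gps": (48.8566, 2.3522), "timestamp": "2023-05-01T14:32"}

identity = match_identity(face_in_strangers_photo, gallery)
if identity is not None:
    print(f"{identity} was at {photo_metadata['gps']} at {photo_metadata['timestamp']}")
```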

AI technologies used in public spaces by public authorities can go far beyond what is considered acceptable in a democratic society upholding Rule of Law principles and European values as well as fundamental human rights.Footnote 37 Given the 'progressive datafication of reality', the introduction of AI-based surveillance systems puts the public at an increased risk of power imbalances, whereby public authorities have excessive access to privileged information on individuals' private lives.Footnote 38 When it comes to biometric data, the intrusion into one's private life can be even more seriously invasive. AI may track or process personal biometric data including micro-expressions, tone of voice, heart rate or temperature data. This opens the field not only to an overly accurate picture of how a particular person breathes, moves and lives, but also to planning a very targeted impact on this person if such data is used beyond the goals declared by public authorities.

By the same token, private actors can impact people extremely effectively. For example, fitness bracelets or rings that track heart rate and body temperature provide the companies advertising them with extremely sensitive and intimate information. Such information, once processed by AI, can serve to influence specific people, or impose something on them, by exploiting their personal vulnerabilities. Children may be particularly at risk because their cognitive and socio-emotional skills are growing rapidly and are not yet fully mature.Footnote 39 AIS make it possible to get close to children and influence them even if they do not use social networks but only educational applications.

AIS can easily rank information, choosing what people should see when using search, turning to daily news in the media, visiting websites or simply scrolling through social media feeds. Given that a vast number of people today look for information in digital rather than print form, this opens the door to manipulation by those players who dominate the digital space, especially big tech companies. At the same time, companies do not miss opportunities to present themselves as a neutral party that merely provides access to content – as facilitators. For example, when a Holocaust denial site appeared in the top ten search results for the query 'Jew', Google presented itself as a mere 'transmitter of popular preference', as processed through its algorithms, with no obligation to adhere to social values, claiming that an anti-Semitic site could rise to the top of the search results on the basis of certain algorithms.Footnote 40
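The mechanics of such ranking can be surprisingly simple. The sketch below is a hypothetical illustration rather than any platform's actual algorithm: it ranks items purely by click-through rate, and nothing in the formula distinguishes reliable content from outrage-bait, which is how 'certain algorithms' can push extreme material to the top.

```python
# Minimal sketch of an engagement-only ranker: the ordering reflects
# click behaviour, not the accuracy or social value of the content.
# Items and counts are invented for illustration.
from dataclasses import dataclass

@dataclass
class Item:
    title: str
    clicks: int
    impressions: int

def rank_by_engagement(items):
    # Click-through rate is treated as the sole relevance signal;
    # nothing here distinguishes reliable from harmful content.
    return sorted(items, key=lambda i: i.clicks / max(i.impressions, 1),
                  reverse=True)

feed = [
    Item("Well-sourced explainer", clicks=120, impressions=10_000),
    Item("Outrage-bait conspiracy post", clicks=900, impressions=10_000),
    Item("Official public-health notice", clicks=60, impressions=10_000),
]

for item in rank_by_engagement(feed):
    print(item.title)
# The conspiracy post tops the feed simply because it is clicked more.
```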

Big tech companies claim a degree of power that approaches that of public authorities and effectively become actors in the public sector. At the same time, they try to avoid or minimise public responsibility – the kind of responsibility that high courts or government agencies bear as actors in the public sector and bearers of public power – and even the kind of responsibility that traditional media bear, known as editorial responsibility. Such a lack of responsibility and accountability, coupled with serious powers, is a point of concern, especially when regulation moves slowly. The potential intrusion by AIS, with the help of the companies that develop and maintain them, may be even more threatening than the power over data that was discussed in the early GDPR days. The reasons for this concern include the ability of algorithms to manipulate public opinion relatively easily, their predictive power and their seemingly depersonalised character, which complicates questions of responsibility.Footnote 41

Undoubtedly, there is some positive movement on questions of the responsibility and accountability of AI owners and/or developers. On 24 May 2023, the General Court of the EU dismissed the appeals of Meta Platforms in cases T-451/20 Meta Platforms Ireland v. Commission and T-452/20 Meta Platforms Ireland v. Commission, establishing that the contested decision did meet objectives of general interest recognised by the European Union.Footnote 42 Meta Platforms Ireland Ltd had challenged a request for information that the European Commission had sent, based on suspicions of anticompetitive behaviour in Meta's use of data and in the management of its social network platform, requiring it to provide documents identified by search terms. However, the Court found neither that the disputed request went beyond what was necessary, nor that the virtual data room established for the purpose failed to protect sensitive personal data sufficiently. Similarly, the European Commission found Google to have abused its dominant position in national markets and imposed a penalty of €2.42 billion for its use of algorithms that reduced the ranking of competing services in search results while giving Google's own services a prominent position.Footnote 43

As such, companies that use AI to manipulate information on a wide scale require strategic regulatory decisions like those made for 'very large' digital platforms. In fact, there should be clear and even strict standards that apply both to any company that owns AI (since AI tools can elevate even the smallest and most inconspicuous company to the top of the power hierarchy) and to those companies that, owning platforms and search engines, have a significant impact on societies. At the moment, the standards that apply to very large digital platforms are half-hearted or muddled. In particular, the DSA imposes additional obligations on providers of very large online platforms and search engines, applying the logic that these platforms and search engines must bear obligations proportionate to their societal impact. Yet the concept of active recipients of the service as 'all the recipients actually engaging with the service at least once in a given period of time' – which does not necessarily coincide with that of a registered user of a serviceFootnote 44 – is a rather weak basis for assessing the power and influence of such platforms. Besides, the question arises as to how the unique recipients of the service are to be determined, given that the DSA does not require providers to perform specific tracking of individuals online, but does not prohibit it either.
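To illustrate the ambiguity, the sketch below shows one conceivable way a provider might approximate unique 'active recipients' without keeping persistent profiles: hashing coarse request attributes with a salt that rotates each counting period. The DSA prescribes no such method; the approach, the attributes used and the salt scheme are assumptions for illustration, and the closing comments show why the resulting figure is inherently soft.

```python
# Illustrative sketch of counting 'active recipients' without a
# persistent user profile. This is an assumed method, not anything
# the DSA mandates.
import hashlib

def recipient_token(ip: str, user_agent: str, period_salt: str) -> str:
    # The token is stable within one counting period but unlinkable
    # across periods once the salt is rotated and discarded.
    raw = f"{ip}|{user_agent}|{period_salt}".encode()
    return hashlib.sha256(raw).hexdigest()

period_salt = "2023-06"  # rotated monthly, then discarded
requests = [
    ("203.0.113.5", "Mozilla/5.0"),
    ("203.0.113.5", "Mozilla/5.0"),   # same person, counted once
    ("198.51.100.7", "Mozilla/5.0"),  # shared IP may hide several people
]

seen = set()
for ip, ua in requests:
    seen.add(recipient_token(ip, ua, period_salt))

print(f"estimated active recipients: {len(seen)}")
# The estimate is ambiguous by design: shared devices undercount,
# multiple browsers overcount - exactly the weakness noted above.
```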

On the other hand, big tech companies are using AI on their online platforms to identify and remove content that breaches their terms of service. This means, however, that legitimate content may be flagged or removed.Footnote 45 Cases where legitimate content has been removed include well-known paintings that contain nudity, photographs, and other significant evidence of historical events. These cases also illustrate a problem deeper than a flaw in the AI or a subsequent error by the human content moderators who control takedowns: the governance of human rights issues by companies through corporate policies rather than on the basis of rights-based provisions enshrined in European and national laws.

The increasing interaction with AIS may erode the control that people should retain over their own lives: the more data about people it becomes possible to collect and process, the less of this control remains. As rightly noted: 'The vast amounts of sensitive data required in algorithmic profiling and predictions, central to recommender systems, pose multiple issues regarding individuals' informational privacy'.Footnote 46 Algorithmic predictions not only narrow the scope of some human rights, but also undermine justice when they become part of a judicial process, and democracy and openness when they seem to make public discussion about public decisions unnecessary. The pre-emptive power of AIS makes possible both narrowly targeted and very precise intrusions into the sphere of a specific person's life protected by human rights, and the governing of people algorithmically sorted into groups based on certain of their characteristics. Profiling, for instance, sorts people in a way 'in which mechanisms that generate demarcations become increasingly opaque and incomprehensible for those who are objects of profiling'.Footnote 47 AIS take the characteristics people have, or probably have, and use them to impose goods, services or opinions, as well as to nudge humans towards certain actions or decisions. For example, during the COVID-19 pandemic, both digital and analogue nudges were actively used by many governments to influence people's behaviour effectively, especially regarding maintaining physical distance, wearing medical masks and performing certain hygiene procedures. This stimulated a discussion about the necessity and ethics of nudging in times of crisis.Footnote 48
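A minimal sketch of such algorithmic sorting is given below, using a standard k-means clustering routine from scikit-learn. The features, their values and the meaning of the resulting segments are invented; the point is that people are reduced to feature vectors and partitioned into groups whose boundaries are opaque to those being sorted.

```python
# Minimal sketch of algorithmic sorting: people are reduced to feature
# vectors and partitioned into segments they never see or consent to.
# Features and cluster meanings are invented for illustration.
import numpy as np
from sklearn.cluster import KMeans

# Each row is one person: [age, daily screen hours, spend per month]
people = np.array([
    [19, 6.5, 40.0],
    [22, 7.0, 55.0],
    [45, 1.5, 300.0],
    [51, 2.0, 280.0],
    [33, 4.0, 120.0],
])

labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(people)

# The labels drive nudges, offers or content - but the demarcation
# logic (the cluster boundaries) is opaque to those being sorted.
for person, segment in zip(people, labels):
    print(f"profile {person} -> segment {segment}")
```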

The ways AI developers use data, or particular datasets themselves, can lead to unequal treatment of human beings. If 'structural differences' exist for protected attributes such as gender, ethnic origin or political opinion, the AI can, through its output, discriminate against certain groups or individuals. Examples include a hiring algorithm favouring men over women, an online chatbot becoming racist after a few hours of use, and face recognition systems working better for white people than for people of colour.Footnote 49 When it comes to machine learning, alongside 'performance criteria such as reliability, efficiency, and accuracy, addressing bias should be an integral part of any machine learning application'.Footnote 50 However, eliminating bias is not as easy as technical experts and managers at AI development companies often declare it to be. There is 'an implicit assumption that once we collect enough data, bias will no longer be a problem—an assumption that in general is not justified'.Footnote 51 Bias may be a deep problem because it can reflect not only a poor approach to the data used for AI, but also entrenched social practices, reproducing practices that societies are trying to move away from.
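A short worked example makes the point that volume does not cure bias. The sketch below computes selection rates for a toy hiring model's outputs and their ratio against the 'four-fifths' rule of thumb sometimes used in disparate-impact analysis; the data and the 0.8 threshold are illustrative assumptions, not a statement of what any regulator or the AIA requires.

```python
# Toy demographic-parity check on a hiring model's outputs.
# Data and the four-fifths threshold are illustrative assumptions.
def selection_rate(decisions):
    return sum(decisions) / len(decisions)

# 1 = shortlisted, 0 = rejected, split by a protected attribute
outcomes = {
    "men":   [1, 1, 0, 1, 1, 0, 1, 1],
    "women": [0, 1, 0, 0, 1, 0, 0, 0],
}

rates = {group: selection_rate(d) for group, d in outcomes.items()}
ratio = min(rates.values()) / max(rates.values())

print(rates)                         # {'men': 0.75, 'women': 0.25}
print(f"parity ratio: {ratio:.2f}")  # 0.33, far below the 0.8 rule of thumb
# Collecting more rows of the same skewed data would not move this
# ratio: the bias lies in the historical practice the data records,
# not in its volume.
```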

Biased data (when biased datasets result from historical discrimination in some domains or from a lack of diversity) as well as biased people (when algorithms have been designed specifically to create discriminatory outcomes)Footnote 52 lead to massive violations of the right to non-discrimination. This may include differential treatment based on protected characteristics, such as discrimination and bias-motivated crimes, differentiation, statistical bias and offset from origin.Footnote 53 As AI is deployed in all areas and increasingly used to automate decision-making processes, inequality – as the contrary of 'equality before the law'Footnote 54 – could affect large numbers of people, and disproportionately affect vulnerable groups and marginalised communities, becoming etched into a more technologically advanced future society.

The European Union has stressed the importance for ‘European AI [to be] grounded in our values and fundamental rights such as human dignity and privacy protection’.Footnote 55 To achieve this goal, it is necessary to have a vision of the future with AIS that is inclusive of all stakeholders and scenarios, but clearly adheres to the European values of fundamental human rights and democracy at the core of Rule of Law principles.Footnote 56

4 Vision of the future with AI

The threats posed by AIS to human rights do not – and cannot – mean that we need to abandon AI altogether.Footnote 57 Overall, AI can create efficiency benefits that businesses can use to optimise production, increase its quality, minimise stoppages, optimise transportation logistics, reduce maintenance, provide safer and more effective training and guidance through the use of augmented reality, reduce human error,Footnote 58 and so on. At the same time, it is necessary to bear in mind that we are not discussing some hypothetical distant future: AI already occupies a significant part of the current life of people and societies.

4.1 The dependence on AIS

The dependence of the public sector on the private actors who create, modify, adjust and maintain algorithms could be one scenario with adverse consequences for the pillars of democratic societies: fundamental human rights, the Rule of Law and democratic values in Europe. For example, AI owners may legally refuse to disclose source code, thereby depriving users – including government organisations and institutions that may face emergency situations – of the opportunity to check the algorithmic tool for potential discriminatory vulnerabilities and to investigate security threats as well as technical errors.

Such dependence may be exacerbated by the monopoly position of some AI developers. These monopolists include large online platforms which 'operate at an unprecedented scale' and 'have a[n ever growing] market value of over $400 billion'.Footnote 59 Additionally, these giant companies often acquire smaller companies or startups, effectively eliminating competition and cementing their monopoly. The monopoly position of big tech companies allows them to dictate terms to both governments and users – which means more and more people, since AI affects not only direct users, but also people whose information enters the digital space without their direct participation (indirect users). Moreover, in the future, the impact of AI will extend to those whose information does not enter the digital space at all, making these people or groups invisible and contributing to their digital exclusion from society.

4.2 The effective regulation of AIS

Any vision for AI must include proper and effective regulation, which is an extremely difficult task given the rapidity and unpredictability of the development of these technologies. On the one hand, over-detailed regulation may limit innovation by technology companies, while making it even more difficult for lawmakers to keep a given set of regulations up to date with technological advancement. On the other hand, broader regulation might create loopholes that companies will use, where possible, to prioritise profit over human rights-based approaches. Whether we accept or reject technological determinism, it is clear that AI is an area where legislators, especially in democratic societies, inevitably lag behind.

Many hopes are placed on the transparency of AIS, a requirement well documented at the regional and national levels, urging the explainability of decisions made by AI.Footnote 60 Beyond the Rule of Law principle of transparency and in more practical terms, transparency has been described in many ways. Some claim AI should be open to inspection and evaluation; others that the core idea is reliability; others still that transparency means reporting unexpected behaviour. Most frequently, however, transparency is about making 'decision-making processes accessible to users, so that they can understand and judge how an autonomous system has reached a certain decision.'Footnote 61 The principle of transparency seems too fundamental to be left in the hands of companies and other AI developers; its application requires interpretation and guidance from courts, international organisations and civil society.
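As a concrete reading of that most frequent sense of transparency, the sketch below attaches a per-feature explanation to each decision of a simple linear scoring model. The feature names, weights and threshold are invented; real credit or welfare models are far more complex, which is precisely why independent interpretation and guidance matter.

```python
# Minimal sketch of per-decision transparency for a linear scoring
# model: each feature's contribution to the outcome is reported
# alongside the decision. Names and weights are invented.
weights = {"income": 0.6, "existing_debt": -0.9, "years_employed": 0.4}
bias = -0.2

def decide_with_explanation(applicant: dict, threshold: float = 0.0):
    contributions = {f: weights[f] * applicant[f] for f in weights}
    score = bias + sum(contributions.values())
    decision = "approve" if score >= threshold else "refuse"
    # Exposing per-feature contributions lets the affected person
    # understand and judge how the system reached its decision.
    return decision, score, contributions

decision, score, why = decide_with_explanation(
    {"income": 1.2, "existing_debt": 1.5, "years_employed": 0.5}
)
print(decision, round(score, 2))  # refuse -0.63
for feature, contribution in sorted(why.items(), key=lambda kv: kv[1]):
    print(f"  {feature}: {contribution:+.2f}")
```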

Further deployment and use of AI will exacerbate the issue of responsibility for its actions and decisions or – since AI has not yet reached such a level of development as to be completely independent and self-governing – for those actions and decisions in whose taking and implementation people rely significantly on AIS. The responsibility and role of internet intermediaries has been highlighted in various documents. In particular, the Committee of Ministers of the Council of Europe has observed that states should ensure that fundamental human rights are upheld in the use of such intermediaries. At the EU level, AI deemed 'high risk' under the proposed AIA may be held to a higher standard of liability. Conversely, AI not labelled as high risk should follow suit with 'consumer AI' and be governed by the existing legal framework. Current EU legislation is moving along the path of adapting to modern challenges: in particular, a new version of the Product Liability Directive should contain clear liability rules for certain products, such as software (including AIS) and digital services.Footnote 62

In this sense, it is encouraging that on 14 June 2023 the European Parliament voted to adopt its position for the upcoming AIA, proposing stricter rules following a risk-based approach.Footnote 63 Amendment 27 deserves special attention because it clearly states that AIS 'should make best efforts to respect general principles establishing a high-level framework that promotes a coherent human-centric approach to ethical and trustworthy AI in line with the EU Charter and the values on which the Union is founded'.Footnote 64

4.3 The ‘new’ fundamental rights

A vision of the future with AIS could open the possibility of creating new rights and/or (significantly) changing/upgrading the essence and scope of already existing rights. Introducing new rights may also mean changing their status from rights that apply to certain categories of persons (such as user rights or data subject rights) to fundamental human rights that are of utmost importance to all human beings.

Among such (re-)new(ed) rights could be the 'right not to be subjected to automatic decision-making and automatic processing' in the broadest sense. The beginnings of this right are laid down in Article 22 GDPR (automated individual decision-making, including profiling).Footnote 65 That provision appears to take effect only for 'serious impactful events', without further explanation of what this could entail.Footnote 66 While not disputing that some elements in the decision-making chain should be automated for speed, better analysis and cost-effectiveness, we argue that human withdrawal from semi- or fully automated decision-making is one of the red lines of the European digital public legal order. The new dimension, or broader sense, of the right must include the requirement of a human-centred decision-making process in which a human controls the AI decision and is ultimately responsible for it.
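A minimal sketch of what such a human-centred safeguard could look like in software is given below: automated scoring is permitted, but any decision classed as significant cannot be finalised by the system itself and is routed to a named human reviewer. The impact categories and routing rule are assumptions for illustration, not a reading of Article 22's precise scope.

```python
# Illustrative sketch of the human-in-the-loop safeguard argued for
# above: the system may score, but cannot self-finalise decisions
# with significant effect. Categories and routing are assumptions.
from dataclasses import dataclass

@dataclass
class Decision:
    subject: str
    model_output: str
    impact: str          # "minor" or "significant"
    decided_by: str = "pending"

def finalise(decision: Decision, human_reviewer: str) -> Decision:
    if decision.impact == "significant":
        # A human must confirm or override, and remains ultimately
        # responsible for the outcome.
        decision.decided_by = human_reviewer
    else:
        decision.decided_by = "automated"
    return decision

loan = Decision("applicant-42", model_output="refuse", impact="significant")
print(finalise(loan, human_reviewer="case.officer@example.org"))
```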

Another right that should gain a wider meaning is the 'right to influence one's digital footprint'. Its forerunner is the right to be forgotten, developed in the decisions of the CJEUFootnote 67 and enshrined in Article 17 GDPR (right to erasure or 'right to be forgotten').Footnote 68 In terms of influencing the digital footprint, individuals should have the right to participate in their digital lives in such a way that information is reviewed in accordance with the time passed and its significance to the individual rather than to society. One red line is that this should not provide loopholes for those who seek amnesty for their crimes, against humanity or otherwise, to be erased from history. But it should give people the proper tools to control their image over time, to avoid or put an end to an indelible past endlessly stalking them. This is all the more important as these people and/or their representatives could not, at the time, even imagine that AI tools would be able to find rare and extremely outdated data and associate it with them.Footnote 69
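The review logic proposed here could be operationalised along the following lines: relevance decays with the age of a record and rises with public interest, while matters of historical record are a hard override that can never be de-indexed. The decay formula, the weights and the 0.1 cut-off are invented for illustration.

```python
# Illustrative sketch of the footprint-review logic proposed above:
# old, low-public-interest records become eligible for de-indexing,
# while the red line for matters of historical record is absolute.
# All weights and thresholds are invented.
from datetime import date
from typing import Optional

def eligible_for_deindexing(published: date, public_interest: float,
                            historical_record: bool,
                            today: Optional[date] = None) -> bool:
    if historical_record:
        # Red line: no erasure of, e.g., crimes against humanity.
        return False
    today = today or date.today()
    years_old = (today - published).days / 365.25
    # Relevance decays with age; high public interest keeps items indexed.
    relevance = public_interest / (1.0 + years_old)
    return relevance < 0.1

print(eligible_for_deindexing(date(2005, 3, 1), public_interest=0.2,
                              historical_record=False))  # True: old, minor
print(eligible_for_deindexing(date(1944, 6, 1), public_interest=0.9,
                              historical_record=True))   # False: red line
```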

In addition, the European Commission should consider introducing new rights in the AIA, with the rights enshrined in the EU Charter as a basis, similar to the right to be forgotten in the GDPR. For instance, the Regulation does refer to transparency obligations for AI systems, whereas the magnitude of certain situations, such as medical decisions, merits genuine human contact. The rights proposed elsewhere by the authors of this piece and by others are the 'right not to be manipulated', the 'right to be neutrally informed online'Footnote 70 and the 'right to meaningful human contact'.Footnote 71 The latter is especially important when considering which human activities can be fully automated and which cannot – and, moreover, which human activities can, but should not, be fully automated. Such a right should include the obligation to inform natural persons when they are interacting with an AIS.

Besides, these new rights may include the 'right not to be measured, analysed or coached',Footnote 72 since both states and companies are increasingly resorting to mass surveillance and to collecting the smallest details about people. Such a right could include obligations not to resort to mass surveillance, at least in some places that should remain private, and not to resort to around-the-clock surveillance. In addition, the very legality of mass surveillance must be questioned. The legality of such excessive surveillance was questioned before the ECtHR,Footnote 73 but the Court preferred to focus on the details of surveillance, in particular on the conditions that proper surveillance should meet.

5 Conclusion

Interaction with artificial intelligence systems requires courage and caution at the same time, and in the right doses. It appears that there are at least three things we would need to live with AIS in social harmony: potentially (re-)new(ed) fundamental rights, core values as part of AI design, and an uncompromised regulatory framework on issues of principal importance for the protection of fundamental human rights, the Rule of Law and democratic values. To meet these goals, we suggest enhancing, supplementing and/or expanding the catalogue of (digital) fundamental human rights in the European legal order with such rights, described in this paper and by others, as the 'right not to be subjected to automatic decision-making and automatic processing' (in the broadest sense), the 'right to influence one's digital footprint', the 'right not to be manipulated', the 'right to be neutrally informed online', the 'right to meaningful human contact', and the 'right not to be measured, analysed or coached'.

This paper shows the extent to which fundamental human rights, the Rule of Law and European values based on democracy must be embedded in all areas of the digital legal order, aiming at their effective and meaningful rather than merely formal inclusion. This paper calls for all proposed regulatory standards regarding AIS to be clear and strict, in the sense that they neither put human rights at risk nor drive people and societies away from the fundamental benefits of digitalisation at a cost higher than the benefits of technological developments and innovation. Achievements in the development of AI should not be evaluated from the standpoint of a race between democratic societies and future technologies. Ultimately, we want both: democratic societies based on the Rule of Law and fundamental human rights, in which everyone benefits equally from technologies.