Artificial intelligence is advancing in a remarkable and unstoppable way, even as the technology itself remains under constant development.
AI offers a way of addressing many of the challenges facing today’s societies, but it also carries risks, ranging from security and privacy concerns to ethical dilemmas in decision-making contexts.
Against this background, the European Artificial Intelligence Regulation aims to ensure technological progress that every direct or indirect user can trust.
Regulation (EU) 2024/1689 of 13 June 2024 thus establishes:
- Harmonized rules for the placing on the market, the putting into service and the use of AI systems in the Union;
- Prohibitions of certain AI practices;
- Specific requirements for high-risk AI systems and obligations for operators of such systems;
- Harmonized transparency rules for certain AI systems;
- Harmonized rules for the placing on the market of general-purpose AI models;
- Rules on market monitoring, market surveillance, governance and enforcement;
- Measures to support innovation, with a particular focus on SMEs, including start-ups.
First of all, an “AI system” is any machine-based system designed to operate with varying levels of autonomy, which may be adapted after deployment and which, for explicit or implicit objectives, infers from the input it receives how to generate outputs, such as predictions, content, recommendations or decisions, that can influence physical or virtual environments.
When it comes to the prohibition of certain AI practices, we enter a minefield of subjectivity and indeterminate concepts that is bound to generate considerable controversy.
Take, for example, the use of “real-time” remote biometric identification systems in publicly accessible spaces for law enforcement purposes, a practice prohibited by the Regulation except where the aim is:
a) the targeted search for specific victims of abduction, trafficking in human beings or sexual exploitation of human beings, as well as the search for missing persons;
b) the prevention of a specific, substantial and imminent threat to the life or physical safety of natural persons, or of a genuine and present or genuine and foreseeable threat of a terrorist attack;
c) the localization or identification of a person suspected of having committed a criminal offence, for the purpose of conducting a criminal investigation or prosecution or executing a criminal penalty for any of the offences listed in Annex II and punishable in the Member State concerned by a custodial sentence or detention order for a maximum period of at least four years;
and, in each case, only if the use in question is strictly necessary.
In this regard, the Regulation further specifies that the use of AI systems for the purposes listed in points a) to c) above must serve only to confirm the identity of the specifically targeted individual, and must weigh, on the one hand, the seriousness, likelihood and scale of the harm that would be caused if the system were not used and, on the other, the consequences of its use for the rights and freedoms of all persons affected.
The use of a “real-time” remote biometric identification system in publicly accessible spaces is subject to prior authorization by a judicial authority, or by an independent administrative authority whose decision is binding, of the Member State in which the use is to take place. In urgent situations the system may be used without prior authorization, provided that authorization is requested within 24 hours; if the request is rejected, use must cease immediately and all results must be deleted.
In addition, the Artificial Intelligence Regulation defines four risk levels for AI systems:
- Unacceptable risk: all AI systems considered a threat to people’s safety, livelihoods and rights are prohibited;
- High risk: all high-risk AI systems must undergo a conformity assessment and must allow effective oversight by natural persons throughout the period in which they are in use;
- Transparency risk: essentially the obligation to inform direct or indirect users that they are interacting with an AI system, or to clearly identify content as AI-generated;
- Minimal or no risk: systems in this category pose little or no harm, and no additional rules are imposed on them.
Although these are broad regulatory rules, it should be noted that each Member State must designate or establish at least one notifying authority responsible for setting up and carrying out the procedures for the assessment, designation and notification of conformity assessment bodies, and for their monitoring; a Member State may also decide that this assessment and monitoring be carried out by its national accreditation body.
It will also be up to the Member States to lay down the system of penalties and other enforcement measures, which may include warnings and non-monetary measures, the Regulation establishing from the outset that fines may reach €35,000,000.00 or a percentage of the annual worldwide turnover of the company concerned, whichever is higher.