Article 21 Sep. 2023

EU Artificial Intelligence Act: A General Overview


Introduction

Over the last year, Artificial Intelligence and its perils have arguably been the most publicly discussed of the new technologies. Concerns about its application, however, are hardly new: they have existed since the dawn of the technology, AI being one of science fiction's greatest topoi.
It should come as no surprise, then, that most industrialized countries are beginning to regulate the industry, urged on by several public appeals issued by a significant number of experts in the field.

One such jurisdiction is the European Union, where the first draft of the Artificial Intelligence Act (the "AI Act"), following the adoption of a white paper in February 2020 and several ad-hoc studies, was published by the European Commission in April 2021.
After several procedural steps, including an amended draft proposed by the Council of the EU in November 2022, the AI Act is currently in the middle of the trilogue negotiation procedure, in accordance with the ordinary EU legislative process.
Most recently, on June 14th, the European Parliament approved a large number of amendments to the Commission's first draft, significantly expanding the scope of many provisions.

The AI Act sets out a legal framework for the development, placing on the market, and use of AI products and services in EU countries, structuring a risk-based system that imposes duties on parties developing or deploying AI systems.
This article provides a very general overview of the content of the AI Act, mainly based on the most recent draft mentioned above, proposed by the European Parliament.

The AI Act is divided into twelve Titles. Titles II, III and IV are the most relevant, as they set out the core concepts underlying the whole piece of legislation, each governing practices that pose, from the legislator's perspective, a different level of risk to citizens, in decreasing order. It goes without saying that the higher the risk, the stricter the rules governing the relevant AI practice.

In the following, we will provide a more detailed description of those three "core" Titles, and a brief summary of the other eight (i.e. Titles V to XII).

Title II: Prohibited AI practices

Under Article 5 of the AI Act, certain AI practices are deemed detrimental to a person's safety, livelihood and rights. Such practices include, for example, AI systems (i) deploying manipulative or deceptive techniques, (ii) exploiting people's vulnerabilities, (iii) used for social scoring purposes leading to detrimental or unfavorable treatment, or (iv) using real-time remote biometric identification in publicly accessible spaces. Due to the unacceptable risk entailed, the regulation bans such AI practices.

Title III: High-risk AI systems

According to the AI Act, high-risk AI systems are those used as a safety component of a product, or that are themselves a product covered by EU health and safety legislation, as well as those falling within the areas listed in Annex III to the AI Act.
High-risk AI systems shall comply with the requirements set out under Articles 8 to 15 of the AI Act, including:

  1. the establishment, implementation and maintenance of a risk management system;
  2. data training, validation and testing, as well as data governance;
  3. the drawing-up of technical documentation, before a system is placed on the market, demonstrating that the high-risk AI system complies with the requirements imposed by the regulation;
  4. record keeping, so as to ensure a level of traceability of the AI system's functioning throughout its entire lifetime;
  5. transparency, so as to enable users to interpret the system's output and use it appropriately;
  6. human oversight;
  7. an appropriate level of accuracy, robustness and cybersecurity.

Accordingly, the regulation imposes a range of obligations on providers, deployers, importers, distributors and users of high-risk AI systems.

Providers are required to, inter alia:

  1. ensure that the high-risk systems are compliant with the requirements listed under Articles 8 to 15 of the AI Act;
  2. indicate their name, registered trade name or registered trade mark, and their address and contact information on the high-risk AI system;
  3. draw up and keep the technical documentation of the high-risk AI system;
  4. ensure that the high-risk AI system undergoes the relevant conformity assessment procedure pursuant to Article 43 before it is placed on the market or put into service;
  5. affix the CE marking to the high-risk AI system to indicate conformity with the regulation, in accordance with Article 49;
  6. have in place a quality management system, in the form of written policies, procedures or instructions, ensuring compliance with the regulation;
  7. take the necessary and appropriate corrective actions in the event a high-risk AI system placed on the market or put into service is not compliant with the regulation;
  8. promptly inform the national supervisory authorities of the Member States in which the high-risk AI system has been made available, in the event the system presents a risk within the meaning of Article 65, indicating the nature of the non-compliance and of any corrective measure adopted.

With regard to importers, before a high-risk AI system is placed on the market, they shall ensure that the conformity assessment procedure ex Article 43 has been carried out by the provider, that the technical documentation required under the regulation has been drawn up and, where applicable, that an authorized representative has been appointed.

On the other hand, distributors shall verify, before making a high-risk AI system available on the market, that such system bears the CE conformity marking, is accompanied by the required documentation together with instructions of use, and both the provider and the importer have fulfilled their obligations under the regulation.

Finally, deployers too shall comply with a range of obligations, including, inter alia: the adoption of appropriate technical and organizational measures to ensure that high-risk AI systems are used in accordance with the relevant instructions of use; the implementation of human oversight; monitoring the effectiveness of, as well as adjusting and updating, cybersecurity measures; and the conduct of an assessment of the high-risk AI system's impact in the specific context of use before putting such a system into use.

Title IV: Transparency obligations

According to Article 52 of the AI Act, providers of systems that interact with natural persons (such as chatbots) shall comply with transparency obligations and, in particular, shall ensure that persons exposed thereto are made aware they are interacting with an AI system.

Likewise, users of emotion recognition systems or biometric categorization systems not prohibited under Article 5 shall inform the persons exposed to such systems of their operation, and shall obtain their prior consent to the processing of their biometric and other personal data in accordance with the applicable EU legislation. Also, users of AI systems generating or manipulating text, audio or visual content that would falsely appear to be authentic or truthful (i.e. deep fakes) shall disclose that the content has been artificially generated or manipulated.

The above transparency obligations do not apply to AI systems authorized by law to detect, prevent, investigate and prosecute criminal offences.

Titles V to XII

  • Title V (“Measures in support of Innovation”) is focused on the creation of, and access to, AI regulatory sandboxes, i.e. a controlled environment that facilitates the development, testing and validation of innovative AI systems for a limited time before their placement on the market or putting into service pursuant to a specific plan.
  • Title VI (“Governance”) sets up a governance system at Union and national level, establishing, at national level, supervisory authorities to oversee the implementation of the regulation and, at Union level, a board providing expertise to the national supervisory authorities and the Commission.
  • Title VII (“EU Database for Stand-Alone High-Risk AI Systems”) aims to establish an EU-wide database for stand-alone high-risk AI systems, operated by the Commission and populated with data supplied by the providers of the AI systems.
  • Title VIII (“Post-Market Monitoring, Information Sharing, Market Surveillance”) sets out the post-market monitoring and reporting obligations for providers of AI systems with regard to AI-related incidents and malfunctioning. Additionally, it deals with enforcement and market surveillance, providing that nationally appointed authorities and the Commission will investigate compliance with the regulation, being empowered to do so by Regulation (EU) 2019/1020. Member States may rely on existing sectoral authorities, at national level, for the above.
  • Title IX (“Codes of conduct”) sets out the basis for the creation and implementation of codes of conduct intended to encourage providers of non-high-risk AI systems to voluntarily apply the mandatory requirements for high-risk AI systems listed under Title III of the regulation;
  • Title X (“Confidentiality and penalties”) contains measures to ensure the effective implementation of the regulation through administrative fines of up to Euro 40 million or, if the offender is a company, up to 7% of its total worldwide annual turnover, for violations of the AI Act;
  • Title XI (“Delegation of power and committee procedure”) concerns the exercise of delegation and implementing powers assigned to the Commission to ensure a uniform application of the regulation;
  • Title XII (“Final provisions”) deals with the entry into force and application of the regulation, which will enter into force on the 20th day following that of its publication in the Official Journal of the EU and will apply from 24 months after its entry into force.

Conclusions

Negotiations on the final text of the AI Act between the European institutions – i.e. the European Commission, the European Parliament and the European Council – have already begun and are at an advanced stage.
Once adopted, the AI Act will apply 24 months after its entry into force. Therefore, if an agreement on the final text is reached in the trilogue negotiations later this year, the AI Act will apply in late 2025, at the earliest.
What is certain is that the European Union is the first jurisdiction in the world to have almost definitively adopted legislation that will broadly regulate AI systems, and it would be reasonable to expect other legislators to follow the European path and begin drafting their own AI legislation.
The above may already be relevant to companies operating in AI-related industries since, even from this non-definitive text, much can be gleaned about the concerns and perspective of public authorities.
