Client Alert 14 Feb. 2024

Recent Developments in the EU AI Act


On Friday, February 2nd, representatives from EU member states unanimously voted in favor of advancing the European Union’s Artificial Intelligence Act (“AI Act”) to its next stages. EU member states made changes to the “provisional agreement” (an earlier version of the text agreed on December 8, 2023 by the European Commission), passing a new “compromise text” that moved to European Parliament committees for further approval.

On Tuesday, February 13th, the European Parliament’s Committee on Internal Market and Consumer Protection and the Committee on Civil Liberties, Justice and Home Affairs voted 71-8 in favor of advancing the AI Act to its final stage: a plenary vote of the AI Act by the EU Parliament (currently scheduled for April 10th-11th).

Below we highlight the notable differences between the “provisional agreement” and the new “compromise text.”

Revised Scope: National Security Excluded

The scope of the Act has been slightly altered. The compromise text makes clear that national security is excluded from the scope of the Act. This change aligns the compromise text “more closely with the respective language used in recently agreed legal acts” in the EU, such as the Cyber Resilience Act and the Data Act.

New Definition of an AI System That Sets It Apart from Plain Software

The compromise text now defines an AI system as “a machine-based system designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.” See Art. 3(1).

This definition has been modified to align more closely with the recognized definition used by international organizations working on artificial intelligence, such as the Organization for Economic Co-operation and Development (OECD). The new definition is more tailored and differentiates AI systems from simpler, conventional software.

Prohibited AI Practices are Further Detailed

The compromise text adds detail to the list of AI practices that are always prohibited or prohibited in certain circumstances. Article 5 prohibits real-time biometric identification by law enforcement in public areas, with listed exceptions in Article 5(1)(d). The compromise text now notes safeguards to this provision, including monitoring, oversight measures, and limited reporting obligations at the EU level.

Additional prohibited uses of AI include untargeted scraping of facial images for creating or expanding facial recognition databases, emotion recognition (at the workplace or educational institutions), a limited prohibition of biometric categorization based on certain beliefs or characteristics, and a “limited and targeted” ban on individual predictive policing. The ban on predictive policing covers systems that “assess or predict the risk of a natural person to commit a criminal offense, based solely on the profiling of a natural person or on assessing their personality traits and characteristics.” See Art. 5(1)(da).

Fundamental Rights Impact Assessment

Article 29a(1) of the compromise text includes obligations for certain entities using AI to perform an assessment of the impact on fundamental rights that the use of the system may produce. This provision outlines the specific elements that the assessment must address, including a description of the deployer’s processes in which the high-risk AI system will be used in line with its intended purpose, the categories of natural persons and groups likely to be affected by its use in the specific context, and a description of the implementation of human oversight measures. See Art. 29a(1)(a)-(f).

Testing High-Risk AI Systems in Real World Conditions

The compromise text now includes provisions on testing high-risk AI systems in real world conditions, outside of AI regulatory sandboxes. See Art. 54a-54b. This means that testing high-risk AI systems in real world conditions will be possible, subject to a range of safeguards.

General Purpose AI Models

The compromise text includes new provisions concerning general purpose AI (“GPAI”), i.e., systems that have several possible uses, both intended and unintended by the developers.

Changes include new obligations for providers of GPAI models, which include keeping up-to-date and making available, upon request, technical documentation to the AI Office and national competent authorities. See Art. 52c(1)(a). A “provider” is defined as a person or entity that develops an AI system or a GPAI model and places it on the market or puts the system into service. See Art. 3(2).

There are also new obligations for providers of GPAI models to provide information and documentation to downstream providers. See Art. 52c(1)(b). Downstream providers are providers that integrate an AI model that may have been provided by another entity into a product. See Art. 3(44g).

Providers of GPAI models will be required to adopt a policy to respect EU copyright law, as well as “make publicly available a sufficiently detailed summary” about how the GPAI was trained. See Art. 52c(1)(c).

In addition, providers of GPAI models “presenting systemic risks” will face additional requirements, which include “performing model evaluation, making risk assessments and taking risk mitigation measures, ensuring an adequate level of cybersecurity protection, and reporting serious incidents to the AI Office and national competent authorities.” See Art. 52a. A GPAI model may be classified as a model with systemic risk if it has “high impact capabilities.” A GPAI model can be designated as having “high impact capabilities” either when it reaches a certain benchmark of computational capability or when the Commission gives it such a designation. See Art. 52a(1).

New Compliance Deadlines for AI Systems Already Deployed and Available

The compromise text also provides compliance deadlines for providers or deployers of AI systems that are already on the market or in service.

Public authorities acting as providers or deployers of high-risk AI systems will have 4 years from the entry into application to bring their systems into compliance. See Art. 83(2).

Further, every GPAI model already deployed before the enactment of the AI Act will have a total of 3 years after the date of enactment of the AI Act to be brought into compliance. See Art. 83(3).

Raised Penalties for Non-Compliance

The compromise text raises the penalty for non-compliance with the provisions specifically concerning prohibited AI practices outlined in Article 5 from the higher of €35 million or 6.5% of annual turnover to the higher of €35 million or 7% of annual turnover. See Art. 71.

Further, there are new fines specifically for GPAI providers for non-compliance with certain enforcement measures, such as requests for information. See Art. 72a.

Entry into Application

The compromise text provides for a general 24-month window after ratification for the AI Act to go into effect. See Art. 85. However, there are slightly shorter windows for certain elements to go into effect, such as a 6-month period for certain prohibited uses of AI and a 12-month period for provisions concerning “notifying authorities and notified bodies, governance, general purpose AI models, confidentiality and penalties.” See Art. 85. There is a slightly longer window of 36 months before provisions regarding high-risk AI systems listed in Annex II go into effect.

About Curtis

Curtis, Mallet-Prevost, Colt & Mosle LLP is a leading international law firm. Headquartered in New York, Curtis has 19 offices in the United States, Latin America, Europe, the Middle East and Asia. Curtis represents a wide range of clients, including multinational corporations and financial institutions, governments and state-owned companies, money managers, sovereign wealth funds, family-owned businesses, individuals and entrepreneurs.

For more information about Curtis, please visit our website.

Attorney advertising. The material contained in this Client Alert is only a general review of the subjects covered and does not constitute legal advice. No legal or business decision should be based on its contents.

Please feel free to contact any of the persons listed on the right if you have any questions on this important development.
