On 9 December 2023, trilogue negotiations on the EU’s Artificial Intelligence (“AI”) Act reached a key inflection point, with the European Parliament and Council striking a provisional political agreement.  As we wait for the consolidated legislative text to be finalised and formally approved, below we set out the key points businesses need to know about the political deal and what comes next.

  1. The political agreement on the scope of certain rules is provisional – there is still some way to go before a final consolidated legislative text is ratified.

There is currently no consolidated text of the AI Act reflecting the rules agreed during the trilogue meetings between participating institutions.  According to some reports, technical meetings between experts are expected to start on Tuesday 19 December to hash out details of the text, including the scope of certain rules (e.g., the legal basis for use of AI in biometric surveillance and the rules applicable to major AI systems) and how they will work in practice.[1] In light of criticism following the announcement of the political deal – and renewed warnings that the AI Act risks hampering innovation in the European market – there may still be room for debate over the final terms of the AI Act before ratification.[2] In particular, it has been reported that France, Germany and Italy – which have previously been critical of strict rules on foundation models – may seek alterations to related rules in the final text (which could further delay, or possibly prevent, passage of the law).[3]

The consolidated text will then need to be formally approved by both the European Parliament and Council (which could happen as early as January 2024 if there is political consensus).  Once approved and published in the Official Journal, the AI Act would apply following a transitional period of two years from its entry into force, although it is envisaged that certain rules will come into effect earlier.  In particular, the ban on prohibited AI systems would apply after 6 months and the rules on General Purpose AI would come into force after 12 months.[4] 
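
Although the exact timings remain subject to confirmation (see footnote 4), the reported transitional periods lend themselves to a simple date calculation.  The sketch below is purely illustrative and assumes a hypothetical entry-into-force date; it uses the third-party python-dateutil library for month arithmetic.

```python
# Illustrative only: computes when each set of rules would begin to
# apply, based on the reported transitional periods. The actual
# entry-into-force date is not yet known, so the example is hypothetical.
from datetime import date
from dateutil.relativedelta import relativedelta  # third-party: python-dateutil

def application_dates(entry_into_force: date) -> dict[str, date]:
    return {
        "ban on prohibited AI systems": entry_into_force + relativedelta(months=6),
        "rules on General Purpose AI": entry_into_force + relativedelta(months=12),
        "AI Act generally": entry_into_force + relativedelta(months=24),
    }

# Hypothetical example: entry into force on 1 June 2024.
for rules, starts in application_dates(date(2024, 6, 1)).items():
    print(f"{rules}: applies from {starts:%d %B %Y}")
```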

To bridge the transitional period, the Commission has initiated an AI Pact – a scheme to encourage companies to voluntarily communicate the processes and practices they are putting in place to prepare for compliance and ensure that the design, development and use of AI is trustworthy.[5]

  2. The risk-based approach is maintained under the political agreement.  Certain types of AI systems – including chatbots and deepfakes – will be subject to transparency obligations.

The political agreement reached on 9 December maintains the risk-based approach proposed in the Commission’s original legislative draft.[6]  Under this approach, the majority of AI systems are likely to fall into the category of minimal risk (e.g., AI-enabled recommender systems or spam filters); such AI systems will not be covered by binding rules under the regulation, but their providers may commit to voluntary codes of conduct.  The bulk of the obligations under the AI Act will fall on AI systems classified as high-risk, such as AI deployed in products subject to certain EU health and safety harmonisation legislation (e.g., medical devices) or used in university admissions or grading.[7] A narrow set of AI system applications (e.g., biometric categorisation systems that use sensitive characteristics, AI systems that underpin social scoring or manipulate human behaviour) will be banned outright. Similarly, some uses of biometric identification systems will be prohibited (e.g., real-time remote biometric identification for law enforcement purposes in publicly accessible spaces), with limited exceptions for pre-authorised national security reasons.[8] 

Providers of certain types of popular consumer-facing AI systems (e.g., chatbots) will be subject to specific transparency obligations, such as a requirement to make users aware that they are interacting with a machine. Deepfakes and other AI-generated content will also have to be labelled as such, and users will need to be informed when biometric categorisation or emotion recognition systems are being used. Additionally, providers will have to design systems in a way that ensures any synthetic audio, video, text, images or other content is marked in a machine-readable format, and detectable as artificially generated or manipulated.
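
The Act is not expected to prescribe a particular marking technology, and in practice providers may look to watermarking techniques or provenance standards such as C2PA.  Purely by way of illustration, the sketch below embeds a simple machine-readable provenance flag in PNG metadata using the Pillow library; the tag names are our own invention, not anything mandated by the legislation.

```python
# Minimal sketch, not a compliance mechanism: embeds and reads a
# machine-readable AI-provenance flag in PNG metadata via Pillow.
# The tag names ("ai_generated", "generator") are hypothetical.
from PIL import Image
from PIL.PngImagePlugin import PngInfo

def mark_as_ai_generated(in_path: str, out_path: str, generator: str) -> None:
    """Save a copy of an image carrying a machine-readable provenance tag."""
    image = Image.open(in_path)
    metadata = PngInfo()
    metadata.add_text("ai_generated", "true")
    metadata.add_text("generator", generator)
    image.save(out_path, pnginfo=metadata)

def is_marked_ai_generated(path: str) -> bool:
    """Return True if the image carries the hypothetical provenance tag."""
    with Image.open(path) as image:
        return getattr(image, "text", {}).get("ai_generated") == "true"
```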

  3. All General Purpose AI (“GPAI”) systems and foundation models will be subject to certain copyright-related disclosure obligations.

One of the key areas of scrutiny and debate throughout the trilogue negotiation process has been the extent to which foundation models and generative AI systems should be regulated.  As part of the political deal struck last week, the relevant institutions have agreed to introduce specific guardrails for a new category of “General Purpose AI (GPAI)” systems (and the GPAI models they are based on).[9]  Although there is not yet an official definition of which models fall within the GPAI category, previous proposals suggest that these are systems trained on a large amount of data and capable of performing a wide range of distinct tasks.[10]

Providers of such GPAI systems and models will be required to draw up and make publicly available a “sufficiently detailed summary” about the content used for training such systems or models.  This disclosure obligation was originally introduced in the European Parliament’s compromise text[11] but, unlike the Parliament’s proposal, it no longer refers to a summary of training data “protected under copyright law” – which would have required providers to distinguish between copyright-protected and public domain training materials.[12]  We understand the AI Office will develop a template for such a summary, allowing providers to supply the required summary in narrative form.[13] 

  4. Additional obligations are contemplated for “high-impact” GPAI models that could pose a systemic risk.

Under the provisional political agreement, “very powerful” or “high-impact” GPAI models that could pose systemic risks would be subject to additional binding obligations, including to conduct model evaluations and adversarial testing, assess and mitigate systemic risks, report to the Commission on serious incidents, ensure cybersecurity and report on their energy efficiency.[14]  These new obligations would be operationalised through “codes of practices developed by industry, the scientific community, civil society and other stakeholders together with the Commission”.[15]  It remains to be seen which criteria will be used to determine whether a GPAI model falls within the category of “very powerful” or “high-impact” models – based on prior proposals discussed during the trilogue, we understand that GPAI models which have been trained using an amount of compute exceeding a certain number of floating point operations (“FLOPs”) may be presumed to have high-impact capabilities.[16]
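
To give a sense of the scale implied by a 10^25 FLOPs threshold (see footnote 16), the sketch below applies the common rule-of-thumb that training a dense transformer costs roughly 6 FLOPs per parameter per training token.  The 6 × N × D heuristic is a community approximation, not anything specified by the AI Act itself.

```python
# Illustrative back-of-the-envelope check against the 10^25 FLOPs
# threshold in the compromise proposal. The 6 * N * D approximation
# (roughly 6 FLOPs per parameter per training token) is a common
# heuristic for dense transformers, not a rule set by the AI Act.
HIGH_IMPACT_THRESHOLD_FLOPS = 1e25

def estimated_training_flops(parameters: float, training_tokens: float) -> float:
    """Approximate cumulative training compute for a dense transformer."""
    return 6 * parameters * training_tokens

def presumed_high_impact(parameters: float, training_tokens: float) -> bool:
    return estimated_training_flops(parameters, training_tokens) >= HIGH_IMPACT_THRESHOLD_FLOPS

# Example: a 70B-parameter model trained on 2 trillion tokens uses
# roughly 6 * 7e10 * 2e12 = 8.4e23 FLOPs -- below the threshold.
print(presumed_high_impact(7e10, 2e12))  # False
```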

  5. Open-source AI models might benefit from an exclusion from certain obligations under the AI Act.

According to certain reports, open-source AI models[17] might fall outside the scope of certain rules under the AI Act, although the position is not yet clear.  For example, it was reported that open-source models would still be subject to the copyright-related disclosure provisions and would still be caught by the rules on high-risk or prohibited AI systems where the relevant classification criteria are met.[18]  Whether the reports are accurate, and whether legislators ultimately draw a distinction between open-source and proprietary models in the final text, remains to be seen. 

  6. Companies may face fines of up to €35 million or 7% of global annual turnover for failing to comply with the AI Act rules.

Non-compliance with the rules set forth in the AI Act could expose companies to substantial fines (see the illustrative calculation after the list below):

  • €35 million or 7% of global annual turnover (whichever is higher) for violations with respect to banned AI applications;
  • €15 million or 3% of global annual turnover for violations of other obligations; and
  • €7.5 million or 1.5% of global annual turnover for supplying incorrect information.

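By way of illustration only, the sketch below computes these ceilings for a given global annual turnover.  We assume the “whichever is higher” mechanic stated for banned applications applies to each tier; the final text may differ.

```python
# Illustrative calculation of the maximum fine ceilings described above.
# Assumption: the "whichever is higher" mechanic applies to each tier,
# as stated for banned applications. Figures are in euros.
FINE_TIERS = {
    "banned_ai_application": (35_000_000, 0.07),
    "other_obligation": (15_000_000, 0.03),
    "incorrect_information": (7_500_000, 0.015),
}

def maximum_fine(violation: str, global_annual_turnover: float) -> float:
    """Return the applicable fine ceiling for a given violation type."""
    fixed_cap, turnover_share = FINE_TIERS[violation]
    return max(fixed_cap, turnover_share * global_annual_turnover)

# Example: a company with EUR 2bn global turnover faces a ceiling of
# max(35m, 7% of 2bn) = EUR 140m for a banned-application violation.
print(maximum_fine("banned_ai_application", 2_000_000_000))  # 140000000.0
```
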
The Commission has indicated that “more proportionate caps” are likely to be imposed for SMEs and start-ups.[19] Compliance will be supervised by national authorities, with the new European AI Office overseeing implementation and enforcement of the rules. It appears independent scientific experts will also have a role to play in the regulation of GPAI models by, for example, issuing alerts on systemic risks.

  7. Now is the time to act.

Businesses – in Europe and elsewhere – will need to account for relevant measures under the legislation in their AI initiatives. As the EU pushes forward with complementary measures in this space – such as the new Product Liability Directive (to address liability for digital technologies, including AI-enabled goods and services) and the proposed AI Liability Directive (to introduce uniform rules for civil law claims for damage caused by AI systems)[20] – now is the time to start taking the necessary steps for compliance.


[1] Reuters, EU begins to hash out EU AI Act details starting Tuesday (12 December 2023).

[2] See e.g., trade associations’ responses to the political agreement: Digital Europe, A milestone agreement, but at what cost? Response to the political deal on the EU AI Act and CCIA, AI Act Negotiations Result in Half-Baked EU Deal; More Work Needed, Tech Industry Emphasises.

[3] Financial Times, EU’s new AI Act risks hampering innovation, warns Emmanuel Macron (11 December 2023).

[4] The exact timings for implementation are subject to further confirmation and alignment. However, the Commission press release notes that “the rules on General Purpose AI will apply after 12 months” (see EC, Commission welcomes political agreement on AI Act (9 December 2023)). A report by Reuters, What the EU’s AI Act means for service firm professionals (11 December 2023) also suggests that Thierry Breton (the European Commissioner for the Internal Market) has said that “draft transparency and governance requirements should be published in about 12 months”.

[5] European Commission, AI Pact | Shaping Europe’s digital future.

[6] For further background on the AI Act and this risk-based approach, see our separate blog post here.

[7] Such systems would have to undergo a conformity assessment before being placed on the market, and have a ‘CE’ mark affixed. But providers would also have ongoing obligations to monitor incidents and malfunctions post-deployment; how exactly this will work and be enforced in practice is not altogether clear.

[8] For example, real-time biometric surveillance in public spaces may be permitted under certain conditions and where it is limited to protecting victims of certain crimes, or for the prevention of genuine, foreseeable threats, such as terrorist attacks.

[9] See Europa, Artificial intelligence act: Council and Parliament strike a deal on the first rules for AI in the world (9 December 2023).

[10] Under the most recent available compromise proposal published by POLITICO on 6 December (which we understand to be the basis for the political agreement on GPAIs), General Purpose AI Model is defined as “an AI model, including when trained with a large amount of data using self-supervision at scale, that displays significant generality and is capable to competently perform a wide range of distinct tasks regardless of the way the model is released on the market and that can be integrated into a variety of downstream systems or applications.” The definition and extent to which such systems were to be regulated underwent various iterations during the trilogue process. See Open Future – GPAI Compromise proposal.

[11] See Parliament draft of 14 June 2023, Article 28(b)(4)(c), available here.

[12] According to the language of a compromise proposal published by POLITICO on 6 December – see Open Future – GPAI Compromise proposal.

[13] Ibid.

[14] See European Parliament press release, Artificial Intelligence Act: deal on comprehensive rules for trustworthy AI (9 December 2023).

[15] See European Commission press release, Commission welcomes political agreement on AI Act (9 December 2023).

[16] See e.g., Article A(2) of the compromise proposal published by POLITICO on 6 December: “A general-purpose AI model shall be presumed to have high impact capabilities […] when the cumulative amount of compute used for its training measured in floating point operations (FLOPs) is greater than 10^25”. See Open Future – GPAI Compromise proposal.

[17] Defined under the compromise proposal published by POLITICO on 6 December as “AI models that are made accessible to the public under a free and open-source licence whose parameters, including the weights, the information on the model architecture, and the information on model usage, are made publicly available”. See Open Future – GPAI Compromise proposal.

[18] See e.g., EURACTIV, AI Act: EU policymakers nail down rules on AI models, butt heads on law enforcement.

[19] See European Commission press release, Commission welcomes political agreement on AI Act (9 December 2023).

[20] For further information on the new Product Liability Directive and the proposed AI Liability Directive, see our blog post: Modernising Liability Rules for Products and AI in the Digital Age.