EU AI Act Crosses the Finish Line: What's Actually in the Final Text
On May 21, the Council of the EU gave its final green light to the AI Act. Once it appears in the Official Journal — expected in the next several weeks — the twenty-day clock starts running, and when it runs out the Act will formally be in force. The bulk of its substantive obligations, however, do not bite for another six to thirty-six months. So we have time. What we no longer have is uncertainty about what the obligations look like.
This post is not a comprehensive walk-through. The Act runs to roughly 460 pages with annexes, and serious treatments of it will be book-length. Instead, I want to flag the parts of the final text that meaningfully changed during the trilogue and tell you what surprised me on a careful re-read.
The risk pyramid, mostly intact
The four-tier structure — prohibited, high-risk, limited-risk, minimal-risk — survives intact from the Commission's 2021 proposal. The list of prohibited practices in Article 5 grew during negotiations and now covers eight categories, including untargeted scraping of facial images for facial recognition databases (a clear shot at Clearview AI), emotion inference in workplaces and schools (with limited safety exceptions), and most real-time remote biometric identification in publicly accessible spaces (with carve-outs for serious crime).
Article 5 prohibitions take effect six months after entry into force — so roughly February 2025. That deadline is the one that practitioners should be marking on their calendars now, because it does not require any further regulatory build-out to enforce.
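Because every deadline in the Act keys off the same entry-into-force clock, a minimal date sketch makes the calendar concrete. The Official Journal publication date below is a placeholder assumption; the twenty-day and six-to-thirty-six-month offsets come from the Act itself.

```python
# A minimal timeline sketch. The Official Journal publication date is a
# placeholder assumption; the offsets are the ones the Act itself specifies.
from datetime import date, timedelta

def add_months(d: date, months: int) -> date:
    """Add calendar months, clamping the day to 28 so the result is always valid."""
    years, month_index = divmod(d.month - 1 + months, 12)
    return date(d.year + years, month_index + 1, min(d.day, 28))

oj_publication = date(2024, 7, 12)  # hypothetical publication date
entry_into_force = oj_publication + timedelta(days=20)

milestones = {
    "Entry into force": entry_into_force,
    "Article 5 prohibitions (+6 months)": add_months(entry_into_force, 6),
    "GPAI obligations (+12 months)": add_months(entry_into_force, 12),
    "Most high-risk obligations (+24 months)": add_months(entry_into_force, 24),
    "Article 6(1) product-safety systems (+36 months)": add_months(entry_into_force, 36),
}
for label, d in milestones.items():
    print(f"{label}: {d.isoformat()}")
```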
High-risk: Annex III is what to read
The high-risk regime covers two buckets. The first, under Article 6(1), captures AI systems that are themselves safety components of products already regulated under EU product safety law (medical devices, machinery, toys, vehicles). The second, under Article 6(2) and Annex III, lists eight high-risk use cases independent of any product-safety hook: biometric identification, critical infrastructure, education, employment, access to essential services, law enforcement, migration and border control, and administration of justice.
Most readers should focus on Annex III. That is where the surprises are, and where the scope is broadest. The employment category alone, for example, is broad enough to capture most résumé screening, performance-evaluation, and task-allocation tools used in a workplace context. Compliance for high-risk systems involves a familiar suite of obligations — risk management, data governance, technical documentation, human oversight, accuracy and robustness — but the conformity assessment regime is where the actual enforcement bite lives.
The new GPAI chapter
The most genuinely new piece of the final text is Chapter V on general-purpose AI models. This was added largely in response to the late-2022 generative AI surge and went through several rounds of revisions in trilogue. The result is a two-tier structure:
- All GPAI models: technical documentation, training-data summaries (in a template the AI Office will publish), and a policy for complying with EU copyright law, including honoring text-and-data-mining opt-outs.
- GPAI models with "systemic risk": additional obligations including model evaluations, adversarial testing, serious-incident reporting, and cybersecurity safeguards. The trigger is cumulative training compute above 10^25 floating-point operations, which today captures only a small handful of frontier models (see the back-of-envelope check below).
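To give that threshold a sense of scale, here is a rough back-of-envelope check. It assumes the common heuristic that dense-transformer training compute is roughly 6 × parameters × training tokens; the model sizes are illustrative, not descriptions of any particular system.

```python
# Back-of-envelope check against the Act's 10^25 FLOP systemic-risk threshold.
# Assumes the common heuristic: training compute ~ 6 * parameters * tokens
# (dense transformers). Model figures are illustrative, not real systems.

SYSTEMIC_RISK_THRESHOLD = 1e25  # cumulative training FLOPs

def training_flops(params: float, tokens: float) -> float:
    """Rough training-compute estimate for a dense transformer."""
    return 6 * params * tokens

examples = {
    "70B params on 15T tokens": training_flops(70e9, 15e12),    # ~6.3e24
    "400B params on 15T tokens": training_flops(400e9, 15e12),  # ~3.6e25
}
for label, flops in examples.items():
    verdict = "over" if flops >= SYSTEMIC_RISK_THRESHOLD else "under"
    print(f"{label}: {flops:.1e} FLOPs ({verdict} the threshold)")
```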
The GPAI obligations come into effect twelve months after entry into force. The AI Office is also supposed to facilitate a code of practice, aimed especially at the systemic-risk tier, that providers can adhere to as a way of demonstrating compliance until harmonized standards arrive. How that code is drafted will be one of the most important inside-baseball stories of late 2024.
Enforcement architecture
The Act sits on top of a dual structure. National market surveillance authorities handle high-risk system compliance within their territories. The new AI Office in Brussels, sitting under DG CNECT, handles GPAI model oversight and acts as the secretariat for the AI Board (representatives from each member state). Penalties can reach the higher of €35 million or 7% of global annual turnover for prohibited-practice violations — higher than the GDPR ceiling, deliberately so.
What surprised me on re-read
Three things. First, the fundamental rights impact assessment requirement (Article 27) applies only to specific deployers — public bodies, certain private entities providing public services, and deployers of a few Annex III systems such as credit scoring and life and health insurance pricing — not to all high-risk-system deployers as some earlier drafts suggested. That is a meaningful narrowing.
Second, the Act's interaction with the GDPR is left almost entirely implicit. Where there are conflicts, the GDPR generally wins, but the seams will get litigated. Expect the EDPB to weigh in early.
Third, the extraterritorial reach is broader than people realize. The Act applies not just to providers placing systems on the EU market, but also to providers and deployers established outside the EU where the output of the system is used in the EU. That is a "use of output" test, not a "targeting" test, and it is going to surprise a lot of U.S. providers.
What to do now
For most clients, the immediate action is inventory and classification. Identify which AI systems your organization develops or deploys, and slot each one into the Act's categories. The Annex III categorization exercise is harder than it looks, and getting it right early is much cheaper than getting it wrong and having to retrofit compliance later.
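As a concrete starting point, here is a minimal sketch of what an inventory record might look like. The tier names mirror the Act's four-level structure; the class, fields, and example entries are hypothetical internal conventions, not anything the Act prescribes.

```python
# A minimal inventory-and-classification sketch. RiskTier mirrors the Act's
# four-tier structure; the AISystem fields and example entries are hypothetical.
from dataclasses import dataclass
from enum import Enum, auto

class RiskTier(Enum):
    PROHIBITED = auto()    # Article 5 practices
    HIGH_RISK = auto()     # Article 6 / Annex III
    LIMITED_RISK = auto()  # transparency obligations only
    MINIMAL_RISK = auto()

@dataclass
class AISystem:
    name: str
    role: str        # "provider" or "deployer" under the Act
    use_case: str    # Annex III category, if any
    tier: RiskTier
    notes: str = ""

inventory = [
    AISystem("resume-screener", "deployer", "employment (Annex III, point 4)",
             RiskTier.HIGH_RISK, "flag for conformity-assessment review"),
    AISystem("support-chatbot", "provider", "customer service",
             RiskTier.LIMITED_RISK, "transparency disclosure only"),
]
for system in inventory:
    print(f"{system.name}: {system.tier.name} ({system.use_case})")
```

Even a flat list like this forces the two questions that matter most: which role your organization plays for each system, and which Annex III category, if any, the use case falls under.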
Beyond that, the next thing to watch is the standard-setting work, much of it running through the AI Office. The harmonized standards for conformity assessment, the GPAI training-data summary template, and the systemic-risk code of practice will collectively determine how onerous the compliance regime actually is. Final text or no, the rules of this game are still being written.