Counting Down to February 2: EU AI Act Prohibitions Take Effect

February 2, 2025 is six weeks away. That is when Article 5 of the EU AI Act becomes applicable — the prohibited-practices regime, the first piece of the Act to bite. There is no transition period for Article 5; from February 2, deploying a prohibited system in the EU is a violation. The penalties go up to the higher of €35 million or 7% of global annual turnover, the highest tier in the Act.

This is the post most of our clients have asked us to write. Below, each of the eight prohibitions, with a focus on the practical edges — what is in, what is out, what is genuinely unclear.

1. Subliminal techniques and manipulation (Art. 5(1)(a))

Prohibits AI systems that deploy subliminal techniques beyond a person's consciousness or purposefully manipulative or deceptive techniques, with the objective or effect of materially distorting behavior in a manner that causes or is reasonably likely to cause significant harm.

Practical edge: the "purposefully manipulative or deceptive" language is broad enough to capture aggressive personalization, dark patterns, and persuasion-optimization features in consumer products. Where it stops is unclear. Recital 29 says "ordinary commercial communications" are not covered, but that line will be litigated. Conservative posture: review high-engagement personalization features, especially anything optimizing against vulnerable users.

2. Exploitation of vulnerabilities (Art. 5(1)(b))

Prohibits AI exploiting vulnerabilities of a person or group due to their age, disability, or specific socio-economic situation, with the objective or effect of materially distorting behavior in a manner that causes or is reasonably likely to cause significant harm.

Practical edge: overlaps significantly with consumer protection law. Note "socio-economic situation" — broader than vulnerability frameworks in many U.S. statutes. Predatory-lending-adjacent applications are in the crosshairs.

3. Social scoring (Art. 5(1)(c))

Prohibits AI systems that evaluate or classify natural persons or groups based on social behavior or known, inferred, or predicted personal or personality characteristics, where the resulting score leads to detrimental or unfavorable treatment in social contexts unrelated to the contexts in which the data was originally generated or collected, or treatment that is unjustified or disproportionate.

Practical edge: this is meant to capture China-style social credit systems. The drafters were not subtle about this. Where it gets interesting: cross-context risk scoring is also in scope. A fraud-detection score generated from financial data that ends up affecting employment decisions, for example, is the kind of thing this provision is reaching. Insurance underwriting that pulls broad lifestyle inputs is on the bubble.

4. Predictive policing on individuals (Art. 5(1)(d))

Prohibits AI systems making risk assessments of natural persons solely based on profiling or assessment of personality traits to predict criminal offending. Excepted: AI supporting human assessment based on objective and verifiable facts directly linked to a criminal activity.

Practical edge: "solely based" is doing the load-bearing work. AI-assisted human review is allowed; AI-led prediction is not. The Article 6 list of high-risk law-enforcement AI continues to apply to the systems that fall outside this prohibition — they are heavily regulated, just not banned.

5. Untargeted facial-image scraping (Art. 5(1)(e))

Prohibits AI systems that create or expand facial recognition databases through untargeted scraping of facial images from the internet or CCTV.

Practical edge: this is the Clearview AI provision. Targeted scraping (e.g., for identifying specific individuals subject to legitimate process) is not covered. Practically: any client that has built or considered building a facial-recognition database from public web sources should have done their analysis already. If they have not, they need to.

6. Emotion inference at work and school (Art. 5(1)(f))

Prohibits AI inferring emotions in workplaces and educational institutions, except for medical or safety reasons.

Practical edge: the "emotion" definition in Recital 18 is narrower than common usage — it means basic emotions like happiness, sadness, fear, surprise, etc. Sentiment analysis on text, in some readings, is outside the scope; analysis of facial expressions or voice tones to infer affective state is squarely in. Workplace productivity tools that use webcam-based attention scoring are gone. Call center quality monitoring tools that analyze voice tone for "frustration" indicators are on the bubble.

7. Biometric categorization to infer sensitive attributes (Art. 5(1)(g))

Prohibits AI systems that categorize natural persons based on biometric data to deduce or infer race, political opinions, trade union membership, religious or philosophical beliefs, sex life, or sexual orientation. Exception for the labeling or filtering of lawfully acquired biometric datasets, and for the categorization of biometric data in the law-enforcement context.

Practical edge: straightforward in concept; the work is in identifying which features in commercial systems are de facto biometric categorization even when not labeled as such. Demographic-prediction models built on facial features are squarely in.

8. Real-time remote biometric identification in public for law enforcement (Art. 5(1)(h))

Prohibits real-time RBI in publicly accessible spaces for law enforcement, except for narrowly enumerated cases (targeted search for victims, prevention of imminent threats including terrorist attack, and localization of suspects of serious crimes), each subject to prior authorization by a judicial or independent administrative authority.

Practical edge: the carve-outs are narrower than they look on first read. Member states must enact implementing legislation for the carve-outs to be available, and the Act sets minimum requirements for that legislation. As of December, only a handful of member states have moved on this.

What enforcement will look like

National market surveillance authorities will lead Article 5 enforcement, supported by the AI Office in Brussels. We expect a slow start: member states have until August 2, 2025 to designate those authorities, and the Act's administrative-fine provisions apply from the same date. That is not a grace period (the prohibitions bind from February 2), but the first months are likely to be complaint-driven rather than proactive, and Commission guidance on the prohibitions is expected early in the year.

Action items for the next six weeks

If your inventory work is not done, do it now. The list is short enough that a focused review is feasible (a sketch of what the review record might look like follows the list):

  1. Identify any AI system in your stack that touches an EU end-user.
  2. Map each one against the eight prohibitions above.
  3. For anything that is on the bubble, document the reasoning for non-prohibited classification — this is the record you will want if a national authority comes asking.
  4. For anything on the wrong side of the line: switch it off, replace it, or sequester it from EU users by February 2.
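
For teams that want step 3 to produce an auditable record rather than ad hoc notes, the register can be as simple as one structured entry per system. Here is a minimal sketch in Python, assuming an in-house inventory; every name, field, and status value below is illustrative, not anything the Act prescribes:

    # Illustrative Article 5 review register -- a sketch, not a compliance tool.
    from dataclasses import dataclass, field

    PROHIBITIONS = [
        "5(1)(a) subliminal or manipulative techniques",
        "5(1)(b) exploitation of vulnerabilities",
        "5(1)(c) social scoring",
        "5(1)(d) individual predictive policing",
        "5(1)(e) untargeted facial-image scraping",
        "5(1)(f) emotion inference at work or school",
        "5(1)(g) biometric categorization of sensitive attributes",
        "5(1)(h) real-time remote biometric identification",
    ]

    @dataclass
    class SystemReview:
        system_name: str
        eu_exposure: bool  # step 1: does the system touch an EU end-user?
        # prohibition -> (status, written reasoning); status is "in", "out", or "bubble"
        assessments: dict = field(default_factory=dict)

        def assess(self, prohibition: str, status: str, reasoning: str) -> None:
            # Steps 2 and 3: record the classification and, crucially, the reasoning.
            assert prohibition in PROHIBITIONS and status in {"in", "out", "bubble"}
            self.assessments[prohibition] = (status, reasoning)

        def needs_action(self) -> bool:
            # Step 4: anything classified "in" must be switched off, replaced,
            # or sequestered from EU users before February 2.
            return self.eu_exposure and any(
                status == "in" for status, _ in self.assessments.values()
            )

    review = SystemReview("call-center-qa", eu_exposure=True)
    review.assess(
        PROHIBITIONS[5], "bubble",
        "Voice-tone 'frustration' scoring; arguably emotion inference in a workplace.",
    )
    print(review.needs_action())  # False unless something is classified "in"

The tooling is beside the point; what matters is that every "out" or "on the bubble" classification carries written reasoning you can hand to a market surveillance authority.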

The Act staggers its own application, and the prohibition regime is the only piece that bites in February. Use the early enforcement period to set the right tone with your supervisory authorities. The high-risk regime arrives in August 2026, which feels far away but will arrive sooner than your compliance team expects.