February 2, 2025, was the first date the EU AI Act produced binding legal consequences. Not guidelines. Not recommendations. Not a preparatory period for high-risk systems. A hard prohibition, backed by fines of up to EUR 35 million or 7% of total worldwide annual turnover — whichever is higher. Article 5 of Regulation (EU) 2024/1689 lists eight categories of AI practices that are now unlawful across the European Union.
Ten months later, the picture is clearer than the initial compliance panic suggested. The prohibited practices ban is narrower than many feared. But the provisions on social scoring and real-time biometric identification have real operational implications that extend beyond the obvious cases, and the Commission's interpretation guidance — while still developing — has started to draw lines that matter for enterprise compliance programs.
What Article 5 Prohibits: The Eight Practices in Plain Language
Article 5 enumerates eight categories of AI systems that are prohibited from being placed on the market, put into service, or used within the Union. Here they are, stripped of legislative syntax:
- Subliminal manipulation (Art. 5(1)(a)): AI systems that deploy techniques beyond a person's consciousness to materially distort their behaviour in a way that causes or is reasonably likely to cause significant harm.
- Exploitation of vulnerabilities (Art. 5(1)(b)): AI systems that exploit vulnerabilities related to age, disability, or specific social or economic situations to materially distort behaviour and cause significant harm.
- Social scoring (Art. 5(1)(c)): AI systems that evaluate or classify natural persons or groups based on social behaviour or known, inferred, or predicted personal characteristics, where the resulting score leads to detrimental treatment in unrelated contexts or treatment that is unjustified or disproportionate.
- Individual criminal risk assessment (Art. 5(1)(d)): AI systems that assess the risk of a natural person committing a criminal offence based solely on profiling or personality traits — except when used to augment human assessments based on objective, verifiable facts directly linked to criminal activity.
- Untargeted facial image scraping (Art. 5(1)(e)): AI systems that create or expand facial recognition databases through untargeted scraping of facial images from the internet or CCTV footage.
- Emotion recognition in workplace and education (Art. 5(1)(f)): AI systems that infer emotions of natural persons in workplaces or educational institutions, except where the system is intended for medical or safety purposes.
- Biometric categorisation for sensitive characteristics (Art. 5(1)(g)): AI systems that categorise natural persons based on biometric data to deduce or infer race, political opinions, trade union membership, religious or philosophical beliefs, sex life, or sexual orientation — except for the labelling or filtering of lawfully acquired biometric datasets, or the categorisation of biometric data in the area of law enforcement.
- Real-time remote biometric identification in public spaces (Art. 5(1)(h)): AI systems used for real-time remote biometric identification in publicly accessible spaces for law enforcement purposes — with three narrowly defined exceptions.
The structure is precise. Each prohibition has qualifiers, exceptions, and conditions. This precision is both the strength and the challenge of Article 5: the boundaries are drawn tightly enough that legitimate use cases are preserved, but loosely enough that organisations need genuine legal analysis — not just a checklist — to determine whether their systems fall within scope.
What Changed in Practice on February 2, 2025
The direct operational impact on most enterprises was modest. The majority of the eight prohibited practices describe systems that mainstream European businesses were not deploying. Untargeted facial image scraping was already a GDPR liability. Criminal risk profiling based on personality traits is a niche application. Subliminal manipulation, as defined, requires a level of intent and technique that most commercial AI systems do not approach.
What did change was the compliance obligation to verify. After February 2, every organisation deploying AI systems within the EU needed to have conducted — or be able to demonstrate that it had conducted — an assessment of whether any of its AI applications fell within the Article 5 prohibitions. "We do not think any of our systems are prohibited" is not a compliance position. "We assessed all AI systems in our inventory against Article 5 criteria and documented the results" is.
This distinction is critical. The practical burden of February 2 was less about shutting down prohibited systems and more about establishing the inventory and assessment discipline that the full AI Act will eventually require for high-risk systems. Organisations that treated Article 5 as a forcing function for AI system cataloguing are now better prepared for the August 2026 high-risk obligations. Those that dismissed it as irrelevant lost ten months of preparation time.
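What does a demonstrable assessment look like in practice? Below is a minimal sketch of an inventory record that screens one system against all eight prohibitions. The schema, class names, and fields are illustrative assumptions; the Act prescribes no particular format, only that the assessment be documented and retrievable.

```python
from dataclasses import dataclass, field
from datetime import date
from enum import Enum


class Article5Category(Enum):
    """The eight prohibited practices, Art. 5(1)(a)-(h)."""
    SUBLIMINAL_MANIPULATION = "5(1)(a)"
    VULNERABILITY_EXPLOITATION = "5(1)(b)"
    SOCIAL_SCORING = "5(1)(c)"
    CRIMINAL_RISK_PROFILING = "5(1)(d)"
    FACIAL_IMAGE_SCRAPING = "5(1)(e)"
    EMOTION_RECOGNITION = "5(1)(f)"
    BIOMETRIC_CATEGORISATION = "5(1)(g)"
    REALTIME_REMOTE_BIOMETRIC_ID = "5(1)(h)"


@dataclass
class Article5Screening:
    """One documented screening of one AI system in the inventory.

    Illustrative schema: the AI Act does not mandate a format, only
    that the assessment exists and can be demonstrated.
    """
    system_name: str
    assessed_on: date
    assessor: str
    # A written rationale per category. Recording reasoning even for
    # the obviously inapplicable prohibitions is what separates "we
    # assessed and documented" from "we do not think so".
    rationale: dict[Article5Category, str] = field(default_factory=dict)

    def is_complete(self) -> bool:
        """True only if every prohibition was explicitly considered."""
        return all(cat in self.rationale for cat in Article5Category)
```

The point of the completeness check is procedural: a screening that silently skips a category is exactly the gap a market surveillance authority would probe.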
The Social Scoring Ban: Who Was Actually Affected
Article 5(1)(c) prohibits social scoring: AI systems that evaluate or classify natural persons based on their social behaviour or known, inferred, or predicted personal characteristics, where that evaluation leads to detrimental treatment in contexts unrelated to those in which the data was generated, or to treatment that is unjustified or disproportionate. Unlike the Commission's 2021 proposal, which confined the ban to public authorities, the final text covers private actors as well, although public sector deployments remain the paradigm case.
The immediate targets are obvious: any system resembling China's Social Credit System. No EU Member State was operating anything comparable. But the prohibition reaches further than sovereign-scale scoring programs.
Consider municipal authorities using algorithmic systems to prioritise service delivery based on behavioural data. A housing authority that deprioritises applications based on historical interactions with government services. A benefits agency that scores claimants based on social media activity or neighbourhood-level risk indicators. These are not hypothetical — welfare fraud detection systems in several Member States have been the subject of litigation and political controversy for years. The Dutch SyRI (Systeem Risico Indicatie) case, decided before the AI Act, is the canonical example: a system that aggregated government data to generate risk scores for welfare fraud, struck down by the Hague District Court in 2020 on ECHR grounds.
Article 5(1)(c) codifies and extends the principle from SyRI. Post-February 2, any organisation, public or private, deploying algorithmic scoring systems that aggregate behavioural data and produce consequential decisions must verify that the system does not constitute prohibited social scoring. The assessment is not trivial. The provision requires evaluating whether the treatment is "detrimental," whether it occurs in contexts "unrelated" to those in which the data was generated, and whether it is "unjustified or disproportionate."
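To make the shape of that test concrete, here is a minimal sketch of the statutory structure as it might be encoded in an assessment tool. The field and function names are illustrative assumptions, and each boolean stands in for a reasoned legal judgment, not something a program can compute.

```python
from dataclasses import dataclass


@dataclass
class ScoringUseCase:
    """Documented facts about one algorithmic scoring deployment."""
    detrimental_treatment: bool  # does the score disadvantage anyone?
    cross_context: bool          # treatment in a social context unrelated
                                 # to where the data was generated?
    disproportionate: bool       # unjustified or disproportionate to the
                                 # behaviour and its gravity?


def within_art_5_1_c(case: ScoringUseCase) -> bool:
    """Sketch of the Art. 5(1)(c) structure: detrimental treatment that
    is either cross-context or disproportionate brings a scoring system
    within the prohibition; either limb alone suffices."""
    return case.detrimental_treatment and (
        case.cross_context or case.disproportionate
    )
```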
The extension to private actors is easy to miss because the canonical examples are governmental. A private company operating a cross-context scoring system for its own purposes is now directly within scope, and private companies contracting with governments to build or operate such systems are within scope as providers placing the system on the market.
Real-Time Biometric Identification: The Law Enforcement Exception and Its Limits
The prohibition on real-time remote biometric identification (RBI) in publicly accessible spaces for law enforcement (Art. 5(1)(h)) was the most politically contentious provision in the entire AI Act. The final text reflects that contest: the prohibition exists, but it has three exceptions.
Real-time RBI is permitted for: (i) the targeted search for specific victims of abduction, trafficking in human beings, or sexual exploitation, and the search for missing persons; (ii) the prevention of a specific, substantial, and imminent threat to the life or physical safety of natural persons, or of a genuine and present or genuinely foreseeable threat of a terrorist attack; and (iii) the localisation or identification of persons suspected of certain serious criminal offences listed in Annex II.
Each exception is subject to conditions: prior authorisation by a judicial authority or an independent administrative authority whose decision is binding (or, in duly justified cases of urgency, authorisation requested without undue delay, at the latest within 24 hours), necessity and proportionality assessments, time and geographic limitations, and notification to the relevant market surveillance and data protection authorities. Member States that wish to permit the exceptions must adopt national legislation authorising them. Those that do not adopt such legislation effectively maintain a total ban.
The practical impact is concentrated in law enforcement technology procurement. Any vendor offering real-time facial recognition for public space surveillance must now navigate a framework where the default is prohibition and the exceptions require national legislation, judicial oversight, and documented proportionality assessments. Several Member States have signalled that they will not adopt the exceptions at all, creating a fragmented landscape where the same technology is lawful in one jurisdiction and prohibited in the neighbouring one.
For private sector organisations, the lesson is different from what is often assumed. Article 5(1)(h) prohibits real-time RBI only for law enforcement purposes; commercial uses such as retail analytics, venue access control, or event security are not caught by this prohibition at all. They are instead regulated as high-risk under Annex III, and the underlying processing of biometric data must clear Article 9 GDPR, a combination that makes lawful commercial deployment of real-time facial recognition in publicly accessible spaces very difficult in practice, even if not categorically prohibited by the AI Act.
Emotion Recognition in Workplace and Education: The Practical Impact
Article 5(1)(f) prohibits AI systems that infer emotions in workplaces and educational institutions. This is more operationally relevant than the social scoring or biometric identification prohibitions for most enterprises, because emotion recognition technology has been marketed commercially for hiring, employee engagement, and educational assessment.
The prohibition covers systems that infer emotions — not systems that detect physiological states for safety purposes. An AI system that monitors a truck driver's drowsiness through eye-tracking is not prohibited (it falls under the medical/safety exception). An AI system that analyses a job candidate's facial expressions during a video interview to infer confidence, stress, or enthusiasm is prohibited.
The hiring technology sector was the most directly affected. Several vendors had built products that analysed candidate video interviews using facial analysis, voice tone analysis, or both, claiming to detect personality traits or emotional states. Post-February 2, these systems cannot be used in workplaces within the EU. This extends to educational institutions: systems that monitor student engagement through facial expression analysis or emotional state inference are prohibited.
The boundary between emotion recognition and physiological state detection will continue to generate interpretation questions. A system that detects that a factory worker is fatigued (safety purpose) is permitted. A system that detects that a factory worker is disengaged or frustrated (emotion inference) is prohibited. The intent and design of the system matter, and organisations deploying workplace monitoring technology must document which side of the line their systems fall on.
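Because the line turns on intended purpose and design, the documentation can be structured explicitly. Below is a minimal sketch of such a record; the class names, fields, and the boolean test are illustrative assumptions, not a format required by the Act.

```python
from dataclasses import dataclass
from enum import Enum, auto


class InferenceTarget(Enum):
    """What the monitoring system is designed to infer."""
    PHYSIOLOGICAL_STATE = auto()  # e.g. drowsiness or fatigue via eye-tracking
    EMOTIONAL_STATE = auto()      # e.g. frustration, engagement, enthusiasm


@dataclass
class WorkplaceMonitoringRecord:
    """Documents which side of the Art. 5(1)(f) line a system falls on."""
    system_name: str
    inference_target: InferenceTarget
    medical_or_safety_purpose: bool  # the only exception Art. 5(1)(f) allows
    design_rationale: str            # evidence of the system's intended purpose

    def prohibited_in_workplace(self) -> bool:
        """Sketch of the test: emotion inference at work or in education
        is prohibited unless the system is intended for medical or
        safety reasons; physiological-state detection falls outside the
        prohibition altogether."""
        return (
            self.inference_target is InferenceTarget.EMOTIONAL_STATE
            and not self.medical_or_safety_purpose
        )
```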
Commission Interpretation Guidance: What We Have Learned in Ten Months
The European Commission, through the AI Office established under the Act, has begun providing interpretation guidance, though less formally than many organisations would prefer. The Commission approved non-binding guidelines on prohibited AI practices in February 2025, and the AI Office has published initial guidance documents and responded to stakeholder queries, but as of December 2025, no formal implementing acts or delegated acts specific to Article 5 interpretation have been adopted.
The key clarifications that have emerged, through a combination of official communications and conference presentations by AI Office staff, include:
On the definition of "AI system": The Act uses the OECD-aligned definition. Rule-based systems without machine learning may still qualify if they are designed to operate with varying levels of autonomy. This matters for Article 5 because some prohibited practices could theoretically be achieved with non-ML systems.
On "subliminal techniques": The Commission has indicated that the threshold is high. Standard persuasive design, recommendation algorithms, and personalised content delivery do not constitute subliminal manipulation unless they deploy techniques "beyond a person's consciousness." Dark patterns may raise concerns, but the subliminal manipulation prohibition is not a general dark pattern ban.
On territorial scope: Article 5 applies to AI systems placed on the market, put into service, or used within the Union. A system operated by a non-EU entity that produces outputs affecting natural persons within the EU is within scope. This has implications for global SaaS providers serving EU customers.
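For those providers, the scope question reduces to three triggers, any one of which suffices. A minimal sketch, assuming illustrative field names, shows the structure:

```python
from dataclasses import dataclass


@dataclass
class DeploymentFacts:
    """Facts relevant to an Article 5 territorial-scope screen."""
    placed_on_eu_market: bool     # offered on the Union market?
    put_into_service_in_eu: bool  # deployed for use within the Union?
    output_used_in_eu: bool       # outputs affect persons in the Union?


def within_territorial_scope(facts: DeploymentFacts) -> bool:
    """Sketch of the scope test: any single trigger brings the system
    within scope, regardless of where the provider or deployer sits."""
    return (
        facts.placed_on_eu_market
        or facts.put_into_service_in_eu
        or facts.output_used_in_eu
    )
```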
Enforcement Landscape: Complaints, Investigations, and the AI Office Role
As of early December 2025, no formal enforcement action under Article 5 has resulted in a published decision or fine. This is unsurprising. The AI Office is still building its enforcement capacity, and the national market surveillance authorities designated by Member States under Article 70 are in various stages of operationalisation.
What has happened is less visible but arguably more significant: the AI Office has received complaints and has initiated preliminary assessments. The exact number is not public, but Commission communications reference "ongoing dialogues" with providers whose systems have been flagged as potentially falling within Article 5. Whether these dialogues result in formal proceedings will depend on both the substance of the complaints and the Office's strategic prioritisation.
The enforcement architecture is worth understanding. The AI Office directly supervises general-purpose AI models and acts as the market surveillance authority for AI systems built on a general-purpose AI model where the model and the system come from the same provider. For other AI systems, enforcement responsibility lies with the national market surveillance authorities. This creates a coordination challenge: a prohibited practice deployed via a general-purpose AI system may involve both the AI Office and national authorities, with the attendant jurisdictional complexity.
For compliance teams building their AI governance programs, the practical implication is that enforcement will be complaint-driven in the near term. The AI Office does not have the resources for proactive market surveillance across all 27 Member States. This means that competitors, employees, civil society organisations, and data protection authorities (who may refer AI-related complaints to the AI Office) are the likely triggers for enforcement actions.
Key Takeaways
The prohibited practices ban is narrower than the initial reaction suggested. Most European enterprises were not deploying the types of systems described in Article 5. The direct operational disruption has been limited to specific niches: hiring technology using emotion recognition, certain public sector scoring systems, and law enforcement biometric identification procurement.
The real burden is assessment and documentation. The compliance obligation created by February 2 is the duty to have assessed all AI systems against Article 5 criteria and documented the results. This is an inventory and governance obligation, not just a technology prohibition.
Emotion recognition in hiring and education is the most commercially relevant prohibition. CTOs and compliance leaders should audit any workplace or educational AI that analyses facial expressions, voice tone, or behavioural signals for emotion-related inferences.
Enforcement is building slowly but will be complaint-driven. No fines yet, but the AI Office is receiving complaints and engaging with providers. The first formal enforcement action will set the tone for Article 5 interpretation across the Union.
Article 5 is a preview of August 2026. The organisations that used the prohibited practices ban as a forcing function for AI system inventory and risk assessment are ten months ahead of those that dismissed it. The high-risk obligations arriving in August 2026 will require the same discipline at much greater scale.
