EU AI Act Prohibited Practices
Complete reference on prohibited AI practices under Article 5 of the EU AI Act (Regulation 2024/1689), covering social scoring, real-time biometric identification, emotion recognition in workplaces and education, manipulation and exploitation prohibitions, and enforcement penalties up to EUR 35 million or 7% of turnover. Effective 2 February 2025.
1. Article 5 prohibitions became applicable on 2 February 2025, the earliest enforcement date in the AI Act. All listed practices are categorically banned with no conformity assessment pathway.
2. Manipulation and exploitation prohibitions target subliminal, deceptive, or vulnerability-exploiting AI techniques that materially distort behaviour and cause significant harm.
3. Social scoring, meaning the use of aggregated behavioural data across contexts to determine access to unrelated services, is prohibited. Narrow-purpose scoring systems (credit, fraud) are generally not captured.
4. Emotion recognition in workplaces and educational institutions is banned except for medical or safety purposes. Real-time biometric identification in public spaces is banned with narrow law enforcement exceptions requiring prior judicial or independent administrative authorisation.
5. Penalties for prohibited practices reach EUR 35 million or 7% of global turnover, the highest tier in the AI Act's enforcement framework.
1. The Prohibition Framework Under Article 5
Article 5 of the EU AI Act (Regulation 2024/1689) establishes an absolute prohibition on AI practices that are considered to pose an unacceptable risk to fundamental rights, safety, and democratic values. These prohibitions represent the top tier of the Act's risk-based regulatory pyramid: while high-risk AI systems are permitted subject to stringent compliance requirements, and limited-risk systems face transparency obligations, the practices listed in Article 5 are banned outright. No conformity assessment, risk mitigation, or regulatory approval can authorise these practices — they are categorically impermissible.
The prohibited practices became applicable on 2 February 2025, making them the first provisions of the AI Act to take effect. This early application date reflects the legislature's assessment that these practices pose such fundamental threats to rights and values that they cannot be tolerated even during the transitional period before the Act's other provisions apply. Organisations that were developing, deploying, or using AI systems falling within Article 5 at the time of the Act's entry into force were required to cease these activities by 2 February 2025 or face the Regulation's maximum penalties.
The prohibitions apply to all AI systems placed on the market, put into service, or used in the Union, regardless of whether the provider or deployer is established in the EU. Article 2(1) establishes the broad territorial scope: the Act applies to providers placing on the market or putting into service AI systems in the Union, deployers of AI systems established in or located within the Union, and providers and deployers in third countries where the output produced by the system is used in the Union. This extraterritorial reach means that non-EU companies whose AI systems produce outputs used within the EU are subject to Article 5 prohibitions, paralleling the GDPR's extraterritorial application model.
Article 5 prohibitions became effective on 2 February 2025 — earlier than all other substantive AI Act obligations. Organisations using any AI practice described in Article 5 must have ceased the practice by that date or face fines of up to EUR 35 million or 7% of global annual turnover.
2. Manipulation and Exploitation Prohibitions
Article 5(1)(a) prohibits the placing on the market, putting into service, or use of AI systems that deploy subliminal techniques beyond a person's consciousness, or purposefully manipulative or deceptive techniques, with the objective or the effect of materially distorting the behaviour of a person or group of persons by appreciably impairing their ability to make an informed decision. The distortion must cause that person, or another person, to take a decision they would not otherwise have taken, in a manner that causes or is reasonably likely to cause significant harm. The prohibition targets AI-powered manipulation that operates below the threshold of conscious awareness or through deliberately deceptive means.
Article 5(1)(b) extends the prohibition to AI systems that exploit vulnerabilities of a specific person or group due to their age, disability, or specific social or economic situation, with the objective or effect of materially distorting their behaviour in a manner that causes or is reasonably likely to cause significant harm to that person or another person. This provision specifically protects vulnerable populations from AI-driven exploitation — for example, AI systems that target elderly individuals with manipulative purchasing interfaces, or systems that exploit the cognitive vulnerabilities of children to drive addictive usage patterns.
The scope of these prohibitions requires careful analysis by organisations deploying persuasive AI systems. Recommender systems, personalised advertising engines, dynamic pricing algorithms, and gamification mechanics all involve some degree of behavioural influence. The key thresholds are: subliminal or deceptive technique (not all persuasion is prohibited — only manipulation that operates through subliminal means, beyond conscious awareness, or through purposeful deception), material distortion of behaviour (trivial influence is insufficient — the manipulation must appreciably impair informed decision-making), and significant harm (the distorted decision must cause or be reasonably likely to cause significant harm to the person or another person). Organisations should audit their AI-powered engagement and personalisation systems against these thresholds, documenting the analysis to demonstrate that their systems remain on the permissible side of the line.
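To make the cumulative structure concrete, here is a minimal screening sketch in Python. The class and field names are illustrative shorthand for the statutory elements, not terms defined in the Regulation, and a boolean checklist cannot substitute for the legal analysis each element requires.

```python
from dataclasses import dataclass

@dataclass
class ManipulationScreen:
    """Illustrative record of the three cumulative Article 5(1)(a)-(b) elements."""
    subliminal_or_deceptive_technique: bool  # or exploitation of age, disability, or socio-economic vulnerability
    materially_distorts_behaviour: bool      # appreciably impairs informed decision-making
    significant_harm: bool                   # harm caused or reasonably likely to be caused

    def potentially_prohibited(self) -> bool:
        # All three elements are cumulative: persuasion that fails any
        # one of them falls outside the prohibition.
        return (
            self.subliminal_or_deceptive_technique
            and self.materially_distorts_behaviour
            and self.significant_harm
        )
```

On this reading, a transparent recommender that merely personalises content fails the first element and stays outside the prohibition, whatever its commercial effect.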
3. Social Scoring Prohibition
Article 5(1)(c) prohibits AI systems used for evaluating or classifying natural persons or groups based on their social behaviour or known, inferred, or predicted personal or personality characteristics, where the resulting social score leads to detrimental or unfavourable treatment of persons in social contexts unrelated to the contexts in which the data was originally generated or collected, or where the treatment is unjustified or disproportionate to the social behaviour or its gravity. This provision directly targets social credit scoring systems of the kind implemented in certain jurisdictions outside the EU.
The prohibition has two limbs, and either is sufficient to trigger the ban. The first limb prohibits social scoring that leads to detrimental treatment in contexts unrelated to the data's origin — for example, using social media behaviour to deny access to housing, or using shopping habits to determine insurance eligibility. The second limb prohibits social scoring that leads to treatment unjustified or disproportionate to the behaviour — for example, minor social transgressions leading to severe restrictions on access to services. Both limbs target the use of aggregated behavioural data to create a composite social score that determines an individual's access to rights, opportunities, or services.
For EU organisations, the social scoring prohibition intersects with existing data protection principles. GDPR's purpose limitation principle (Article 5(1)(b)) already restricts the repurposing of personal data collected in one context for unrelated purposes. The AI Act's prohibition goes further by categorically banning the AI system itself, rather than merely restricting the data processing. Organisations operating loyalty programmes, reputation systems, platform trust scores, or employee performance aggregation systems should assess whether their systems could be characterised as social scoring — not all scoring systems are prohibited, but any system that aggregates behavioural data across contexts to produce a score that determines access to services or treatment should be carefully evaluated against Article 5(1)(c).
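The two-limb structure lends itself to a short decision sketch, assuming boolean answers to questions that in practice demand detailed legal analysis; the function name and parameters are illustrative only.

```python
def social_scoring_triggered(
    scores_on_social_behaviour_or_traits: bool,       # gateway: evaluation or classification under Article 5(1)(c)
    detriment_in_unrelated_context: bool,             # limb 1: treatment in contexts unrelated to the data's origin
    treatment_unjustified_or_disproportionate: bool,  # limb 2: disproportionate to the behaviour or its gravity
) -> bool:
    """Either limb alone suffices once the gateway condition is met."""
    if not scores_on_social_behaviour_or_traits:
        # Purpose-specific scoring (credit, fraud) generally does not
        # satisfy the gateway; document that conclusion all the same.
        return False
    return detriment_in_unrelated_context or treatment_unjustified_or_disproportionate
```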
Not all scoring systems are prohibited. Credit scoring for lending decisions, fraud risk scoring, and platform reputation systems for specific services are generally outside Article 5(1)(c) provided they do not aggregate social behaviour across unrelated contexts to produce a general-purpose social score that determines broader access to services.
4. Biometric Identification and Categorisation Restrictions
Article 5(1)(d) prohibits AI systems that perform risk assessments of natural persons to assess or predict the risk of a natural person committing a criminal offence, based solely on the profiling of a natural person or on assessing their personality traits and characteristics. This prohibition does not affect AI systems used to support the human assessment of a person's involvement in criminal activity based on objective and verifiable facts directly linked to the criminal activity — the prohibition targets predictive policing systems that profile individuals based on personality or behavioural characteristics rather than evidence of specific criminal conduct.
Article 5(1)(e) prohibits AI systems that create or expand facial recognition databases through the untargeted scraping of facial images from the internet or CCTV footage. This provision addresses the specific practice of mass-harvesting biometric data without consent or legal basis to build identification databases — a practice that several commercial entities had engaged in prior to the Act's adoption, drawing enforcement action under GDPR from multiple European data protection authorities.
Article 5(1)(f) prohibits AI systems that infer emotions of natural persons in the areas of workplace and educational institutions, except where the system is intended to be placed on the market for medical or safety reasons. Emotion recognition technology in employment contexts — such as analysing job candidates' facial expressions during video interviews or monitoring employees' emotional states during work — is categorically banned. The medical and safety exception is narrow: it covers systems designed to detect, for example, driver fatigue (safety) or patient distress indicators (medical), but does not extend to general workplace productivity monitoring or educational engagement assessment.
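The structure of the emotion recognition ban, a restricted context plus a narrow exception, reduces to a short rule. A sketch assuming simple string labels; whether the medical or safety exception genuinely applies requires a documented assessment, not a label.

```python
def emotion_recognition_prohibited(context: str, purpose: str) -> bool:
    # Article 5(1)(f): banned in workplace and educational contexts unless
    # the system is placed on the market for medical or safety reasons.
    restricted_contexts = {"workplace", "education"}
    narrow_exceptions = {"medical", "safety"}  # productivity or engagement monitoring does not qualify
    return context in restricted_contexts and purpose not in narrow_exceptions
```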
Article 5(1)(h) addresses real-time remote biometric identification (RBI) in publicly accessible spaces for law enforcement purposes. This prohibition is subject to narrowly defined exceptions: real-time RBI may be permitted for the targeted search for specific victims of abduction, trafficking, or sexual exploitation and the search for missing persons, the prevention of a specific, substantial, and imminent threat to life or a genuine and foreseeable terrorist attack, and the localisation or identification of a person suspected of committing specific serious criminal offences listed in Annex II. Even where these exceptions apply, their use requires prior authorisation by a judicial authority or independent administrative authority, is limited in time and scope, and is subject to fundamental rights safeguards. Member States that choose to allow any of these exceptions must adopt specific national legislation.
Emotion recognition AI in workplaces and educational institutions is prohibited outright under Article 5(1)(f), effective 2 February 2025. The only exception is for systems placed on the market for medical or safety reasons. Organisations using emotion AI for hiring, performance monitoring, or student engagement must have discontinued these systems.
5. Enforcement and Penalties
Violations of Article 5 prohibited practices attract the AI Act's maximum penalties under Article 99(3): administrative fines of up to EUR 35 million, or, if the offender is an undertaking, up to 7% of its total worldwide annual turnover for the preceding financial year, whichever is higher. These penalties exceed even the GDPR's maximum fines (EUR 20 million or 4% of turnover) and reflect the legislature's assessment of the severity of the harm caused by prohibited AI practices. For SMEs and startups, the lower of the two figures applies, but the proportionality principle does not reduce the fine to trivial levels — the penalty must remain effective, proportionate, and dissuasive.
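The cap arithmetic can be stated precisely. A sketch of the Article 99(3) and 99(6) rules; the figures are statutory maxima, and an actual fine is set below the cap by reference to the Act's assessment criteria.

```python
def article_5_fine_cap(worldwide_annual_turnover_eur: float, is_sme: bool) -> float:
    """Maximum administrative fine for an Article 5 infringement.

    Article 99(3): up to EUR 35 million or 7% of total worldwide annual
    turnover, whichever is higher; Article 99(6) applies the lower of
    the two figures for SMEs and start-ups.
    """
    fixed_cap = 35_000_000.0
    turnover_cap = worldwide_annual_turnover_eur * 7 / 100
    return min(fixed_cap, turnover_cap) if is_sme else max(fixed_cap, turnover_cap)

# An undertaking with EUR 1 billion turnover faces a cap of EUR 70 million
# (7% exceeds EUR 35 million); an SME with EUR 10 million turnover faces
# a cap of EUR 700,000.
assert article_5_fine_cap(1_000_000_000, is_sme=False) == 70_000_000
assert article_5_fine_cap(10_000_000, is_sme=True) == 700_000
```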
Enforcement responsibility lies with national market surveillance authorities designated by each Member State under Article 70. These authorities have broad investigatory powers including the ability to access AI systems and their documentation, conduct inspections, require information from providers and deployers, and order the withdrawal or recall of non-compliant AI systems from the market. The European AI Office, established within the Commission, provides coordination, guidance, and — for general-purpose AI models — direct enforcement authority. The dual enforcement structure (national authorities for most provisions, the AI Office for GPAI models) creates a matrix that organisations must navigate.
Beyond administrative fines, enforcement may include corrective measures with direct operational impact: orders to cease the prohibited practice immediately, withdrawal of the AI system from the market, destruction or rendering non-functional of the system, and publication of the infringement decision. The reputational consequences of being found to have deployed a prohibited AI practice — particularly manipulation, social scoring, or unlawful biometric surveillance — may exceed the financial penalty in many cases. Organisations should conduct an Article 5 audit across all AI systems in their portfolio, documenting the analysis and maintaining it as a living record that is updated whenever new AI systems are deployed or existing systems are materially modified.
6. Conducting an Article 5 Compliance Assessment
Every organisation that develops, deploys, procures, or uses AI systems should conduct a systematic assessment against Article 5 prohibitions. The assessment is not limited to systems that the organisation considers 'AI' — the AI Act's definition of an AI system in Article 3(1) is broad, encompassing machine-based systems designed to operate with varying levels of autonomy that may exhibit adaptiveness and that infer, from the input received, how to generate outputs such as predictions, content, recommendations, or decisions. Rule-based systems, statistical models, and optimisation algorithms may fall within scope depending on their design characteristics.
Structure the assessment as follows: (1) Inventory all AI systems within the organisation's operational scope, including internally developed, procured, and embedded systems; (2) For each system, assess against every Article 5 prohibition — manipulation, exploitation, social scoring, criminal risk profiling, facial recognition database scraping, emotion recognition in workplaces/education, biometric categorisation based on sensitive attributes, and real-time remote biometric identification in public spaces; (3) Document the analysis for each system, noting whether the prohibition is clearly inapplicable (e.g., the system does not process biometric data), requires deeper assessment (e.g., a personalisation system that may cross the manipulation threshold), or is triggered (in which case the system must be decommissioned); (4) For systems in the grey zone, conduct a detailed legal analysis with reference to the Act's recitals, EDPB and AI Office guidance, and fundamental rights impact assessment.
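The loop this describes can be sketched as a checklist structure. A minimal illustration; the prohibition labels are shorthand, and the assessment itself is human legal review, not code.

```python
from enum import Enum
from typing import Callable

class Article5Conclusion(Enum):
    CLEARLY_INAPPLICABLE = "clearly inapplicable"        # step 3, first outcome
    NEEDS_DEEPER_ASSESSMENT = "needs deeper assessment"  # step 3, grey zone
    PROHIBITION_TRIGGERED = "prohibition triggered"      # step 3, decommission

# Shorthand labels for the Article 5(1) prohibitions screened in step 2.
ARTICLE_5_PROHIBITIONS = [
    "5(1)(a) subliminal or deceptive manipulation",
    "5(1)(b) exploitation of vulnerabilities",
    "5(1)(c) social scoring",
    "5(1)(d) criminal risk prediction based on profiling",
    "5(1)(e) untargeted facial image scraping",
    "5(1)(f) emotion recognition in workplace/education",
    "5(1)(g) biometric categorisation on sensitive attributes",
    "5(1)(h) real-time remote biometric identification",
]

def screen_system(
    system_name: str,
    assess: Callable[[str, str], Article5Conclusion],
) -> dict[str, Article5Conclusion]:
    # 'assess' stands in for the documented human analysis of step 3.
    return {p: assess(system_name, p) for p in ARTICLE_5_PROHIBITIONS}
```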
The Article 5 assessment should be integrated into your AI governance framework as a recurring control. New AI system deployments should undergo Article 5 screening as part of the procurement or development approval process. Existing systems should be reassessed when materially modified — a system that did not initially involve emotion recognition may acquire that capability through a software update. Maintain an Article 5 compliance register that records the assessment date, the assessor, the conclusion, and the supporting rationale for each AI system. This register serves as your primary evidence of compliance should a national authority request it.
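A register entry might look like the following sketch; the schema is illustrative rather than mandated by the Regulation, and the example system and rationale are hypothetical.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class Article5RegisterEntry:
    """One row of the compliance register; field names are illustrative."""
    system_name: str
    assessment_date: date
    assessor: str
    conclusion: str                           # e.g. "clearly inapplicable", "prohibition triggered"
    rationale: str                            # supporting legal and technical analysis
    reassess_on_material_change: bool = True  # re-screen after updates or new use patterns

# Hypothetical entry: emotion AI used in hiring is caught by Article 5(1)(f).
register = [
    Article5RegisterEntry(
        system_name="video-interview-analytics",
        assessment_date=date(2025, 1, 15),
        assessor="AI Governance Team",
        conclusion="prohibition triggered",
        rationale="Infers candidate emotions during interviews; no medical or safety purpose.",
    )
]
```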
Integrate Article 5 screening into your AI procurement and deployment approval process. Every new AI system — whether developed internally, purchased from a vendor, or accessed as a service — should be assessed against the prohibited practices before it goes live.
Are all emotion recognition systems banned under the AI Act?
No. Article 5(1)(f) prohibits emotion recognition systems specifically in workplace and educational institution contexts, except where intended for medical or safety reasons. Emotion recognition in other contexts — such as market research (with appropriate consent), healthcare diagnostics, or automotive safety (driver fatigue detection) — is not categorically prohibited under Article 5, though such systems may be classified as high-risk under Annex III and subject to corresponding obligations. The prohibition is context-specific, not technology-wide.
Does the social scoring prohibition affect credit scoring and fraud detection?
Generally, no. Article 5(1)(c) prohibits social scoring that aggregates social behaviour across contexts to produce a general-purpose score that determines access to unrelated services or leads to treatment disproportionate to the behaviour. Credit scoring that assesses an individual's creditworthiness for lending purposes based on financial data is a purpose-specific assessment, not a general social score. Similarly, fraud risk scoring within a specific service context is not social scoring. However, a system that combined financial behaviour, social media activity, movement patterns, and purchase history into a composite score used to determine eligibility across multiple unrelated services would likely fall within the prohibition.
Can law enforcement use real-time biometric identification at all?
Under strictly limited conditions. Article 5(1)(h) prohibits real-time remote biometric identification (RBI) in publicly accessible spaces for law enforcement, but the provision itself carves out three purposes: searching for specific victims of abduction, trafficking, or sexual exploitation, and searching for missing persons; preventing a specific, substantial, and imminent threat to life or a genuine and foreseeable terrorist attack; and locating or identifying suspects of serious criminal offences listed in Annex II. Under Articles 5(2) and 5(3), each use requires prior judicial or independent administrative authorisation, must be limited in time, geographic scope, and the individuals searched for, and must comply with fundamental rights safeguards. Member States must adopt specific national legislation to permit any of these exceptions; they are not automatically available.
How do the AI Act prohibited practices interact with GDPR?
The AI Act and GDPR apply cumulatively. An AI practice that is prohibited under Article 5 of the AI Act would also likely violate multiple GDPR provisions — for example, social scoring may violate purpose limitation (Article 5(1)(b)), mass facial recognition scraping violates lawfulness (Article 6) and specific conditions for biometric data (Article 9), and manipulative AI systems may violate the fairness principle (Article 5(1)(a)). The AI Act prohibition adds a categorical ban on the AI system itself, beyond the GDPR's restrictions on the underlying data processing. Enforcement may proceed under both frameworks simultaneously, with data protection authorities enforcing GDPR and market surveillance authorities enforcing the AI Act.
What should we do if we discover a system that may fall under Article 5?
Cease use immediately pending a thorough legal assessment. The prohibitions have been in force since 2 February 2025, and continued use of a prohibited AI system exposes the organisation to the Act's maximum penalties. Commission a detailed analysis by qualified legal and technical experts, assessing the system against the specific elements of the applicable Article 5 provision. If the assessment confirms the prohibition applies, decommission the system, document the decommissioning, and assess whether any data collected by the system was unlawfully obtained (which may trigger GDPR obligations including data deletion). If the assessment concludes the system falls outside the prohibition, document the analysis comprehensively and implement monitoring to ensure the system does not drift into prohibited territory through updates or evolving use patterns.