
Prohibited AI Practices Under Article 5 of the EU AI Act

12 min read · Updated 2026-03-12

Overview: Why Certain AI Practices Are Banned

Article 5 of the EU AI Act establishes an absolute boundary: certain AI practices are deemed to pose such an unacceptable risk to fundamental rights, safety, and democratic values that they cannot be permitted under any regulatory conditions. Unlike high-risk systems — which may be placed on the market subject to conformity assessment — prohibited practices are banned outright. No conformity assessment, mitigation measure, or regulatory sandbox can authorise their deployment.

The philosophical foundation for these prohibitions is rooted in the EU Charter of Fundamental Rights and the values enshrined in Article 2 TEU: human dignity, freedom, democracy, equality, the rule of law, and respect for human rights. Recital 28 of the AI Act emphasises that certain AI techniques have the potential to manipulate persons through subliminal techniques beyond their consciousness, or exploit vulnerabilities of specific groups in ways that are incompatible with these fundamental values.

The prohibitions became applicable on 2 February 2025 — six months after the Act's entry into force — making them the earliest-enforced provisions. This accelerated timeline reflects the urgency: if these practices are fundamentally incompatible with EU values, waiting 24 months for the main provisions would be unconscionable. Violations attract the highest penalty tier: up to EUR 35 million or 7% of total worldwide annual turnover, whichever is higher.

Organisations should note that the prohibition is on 'placing on the market, putting into service, or using' such AI systems within the EU. This covers providers (who develop and offer the system), deployers (who use the system), and extends to systems developed outside the EU but used within its territory. The extraterritorial reach ensures that non-EU providers cannot circumvent the prohibitions by establishing outside the Union.

Art. 5
Art. 99(3)

The Seven Categories of Prohibited Practices

Article 5 prohibits the following AI practices within the EU, each reflecting a distinct threat to fundamental rights.

1. Subliminal manipulation (Art. 5(1)(a)): AI systems that deploy subliminal techniques beyond a person's consciousness, or purposefully manipulative or deceptive techniques, with the objective or effect of materially distorting a person's behaviour in a manner that causes or is reasonably likely to cause that person or another person significant harm. The key elements are that the technique operates below conscious awareness and results in behaviour distortion causing significant harm. Standard advertising, persuasive design, or recommendation algorithms do not fall within this prohibition unless they employ truly subliminal or deceptive techniques causing demonstrable harm.

2. Exploitation of vulnerabilities (Art. 5(1)(b)): AI systems that exploit any of the vulnerabilities of a person or a specific group of persons due to their age, disability, or a specific social or economic situation, with the objective or effect of materially distorting their behaviour in a manner likely to cause significant harm. This targets AI systems designed to exploit cognitive or situational vulnerabilities — for example, predatory lending algorithms targeting elderly persons or gambling systems exploiting addictive behaviours in vulnerable populations.

3. Social scoring (Art. 5(1)(c)): AI systems for the evaluation or classification of natural persons or groups based on their social behaviour or known, inferred, or predicted personal or personality characteristics, with the social score leading to detrimental or unfavourable treatment in social contexts unrelated to those in which the data was generated, or treatment that is unjustified or disproportionate. This prohibition addresses government and private social scoring systems that create pervasive, context-crossing profiles affecting access to services, social standing, or rights.

4. Predictive policing (Art. 5(1)(d)): AI systems making risk assessments of natural persons in order to assess or predict the risk of a natural person committing a criminal offence, based solely on the profiling of a natural person or on assessing their personality traits and characteristics. The prohibition is nuanced: it bans individual-level predictive policing based solely on profiling, but does not prohibit AI systems that support human assessment based on objective, verifiable facts directly linked to criminal activity, or crime analytics at aggregate levels.

5. Untargeted facial recognition scraping (Art. 5(1)(e)): AI systems that create or expand facial recognition databases through the untargeted scraping of facial images from the internet or CCTV footage. This targets services like Clearview AI that have compiled massive facial recognition databases by indiscriminately scraping publicly available images without consent.

6. Emotion recognition in sensitive contexts (Art. 5(1)(f)): AI systems used to infer the emotions of a natural person in the areas of the workplace and education, except where the AI system is intended for medical or safety reasons. This prohibition recognises that workplace and educational emotion monitoring creates chilling effects on freedom of expression and can be used for discriminatory surveillance. Medical uses (e.g., detecting pain in non-communicative patients) and safety uses (e.g., driver drowsiness detection) are explicitly exempted.

7. Real-time remote biometric identification in public spaces (Art. 5(1)(h)): AI systems for real-time remote biometric identification of natural persons in publicly accessible spaces for the purposes of law enforcement, except in exhaustively listed and narrowly interpreted situations: targeted search for specific victims (abduction, trafficking, sexual exploitation), prevention of a specific, substantial, and imminent threat to life or a foreseeable terrorist attack, or identification of a suspect in serious criminal offences. Even these exceptions require prior judicial or administrative authorisation and are subject to strict proportionality assessments.
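The categories above can be captured as a small reference structure for compliance tooling. The sketch below is illustrative only: the enum names are invented labels, not official short titles, and point (g) (biometric categorisation inferring sensitive attributes) is noted in a comment because it is not covered in this article's list.

```python
from enum import Enum

class ProhibitedPractice(Enum):
    """Article 5(1) categories covered above, keyed by the lettered point
    that defines each. Labels are illustrative, not official short titles."""
    SUBLIMINAL_MANIPULATION = "Art. 5(1)(a)"
    VULNERABILITY_EXPLOITATION = "Art. 5(1)(b)"
    SOCIAL_SCORING = "Art. 5(1)(c)"
    PREDICTIVE_POLICING = "Art. 5(1)(d)"
    UNTARGETED_FACE_SCRAPING = "Art. 5(1)(e)"
    EMOTION_RECOGNITION_WORK_EDU = "Art. 5(1)(f)"
    # Art. 5(1)(g) (biometric categorisation inferring sensitive attributes)
    # is also prohibited but falls outside this article's seven categories.
    REALTIME_REMOTE_BIOMETRIC_ID = "Art. 5(1)(h)"

for practice in ProhibitedPractice:
    print(f"{practice.value}: {practice.name}")
```

Keying inventory records to the lettered points makes later screening decisions traceable to the exact legal basis.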

Art. 5(1)(a)
Art. 5(1)(c)
Art. 5(1)(h)
Warning

These prohibitions have been in force since 2 February 2025. Organisations currently deploying any of these AI practices must cease immediately. Violations carry the highest penalty tier: up to EUR 35 million or 7% of global annual turnover.

Exceptions for Biometric Identification

The prohibition on real-time remote biometric identification in publicly accessible spaces for law enforcement purposes is the most debated provision in the AI Act. The final text, negotiated during the trilogue, represents a carefully calibrated compromise between the European Parliament's push for an outright ban and the Council's insistence on law enforcement flexibility.

Article 5(1)(h) permits real-time remote biometric identification only in three exhaustively listed situations, each subject to the strict conditions set out in Article 5(2). First, the targeted search for specific victims of abduction, trafficking in human beings, or sexual exploitation, and the search for missing persons. Second, the prevention of a specific, substantial, and imminent threat to the life or physical safety of natural persons, or a genuine and present or genuine and foreseeable threat of a terrorist attack. Third, the localisation or identification of a person suspected of having committed a specific criminal offence punishable by a maximum sentence of at least four years (with reference to the serious offences listed in the Act).

Even where an exception applies, Article 5(3) imposes stringent safeguards. Each use must be: necessary and proportionate (i.e., other less intrusive means are insufficient); authorised by a judicial authority or an independent administrative authority — ex ante in most cases, or within 24 hours in duly justified cases of urgency; limited in time, geography, and personal scope; and subject to an assessment of the impact on fundamental rights before deployment. The authorising authority must consider the seriousness and probability of harm from non-use against the impact on the rights of those affected.

Member States wishing to provide for the possibility of real-time biometric identification under these exceptions must lay down specific national rules, including additional safeguards and conditions. Not all Member States are obligated to permit these exceptions — they may maintain stricter national prohibitions. Each individual use must be registered in a national database and reported annually to the European Commission and the European Data Protection Board.

The prohibition does not affect post-remote biometric identification (forensic facial recognition after an event), which is classified as high-risk under Annex III rather than prohibited. However, post-remote biometric identification for law enforcement still requires judicial authorisation for offences punishable by at least three years' imprisonment.

Art. 5(2)
Art. 5(3)

Enforcement and Compliance Implications

The enforcement of Article 5 prohibitions creates immediate compliance obligations for all organisations operating AI systems in the EU market, regardless of when their systems were originally deployed. Unlike the high-risk provisions (which apply from August 2026, with transition periods), the prohibitions apply to legacy systems without grandfathering: any system in operation that falls within Article 5 had to be withdrawn, decommissioned, or fundamentally modified by 2 February 2025.

National market surveillance authorities are the primary enforcement bodies, empowered to: inspect AI systems and request access to training data, models, and source code; order the withdrawal or recall of non-compliant AI systems; impose administrative fines within the penalty framework; and refer cases for criminal prosecution where national law provides for criminal penalties.

The penalty ceiling for Article 5 violations — EUR 35 million or 7% of global annual turnover — is the highest in the Act and exceeds the GDPR's maximum of EUR 20 million or 4% of turnover. This signals the EU's view that prohibited AI practices represent a more severe threat to fundamental rights than data protection violations.
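The "whichever is higher" rule means the effective ceiling scales with company size. A minimal illustration (the function name is ours, not from the Act):

```python
def max_article5_fine(worldwide_annual_turnover_eur: float) -> float:
    """Upper bound of the Article 99(3) penalty for Article 5 violations:
    EUR 35 million or 7% of total worldwide annual turnover,
    whichever is higher. Actual fines are set case by case below this cap."""
    return max(35_000_000.0, 0.07 * worldwide_annual_turnover_eur)

# EUR 1 billion turnover: 7% (EUR 70 million) exceeds the flat amount.
print(max_article5_fine(1_000_000_000))  # 70000000.0
# EUR 100 million turnover: the flat EUR 35 million ceiling applies.
print(max_article5_fine(100_000_000))    # 35000000.0
```

The crossover sits at EUR 500 million turnover; above that, the percentage-based ceiling governs.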

Organisations should conduct an immediate AI system inventory and screen every deployed AI system against the seven prohibited categories. This screening should examine: the system's intended purpose and actual use (including downstream uses by customers or partners); the populations affected, with particular attention to vulnerable groups; the data inputs and whether they include biometric data, behavioural profiling, or emotion indicators; the system's operation — whether it functions in real-time, uses subliminal techniques, or creates comprehensive profiles across contexts; and the deployment context, particularly law enforcement, workplace, and educational settings.
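The screening dimensions above can be recorded per system and used to flag candidates for legal review. This is a hedged sketch: the field names and the flagging heuristic are our own illustrations, not an official classification method, and a positive flag means "needs documented legal analysis", never an automated verdict.

```python
from dataclasses import dataclass, field

@dataclass
class SystemScreening:
    """Hypothetical inventory record mirroring the screening dimensions
    described above; field names are illustrative, not an official taxonomy."""
    name: str
    intended_purpose: str
    affects_vulnerable_groups: bool = False
    uses_biometric_data: bool = False
    infers_emotions: bool = False
    operates_real_time: bool = False
    cross_context_scoring: bool = False
    deployment_contexts: list = field(default_factory=list)

# Contexts that Article 5 treats as especially sensitive.
SENSITIVE_CONTEXTS = {"law_enforcement", "workplace", "education"}

def needs_article5_review(s: SystemScreening) -> bool:
    """Flag any system whose characteristics overlap with an Article 5
    category, so it gets a documented legal analysis."""
    signals = (
        s.affects_vulnerable_groups,
        s.uses_biometric_data,
        s.infers_emotions,
        s.cross_context_scoring,
        s.operates_real_time and s.uses_biometric_data,
        bool(SENSITIVE_CONTEXTS & set(s.deployment_contexts)),
    )
    return any(signals)

hr_tool = SystemScreening(
    name="candidate-video-analyser",
    intended_purpose="Rank applicants from interview recordings",
    infers_emotions=True,
    deployment_contexts=["workplace"],
)
print(needs_article5_review(hr_tool))  # True
```

Deliberately over-inclusive flagging matches the precautionary approach recommended below: false positives cost analyst time, while false negatives risk the top penalty tier.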

For borderline cases, organisations should apply the precautionary principle and document their analysis thoroughly. Given the severity of penalties and the enforcement-first timeline, conservative interpretation of the prohibitions is prudent. Engaging with national competent authorities and participating in regulatory sandboxes can provide additional compliance assurance.

Art. 99(3)
Art. 74
Tip

FortisEU's AI system inventory tool helps organisations screen their entire AI portfolio against Article 5 prohibitions, generating documented classification decisions and identifying borderline systems requiring deeper analysis.

FAQ

Frequently Asked Questions

When did the AI Act's prohibited practices come into force?

The prohibited practices under Article 5 became applicable on 2 February 2025 — six months after the Act's entry into force on 1 August 2024. This is the earliest enforcement date in the Act's phased timeline, reflecting the urgency of banning AI practices that are fundamentally incompatible with EU values. Organisations must have ceased all prohibited practices by this date.

Is all facial recognition banned under the AI Act?

No. The AI Act prohibits specific forms of facial recognition: (1) real-time remote biometric identification in publicly accessible spaces for law enforcement (with narrow exceptions), and (2) untargeted scraping of facial images from the internet or CCTV to build facial recognition databases. Post-remote biometric identification (forensic, after-the-fact) is classified as high-risk rather than prohibited. Private-sector facial recognition for access control (e.g., phone unlock, building access) is also not prohibited, though it may be high-risk under Annex III and must comply with GDPR.

Can employers use AI for emotion recognition in the workplace?

Generally, no. Article 5(1)(f) prohibits AI-based emotion recognition in workplace and educational settings. However, two exceptions apply: medical reasons (e.g., detecting pain in non-communicative patients, monitoring stress for occupational health) and safety reasons (e.g., driver fatigue detection for transport workers, alertness monitoring for machinery operators). Even where exceptions apply, GDPR obligations regarding the processing of biometric and health data (special category data under Article 9 GDPR) must be met.

What constitutes 'social scoring' under the AI Act?

Social scoring under Article 5(1)(c) refers to AI systems that evaluate or classify natural persons based on their social behaviour or personal characteristics, where the resulting score leads to detrimental treatment that is: (a) in social contexts unrelated to those where the data was originally generated (cross-context use), or (b) unjustified or disproportionate to the social behaviour. This covers both government-run scoring systems and private-sector systems that create pervasive behavioural profiles affecting access to services across unrelated domains. Standard credit scoring based on financial behaviour for financial decisions is not captured, as it does not involve cross-context application.

This content is for informational purposes only and does not constitute legal advice. Consult qualified legal counsel for compliance decisions.

Automate EU-AI-ACT Compliance with FortisEU

Turn regulatory obligations into actionable controls with evidence workflows, real-time dashboards, and EU-sovereign AI assistance.