AI Governance · 4 March 2026 · 10 min read · Attila Bognar

EU AI Act Risk Classification: A Practical Decision Tree for Your AI Systems

The EU AI Act's four-tier risk classification sounds simple on paper. In practice, classifying your AI systems requires navigating prohibited practices under Art. 5, high-risk pathways through Annex I and III, GPAI obligations, and transparency requirements. This decision tree gives you a structured approach.

EU AI Act · risk classification · high-risk AI · prohibited AI

The EU AI Act organises artificial intelligence into four risk tiers: prohibited, high-risk, limited risk, and minimal risk. On a slide, this looks elegant. In an enterprise with forty-seven AI-adjacent tools, three ML pipelines, and a handful of vendor-embedded models nobody fully inventoried, it is anything but.

The real difficulty is not understanding the tiers. It is deciding which tier applies to a specific system in a specific context. A chatbot answering customer queries about insurance products is not the same as a chatbot pre-screening insurance claims. Same technology, different risk classification. Context determines everything.

This post provides a practical decision tree you can apply system by system. Walk each AI deployment through four sequential gates. By the end, you will have a defensible classification rationale — the kind that survives regulatory scrutiny, not just internal sign-off.

The Four-Tier Framework in Thirty Seconds

Before the decision tree, the tiers themselves.

Prohibited (Art. 5): AI practices that are banned outright. No compliance pathway exists — the only compliant action is to not deploy. These prohibitions have applied since February 2, 2025.

High-risk (Art. 6, Annex I, Annex III): AI systems subject to the full conformity assessment regime: risk management systems, data governance, technical documentation, human oversight, accuracy and robustness requirements, and post-market monitoring. GPAI obligations apply from August 2, 2025; obligations for Annex III high-risk systems apply from August 2, 2026, and for Annex I high-risk systems from August 2, 2027.

Limited risk (Art. 50): Systems with specific transparency obligations. Users must know they are interacting with AI, and certain outputs must be labelled.

Minimal risk: Everything else. No specific obligations beyond voluntary codes of conduct.

The trap most organisations fall into: treating this as a label to assign once. It is not. Classification must be re-evaluated when the system's purpose changes, when its deployment context shifts, or when you fine-tune a model for a new use case.
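To keep that re-evaluation honest, it helps to record the facts each classification was based on. The sketch below is illustrative only: the tier names map to the Act, but the record fields and helper function are our own shorthand, not anything the Act prescribes. The point is mechanical: if purpose, context, or the underlying model changes, the record no longer matches reality and the system goes back through the tree.

```python
from dataclasses import dataclass
from enum import Enum


class RiskTier(Enum):
    PROHIBITED = "prohibited"        # Art. 5: no compliance pathway
    HIGH_RISK = "high_risk"          # Art. 6 via Annex I / Annex III
    LIMITED_RISK = "limited_risk"    # Art. 50 transparency obligations
    MINIMAL_RISK = "minimal_risk"    # voluntary codes of conduct only


@dataclass(frozen=True)
class AISystemRecord:
    """Snapshot of the facts a classification was based on."""
    name: str
    intended_purpose: str      # what the system is used for
    deployment_context: str    # e.g. "HR screening" vs "customer support"
    model_version: str         # fine-tunes and major updates change this
    tier: RiskTier
    rationale: str             # the defensible reasoning, written down


def needs_reclassification(record: AISystemRecord, *, purpose: str,
                           context: str, model_version: str) -> bool:
    """Any change in purpose, context, or underlying model sends the
    system back through the decision tree."""
    return (record.intended_purpose != purpose
            or record.deployment_context != context
            or record.model_version != model_version)
```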

Decision Tree Step 1: Is It Prohibited?

Start here. Article 5 draws hard lines.

Your AI system is prohibited if it performs any of the following:

Social scoring — evaluation or classification of people by public authorities or private actors based on social behaviour or personal characteristics, where the scoring leads to detrimental treatment in unrelated contexts or treatment disproportionate to the behaviour. The Commission's February 2025 guidelines clarified that this covers systems that aggregate social behaviour data to produce generalised assessments of trustworthiness, even when marketed as "reputation scoring" or "behavioural analytics."

Real-time remote biometric identification in publicly accessible spaces for law enforcement — with narrow exceptions for targeted search for specific crime victims, prevention of imminent threats to life, and identification of suspects for serious criminal offences. If you are not a law enforcement body invoking one of these exceptions with prior judicial authorisation, real-time RBI in public spaces is off the table.

Emotion recognition in workplace and education settings — a prohibition that catches more systems than people expect. If your HR tool infers emotional states from facial expressions, voice patterns, or physiological signals during interviews or performance reviews, it is prohibited; the only carve-outs are for medical or safety reasons. Same for student engagement monitoring tools that classify emotional states.

Predictive policing based on profiling — AI systems that assess the risk of a natural person committing a criminal offence based solely on profiling or personality traits.

Untargeted scraping for facial recognition databases — building or expanding facial recognition datasets through untargeted scraping from the internet or CCTV footage.

Subliminal manipulation and exploitation of vulnerabilities — systems designed to materially distort behaviour in ways that cause significant harm, including systems that exploit vulnerabilities due to a person's age, disability, or social or economic situation.

Practical checkpoint: review your AI inventory against each Art. 5 category. If any system touches emotion inference in HR or education contexts, flag it immediately. This is the prohibition that most frequently surprises enterprise teams, because the technology is often embedded in vendor tools marketed as "engagement analytics" or "candidate assessment."
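One way to run that checkpoint across an inventory is a simple screening pass. The sketch below is our own shorthand for the Art. 5 categories above, not a legal definition of any of them; a hit means escalate to counsel, not a final determination.

```python
# Shorthand flags for the Art. 5 categories discussed above; a "True" answer
# means escalate to legal review, not that classification is settled.
ART_5_QUESTIONS = {
    "social_scoring": "Scores trustworthiness from aggregated social behaviour?",
    "realtime_rbi_public": "Real-time remote biometric ID in publicly accessible spaces?",
    "emotion_workplace_education": "Infers emotions of employees or students?",
    "predictive_policing_profiling": "Predicts criminal risk from profiling or personality traits alone?",
    "untargeted_face_scraping": "Builds face databases by untargeted scraping of the web or CCTV?",
    "subliminal_or_exploitative": "Materially distorts behaviour or exploits vulnerable groups?",
}


def screen_article_5(answers: dict[str, bool]) -> list[str]:
    """Return the Art. 5 categories a system appears to touch."""
    return [category for category, hit in answers.items() if hit]


# Example: a vendor tool sold as "engagement analytics" for interviews.
hr_tool = {key: False for key in ART_5_QUESTIONS}
hr_tool["emotion_workplace_education"] = True

if flags := screen_article_5(hr_tool):
    print(f"Escalate: possible Art. 5 exposure via {flags}")
```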

If no Art. 5 prohibition applies, move to Step 2.

Decision Tree Step 2: Is It High-Risk?

High-risk classification flows through two distinct pathways. Both can apply simultaneously.

Pathway A: Safety Components Under Annex I

If your AI system is a safety component of a product — or is itself a product — covered by the Union harmonisation legislation listed in Annex I, and it requires a third-party conformity assessment under that legislation, it is high-risk under the AI Act.

Annex I covers machinery, toys, recreational craft, lifts, equipment in explosive atmospheres, radio equipment, pressure equipment, cableway installations, personal protective equipment, medical devices, in-vitro diagnostics, civil aviation, motor vehicles, and several other product safety domains.

The key question: does the AI component affect the safety function of the product? A machine learning model that optimises energy consumption in a lift is probably not a safety component. A model that controls the lift's door-closing logic based on sensor data probably is. The distinction matters enormously.

Pathway B: Annex III Use Cases

Even if your system is not a safety component under Annex I, it is high-risk if it falls within one of the use-case categories in Annex III:

  1. Biometric identification and categorisation (remote biometric identification systems, biometric categorisation by sensitive attributes)
  2. Critical infrastructure management (AI systems used as safety components in the management and operation of road traffic, water, gas, heating, and electricity supply)
  3. Education and vocational training (determining access to education, evaluating learning outcomes, assessing appropriate level of education, monitoring prohibited behaviour during tests)
  4. Employment and workforce management (recruitment screening, CV filtering, interview evaluation, promotion/termination decisions, task allocation based on behavioural analysis, monitoring/evaluation of employee performance)
  5. Access to essential services (credit scoring, insurance risk assessment, emergency services dispatch prioritisation)
  6. Law enforcement (individual risk assessments, polygraphs, evidence reliability assessment, profiling during investigations)
  7. Migration and border control (risk assessment for irregular migration, application processing assistance)
  8. Justice and democratic processes (sentencing assistance, alternative dispute resolution, influencing voting behaviour)

Practical example: your HR department uses an AI-powered screening tool that ranks CVs and recommends candidates for interview. This falls squarely into Annex III, category 4. It is high-risk regardless of whether the vendor calls it "AI-powered" or "algorithm-assisted" or "smart matching." The classification follows the function, not the marketing.

Another example: your credit risk team uses a model that contributes to lending decisions. Annex III, category 5. High-risk. Even if the model is one input among many, and a human makes the final decision, the system itself is classified based on its intended purpose.

Important exception under Art. 6(3): An AI system listed in Annex III is not considered high-risk if it does not pose a significant risk of harm to health, safety, or fundamental rights, for instance where it only performs a narrow procedural task or a preparatory step to a human assessment. The exception never applies where the system profiles natural persons, and the provider must document the assessment before placing the system on the market. But the bar is high, and regulators will scrutinise self-assessments that conveniently conclude "not significant." Do not treat this exception as a routine opt-out.
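Pulling Pathways A and B together, a minimal sketch of the high-risk check might look like the following. The category labels are shorthand for the Annex III headings above, the function and field names are ours, and the Art. 6(3) handling is deliberately conservative: the exception is recorded as a claim to defend, not silently trusted.

```python
from dataclasses import dataclass

# Shorthand labels for the Annex III use-case categories listed above.
ANNEX_III_CATEGORIES = {
    1: "biometric identification and categorisation",
    2: "critical infrastructure management",
    3: "education and vocational training",
    4: "employment and workforce management",
    5: "access to essential services",
    6: "law enforcement",
    7: "migration and border control",
    8: "justice and democratic processes",
}


@dataclass
class HighRiskAssessment:
    high_risk: bool
    pathways: list[str]
    notes: str = ""


def assess_high_risk(*, annex_i_safety_component: bool,
                     annex_iii_category: int | None,
                     art_6_3_exception_documented: bool = False) -> HighRiskAssessment:
    """Check both pathways; either one alone is enough for high-risk."""
    pathways = []
    if annex_i_safety_component:
        pathways.append("Annex I safety component")
    if annex_iii_category is not None and not art_6_3_exception_documented:
        pathways.append(f"Annex III: {ANNEX_III_CATEGORIES[annex_iii_category]}")
    notes = ""
    if annex_iii_category is not None and art_6_3_exception_documented:
        # The Art. 6(3) carve-out applies only to the Annex III pathway,
        # and only with a documented, defensible assessment on file.
        notes = "Art. 6(3) exception claimed; keep the documented assessment"
    return HighRiskAssessment(bool(pathways), pathways, notes)


cv_screener = assess_high_risk(annex_i_safety_component=False, annex_iii_category=4)
print(cv_screener.high_risk, cv_screener.pathways)
# True ['Annex III: employment and workforce management']
```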

Decision Tree Step 3: Is It GPAI?

General-purpose AI models — foundation models and large language models — have their own regulatory track under Articles 51 through 56. This classification is independent of the risk tiers above.

A model is GPAI if it displays significant generality, is capable of competently performing a wide range of distinct tasks, and can be integrated into a variety of downstream systems or applications. If you are deploying GPT-4-class models, Claude, Mistral Large, Llama, or similar foundation models, they are GPAI.

All GPAI providers must:

  • Maintain technical documentation including training and testing processes
  • Provide information and documentation to downstream deployers
  • Put in place a policy to comply with Union copyright law, including the text-and-data-mining opt-outs under the Copyright Directive
  • Publish a sufficiently detailed summary of training data content

Systemic risk GPAI (Art. 51(2)): If the cumulative compute used for training exceeds 10^25 FLOPs — or if the Commission designates it based on capabilities — additional obligations apply. These include adversarial testing (red-teaming), incident tracking and reporting to the AI Office, cybersecurity protections, and energy consumption documentation.

For most enterprises, you are a deployer of GPAI, not a provider. Your obligations centre on using the model in accordance with the provider's instructions, implementing appropriate human oversight, and — critically — correctly classifying the downstream system you build on top of the GPAI model. A chatbot built on a GPAI model is not automatically high-risk, but a recruitment screening tool built on the same model is.
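The provider-side threshold and the deployer-side reminder fit in a few lines. This is a sketch under the assumptions above: the 10^25 FLOPs figure is the Art. 51(2) presumption threshold, and the function name and return strings are illustrative, not terms from the Act.

```python
SYSTEMIC_RISK_FLOPS = 1e25  # Art. 51(2) presumption threshold for training compute


def gpai_track(*, training_flops: float | None,
               commission_designated: bool = False) -> str:
    """Provider-side view: which GPAI obligations apply to the model itself."""
    if commission_designated or (training_flops is not None
                                 and training_flops >= SYSTEMIC_RISK_FLOPS):
        return "GPAI with systemic risk: Art. 55 additions (red-teaming, incident reporting, cybersecurity)"
    return "GPAI baseline: Art. 53 (documentation, copyright policy, training-data summary)"


# Deployer-side reminder: the system you build on top still walks the
# same four-step tree. A support chatbot on this model may be limited
# risk; a CV screener on the same model is high-risk (Annex III, cat. 4).
print(gpai_track(training_flops=3e25))
```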

Decision Tree Step 4: Transparency Obligations for All Other AI

If your system passed through Steps 1-3 without triggering prohibited or high-risk classification, you still need to check Art. 50 transparency requirements.

Chatbots and conversational AI: Users must be informed they are interacting with an AI system, unless this is obvious from the circumstances. The "obvious from circumstances" exception is narrower than vendors claim — a text-based support chat that mimics human conversation patterns is not obviously AI to most users.

Deepfakes and synthetic content: AI-generated or manipulated image, audio, or video content must be machine-readably labelled as artificially generated. This applies even to marketing content, product demonstrations, and internal communications.

Emotion recognition and biometric categorisation (where not prohibited): If deployed in contexts outside the Art. 5 prohibitions, users must be informed that such a system is operating.

AI-generated text on matters of public interest: Text generated by AI that is published to inform the public on matters of public interest must be labelled as AI-generated, unless a human reviewed and is editorially responsible for the content.
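For a system that is neither prohibited nor high-risk, the Art. 50 duties can be collected mechanically. The sketch below uses illustrative parameter names; the "obvious from the circumstances" judgement still needs a human, not a boolean.

```python
def art_50_obligations(*, conversational: bool,
                       obviously_ai_to_users: bool = False,
                       generates_synthetic_media: bool = False,
                       emotion_or_biometric_categorisation: bool = False,
                       public_interest_text: bool = False,
                       human_editorial_review: bool = False) -> list[str]:
    """Collect the Art. 50 transparency duties a non-high-risk system still carries."""
    duties = []
    if conversational and not obviously_ai_to_users:
        duties.append("inform users they are interacting with an AI system")
    if generates_synthetic_media:
        duties.append("machine-readable labelling of AI-generated image/audio/video")
    if emotion_or_biometric_categorisation:
        duties.append("inform exposed persons that the system is operating")
    if public_interest_text and not human_editorial_review:
        duties.append("label AI-generated text published on matters of public interest")
    return duties


print(art_50_obligations(conversational=True))
# ['inform users they are interacting with an AI system']
```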

Practical Examples: Walking Through the Tree

HR screening tool (ranks CVs, recommends shortlists): Step 1 — not prohibited (no emotion recognition, not social scoring). Step 2 — high-risk via Annex III category 4. Classification: high-risk. Full conformity assessment required.

Customer service chatbot (answers product questions, no decision authority): Step 1 — not prohibited. Step 2 — not a safety component, not in Annex III categories. Step 3 — built on GPAI, but the downstream system is not high-risk. Step 4 — Art. 50 transparency obligation applies. Classification: limited risk. Must disclose AI interaction to users.

Credit scoring model (contributes to lending decisions): Step 1 — not prohibited. Step 2 — high-risk via Annex III category 5. Classification: high-risk.

Medical diagnostic AI (assists radiologists in detecting anomalies): Step 1 — not prohibited. Step 2 — high-risk via Annex I (medical devices regulation) AND potentially Annex III. Classification: high-risk through both pathways. Dual conformity assessment under both the AI Act and the Medical Devices Regulation.

Internal analytics dashboard (visualises sales data, no autonomous decisions): Step 1-4 — none triggered. Classification: minimal risk. No specific obligations.
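The five worked examples can be replayed as a single sequential pass through the gates. The function below is a sketch of the reasoning in this post, not a compliance tool: each boolean stands in for an assessment that needs its own documented rationale.

```python
def classify(*, prohibited: bool, annex_i: bool, annex_iii: bool,
             art_50_transparency: bool) -> str:
    """Sequential gates: Art. 5 first, then Annex I / III, then Art. 50."""
    if prohibited:
        return "prohibited: do not deploy"
    if annex_i or annex_iii:
        return "high-risk: full conformity assessment"
    if art_50_transparency:
        return "limited risk: Art. 50 transparency obligations"
    return "minimal risk: no specific obligations"


examples = {
    "HR screening tool":            dict(prohibited=False, annex_i=False, annex_iii=True,  art_50_transparency=False),
    "Customer service chatbot":     dict(prohibited=False, annex_i=False, annex_iii=False, art_50_transparency=True),
    "Credit scoring model":         dict(prohibited=False, annex_i=False, annex_iii=True,  art_50_transparency=False),
    "Medical diagnostic AI":        dict(prohibited=False, annex_i=True,  annex_iii=True,  art_50_transparency=False),
    "Internal analytics dashboard": dict(prohibited=False, annex_i=False, annex_iii=False, art_50_transparency=False),
}
for name, facts in examples.items():
    print(f"{name}: {classify(**facts)}")
```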

Common Classification Mistakes to Avoid

Mistake 1: Classifying the technology instead of the use case. The same NLP model is minimal risk when summarising meeting notes and high-risk when screening job applications. Classification follows purpose, not architecture.

Mistake 2: Assuming "human in the loop" removes high-risk status. A system designed to inform or influence a human decision in an Annex III domain is still high-risk even if a human makes the final call. The human oversight requirement is an obligation within the high-risk category, not an escape from it.

Mistake 3: Treating vendor classification as authoritative. Your vendor may classify their tool as "limited risk." But if you deploy it in an Annex III context, your deployment is high-risk regardless of the vendor's self-assessment. The deployer bears independent classification responsibility.

Mistake 4: Ignoring embedded AI. Many enterprise tools now contain AI features that were not present at procurement. A CRM that added "AI-powered lead scoring" in a software update may have shifted its classification without anyone in your compliance team noticing.

Mistake 5: One-time classification. The AI Act requires ongoing monitoring. When you fine-tune a model, expand its training data, or deploy it in a new business context, re-classification is mandatory.

Key Takeaways

  • Walk every AI system through the four-step decision tree: prohibited practices first, then high-risk pathways, then GPAI status, then transparency obligations.
  • Classification follows the deployment context and intended purpose, not the underlying technology. The same model can land in different tiers depending on how you use it.
  • Art. 5 prohibitions on emotion recognition in workplace and education settings catch more enterprise systems than most teams expect. Audit your vendor tools for embedded emotion inference.
  • High-risk classification triggers through two independent pathways (Annex I safety components and Annex III use cases). Check both.
  • Deployers bear independent classification responsibility regardless of what the vendor claims.
  • Classification is not a one-time exercise. Re-evaluate when purpose, context, or capability changes.

Your AI governance control model starts with correct classification. If that foundation is wrong, every downstream obligation — risk management, documentation, conformity assessment — is built on sand. For high-risk systems, explore how automated compliance workflows can operationalise the conformity assessment requirements. And if your systems touch prohibited practices or risk classification boundaries, get the analysis right before you get the audit finding.

Next Step

Turn guidance into evidence.

If procurement is involved, start with the Trust Center. If you want to see the product, create an account or launch a live demo.