AI Risk Classification: The Four-Tier Framework Under the EU AI Act

12 min read · Updated 2026-03-12

Overview of the Four Risk Tiers

The EU AI Act's regulatory framework is built around a four-tier risk pyramid that calibrates obligations to the severity of potential harm. This graduated approach — inspired by the EU's New Legislative Framework for product safety — ensures that the most burdensome requirements apply only where the stakes are highest, while leaving the vast majority of AI applications unregulated or lightly regulated.

At the apex of the pyramid sits unacceptable risk: AI practices so fundamentally incompatible with EU values that they are banned outright. Article 5 enumerates these prohibited practices, which include social scoring, exploitation of vulnerabilities, subliminal manipulation, and real-time remote biometric identification in publicly accessible spaces for law enforcement (subject to narrowly defined exceptions). These prohibitions became applicable on 2 February 2025 — the earliest enforcement date under the Act's phased timeline.

The second tier — high risk — encompasses AI systems that pose significant risks to health, safety, or fundamental rights. These are defined through two pathways in Article 6: AI systems used as safety components of products covered by EU harmonisation legislation (Annex I), and standalone AI systems in specific critical areas listed in Annex III. High-risk AI systems face the Act's most comprehensive obligations, including risk management systems, data governance, technical documentation, transparency, human oversight, and accuracy and robustness requirements (Articles 8-15).

Limited risk constitutes the third tier, covering AI systems that interact with natural persons or generate content. Article 50 imposes specific transparency obligations: users must be informed when interacting with a chatbot, content generated by AI must be marked as such, deep fakes must be labelled, and emotion recognition systems must inform the person concerned. These are disclosure-based obligations rather than conformity requirements.

The broadest tier — minimal risk — covers all AI systems not falling into the above categories. This includes the vast majority of AI applications currently on the market, such as recommendation systems, spam filters, AI-optimised logistics, and video game AI. These systems face no mandatory obligations under the Act, though providers are encouraged to adopt voluntary codes of conduct under Article 95.
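
To make the pyramid concrete in an AI inventory, the four tiers reduce to an ordered taxonomy. Below is a minimal Python sketch; the RiskTier enum and its one-line obligation summaries are illustrative shorthand for the tiers described above, not an official schema.

```python
from enum import IntEnum

class RiskTier(IntEnum):
    """The Act's four tiers, ordered from broadest to most severe."""
    MINIMAL = 0       # no mandatory obligations; voluntary codes (Art. 95)
    LIMITED = 1       # transparency/disclosure duties (Art. 50)
    HIGH = 2          # full conformity obligations (Arts. 8-15)
    UNACCEPTABLE = 3  # prohibited outright (Art. 5)

# The ordering lets an inventory be sorted by regulatory exposure.
systems = {"spam filter": RiskTier.MINIMAL, "CV screener": RiskTier.HIGH}
print(sorted(systems, key=systems.get, reverse=True))
# -> ['CV screener', 'spam filter']
```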

Art. 5
Art. 6
Art. 50

High-Risk Categories: Annex III in Detail

Annex III of the AI Act lists eight areas where AI systems are classified as high-risk when used for specific purposes. Understanding these categories is essential for any organisation deploying AI in the EU market. Each area reflects a context where AI decisions can materially affect individuals' fundamental rights, safety, or access to essential services. A compact keyword-screening sketch follows the list.

1. Biometrics (Annex III, Point 1): AI systems intended for remote biometric identification (real-time use in publicly accessible spaces for law enforcement is prohibited under Article 5 unless narrow exceptions apply), biometric categorisation based on sensitive attributes (race, political opinions, trade union membership, religious beliefs, sexual orientation), and emotion recognition. Post (retrospective) remote biometric identification (e.g., forensic facial recognition used after an event by law enforcement) is high-risk rather than prohibited, subject to conformity assessment.

2. Critical infrastructure (Annex III, Point 2): AI systems used as safety components in the management and operation of critical digital infrastructure, road traffic, and the supply of water, gas, heating, or electricity. This includes AI-driven traffic management, smart grid optimisation, and industrial control systems in essential utilities.

3. Education and vocational training (Annex III, Point 3): AI systems that determine access to educational institutions, evaluate learning outcomes, assess the appropriate level of education for an individual, or monitor prohibited behaviour during exams. Automated essay grading, student admission algorithms, and AI proctoring tools fall into this category.

4. Employment and worker management (Annex III, Point 4): AI systems used for recruitment (CV screening, interview assessment), promotion and termination decisions, task allocation based on individual behaviour or personal traits, and performance monitoring of workers. This is one of the most impactful categories, affecting millions of EU workers.

5. Access to essential private and public services (Annex III, Point 5): AI systems evaluating eligibility for public assistance benefits, creditworthiness assessment for natural persons, risk assessment and pricing in life and health insurance, and the evaluation and classification of emergency calls. AI-driven credit scoring, insurance underwriting algorithms, and benefits eligibility systems are captured.

6. Law enforcement (Annex III, Point 6): AI systems used as polygraphs, for assessing the reliability of evidence, predicting the occurrence or recurrence of criminal offences (predictive policing beyond the prohibited category), profiling of natural persons, and crime analytics. Strict safeguards apply given the fundamental rights implications.

7. Migration, asylum, and border control management (Annex III, Point 7): AI systems used as polygraphs or to assess risk indicators (irregular migration, health, security), for document authenticity examination, and for processing of visa, residence permit, and asylum applications. AI-assisted border screening and automated document verification are included.

8. Administration of justice and democratic processes (Annex III, Point 8): AI systems intended to assist judicial authorities in researching and interpreting facts and law and in applying the law to concrete facts, or to be used by alternative dispute resolution bodies. AI systems intended to influence the outcome of elections or referendums, or the voting behaviour of natural persons, are also classified as high-risk (excluding AI systems whose output natural persons are not directly exposed to, such as back-office campaign analytics tools).
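
The screening sketch promised above: a hypothetical ANNEX_III_AREAS mapping pairs each Annex III point with example use cases named in this section, and screen_use_case flags potential matches. Keyword matching is only a triage aid; an actual classification requires the full legal wording of Annex III.

```python
# Hypothetical triage helper: Annex III points mapped to example use cases
# drawn from this section (keywords kept lowercase for matching).
ANNEX_III_AREAS = {
    1: ("Biometrics", ["remote biometric identification",
                       "biometric categorisation", "emotion recognition"]),
    2: ("Critical infrastructure", ["traffic management", "smart grid"]),
    3: ("Education", ["admission", "essay grading", "proctoring"]),
    4: ("Employment", ["cv screening", "performance monitoring"]),
    5: ("Essential services", ["credit scoring", "insurance pricing"]),
    6: ("Law enforcement", ["evidence reliability", "predictive policing"]),
    7: ("Migration and border control", ["visa", "document verification"]),
    8: ("Justice and democracy", ["judicial research", "election influence"]),
}

def screen_use_case(description: str) -> list[int]:
    """Return the Annex III points whose example keywords match."""
    description = description.lower()
    return [point for point, (_, keywords) in ANNEX_III_AREAS.items()
            if any(kw in description for kw in keywords)]

print(screen_use_case("automated CV screening for recruitment"))  # [4]
```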

Annex III
Art. 6(2)
ISO 42001: Annex B (AI Risk Sources)

ISO 42001's Annex B enumerates AI risk sources that align with the AI Act's Annex III categories. Organisations implementing ISO 42001 should map their risk inventory against Annex III to identify regulatory classification.

GDPR: Art. 35 (DPIA)

Many Annex III use cases — employment, credit scoring, law enforcement profiling — also trigger mandatory DPIAs under GDPR Article 35. Conformity assessments and DPIAs should be coordinated.

The Two Classification Pathways

Article 6 establishes two distinct pathways through which an AI system can be classified as high-risk, reflecting the Act's dual heritage in product safety law and fundamental rights protection.

Pathway 1 — Product Safety (Article 6(1), Annex I): An AI system is high-risk if it is intended to be used as a safety component of a product, or is itself a product, covered by EU harmonisation legislation listed in Annex I, AND the product or system is required to undergo third-party conformity assessment under that harmonisation legislation. Annex I covers machinery, toys, recreational craft, lifts, equipment for potentially explosive atmospheres, radio equipment, pressure equipment, cableway installations, personal protective equipment, gas appliances, medical devices, in vitro diagnostic devices, civil aviation, motor vehicles, agricultural vehicles, marine equipment, rail systems, and construction products. For these products, AI safety components inherit the existing conformity assessment regime, supplemented by the AI Act's specific requirements.

Pathway 2 — Standalone High-Risk (Article 6(2), Annex III): AI systems falling into the eight Annex III categories are classified as high-risk, unless the provider demonstrates that the system does not pose a significant risk of harm to health, safety, or fundamental rights. Article 6(3) provides criteria for this exception: the AI system must be intended to perform a narrow procedural task, improve the result of a previously completed human activity, detect decision-making patterns without replacing or influencing the previously completed human assessment, or perform a preparatory task to an assessment relevant for the purposes of the Annex III use cases. If a provider concludes their system is not high-risk under these exceptions, they must document this assessment and register the system in the EU database before placing it on the market.

The practical implication is that organisations must conduct a structured classification analysis for every AI system they provide or deploy. This analysis should consider the intended purpose, the deployment context, the affected persons and rights, and the potential severity and reversibility of adverse outcomes. Misclassification carries significant risks: deploying an actually high-risk system without conformity assessment exposes the provider to penalties of up to EUR 15 million or 3% of global annual turnover, whichever is higher.
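
The two pathways and the Article 6(3) derogation form a small decision procedure. The sketch below is a simplified illustration: SystemProfile and its boolean fields are hypothetical inputs an organisation would derive from its own intended-purpose analysis, not terms defined by the Act.

```python
from dataclasses import dataclass

@dataclass
class SystemProfile:
    """Hypothetical inputs to an Article 6 classification check."""
    annex_i_safety_component: bool         # safety component of an Annex I product
    third_party_assessment_required: bool  # that product needs third-party assessment
    annex_iii_point: int | None            # Annex III point 1-8, or None
    narrow_procedural_task: bool           # Article 6(3) derogation criteria
    preparatory_task_only: bool
    influences_human_decision: bool

def is_high_risk(p: SystemProfile) -> bool:
    """Apply the two Article 6 pathways, including the 6(3) derogation."""
    # Pathway 1 (Art. 6(1)): Annex I safety component whose product
    # requires third-party conformity assessment.
    if p.annex_i_safety_component and p.third_party_assessment_required:
        return True
    # Pathway 2 (Art. 6(2)): Annex III use case, unless the Art. 6(3)
    # derogation applies. Even then, the assessment must be documented
    # and the system registered in the EU database (Art. 49).
    if p.annex_iii_point is not None:
        derogation = (p.narrow_procedural_task
                      or p.preparatory_task_only
                      or not p.influences_human_decision)
        return not derogation
    return False

# Example: CV screening (Annex III, Point 4) that influences hiring decisions.
print(is_high_risk(SystemProfile(False, False, 4, False, False, True)))  # True
```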

Art. 6(1)
Art. 6(3)
Warning

If you conclude your AI system is not high-risk under the Article 6(3) exception, you must document this assessment before placing the system on the market and register it in the EU database. National market surveillance authorities can challenge this self-assessment.

How to Self-Assess Your AI System's Risk Level

Organisations developing or deploying AI systems need a structured methodology to determine their classification under the AI Act. The following five-step self-assessment framework provides practical guidance aligned with the Act's provisions and the European Commission's guidance.

Step 1 — Confirm it is an AI system: Verify your system meets the Article 3(1) definition — a machine-based system operating with varying levels of autonomy that infers, from the input it receives, how to generate outputs (predictions, content, recommendations, decisions). Simple rule-based software, traditional statistical methods without adaptiveness, and basic automation do not qualify. Document this assessment with reference to Recitals 12 and 13.

Step 2 — Check for prohibited practices: Screen against Article 5 prohibitions. If your system performs social scoring, real-time remote biometric identification (without an applicable exception), subliminal manipulation, exploitation of vulnerabilities, emotion recognition in workplaces or schools, untargeted facial recognition scraping, or certain predictive policing, it cannot be placed on the EU market. This check is the highest priority, as violations carry the Act's heaviest penalties: up to EUR 35 million or 7% of global annual turnover, whichever is higher.

Step 3 — Assess high-risk classification: Determine whether your AI system falls under Pathway 1 (safety component of an Annex I product requiring third-party conformity assessment) or Pathway 2 (Annex III use case). For Pathway 2, map your system's intended purpose against the eight Annex III categories. If it falls within an Annex III category, assess whether the Article 6(3) exception applies — the system must be narrow-purpose, preparatory, or non-influential to human decision-making.

Step 4 — Evaluate transparency obligations: If your system is not high-risk, determine whether it falls under Article 50 transparency requirements. Does the system interact with natural persons (chatbot)? Does it generate synthetic audio, image, video, or text content? Does it perform emotion recognition or biometric categorisation? If yes, specific disclosure obligations apply even though the system is not high-risk.

Step 5 — Document and register: Document your classification decision with supporting analysis. High-risk systems must be registered in the EU database under Article 49 before market placement. Non-high-risk Annex III systems (relying on the Article 6(3) exception) must also be registered. Maintain this documentation as a living record, as changes to the system's intended purpose or deployment context may alter its classification.
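
The five steps compose into an ordered decision flow, checked from the most to the least severe tier. A self-contained sketch under that reading; the Assessment fields are hypothetical placeholders for the analyses described in Steps 1-4, and Step 5's documentation duty applies to every outcome.

```python
from dataclasses import dataclass

@dataclass
class Assessment:
    """Hypothetical answers gathered during the five-step self-assessment."""
    meets_art3_definition: bool       # Step 1: Article 3(1) AI system?
    prohibited_practice: bool         # Step 2: any Article 5 practice?
    high_risk: bool                   # Step 3: Article 6 pathway analysis
    art50_transparency_trigger: bool  # Step 4: chatbot, synthetic content,
                                      # emotion recognition, or biometrics?

def classify(a: Assessment) -> str:
    """Return the risk tier; Step 5 (document and register) applies
    regardless of which branch is taken."""
    if not a.meets_art3_definition:
        return "out of scope - record the Article 3(1) analysis"
    if a.prohibited_practice:
        return "unacceptable risk - cannot be placed on the EU market"
    if a.high_risk:
        return "high risk - Articles 8-15 apply; register per Article 49"
    if a.art50_transparency_trigger:
        return "limited risk - Article 50 disclosure obligations"
    return "minimal risk - voluntary codes of conduct (Article 95)"

# Example: a customer-facing chatbot outside the Annex III categories.
print(classify(Assessment(True, False, False, True)))
# -> limited risk - Article 50 disclosure obligations
```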

ISO 42001: Clause 6.1.2 (AI Risk Assessment)

ISO 42001's risk assessment process should incorporate the AI Act's classification methodology. The standard's context-dependent risk identification (Clause 4) maps naturally to the Act's purpose-based classification approach.

Tip

FortisEU's AI risk classification tool automates this five-step assessment process, mapping your AI systems against Article 5 prohibitions, Annex III categories, and Article 50 transparency triggers to generate a documented classification decision with regulatory references.

Cross-Framework Mapping: AI Act and ISO 42001

ISO/IEC 42001:2023, the international standard for AI management systems, provides a complementary framework that organisations can leverage alongside AI Act compliance. While the AI Act establishes mandatory legal obligations, ISO 42001 provides a management system approach to responsible AI governance that can operationalise many of the Act's requirements.

ISO 42001's risk assessment process (Clause 6.1) aligns with the AI Act's risk-based classification. Organisations implementing ISO 42001 should incorporate the AI Act's four-tier classification as a regulatory constraint within their risk assessment methodology. The standard's emphasis on understanding the organisation's context (Clause 4) — including regulatory, stakeholder, and societal considerations — maps directly to the Act's purpose-based classification approach.

The AI Act's high-risk requirements (Articles 8-15) correspond to several ISO 42001 controls. Article 9's risk management system requirement aligns with ISO 42001's Clause 6.1 (Actions to address risks and opportunities) and Annex A controls on risk identification, assessment, and treatment. Article 10's data governance requirements correspond to ISO 42001's data management controls covering data quality, representativeness, and bias mitigation. Article 11's technical documentation maps to ISO 42001's documentation requirements (Clause 7.5), while Article 14's human oversight provisions align with the standard's controls on human involvement in AI system decision-making.

Organisations pursuing ISO 42001 certification alongside AI Act compliance should conduct an integrated gap analysis. The standard covers governance, ethics, and organisational aspects that the Act addresses less prescriptively, while the Act provides specific technical requirements (e.g., conformity assessment procedures, EU database registration, CE marking) that fall outside ISO 42001's scope. Together, they provide a robust governance-plus-compliance framework for responsible AI deployment in the EU market.

It is important to note that ISO 42001 certification does not constitute compliance with the AI Act. The Act requires specific conformity assessment procedures (Article 43), including third-party assessment for biometric AI systems, and places obligations on specific legal roles (provider, deployer, importer, distributor) that go beyond a management system standard. However, demonstrating ISO 42001 implementation can provide evidence of organisational commitment and systematic processes that support AI Act compliance arguments during market surveillance inspections.
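
For an integrated gap analysis, the article-to-clause correspondence can be held as a simple lookup. A sketch along those lines; AI_ACT_TO_ISO42001 and gap_report are hypothetical names, and the pairings restate the indicative mapping discussed above rather than an official crosswalk.

```python
# Indicative AI Act -> ISO/IEC 42001 correspondence (per the text above).
AI_ACT_TO_ISO42001 = {
    "Art. 9 risk management": "Clause 6.1 + Annex A risk controls",
    "Art. 10 data governance": "Annex A data management controls",
    "Art. 11 technical documentation": "Clause 7.5 documented information",
    "Art. 14 human oversight": "Annex A human oversight controls",
}

def gap_report(implemented_iso_controls: set[str]) -> list[str]:
    """AI Act requirements with no implemented ISO 42001 counterpart."""
    return [art for art, iso in AI_ACT_TO_ISO42001.items()
            if iso not in implemented_iso_controls]

print(gap_report({"Clause 6.1 + Annex A risk controls"}))
# -> the three articles still lacking an implemented mapped control
```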

ISO 42001: Full Standard Mapping

ISO 42001 Clause 6.1 → AI Act Art. 9 risk management; Clause 7.5 → Art. 11 documentation; Annex A data controls → Art. 10 data governance; Annex A human oversight → Art. 14 human oversight.

NIST AI RMF: Govern / Map / Measure / Manage

The NIST AI RMF's four functions provide an alternative risk management structure. 'Govern' maps to AI Act governance obligations, 'Map' to risk classification, 'Measure' to monitoring requirements, and 'Manage' to conformity and corrective actions.

Frequently Asked Questions

How do I know if my AI system is high-risk?

Your AI system is high-risk if it meets either of two pathways under Article 6. Pathway 1: the system is a safety component of a product covered by EU harmonisation legislation (Annex I) requiring third-party conformity assessment. Pathway 2: the system falls into one of eight Annex III categories — biometrics, critical infrastructure, education, employment, essential services, law enforcement, migration, or justice — unless you can demonstrate under Article 6(3) that it poses no significant risk (narrow procedural task, preparatory, or non-influential). Document your classification and register the system in the EU database.

What happens if I misclassify my AI system?

Misclassification carries significant regulatory risk. If you deploy a high-risk AI system without completing the required conformity assessment, you face penalties of up to EUR 15 million or 3% of global annual turnover, whichever is higher, under Article 99(4). National market surveillance authorities can order the system withdrawn from the market and require corrective measures. If the misclassification concerns a prohibited practice, penalties increase to EUR 35 million or 7% of turnover. Providers should err on the side of caution and document their classification reasoning.

Can the same AI model be classified differently depending on use?

Yes. The AI Act classifies based on intended purpose and deployment context, not the underlying technology. The same foundation model could be minimal risk when used for text summarisation, limited risk when powering a customer-facing chatbot (transparency required), or high-risk when deployed for employment screening (Annex III, Point 4). This context-dependent approach means providers must assess classification for each specific intended use, and a deployer that modifies a system's intended purpose so that it becomes high-risk may itself be treated as a provider under Article 25.

Does ISO 42001 certification mean compliance with the AI Act?

No. ISO 42001 certification provides evidence of a systematic AI management approach but does not constitute AI Act compliance. The Act requires specific conformity assessment procedures (Article 43), EU database registration (Article 49), CE marking (Article 48), and role-specific obligations that go beyond management system certification. However, ISO 42001 implementation can provide supporting evidence during market surveillance inspections and streamline compliance by establishing the governance, documentation, and risk management processes that the Act requires.

Are AI-powered recommendation systems high-risk?

Generally, no. AI-powered recommendation systems for content, products, or services are typically classified as minimal risk and face no mandatory obligations under the AI Act. However, exceptions apply: a recommendation system used in education to determine a student's appropriate level (Annex III, Point 3) or in employment to allocate tasks based on worker profiling (Annex III, Point 4) could be high-risk. The critical factor is the intended purpose — content recommendations on an e-commerce platform differ fundamentally from recommendations that affect access to education, employment, or essential services.

This content is for informational purposes only and does not constitute legal advice. Consult qualified legal counsel for compliance decisions.
