EU AI Act High-Risk AI Systems
Comprehensive reference on high-risk AI system classification under the EU AI Act (Regulation 2024/1689), covering Annex III categories, risk management, data governance, transparency, human oversight, technical documentation, conformity assessment, and post-market monitoring obligations effective August 2026.
1. High-risk AI systems are classified through two pathways: safety components of Annex I products requiring third-party conformity assessment, and Annex III use cases (biometrics, critical infrastructure, employment, education, essential services, law enforcement, migration, justice).
2. The risk management system under Article 9 is a continuous lifecycle obligation — not a one-time assessment — that must identify, evaluate, and mitigate risks through design, controls, and deployer information.
3. Data governance under Article 10 requires training, validation, and testing datasets to be representative, sufficiently free of errors, and subject to bias monitoring — coordinated with GDPR data protection obligations.
4. Human oversight under Article 14 requires AI systems to be designed so humans can meaningfully monitor, override, and stop the system — not merely rubber-stamp its outputs.
5. Full application of high-risk system obligations begins 2 August 2026, with Annex I product systems following on 2 August 2027. Begin compliance preparation now.
1. High-Risk AI System Classification
The EU AI Act (Regulation 2024/1689) establishes a risk-based regulatory framework in which high-risk AI systems bear the most substantial compliance obligations. Article 6 defines two pathways to high-risk classification. First, under Article 6(1), AI systems that are safety components of products, or are themselves products, covered by the EU harmonised legislation listed in Annex I (including machinery, medical devices, toys, lifts, radio equipment, civil aviation, motor vehicles, and marine equipment) and that are required to undergo third-party conformity assessment under that legislation are classified as high-risk. Second, under Article 6(2), AI systems falling within the use cases enumerated in Annex III are classified as high-risk, regardless of whether they are embedded in a product.
Annex III enumerates eight areas of high-risk use: (1) biometrics, insofar as permitted under Union or national law; (2) critical infrastructure — safety components in the management and operation of critical digital infrastructure, road traffic, and the supply of water, gas, heating, and electricity; (3) education and vocational training — determining access to or influencing outcomes in education and training; (4) employment, workers management, and access to self-employment — recruitment, selection, contract decisions, monitoring, and evaluation of workers; (5) access to and enjoyment of essential private and public services and benefits — creditworthiness assessment, risk assessment and pricing in life and health insurance, evaluation of applications for public assistance and services, and emergency dispatch prioritisation; (6) law enforcement — individual risk assessments, polygraphs and similar tools, evaluation of evidence reliability, crime prediction, and profiling; (7) migration, asylum, and border control management; and (8) administration of justice and democratic processes.
Article 6(3) introduces an important qualification: an AI system listed in Annex III is not considered high-risk if it does not pose a significant risk of harm to health, safety, or fundamental rights. This exception applies where the AI system performs a narrow procedural task, improves the result of a previously completed human activity, detects decision-making patterns without replacing or influencing the human assessment, or performs a preparatory task to an assessment relevant to the Annex III use cases. Providers relying on this exception must document their assessment before placing the system on the market, register the system in the EU database under Article 49(2), and provide the documentation to national competent authorities on request. The exception is narrow and must be applied conservatively — its misapplication exposes the provider to the full high-risk compliance obligations retroactively.
The Article 6(3) exception that allows providers to treat an Annex III system as non-high-risk is narrow and requires a documented assessment plus registration under Article 49(2). Misapplication does not shield the provider — national authorities can override the self-assessment and require full high-risk compliance.
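To make the two-pathway logic and the Article 6(3) exception concrete, the sketch below walks through the classification decision in code. It is a minimal illustration, assuming simplified enumerations of the Annex III areas and exception criteria; it does not substitute for a documented legal assessment.

```python
from dataclasses import dataclass, field

# Simplified labels for the eight Annex III areas (illustrative, not official identifiers).
ANNEX_III_AREAS = {
    "biometrics", "critical_infrastructure", "education", "employment",
    "essential_services", "law_enforcement", "migration_asylum_border", "justice_democracy",
}

# Article 6(3) exception criteria (any one may apply, subject to documentation).
EXCEPTION_CRITERIA = {
    "narrow_procedural_task",
    "improves_completed_human_activity",
    "detects_patterns_without_replacing_human_assessment",
    "preparatory_task_only",
}

@dataclass
class AISystem:
    name: str
    annex_i_safety_component: bool = False        # safety component of an Annex I product
    requires_third_party_assessment: bool = False
    annex_iii_area: str | None = None             # one of ANNEX_III_AREAS, or None
    exception_grounds: set[str] = field(default_factory=set)

def classify(system: AISystem) -> str:
    # Article 6(1): safety component of an Annex I product subject to third-party assessment.
    if system.annex_i_safety_component and system.requires_third_party_assessment:
        return "high-risk (Article 6(1))"
    # Article 6(2): Annex III use case, unless the Article 6(3) exception applies.
    if system.annex_iii_area in ANNEX_III_AREAS:
        if system.exception_grounds & EXCEPTION_CRITERIA:
            # The exception must be documented before market placement and the
            # system registered under Article 49(2).
            return "not high-risk (Article 6(3) exception: document and register)"
        return "high-risk (Article 6(2))"
    return "not high-risk under Article 6"

print(classify(AISystem("cv-screening", annex_iii_area="employment")))
```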
2. Risk Management System (Article 9)
Article 9 requires providers of high-risk AI systems to establish, implement, document, and maintain a risk management system throughout the entire lifecycle of the AI system. This is not a one-time risk assessment but a continuous, iterative process that identifies and analyses known and reasonably foreseeable risks, estimates and evaluates risks that may emerge when the system is used in accordance with its intended purpose and under conditions of reasonably foreseeable misuse, and adopts appropriate and targeted risk management measures.
The risk management system must identify risks to health, safety, and fundamental rights that the high-risk AI system may pose, considering both the intended purpose and conditions of reasonably foreseeable misuse. Risk evaluation must use available data from the post-market monitoring system (Article 72) to refine risk assessments over time. Risk management measures must ensure that residual risk associated with each hazard, as well as the overall residual risk, is judged acceptable. When adopting risk management measures, the provider must consider the effects and possible interactions of those measures — a mitigation that reduces one risk but introduces another is not necessarily a net improvement.
The risk management measures themselves must follow a hierarchy: eliminate or reduce risks through design and development choices, implement adequate mitigation and control measures where design choices cannot eliminate the risk, and provide information to deployers regarding residual risks. This hierarchy parallels the 'safety by design' principle found in EU product safety legislation. Testing of the high-risk AI system must be carried out to identify the most appropriate and targeted risk management measures, and testing must ensure that the system performs consistently for its intended purpose and complies with the requirements of Chapter III, Section 2. Testing must include, as appropriate, testing in real-world conditions in accordance with Article 60.
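One way to operationalise the Article 9 lifecycle process is a risk register in which each identified hazard records the mitigation chosen from the hierarchy and a residual-risk judgment. The sketch below is illustrative only; the field names and the acceptability check are assumptions, not a prescribed format.

```python
from dataclasses import dataclass
from enum import Enum

class Mitigation(Enum):
    # Hierarchy of risk management measures:
    DESIGN_ELIMINATION = 1     # eliminate or reduce through design and development choices
    CONTROL_MEASURE = 2        # mitigation and control measures where design cannot eliminate
    DEPLOYER_INFORMATION = 3   # information to deployers about residual risks

@dataclass
class Risk:
    hazard: str
    affects: str               # "health", "safety", or "fundamental rights"
    foreseeable_misuse: bool   # identified under conditions of reasonably foreseeable misuse
    mitigation: Mitigation
    residual_risk_acceptable: bool

def overall_residual_risk_acceptable(register: list[Risk]) -> bool:
    # Both each individual residual risk and the overall residual risk must be
    # judged acceptable before the system is placed on the market.
    return all(r.residual_risk_acceptable for r in register)

register = [
    Risk("discriminatory scoring of applicants", "fundamental rights", False,
         Mitigation.CONTROL_MEASURE, residual_risk_acceptable=True),
    Risk("use outside the intended sector", "safety", True,
         Mitigation.DEPLOYER_INFORMATION, residual_risk_acceptable=True),
]
print(overall_residual_risk_acceptable(register))
```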
3. Data and Data Governance (Article 10)
Article 10 establishes data governance requirements that apply to training, validation, and testing datasets used for high-risk AI systems. The provision recognises that AI system quality is fundamentally dependent on data quality, and mandates that datasets be subject to appropriate data governance and management practices. These practices must address, at minimum: design choices for data collection and preparation, data collection processes and their origin, data preparation operations (annotation, labelling, cleaning, updating, enrichment, and aggregation), the formulation of relevant assumptions regarding what the data measures and represents, an assessment of the availability, quantity, and suitability of the datasets needed, and examination of possible biases that are likely to affect health, safety, or fundamental rights.
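These governance practices lend themselves to a per-dataset record that can be reviewed for gaps before training begins. The following is a minimal sketch under the assumption that each dataset split is documented separately; the field names are illustrative, not taken from the Act.

```python
from dataclasses import dataclass, field

@dataclass
class DatasetGovernanceRecord:
    """Per-dataset record covering the Article 10(2) governance practices (illustrative)."""
    dataset_name: str
    split: str                                   # "training", "validation", or "testing"
    design_choices: str = ""                     # design choices for collection and preparation
    origin: str = ""                             # collection processes and data origin
    preparation_steps: list[str] = field(default_factory=list)  # labelling, cleaning, enrichment
    measurement_assumptions: str = ""            # what the data measures and represents
    suitability_assessment: str = ""             # availability, quantity, suitability
    bias_examination: str = ""                   # possible biases affecting health/safety/rights

    def missing_items(self) -> list[str]:
        required = {
            "design_choices": self.design_choices,
            "origin": self.origin,
            "measurement_assumptions": self.measurement_assumptions,
            "suitability_assessment": self.suitability_assessment,
            "bias_examination": self.bias_examination,
        }
        return [name for name, value in required.items() if not value.strip()]

record = DatasetGovernanceRecord("applicants_2025", "training",
                                 design_choices="stratified sampling by region",
                                 origin="HR system exports, 2023-2025")
print(record.missing_items())   # documentation gaps still to close
```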
Training, validation, and testing datasets must be relevant, sufficiently representative, and, to the best extent possible, free of errors and complete in view of the intended purpose of the system. They must have the appropriate statistical properties, including with respect to the persons or groups of persons in relation to whom the high-risk AI system is intended to be used. Article 10(5) addresses bias explicitly: to the extent strictly necessary for bias monitoring, detection, and correction, providers may process special categories of personal data under Article 9 of the GDPR, subject to appropriate safeguards including pseudonymisation and encryption. This provision creates a carefully bounded exemption that allows bias mitigation through the use of demographic data that would otherwise be off-limits.
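As a concrete illustration of bias monitoring under Article 10(5), a provider might compare positive-outcome rates across groups using a pseudonymised protected attribute. The selection-rate ratio below and the 0.8 threshold are assumptions chosen for illustration; the Act does not prescribe any particular fairness metric.

```python
from collections import defaultdict

def selection_rate_ratio(records: list[dict]) -> float:
    """Ratio of the lowest to highest positive-outcome rate across groups.

    Each record carries a pseudonymised group token ('group') and a binary
    model outcome ('selected'). Values close to 1.0 indicate parity.
    """
    totals: dict[str, int] = defaultdict(int)
    positives: dict[str, int] = defaultdict(int)
    for r in records:
        totals[r["group"]] += 1
        positives[r["group"]] += int(r["selected"])
    rates = [positives[g] / totals[g] for g in totals if totals[g] > 0]
    return min(rates) / max(rates) if rates else 1.0

sample = [
    {"group": "a3f9", "selected": 1}, {"group": "a3f9", "selected": 0},
    {"group": "7c21", "selected": 1}, {"group": "7c21", "selected": 1},
]
ratio = selection_rate_ratio(sample)
if ratio < 0.8:   # illustrative threshold, not mandated by the Act
    print(f"Potential disparity detected (ratio={ratio:.2f}); investigate and correct.")
```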
For organisations operating under both the AI Act and the GDPR, data governance under Article 10 must be coordinated with GDPR obligations. The legal basis for processing training data must be established under Article 6 of the GDPR, purpose limitation under Article 5(1)(b) must be respected, data minimisation under Article 5(1)(c) must be applied, and data protection impact assessments under Article 35 must be conducted where appropriate. The AI Act's data quality requirements complement but do not replace GDPR data protection obligations — compliance with both frameworks is required, and tension between data quality (more data, more representative data) and data minimisation (less data, only necessary data) must be resolved through careful proportionality analysis documented in the risk management system.
Article 10(5) of the AI Act permits processing of special categories of personal data (such as racial or ethnic origin or health data) specifically for bias monitoring, detection, and correction, subject to GDPR safeguards. This is a narrow exemption — it does not authorise general processing of sensitive data for AI training.
4. Transparency (Article 13) and Human Oversight (Article 14)
Article 13 requires that high-risk AI systems be designed and developed in such a way as to ensure that their operation is sufficiently transparent to enable deployers to interpret the system's output and use it appropriately. This transparency obligation manifests through instructions for use that must accompany each high-risk AI system, providing deployers with concise, complete, correct, and clear information including: the identity and contact details of the provider, the system's characteristics, capabilities, and limitations of performance, its intended purpose, the level of accuracy, robustness, and cybersecurity against which the system has been tested and validated, any known or foreseeable circumstances that may lead to risks to health, safety, or fundamental rights, technical capabilities and limitations relevant to the transparency of the system to the deployer, and specifications for input data.
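Maintaining the instructions-for-use content as structured data helps keep it consistent with the technical documentation across releases. This is a minimal sketch; the field names paraphrase the Article 13 items above and the completeness check is an assumption.

```python
from dataclasses import dataclass, field

@dataclass
class InstructionsForUse:
    provider_identity: str
    provider_contact: str
    intended_purpose: str
    capabilities_and_limitations: str
    accuracy_robustness_cybersecurity: str   # tested and validated levels
    known_risk_circumstances: list[str] = field(default_factory=list)
    input_data_specifications: str = ""
    human_oversight_measures: str = ""

    def is_complete(self) -> bool:
        # Instructions must be concise, complete, correct, and clear; an empty
        # mandatory field is an obvious gap to close before release.
        mandatory = [self.provider_identity, self.provider_contact, self.intended_purpose,
                     self.capabilities_and_limitations, self.accuracy_robustness_cybersecurity,
                     self.input_data_specifications]
        return all(v.strip() for v in mandatory)
```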
Article 14 addresses human oversight — one of the AI Act's most distinctive requirements. High-risk AI systems must be designed and developed in such a way that they can be effectively overseen by natural persons during the period in which they are in use. Human oversight measures must be identified by the provider and built into the system, or identified as appropriate to be implemented by the deployer. The human oversight measures must enable the individual to whom human oversight is assigned to fully understand the capacities and limitations of the high-risk AI system, be able to duly monitor its operation and detect and address anomalies, remain aware of automation bias and be able to decide not to use the system or to override, reverse, or stop the system, and be able to intervene or interrupt the system through a 'stop' button or similar procedure.
The human oversight requirement has significant implications for system design. It is not sufficient to merely provide a human with the ability to review AI outputs — the system must be designed so that the human can meaningfully exercise that oversight. This means providing sufficient information for the human to understand why the system reached a particular output, ensuring the human has adequate time and resources to review before consequences materialise, designing the interface so that the override or stop function is accessible and effective, and ensuring that the organisational context does not create pressure to defer to the AI system's output. The combination of transparency (Article 13) and human oversight (Article 14) is intended to prevent AI systems from operating as black boxes that produce consequential decisions without meaningful human engagement.
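The design implications can be made concrete with an oversight gate: no consequential output takes effect until a named reviewer, shown the system's explanation, explicitly approves, overrides, or stops processing. The workflow below is an illustrative sketch; the interface and decision categories are assumptions.

```python
from dataclasses import dataclass
from enum import Enum

class Decision(Enum):
    APPROVE = "approve"
    OVERRIDE = "override"   # reviewer substitutes their own outcome
    STOP = "stop"           # halt the system entirely

@dataclass
class Recommendation:
    subject_id: str
    outcome: str
    confidence: float
    explanation: str        # information the reviewer needs to understand the output

def oversight_gate(rec: Recommendation, reviewer: str, decision: Decision,
                   override_outcome: str | None = None) -> str | None:
    """Return the effective outcome; None means the system was stopped.

    The gate records who decided and never lets the AI outcome take effect
    without an explicit human decision (no silent defaults).
    """
    if decision is Decision.STOP:
        print(f"{reviewer} stopped processing for {rec.subject_id}")
        return None
    if decision is Decision.OVERRIDE:
        assert override_outcome is not None, "override requires a replacement outcome"
        return override_outcome
    return rec.outcome

rec = Recommendation("APP-1042", "reject", 0.71, "low score on criterion X; see feature report")
print(oversight_gate(rec, reviewer="j.doe", decision=Decision.OVERRIDE, override_outcome="manual review"))
```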
5. Technical Documentation and Conformity Assessment
Article 11 requires providers to draw up technical documentation before a high-risk AI system is placed on the market or put into service, and to keep that documentation up to date. The technical documentation must demonstrate compliance with the requirements of Chapter III, Section 2, and provide national competent authorities and notified bodies with all necessary information to assess compliance. Annex IV specifies the minimum content of technical documentation: a general description of the system, a detailed description of its elements and development process, detailed information about monitoring, functioning, and control, a description of the risk management system, a description of relevant changes made throughout the lifecycle, a list of harmonised standards or common specifications applied, information on the conformity assessment procedure followed, a copy of the EU declaration of conformity, and a description of the post-market monitoring plan.
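A simple way to keep the Annex IV dossier current is a per-release checklist of the headings listed above. The structure below is an assumption for illustration, not an official template.

```python
# Paraphrased Annex IV headings used as checklist items (illustrative).
ANNEX_IV_ITEMS = [
    "general description of the AI system",
    "detailed description of elements and development process",
    "information on monitoring, functioning and control",
    "description of the risk management system (Article 9)",
    "description of relevant changes over the lifecycle",
    "list of harmonised standards or common specifications applied",
    "copy of the EU declaration of conformity",
    "description of the post-market monitoring plan (Article 72)",
]

def documentation_gaps(completed: set[str]) -> list[str]:
    """Items still missing before the system can be placed on the market."""
    return [item for item in ANNEX_IV_ITEMS if item not in completed]

print(documentation_gaps({"general description of the AI system"}))
```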
The conformity assessment procedure depends on the classification pathway. For AI systems classified as high-risk under Article 6(2) (Annex III use cases), providers generally follow the internal control procedure set out in Annex VI — a self-assessment that includes verifying the quality management system, examining the technical documentation, and verifying that the design and development process is consistent with the requirements. For the biometric systems in point 1 of Annex III, however, Article 43(1) requires a notified body assessment (Annex VII) unless the provider has fully applied the relevant harmonised standards or common specifications, in which case it may choose between internal control and notified body assessment. For AI systems classified as high-risk under Article 6(1) (Annex I product legislation), the conformity assessment follows the procedure applicable under the relevant sectoral legislation, integrating the AI Act requirements into the existing product conformity framework.
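The routing described in this paragraph can be summarised as a small decision function. This is an interpretive sketch of Article 43, with simplified boolean inputs; it is not legal advice.

```python
def conformity_route(annex_iii_point_1_biometrics: bool,
                     harmonised_standards_fully_applied: bool,
                     annex_i_product: bool) -> str:
    # Annex I product systems follow the sectoral conformity framework.
    if annex_i_product:
        return "sectoral conformity procedure under the relevant Annex I legislation"
    # Biometrics (Annex III, point 1) without fully applied standards: notified body.
    if annex_iii_point_1_biometrics and not harmonised_standards_fully_applied:
        return "notified body assessment (Annex VII)"
    if annex_iii_point_1_biometrics:
        return "provider's choice: internal control (Annex VI) or notified body (Annex VII)"
    # All other Annex III systems: internal control.
    return "internal control (Annex VI)"

print(conformity_route(annex_iii_point_1_biometrics=True,
                       harmonised_standards_fully_applied=False,
                       annex_i_product=False))
```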
Upon successful conformity assessment, the provider issues an EU declaration of conformity (Article 47) and affixes the CE marking (Article 48). The declaration of conformity must contain specified information including the provider's identity, a statement that the declaration is issued under the sole responsibility of the provider, that the high-risk AI system complies with the AI Act, references to relevant harmonised standards or common specifications, and details of the notified body involved (where applicable). The technical documentation and declaration of conformity must be kept for a period of ten years after the high-risk AI system has been placed on the market or put into service. National market surveillance authorities may request access to this documentation at any time.
For most Annex III high-risk AI systems, the conformity assessment is an internal self-assessment (Annex VI). Notified body involvement arises only for the biometric systems in point 1 of Annex III, and even there the provider may choose internal control where harmonised standards or common specifications have been fully applied. Start your compliance preparation with the internal control procedure and engage a notified body early if your system falls within the biometrics category.
6. Post-Market Monitoring and Incident Reporting
Article 72 requires providers to establish and document a post-market monitoring system proportionate to the nature and risks of the high-risk AI system. The system must actively and systematically collect, document, and analyse relevant data that deployers provide or that are collected through other sources on the performance of the AI system throughout its lifetime. The post-market monitoring system must enable the provider to evaluate the continuous compliance of the AI system with the requirements, identify risks from the system when used in accordance with its intended purpose or under conditions of reasonably foreseeable misuse, and take corrective or preventive action where appropriate.
The post-market monitoring plan must include a specific strategy for data collection and analysis, with defined metrics, thresholds, and triggers for corrective action. Data collected must include feedback from deployers, analysis of interactions with the system, review of AI system logs, and analysis of any incidents or malfunctions reported. Where the system's performance degrades over time (model drift, distributional shift in input data, emergent bias), the provider must take corrective action which may include retraining, updating, or withdrawing the system. The post-market monitoring plan must be included in the technical documentation under Annex IV.
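As an example of 'defined metrics, thresholds, and triggers', a provider might track an accuracy metric and an incident count per monitoring window and trigger corrective action when documented limits are crossed. The metric, threshold values, and field names below are illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class MonitoringWindow:
    period: str           # e.g. "2027-Q1"
    accuracy: float       # measured against labelled feedback from deployers
    incidents: int        # incidents or malfunctions reported in the period

def corrective_action_needed(window: MonitoringWindow,
                             accuracy_threshold: float = 0.90,
                             incident_threshold: int = 0) -> list[str]:
    """Return the triggers crossed in this window (empty list means no action needed)."""
    triggers = []
    if window.accuracy < accuracy_threshold:
        triggers.append("performance below documented threshold: retrain, update, or withdraw")
    if window.incidents > incident_threshold:
        triggers.append("reported incidents: investigate and assess Article 73 reporting")
    return triggers

print(corrective_action_needed(MonitoringWindow("2027-Q1", accuracy=0.87, incidents=1)))
```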
Article 73 establishes incident reporting obligations: providers must report to market surveillance authorities any serious incident involving their high-risk AI system — defined as an incident or malfunction that directly or indirectly leads to death or serious harm to a person's health, serious and irreversible disruption of the management or operation of critical infrastructure, infringement of Union law obligations intended to protect fundamental rights, or serious harm to property or the environment. The report must be made immediately after the provider establishes a causal link, or a reasonable likelihood thereof, between the AI system and the serious incident, and in any event not later than 15 days after becoming aware of it; shorter deadlines apply for widespread infringements and critical infrastructure disruption (2 days) and for the death of a person (10 days). Providers operating AI systems across multiple Member States should establish a centralised incident reporting capability that can file reports with the relevant national authority in each affected jurisdiction within the statutory timeline.
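Once an incident is logged, the statutory deadlines can be tracked mechanically. The sketch below encodes the general 15-day rule and the shorter periods noted above; the incident categories are simplified assumptions.

```python
from datetime import date, timedelta

# Reporting deadlines after awareness, per Article 73 (general rule plus the
# shorter periods for critical-infrastructure disruption and death).
DEADLINE_DAYS = {
    "critical_infrastructure_disruption": 2,
    "death": 10,
    "other_serious_incident": 15,
}

def report_due(awareness_date: date, category: str) -> date:
    """Latest date by which the report must reach the market surveillance authority."""
    return awareness_date + timedelta(days=DEADLINE_DAYS[category])

print(report_due(date(2026, 9, 1), "other_serious_incident"))   # 2026-09-16
```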
7. Timeline and Preparation Strategy
The EU AI Act entered into force on 1 August 2024, with a phased application schedule. Prohibited practices under Article 5 and the AI literacy obligations under Article 4 became applicable on 2 February 2025. Provisions relating to general-purpose AI models, governance, and penalties became applicable on 2 August 2025. The high-risk AI system requirements — the obligations described in this guide — become fully applicable on 2 August 2026 for newly placed systems. For high-risk AI systems that are also regulated as products under Annex I legislation (medical devices, machinery, etc.), the application date is 2 August 2027. This timeline gives providers and deployers a defined but finite window to achieve compliance.
Preparation should begin with an AI system inventory: catalogue all AI systems developed, deployed, or procured within your organisation, classify each against the Annex III use cases and the Article 6(3) exception criteria, and prioritise compliance efforts by risk classification. For systems classified as high-risk, establish the risk management system (Article 9), implement data governance practices (Article 10), develop technical documentation (Article 11, Annex IV), design for transparency and human oversight (Articles 13-14), and prepare the conformity assessment evidence. For organisations that are deployers rather than providers, focus on the deployer obligations under Article 26: use the system in accordance with instructions, ensure human oversight, monitor operation, maintain logs, and, where required, conduct a fundamental rights impact assessment under Article 27.
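The inventory step can start from a simple structured record per system that then feeds the classification logic from Section 1. The field names and the prioritisation heuristic below are assumptions for illustration.

```python
from dataclasses import dataclass

@dataclass
class InventoryEntry:
    system_name: str
    role: str                 # "provider" or "deployer"
    annex_iii_area: str | None
    exception_claimed: bool   # Article 6(3) assessment documented and registered?
    go_live_target: str       # e.g. "2026-08"

def priority(entry: InventoryEntry) -> int:
    """Lower number = address sooner (illustrative heuristic, not a legal ranking)."""
    if entry.annex_iii_area and not entry.exception_claimed:
        return 1 if entry.role == "provider" else 2
    if entry.annex_iii_area:
        return 3   # exception claimed: verify documentation and registration
    return 4

inventory = [
    InventoryEntry("resume-ranker", "provider", "employment", False, "2026-05"),
    InventoryEntry("chat-assistant", "deployer", None, False, "2025-11"),
]
for entry in sorted(inventory, key=priority):
    print(entry.system_name, priority(entry))
```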
The compliance effort for high-risk AI systems is substantial and should not be underestimated. Organisations that have invested in responsible AI governance frameworks — including risk assessment, bias testing, transparency documentation, and human review processes — will find the transition more manageable. Those starting from scratch should allocate dedicated resources, engage legal and compliance expertise with specific AI Act knowledge, and consider the AI Act's interaction with the GDPR (for data governance and individual rights), NIS2 (for cybersecurity requirements applicable to critical infrastructure AI), and sector-specific legislation (for regulated products and services). The AI Act is not a standalone regulation — it operates within the broader EU regulatory ecosystem, and compliance requires a holistic approach.
High-risk AI system obligations become fully applicable on 2 August 2026. For AI systems embedded in products regulated under Annex I legislation (e.g., medical devices), the date is 2 August 2027. Do not wait for the deadline — the conformity assessment, technical documentation, and risk management system require substantial lead time.
How do I determine if my AI system is high-risk under the AI Act?
First, check whether your AI system is a safety component of a product covered by Annex I EU harmonised legislation that requires third-party conformity assessment — if so, it is high-risk under Article 6(1). Second, check whether your system falls within any of the eight Annex III use case categories (biometrics, critical infrastructure, education, employment, essential services, law enforcement, migration, justice). If yes, assess whether the Article 6(3) exception applies (narrow procedural task, improvement of a previously completed human activity, pattern detection without replacing or influencing the human assessment, or a purely preparatory task). If the exception does not apply, your system is high-risk. If you rely on the Article 6(3) exception, document the classification analysis before placing the system on the market, register the system under Article 49(2), and be prepared to provide the documentation to competent authorities on request.
What is the difference between a provider and a deployer under the AI Act?
A provider is a natural or legal person that develops an AI system or has an AI system developed with a view to placing it on the market or putting it into service under its own name or trademark. A deployer is a natural or legal person that uses an AI system under its authority, except where the system is used in the course of a personal, non-professional activity. Providers bear the primary compliance obligations: risk management, data governance, technical documentation, conformity assessment, and post-market monitoring. Deployers bear lighter but still significant obligations: use in accordance with instructions, human oversight, monitoring, log retention, fundamental rights impact assessment (for certain deployers), and transparency toward affected persons. An organisation that fine-tunes or substantially modifies a general-purpose AI model for a high-risk use case may be reclassified from deployer to provider.
Does the AI Act apply to AI systems already in use before August 2026?
High-risk AI systems already placed on the market or put into service before 2 August 2026 are subject to the AI Act only if they undergo significant changes after that date. A 'significant change' is a change that affects the compliance of the AI system with the requirements or results in a modification to its intended purpose. If no significant change occurs, existing systems are grandfathered. However, for AI systems used by public authorities that were placed on the market or put into service before 2 August 2026, the requirements apply from 2 August 2030. Providers should inventory existing systems, assess whether planned updates would constitute significant changes, and plan compliance accordingly.
What penalties apply for non-compliance with high-risk AI system requirements?
Article 99 establishes administrative fines of up to EUR 15 million or 3% of total worldwide annual turnover (whichever is higher) for non-compliance with the obligations for high-risk AI systems under Chapter III, Section 2 (Articles 8-15), deployer obligations under Article 26, and obligations of notified bodies. For supply of incorrect, incomplete, or misleading information to notified bodies or national competent authorities, fines of up to EUR 7.5 million or 1% of turnover apply. For prohibited practices (Article 5), the maximum is EUR 35 million or 7% of turnover. Member States also determine rules on penalties applicable to infringements not covered by the Regulation's fine framework, which may include additional national enforcement measures.
How does the AI Act interact with GDPR for high-risk AI systems?
The AI Act and GDPR operate in parallel and are cumulative, not alternative. A high-risk AI system that processes personal data must comply with both the AI Act's requirements (risk management, data governance, transparency, human oversight) and the GDPR's requirements (lawful basis, purpose limitation, data minimisation, data subject rights, DPIA where required). Article 10(5) of the AI Act specifically permits processing of special category data for bias detection under GDPR Article 9 safeguards. Article 22 of the GDPR (automated decision-making rights) applies alongside the AI Act's human oversight requirements. Organisations should develop integrated compliance documentation that addresses both frameworks rather than maintaining parallel compliance workstreams.