High-Risk AI System Requirements: Articles 8–15 of the EU AI Act
Overview of High-Risk Obligations
Chapter III, Section 2 of the EU AI Act (Articles 8–15) establishes the core technical and organisational requirements that high-risk AI systems must satisfy before they can be placed on the EU market or put into service. These requirements apply to providers of high-risk AI systems — the entities that develop a system, or have one developed, and place it on the market or put it into service under their own name or trademark. Deployers (users of high-risk AI systems) face separate but complementary obligations under Articles 26–27.
The requirements are designed to be applied throughout the AI system's entire lifecycle — from design and development through deployment, operation, and decommissioning. Article 8 establishes the overarching principle: high-risk AI systems must comply with the requirements laid down in Articles 9-15, taking into account the intended purpose of the system and the generally acknowledged state of the art. The 'state of the art' reference ensures requirements evolve with technological capabilities — what constitutes adequate risk management or accuracy today may be insufficient as AI capabilities advance.
Providers must implement a quality management system (Article 17) to ensure systematic compliance with all requirements. This system must include policies and procedures for: design and design verification, development and quality control, testing and validation, compliance with technical standards, data management systems and practices, risk management, post-market monitoring, incident reporting, communication with competent authorities, and record-keeping. The quality management system must be proportionate to the size of the provider's organisation and documented in a systematic manner.
Importantly, these requirements are not merely design-time obligations. Article 9(1) explicitly requires that the risk management system be a 'continuous iterative process planned and run throughout the entire lifecycle' of the AI system. Post-market monitoring (Article 72), serious incident reporting (Article 73), and ongoing compliance obligations mean that providers remain responsible for their high-risk AI systems as long as they are in operation.
Compliance with Requirements
High-risk AI systems must comply with Articles 9-15, taking into account intended purpose and state of the art.
Quality Management System
Providers must establish a QMS covering design, development, testing, data management, risk management, and post-market monitoring.
Risk Management System (Article 9)
Article 9 requires providers to establish, implement, document, and maintain a risk management system for high-risk AI systems. This is the foundational requirement upon which all others build — a systematic approach to identifying, analysing, evaluating, and mitigating risks throughout the AI system's lifecycle.
The risk management system must be a continuous iterative process planned and run throughout the entire lifecycle of a high-risk AI system, requiring regular systematic updating. It must identify and analyse the known and reasonably foreseeable risks that the high-risk AI system can pose to health, safety, or fundamental rights when the system is used in accordance with its intended purpose. Crucially, the assessment must also consider risks from reasonably foreseeable misuse — uses that deviate from the intended purpose but which the provider can reasonably anticipate.
Article 9(2) specifies that risk management must include: estimation and evaluation of risks arising when the system is used in accordance with its intended purpose and under conditions of reasonably foreseeable misuse; evaluation of other risks that may arise, based on analysis of data gathered from the post-market monitoring system (Article 72); and adoption of appropriate and targeted risk management measures designed to address the identified risks. Risk management measures must give due consideration to the effects and possible interactions resulting from the combined application of the requirements in this Section, and must reflect the generally acknowledged state of the art.
Article 9(5) requires that the residual risk associated with each hazard, as well as the overall residual risk of the high-risk AI system, be judged acceptable. When identifying the most appropriate risk management measures, the provider must ensure elimination or reduction of risks, as far as technically feasible, through adequate design and development; where appropriate, implementation of adequate mitigation and control measures addressing risks that cannot be eliminated; and provision of information and, where appropriate, training to deployers. Relevant residual risks must be communicated to the deployer.
The risk management system must also give consideration to whether, in view of its intended purpose, the high-risk AI system is likely to have an adverse impact on persons under the age of 18 and, as appropriate, other vulnerable groups, and ensure that testing procedures are appropriate to the intended purpose and carried out at appropriate points throughout the development process.
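In practice, providers often operationalise Article 9 as a machine-readable risk register maintained per high-risk system. The sketch below is illustrative only: the Risk and RiskRegister structures, the 1–5 scoring scale, and the acceptability threshold are assumptions of this example, not values or formats prescribed by the Act.

```python
from dataclasses import dataclass, field
from enum import Enum


class RiskSource(Enum):
    INTENDED_USE = "intended_use"            # Art. 9(2)(a)-(b)
    FORESEEABLE_MISUSE = "foreseeable_misuse"
    POST_MARKET_DATA = "post_market_data"    # Art. 9(2)(c)


@dataclass
class Risk:
    hazard: str                  # e.g. "false negative in exam-proctoring anomaly detection"
    source: RiskSource
    severity: int                # 1 (negligible) .. 5 (critical) -- provider-defined scale
    likelihood: int              # 1 (rare) .. 5 (frequent)
    mitigations: list[str] = field(default_factory=list)
    residual_severity: int = 0   # re-scored after mitigations are applied
    residual_likelihood: int = 0

    @property
    def residual_score(self) -> int:
        return self.residual_severity * self.residual_likelihood


@dataclass
class RiskRegister:
    system_name: str
    risks: list[Risk] = field(default_factory=list)
    acceptability_threshold: int = 6  # provider-defined; the Act does not fix a number

    def unacceptable_residual_risks(self) -> list[Risk]:
        """Risks whose residual score still exceeds the provider's threshold (Art. 9(5))."""
        return [r for r in self.risks if r.residual_score > self.acceptability_threshold]

    def deployer_disclosure(self) -> list[str]:
        """Residual risks to surface in the instructions for use / deployer communication."""
        return [f"{r.hazard} (residual score {r.residual_score})"
                for r in self.risks if r.residual_score > 0]
```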
Risk Management System
A continuous iterative process throughout the entire lifecycle, requiring regular systematic updating.
Risk Identification and Assessment
Must cover risks from intended use, reasonably foreseeable misuse, and post-market monitoring data.
Residual Risk Assessment
Residual risks must be judged acceptable and communicated to deployers.
ISO 42001's risk assessment process aligns with Article 9 but operates at the organisational level. The AI Act requires system-specific risk management for each high-risk AI system, while ISO 42001 addresses AI risk at the management system level.
AI systems deployed in NIS2-covered critical infrastructure must satisfy both the AI Act's risk management (Art. 9) and NIS2's cybersecurity risk management measures (Art. 21). The requirements are complementary: AI Act focuses on AI-specific risks while NIS2 addresses broader cybersecurity.
Data Governance and Management (Article 10)
Article 10 establishes data governance requirements for high-risk AI systems that are trained with data. This is one of the most operationally significant provisions, as it governs the entire data pipeline from collection through training, validation, and testing.
Training, validation, and testing data sets must be subject to data governance and management practices appropriate for the intended purpose. These practices must concern: the relevant design choices; data collection processes and the origin of data, and in the case of personal data, the original purpose of data collection; relevant data-preparation processing operations, such as annotation, labelling, cleaning, updating, enrichment, and aggregation; the formulation of relevant assumptions, notably with respect to the information that the data are supposed to measure and represent; an assessment of the availability, quantity, and suitability of the data sets needed; examination in view of possible biases that are likely to affect the health and safety of persons, have a negative impact on fundamental rights, or lead to discrimination; appropriate measures to detect, prevent, and mitigate possible biases; and the identification of relevant data gaps or shortcomings and how those gaps can be addressed.
Article 10(3) requires that training, validation, and testing data sets be relevant, sufficiently representative, and to the best extent possible free of errors and complete in view of the intended purpose. They must have the appropriate statistical properties, including, where applicable, as regards the persons or groups of persons in relation to whom the high-risk AI system is intended to be used. These characteristics must be met at the level of individual data sets and any combination thereof.
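As a rough illustration of how the Article 10(3) criteria can be checked in a data pipeline, the sketch below compares group shares in a training set against reference population shares and reports per-column completeness. The pandas-based helpers, the column names, and the 5% tolerance are assumptions made for illustration; the Act does not prescribe specific metrics or thresholds.

```python
import pandas as pd


def representation_report(df: pd.DataFrame, group_col: str,
                          reference_shares: dict[str, float],
                          tolerance: float = 0.05) -> pd.DataFrame:
    """Compare the share of each group in a data set against a reference share
    ('sufficiently representative', Art. 10(3))."""
    observed = df[group_col].value_counts(normalize=True)
    rows = []
    for group, expected in reference_shares.items():
        share = float(observed.get(group, 0.0))
        rows.append({
            "group": group,
            "observed_share": round(share, 4),
            "reference_share": expected,
            "within_tolerance": abs(share - expected) <= tolerance,
        })
    return pd.DataFrame(rows)


def completeness_report(df: pd.DataFrame) -> pd.Series:
    """Fraction of missing values per column ('free of errors and complete')."""
    return df.isna().mean().sort_values(ascending=False)


# Hypothetical usage on a training set with an "age_band" column:
# representation_report(train_df, "age_band", {"18-34": 0.30, "35-54": 0.35, "55+": 0.35})
```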
A critical provision is Article 10(5), which permits the processing of special categories of personal data (as defined in GDPR Article 9(1) — racial or ethnic origin, political opinions, religious beliefs, trade union membership, genetic data, biometric data, health data, sexual orientation) to the extent that it is strictly necessary for the purposes of ensuring bias detection and correction, subject to appropriate safeguards. This is a carefully crafted exception to GDPR's general prohibition on processing special category data, designed to enable fairness-oriented AI development.
Organisations must document their data governance practices as part of the technical documentation required under Article 11. This includes descriptions of data sources, collection methodologies, labelling procedures, bias mitigation strategies, and the statistical properties of data sets used for training, validation, and testing.
Data Governance Practices
Specifies data management practices covering design, collection, preparation, bias assessment, and gap identification.
Data Quality Requirements
Data sets must be relevant, representative, free of errors, complete, and statistically appropriate for intended purpose.
Special Category Data for Bias Detection
Permits processing of GDPR Article 9(1) special category data strictly for bias detection and correction with safeguards.
The AI Act's Article 10(5) creates a legal basis for processing special category personal data for bias detection. This interacts with GDPR Article 9(2)(g) (substantial public interest) and requires coordinated legal basis analysis.
Technical Documentation, Record-Keeping, and Transparency (Articles 11–13)
Articles 11-13 establish comprehensive documentation and transparency requirements that form the evidential backbone of high-risk AI compliance.
Article 11 requires providers to draw up technical documentation before the system is placed on the market or put into service, and to keep it up to date. The documentation must demonstrate compliance with all high-risk requirements (Articles 8-15) and provide national competent authorities and notified bodies with all necessary information to assess compliance. Annex IV specifies the required content, which includes: a general description of the AI system (intended purpose, version, relationship to hardware/other software); a detailed description of the elements and development process (methods, design specifications, system architecture, computational resources, training methodologies); detailed information about monitoring, functioning, and control (capabilities, limitations, accuracy levels, foreseeable unintended outcomes, human oversight specifications, operational parameters); a description of the risk management system; changes throughout the lifecycle; harmonised standards applied or other solutions used; a description of the data governance approach; and the EU declaration of conformity.
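One lightweight way to keep the Annex IV documentation current is to track it as version-controlled files and check completeness automatically before each release. The section keys and file layout below are a hypothetical paraphrase of the Annex IV headings, not an official structure.

```python
from pathlib import Path

# Paraphrased Annex IV headings mapped to a hypothetical repository layout.
ANNEX_IV_SECTIONS = {
    "general_description": "docs/01_general_description.md",
    "development_process": "docs/02_development_and_design.md",
    "monitoring_and_control": "docs/03_monitoring_functioning_control.md",
    "risk_management_system": "docs/04_risk_management.md",
    "lifecycle_changes": "docs/05_changes_log.md",
    "standards_applied": "docs/06_harmonised_standards.md",
    "data_governance": "docs/07_data_governance.md",
    "eu_declaration_of_conformity": "docs/08_declaration_of_conformity.md",
}


def missing_documentation(repo_root: str) -> list[str]:
    """Return the Annex IV sections that have no corresponding file yet."""
    root = Path(repo_root)
    return [name for name, rel_path in ANNEX_IV_SECTIONS.items()
            if not (root / rel_path).is_file()]
```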
Article 12 mandates that high-risk AI systems technically allow for the automatic recording of events (logs) over their lifetime. To ensure traceability appropriate to the intended purpose, logging capabilities must enable the recording of events relevant for identifying situations that may result in the system presenting a risk or undergoing a substantial modification, for facilitating post-market monitoring, and for monitoring the system's operation by deployers. For remote biometric identification systems (Annex III, point 1(a)), the logging capabilities must at a minimum record the period of each use (start and end date and time), the reference database against which input data has been checked, the input data for which the search has led to a match, and the identification of the natural persons involved in verifying the results.
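For a remote biometric identification system, the Article 12(3) minimum log content maps naturally onto a structured event record. The field names, the JSON Lines format, and the retention handling in the sketch below are assumptions of this example rather than a prescribed schema.

```python
import json
from dataclasses import asdict, dataclass


@dataclass
class BiometricUseLog:
    """Minimum log content for Annex III point 1(a) systems under Article 12(3)."""
    use_start: str                 # start date and time of the session (ISO 8601)
    use_end: str                   # end date and time of the session
    reference_database: str        # database against which input data was checked
    matched_input_refs: list[str]  # references to input data that produced a match
    verifying_persons: list[str]   # natural persons who verified the results


def write_log(entry: BiometricUseLog, path: str = "audit_log.jsonl") -> None:
    """Append one event per line; retention periods under the Act apply separately."""
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(asdict(entry)) + "\n")


# write_log(BiometricUseLog(
#     use_start="2025-03-03T10:14:00Z", use_end="2025-03-03T10:21:00Z",
#     reference_database="watchlist_v3", matched_input_refs=["img_4821"],
#     verifying_persons=["operator_17", "supervisor_03"]))
```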
Article 13 requires that high-risk AI systems be designed and developed in such a way as to ensure that their operation is sufficiently transparent to enable deployers to interpret the system's output and use it appropriately. An appropriate type and degree of transparency must be achieved to enable compliance with the obligations of the provider and deployer. Instructions for use must accompany the system, including: the identity and contact details of the provider; the system's characteristics, capabilities, and limitations of performance (accuracy, robustness, cybersecurity measures for known or foreseeable circumstances, potential risks to health, safety, and fundamental rights); changes to the system pre-determined by the provider; human oversight measures; computational and hardware resources needed; and expected lifetime and maintenance measures.
Technical Documentation
Providers must prepare and maintain technical documentation demonstrating compliance, per Annex IV specifications.
Record-Keeping (Logging)
High-risk AI systems must include automatic event logging appropriate to intended purpose.
Transparency and Instructions for Use
Systems must be sufficiently transparent for deployers to interpret output and use appropriately.
Human Oversight (Article 14)
Article 14 establishes one of the AI Act's most consequential requirements: high-risk AI systems must be designed and developed in such a way as to be effectively overseen by natural persons during the period in which they are in use. Human oversight aims to minimise risks to health, safety, or fundamental rights that may emerge when the system is used in accordance with its intended purpose or under conditions of reasonably foreseeable misuse.
The human oversight measures must be identified and built into the AI system by the provider before market placement, or alternatively identified as appropriate to be implemented by the deployer. These measures must enable the individuals exercising oversight to: fully understand the capacities and limitations of the AI system and be able to duly monitor its operation (including detecting and addressing anomalies, dysfunctions, and unexpected performance); remain aware of the possible tendency of automatically relying on or over-relying on the output produced by the AI system ('automation bias'), in particular for systems used to provide information or recommendations for decisions to be taken by natural persons; be able to correctly interpret the AI system's output, taking into account the characteristics of the system and the interpretation tools available; and be able to decide, in any particular situation, not to use the system, to disregard, override, or reverse the output, or to intervene in the operation or interrupt the system through a 'stop' button or similar procedure.
For AI systems identified in Annex III, point 1(a) (remote biometric identification), the Act imposes an additional safeguard: no action or decision may be taken by the deployer on the basis of the identification resulting from the system unless that identification has been separately verified and confirmed by at least two natural persons with the necessary competence, training, and authority. This requirement does not apply where Union or national law considers its application disproportionate in the areas of law enforcement, migration, border control, or asylum.
The human oversight requirement has significant implications for system design. 'Meaningful human oversight' cannot be achieved by simply adding a human approval step if the human lacks the information, training, or time to exercise genuine judgment. Recital 73 emphasises that the oversight must be effective — pro forma oversight where the human routinely rubber-stamps AI decisions does not satisfy the requirement. This means providers must design systems with appropriate explanation capabilities, confidence indicators, and the ability for human operators to meaningfully interrogate and override AI outputs.
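A provider might support these design goals by routing every AI output through an explicit human decision step that records the reviewer's rationale and supports a second, independent verification where Article 14(5) applies. The sketch below is a minimal illustration; the AIOutput and HumanDecision structures and the action vocabulary are assumptions, not requirements of the Act.

```python
from dataclasses import dataclass
from typing import Callable, Optional


@dataclass
class AIOutput:
    recommendation: str
    confidence: float   # surfaced so the reviewer can weigh the output rather than defer to it
    explanation: str    # supports correct interpretation of the output


@dataclass
class HumanDecision:
    reviewer_id: str
    action: str         # "accept", "override", "reject", or "stop_system"
    rationale: str      # recorded to discourage rubber-stamping


def decide(output: AIOutput,
           review: Callable[[AIOutput], HumanDecision],
           second_review: Optional[Callable[[AIOutput], HumanDecision]] = None) -> HumanDecision:
    """Route an AI output through human review before any action is taken.
    Supply `second_review` where separate verification by a second person is required."""
    first = review(output)
    if first.action != "accept":
        return first                 # the reviewer overrode, rejected, or stopped the system
    if second_review is None:
        return first
    return second_review(output)     # the action proceeds only if the second reviewer also accepts
```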
Human Oversight Design
High-risk AI must be designed for effective oversight by natural persons during the period of use.
Human Oversight Capabilities
Oversight must enable understanding, anomaly detection, automation bias awareness, interpretation, override, and interruption.
Two-Person Verification
Remote biometric identification results require separate verification by at least two competent natural persons before any action is taken, subject to limited exceptions.
Meaningful human oversight requires more than a rubber-stamp approval step. Providers must design AI systems with explanation capabilities, confidence indicators, and genuine override mechanisms that enable human operators to exercise informed judgment. Pro forma oversight does not satisfy Article 14.
Accuracy, Robustness, and Cybersecurity (Article 15)
Article 15 establishes requirements for the technical performance of high-risk AI systems, addressing three interconnected properties: accuracy, robustness, and cybersecurity.
Accuracy (Art. 15(1) and (3)): High-risk AI systems must be designed and developed to achieve an appropriate level of accuracy, robustness, and cybersecurity and to perform consistently in those respects throughout their lifecycle. The levels of accuracy and the relevant accuracy metrics must be declared in the accompanying instructions for use. This is not a requirement for perfection — it is a requirement for transparency about performance levels and for consistency of that performance over time.
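A simple way to operationalise "perform consistently" is to compare observed accuracy over a monitoring window against the level declared in the instructions for use. The tolerance and windowing approach below are provider-defined assumptions, not figures from the Act.

```python
def accuracy_within_declared(declared_accuracy: float,
                             correct: int, total: int,
                             tolerance: float = 0.02) -> bool:
    """True if observed accuracy in a monitoring window stays within a tolerance
    of the level declared in the instructions for use."""
    if total == 0:
        return True  # nothing to evaluate in this window
    observed = correct / total
    return observed >= declared_accuracy - tolerance


# Declared 0.93 in the instructions for use; 451 of 500 decisions correct this week:
# accuracy_within_declared(0.93, 451, 500) -> False (0.902 < 0.91), which would
# trigger investigation under the provider's post-market monitoring process.
```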
Robustness (Art. 15(4)): High-risk AI systems must be as resilient as possible regarding errors, faults, or inconsistencies that may occur within the system or the environment in which it operates, in particular due to interaction with natural persons or other systems. Robustness may be achieved through technical redundancy solutions, which may include backup or fail-safe plans. High-risk AI systems that continue to learn after being placed on the market or put into service must be developed in such a way as to eliminate or reduce as far as possible the risk of possibly biased outputs influencing input for future operations ('feedback loops') and to ensure that any such feedback loops are duly addressed with appropriate mitigation measures.
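One common mitigation, sketched below under the assumption that retraining records carry a label_source field, is to exclude labels that originated from the system's own predictions from future training pools.

```python
def retraining_pool(records: list[dict]) -> list[dict]:
    """Drop records whose label came from the system's own output before retraining,
    a simple guard against feedback loops in systems that keep learning after deployment."""
    return [r for r in records if r.get("label_source") != "model_prediction"]
```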
Cybersecurity (Art. 15(5)): High-risk AI systems must be resilient against attempts by unauthorised third parties to alter their use, outputs, or performance by exploiting the system's vulnerabilities. The technical solutions must be appropriate to the relevant circumstances and risks, and may include measures to prevent and control for attacks trying to manipulate training data sets ('data poisoning'), or pre-trained components used in training ('model poisoning'), inputs designed to cause the AI system to make a mistake ('adversarial examples' or 'model evasion'), confidentiality attacks, or model flaws.
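Two of the simpler layers in such a defence — fingerprinting approved training-data snapshots so later tampering can be detected, and rejecting out-of-range inputs — are sketched below. These are illustrative fragments with assumed inputs, not a complete defence against the attack classes Article 15(5) lists.

```python
import hashlib


def dataset_fingerprint(path: str) -> str:
    """SHA-256 hash of a training data file; comparing fingerprints between approved
    snapshots helps detect tampering ('data poisoning') in the pipeline."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()


def input_within_bounds(features: dict[str, float],
                        bounds: dict[str, tuple[float, float]]) -> bool:
    """Coarse guard against out-of-range inputs, used alongside adversarial-example
    detection, access controls, and model monitoring."""
    for name, (lo, hi) in bounds.items():
        value = features.get(name)
        if value is None or not (lo <= value <= hi):
            return False
    return True
```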
This article creates a direct bridge to NIS2 cybersecurity requirements. For AI systems deployed in critical infrastructure sectors covered by NIS2, the cybersecurity obligations under Article 15(5) must be coordinated with the broader cybersecurity risk management measures required under NIS2 Article 21. The NIS2 requirements for incident handling, supply chain security, and encryption practices are directly relevant to AI system cybersecurity. Organisations subject to both regimes should implement an integrated security programme that addresses both the AI-specific threats (data poisoning, adversarial examples, model extraction) and the general cybersecurity obligations (network security, access control, business continuity).
Accuracy Requirements
Systems must achieve appropriate accuracy levels, declared in instructions for use, and perform consistently throughout lifecycle.
Robustness Requirements
Systems must be resilient to errors, faults, and inconsistencies; online learning systems must mitigate feedback loops.
Cybersecurity Requirements
Systems must be resilient against data poisoning, adversarial examples, model evasion, and confidentiality attacks.
AI Act Article 15(5) cybersecurity requirements directly complement NIS2 Article 21 measures. AI-specific threats (data poisoning, adversarial examples) must be addressed alongside general cybersecurity obligations (network security, incident handling, supply chain security).
ISO 27001 security controls provide an implementation framework for AI Act Article 15(5) cybersecurity requirements. Controls for access management, cryptography, operations security, and system integrity are directly applicable.
Frequently Asked Questions
What is the conformity assessment process for high-risk AI?
Article 43 establishes two conformity assessment pathways. For most high-risk AI systems, providers can self-assess compliance through an internal control procedure based on Annex VI — verifying their quality management system and technical documentation against the requirements. However, for AI systems in biometric identification (Annex III, Point 1), a third-party conformity assessment by a notified body is mandatory, unless the provider has applied harmonised standards covering all relevant requirements. After successful assessment, the provider issues an EU declaration of conformity (Article 47), affixes the CE marking (Article 48), and registers the system in the EU database (Article 49).
What is the role of the deployer for high-risk AI?
Deployers (users of high-risk AI) have distinct obligations under Article 26. They must: use the system in accordance with the provider's instructions for use; ensure input data is relevant and sufficiently representative, to the extent they exercise control over it; monitor the system's operation based on the instructions for use; suspend use if they consider the system poses a risk; inform the provider or distributor if they identify risks; and keep logs automatically generated by the system for at least six months (unless otherwise specified by EU or national law). In addition, deployers that are public bodies or private entities providing public services must conduct a fundamental rights impact assessment under Article 27 before putting the system into use.
How do AI Act requirements interact with NIS2?
Article 15(5) of the AI Act requires cybersecurity resilience for high-risk AI systems, covering threats like data poisoning, adversarial examples, and model evasion. For AI systems deployed in NIS2-covered critical infrastructure, these requirements must be coordinated with NIS2 Article 21 cybersecurity risk management measures. Organisations should implement integrated security programmes addressing both AI-specific threats and broader network/information security obligations. NIS2's incident reporting (24-hour early warning) also applies to significant cybersecurity incidents affecting high-risk AI systems in covered sectors.
What documentation must providers maintain?
Annex IV specifies comprehensive technical documentation requirements: general system description (intended purpose, versions, architecture); development process details (methods, design specifications, training methodologies); monitoring and control information (capabilities, limitations, accuracy metrics, foreseeable unintended outcomes, human oversight specifications); risk management system description; data governance documentation (sources, collection methods, preparation, bias mitigation); testing and validation results; harmonised standards applied; and the EU declaration of conformity. This documentation must be prepared before market placement and kept up to date throughout the system's lifecycle.
What are the post-market monitoring obligations?
Article 72 requires providers to establish and document a post-market monitoring system proportionate to the nature of the AI technologies and the risks of the high-risk AI system. The system must actively and systematically collect, document, and analyse relevant data provided by deployers or collected through other sources on the system's performance throughout its lifetime. Under Article 73, providers must report serious incidents to market surveillance authorities — incidents or malfunctions leading to death or serious harm to health, serious and irreversible disruption of critical infrastructure, infringement of obligations protecting fundamental rights, or serious harm to property or the environment. The general deadline is no later than 15 days after the provider becomes aware of the incident, with shorter deadlines for deaths and for widespread or critical-infrastructure incidents.
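As a trivial aid for tracking the general deadline described above, a provider's incident workflow might compute the latest permissible reporting date from the awareness date. The 15-day constant reflects only the general case; the function name and structure are assumptions of this sketch.

```python
from datetime import date, timedelta

# General deadline described above; certain incident types carry shorter deadlines,
# so treat this as an upper bound, not a scheduler.
GENERAL_REPORTING_WINDOW_DAYS = 15


def latest_report_date(awareness_date: date,
                       window_days: int = GENERAL_REPORTING_WINDOW_DAYS) -> date:
    """Latest date by which the serious incident report must reach the market
    surveillance authority, counted from when the provider became aware."""
    return awareness_date + timedelta(days=window_days)


# latest_report_date(date(2025, 3, 3)) -> date(2025, 3, 18)
```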
This content is for informational purposes only and does not constitute legal advice. Consult qualified legal counsel for compliance decisions.
Automate EU-AI-ACT Compliance with FortisEU
Turn regulatory obligations into actionable controls with evidence workflows, real-time dashboards, and EU-sovereign AI assistance.