EU AI Act Compliance Checklist
Structured checklist for EU AI Act compliance covering the risk classification system, prohibited practices, high-risk AI requirements, transparency obligations, and phased implementation timelines.
1. The AI Act uses a four-tier risk classification (prohibited, high-risk, limited-risk, minimal-risk), and the classification determines all downstream compliance obligations.
2. Prohibited AI practices (Article 5) have been in force since 2 February 2025; organisations must have already audited and discontinued any prohibited systems.
3. High-risk AI system requirements (Chapter III) cover the full lifecycle: risk management, data governance, documentation, transparency, human oversight, and cybersecurity.
4. GPAI model requirements have been in force since 2 August 2025; organisations using third-party foundation models must verify provider compliance.
5. Full application on 2 August 2026 requires conformity assessments, fundamental rights impact assessments, and complete technical documentation for high-risk systems.
Risk Classification System: Four Tiers
The EU AI Act (Regulation 2024/1689) organises its regulatory requirements around a risk-based classification system with four tiers: unacceptable risk, high risk, limited risk, and minimal risk. This tiered approach is designed to ensure that the most invasive or dangerous AI applications face the strictest requirements, while low-risk applications remain largely unregulated. Understanding which tier your AI system falls into is the critical first step in any compliance programme.
Unacceptable risk AI systems are banned outright. These are the practices listed in Article 5, which include social scoring that leads to detrimental or unfavourable treatment, real-time remote biometric identification in publicly accessible spaces for law enforcement purposes (with narrow exceptions), exploitation of vulnerabilities of specific groups, and emotion recognition in workplaces and educational institutions. The prohibitions on these practices applied from 2 February 2025, the first provisions of the AI Act to take effect.
High-risk AI systems are subject to extensive requirements but are permitted provided they meet the compliance obligations set out in Chapter III. These include AI systems used in critical infrastructure, education, employment, essential services, law enforcement, migration, and the administration of justice. They also include AI systems that serve as safety components of products covered by existing EU product safety legislation. Limited-risk systems are subject primarily to transparency obligations — informing users that they are interacting with AI. Minimal-risk systems are subject to no mandatory requirements, though voluntary codes of practice are encouraged.
The classification assessment must be conducted for each AI system the organisation deploys, develops, or provides. A single organisation may have systems falling across multiple tiers. The assessment should be documented and should include the reasoning for the classification, the relevant articles and annexes referenced, and any borderline determinations that were resolved. Regulators will scrutinise classification decisions, particularly for systems that were classified below high-risk where the use case might suggest otherwise.
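One lightweight way to keep classification decisions auditable is to hold each assessment as a structured record in an AI system register. The sketch below is purely illustrative; the field names and tier labels are our own shorthand, not terms defined by the regulation.

```python
from dataclasses import dataclass
from datetime import date
from enum import Enum

class RiskTier(Enum):
    PROHIBITED = "unacceptable risk"
    HIGH = "high risk"
    LIMITED = "limited risk"
    MINIMAL = "minimal risk"

@dataclass
class ClassificationRecord:
    """One documented classification decision for one AI system."""
    system_name: str
    tier: RiskTier
    legal_basis: list[str]   # e.g. ["Article 6(2)", "Annex III point 4(a)"]
    reasoning: str           # why this tier, including borderline determinations
    assessed_on: date
    assessed_by: str
    review_due: date         # reassess when the system or its use context changes

record = ClassificationRecord(
    system_name="CV screening assistant",
    tier=RiskTier.HIGH,
    legal_basis=["Article 6(2)", "Annex III point 4(a) (employment)"],
    reasoning="Materially influences recruitment decisions about natural persons.",
    assessed_on=date(2025, 3, 1),
    assessed_by="Legal and AI governance team",
    review_due=date(2026, 3, 1),
)
```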
The prohibitions on unacceptable risk AI practices (Article 5) have been in effect since 2 February 2025. Organisations that have not yet audited their AI systems for prohibited practices are already operating outside the law.
Prohibited AI Practices (Article 5)
Article 5 of the EU AI Act lists the AI practices that are considered to pose an unacceptable risk and are therefore prohibited. These prohibitions are absolute — there is no compliance pathway that permits these practices. They took effect on 2 February 2025, six months after the regulation's entry into force on 1 August 2024.
The prohibited practices include:
- AI systems that deploy subliminal, manipulative, or deceptive techniques to materially distort a person's behaviour in a way that causes or is likely to cause significant harm;
- AI systems that exploit vulnerabilities of specific groups of persons due to their age, disability, or social or economic situation;
- biometric categorisation systems that categorise individuals based on sensitive characteristics such as race, political opinions, or sexual orientation;
- social scoring systems that evaluate or classify persons based on social behaviour or personal characteristics, leading to detrimental or unfavourable treatment;
- risk assessments that predict the likelihood of a person committing a criminal offence based solely on profiling or personality traits;
- real-time remote biometric identification in publicly accessible spaces for law enforcement (with limited exceptions for specific serious crimes);
- AI systems that infer emotions in workplaces or educational institutions (with narrow exceptions for safety or medical purposes); and
- the creation of facial recognition databases through untargeted scraping of images from the internet or CCTV footage.
Organisations should conduct an immediate audit of all AI systems currently in use to verify that none fall within the prohibited categories. This audit should cover not just systems developed in-house, but also third-party AI tools and services used by employees. SaaS applications with embedded AI features, HR technology platforms, and marketing automation tools should all be assessed. The audit results and classification reasoning must be documented and should be reviewed by legal counsel with AI Act expertise.
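A simple way to run that audit systematically is to screen an inventory of AI tools against the Article 5 categories. In the sketch below, the category tags are a rough paraphrase of the prohibited practices and the inventory format is an assumption; a flagged system calls for legal review, not an automatic conclusion.

```python
# Screen an AI tool inventory against simplified Article 5 category tags.
# The tags paraphrase the prohibited practices; they are not legal definitions.
PROHIBITED_TAGS = {
    "subliminal_manipulation",
    "vulnerability_exploitation",
    "sensitive_biometric_categorisation",
    "social_scoring",
    "realtime_remote_biometric_id",
    "emotion_recognition_work_or_education",
    "untargeted_facial_scraping",
}

inventory = [
    {"name": "HR sentiment dashboard", "capabilities": {"emotion_recognition_work_or_education"}},
    {"name": "Support chatbot", "capabilities": {"text_generation"}},
]

for system in inventory:
    hits = system["capabilities"] & PROHIBITED_TAGS
    if hits:
        print(f"FLAG {system['name']}: review under Article 5 ({', '.join(sorted(hits))})")
```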
The exceptions to the biometric identification prohibition are very narrow and apply only to law enforcement authorities in specific circumstances (searching for missing persons, preventing an imminent threat to life, locating suspects of specific serious crimes). These exceptions require prior authorisation by a judicial authority or an independent administrative authority and are subject to additional safeguards. Private sector organisations should treat the biometric identification prohibition as absolute for their purposes.
High-Risk AI System Requirements (Chapter III)
Chapter III of the AI Act establishes the compliance requirements for high-risk AI systems. These requirements apply to both providers (developers) and deployers (users) of high-risk AI systems, though the obligations differ in scope and nature. For providers, the requirements are comprehensive and include risk management, data governance, technical documentation, record-keeping, transparency, human oversight, and accuracy, robustness, and cybersecurity standards.
The risk management system under Article 9 must be established, implemented, documented, and maintained throughout the entire lifecycle of the high-risk AI system. It must include identification and analysis of known and reasonably foreseeable risks, estimation and evaluation of the risks that may emerge when the system is used in accordance with its intended purpose, adoption of appropriate risk management measures, and testing procedures to ensure that the risk management measures are effective. This is not a one-time assessment — it must be a living process that adapts as the system evolves and as the deployment context changes.
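As an illustration of what a living risk management process might look like in practice, the sketch below models individual risks with a periodic review. The 1-5 scales and the review threshold are our assumptions, not values taken from Article 9.

```python
from dataclasses import dataclass

@dataclass
class Risk:
    description: str   # known or reasonably foreseeable risk
    severity: int      # illustrative 1-5 scale (not prescribed by the Act)
    likelihood: int    # illustrative 1-5 scale
    mitigation: str    # risk management measure adopted
    tested_ok: bool    # did testing show the measure is effective?

def needs_rework(risks: list[Risk]) -> list[Risk]:
    """Return risks to revisit; rerun on every system update or context change."""
    return [r for r in risks if not r.tested_ok or r.severity * r.likelihood >= 12]
```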
Data governance under Article 10 requires that training, validation, and testing datasets are subject to appropriate data governance practices. These practices must cover the design choices for the datasets, data collection processes, data preparation operations (annotation, labelling, cleaning, enrichment), the formulation of relevant assumptions about the information the data is supposed to measure, an assessment of the availability, quantity, and suitability of the datasets, examination of possible biases, and identification of any data gaps that could affect compliance. For organisations using pre-trained foundation models, demonstrating data governance compliance for the training data can be particularly challenging.
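A datasheet-style record per dataset can help evidence these practices. The sketch below loosely mirrors the Article 10 practice areas; the field names are ours, not the regulation's.

```python
from dataclasses import dataclass

@dataclass
class DatasetGovernanceRecord:
    """Datasheet-style record loosely mirroring the Article 10 practice areas."""
    dataset_name: str
    design_choices: str           # why this dataset, for which intended purpose
    collection_process: str       # sources and how the data was collected
    preparation_steps: list[str]  # annotation, labelling, cleaning, enrichment
    measurement_assumptions: str  # what the data is assumed to measure
    suitability_assessment: str   # availability, quantity, and suitability
    bias_examination: str         # possible biases examined, and findings
    known_gaps: list[str]         # data gaps that could affect compliance
```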
Technical documentation under Article 11 must be drawn up before the system is placed on the market or put into service and must be kept up-to-date. The documentation must contain, at minimum, a general description of the system, a detailed description of the development process, detailed information about monitoring, functioning, and control, a description of the risk management system, and a description of the changes made throughout the lifecycle. Annex IV of the regulation provides the exhaustive list of required documentation elements.
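One pragmatic way to keep that documentation current is an automated completeness check over the documentation set. The section names below paraphrase the elements listed in this paragraph, and the one-file-per-section layout is an assumption; Annex IV remains the authoritative list.

```python
from pathlib import Path

# Paraphrased documentation elements; Annex IV is the authoritative list.
REQUIRED_SECTIONS = {
    "general_description.md",
    "development_process.md",
    "monitoring_functioning_control.md",
    "risk_management_system.md",
    "lifecycle_changes.md",
}

def missing_sections(doc_dir: str) -> set[str]:
    """Report which required documentation files are absent from doc_dir."""
    present = {p.name for p in Path(doc_dir).glob("*.md")}
    return REQUIRED_SECTIONS - present
```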
Transparency Obligations
The AI Act imposes transparency obligations at multiple levels, affecting providers, deployers, and in some cases, individuals interacting with AI systems. These obligations are not limited to high-risk systems — certain transparency requirements apply to AI systems across all risk tiers. The transparency framework is designed to ensure that people know when they are interacting with AI and can make informed decisions accordingly.
Article 50 establishes the general transparency obligations. Providers of AI systems that interact directly with natural persons (such as chatbots) must ensure that the system is designed and developed in a way that the person is informed they are interacting with an AI system. Providers of AI systems that generate synthetic audio, image, video, or text content must ensure the outputs are marked in a machine-readable format and are detectable as artificially generated or manipulated. Deployers of emotion recognition or biometric categorisation systems (where permitted) must inform the persons exposed.
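The Act requires that synthetic outputs be detectable in a machine-readable format but does not prescribe one; the sketch below pairs generated text with an ad hoc JSON disclosure purely to illustrate the idea. In practice, emerging provenance standards such as content credentials are likelier candidates than a custom format like this.

```python
import json
from datetime import datetime, timezone

def with_disclosure(generated_text: str, model_id: str) -> tuple[str, str]:
    """Pair generated content with a machine-readable 'AI-generated' record.

    Illustrative only: the field names and format are assumptions, not a
    format specified by Article 50.
    """
    disclosure = {
        "artificially_generated": True,
        "generator": model_id,
        "generated_at": datetime.now(timezone.utc).isoformat(),
    }
    return generated_text, json.dumps(disclosure)

text, marker = with_disclosure("Draft product description ...", "example-model-v1")
```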
For high-risk AI systems, Article 13 requires that the system is designed and developed to ensure a sufficient level of transparency to enable deployers to interpret the system's output and use it appropriately. This includes instructions for use that contain the provider's identity, the system's characteristics, capabilities, and limitations of performance, the intended purpose, the level of accuracy and known risks, and any circumstances that could affect performance. The documentation must be concise, complete, correct, and accessible to deployers — not buried in technical white papers.
Deployers of high-risk AI systems have their own transparency obligations under Article 26. When making decisions that affect natural persons using a high-risk AI system, the deployer must inform the affected person that they are subject to such a system. For AI systems used in the public sector for risk assessments or decision-making (e.g., social benefit eligibility, creditworthiness), the transparency requirements are particularly important. The deployer must be able to explain, in human-understandable terms, the role of the AI system in the decision-making process and the main factors and logic involved.
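As a purely hypothetical illustration of that explainability duty, a deployer might template the notice given to affected persons along these lines. The wording and fields are assumptions, not language from Article 26.

```python
def decision_notice(system_role: str, main_factors: list[str]) -> str:
    """Draft a plain-language notice for a person affected by a high-risk system.

    Hypothetical template: the Act requires that deployers can explain the
    system's role and the main factors involved, not that they use this wording.
    """
    factors = "; ".join(main_factors)
    return (
        "An AI system was used to support this decision. "
        f"Its role: {system_role}. Main factors considered: {factors}."
    )

print(decision_notice(
    "ranked the application against eligibility criteria",
    ["income level", "household size", "prior benefit history"],
))
```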
General-Purpose AI Models (Chapter V)
Chapter V of the AI Act introduces requirements for General-Purpose AI (GPAI) models — large foundation models capable of performing a wide range of tasks. These requirements apply to providers of GPAI models and became applicable from 2 August 2025. The GPAI provisions are layered: all GPAI models must meet baseline transparency requirements, and GPAI models with systemic risk must meet additional obligations.
The baseline requirements for all GPAI model providers under Article 53 include: drawing up and keeping up-to-date technical documentation of the model, drawing up and keeping up-to-date information and documentation to be made available to downstream providers who integrate the GPAI model into their AI systems, putting in place a policy to comply with Union copyright law (including the text and data mining opt-out under Article 4 of the Copyright Directive), and publishing a sufficiently detailed summary of the training data according to a template provided by the AI Office.
GPAI models classified as presenting systemic risk under Article 51 face additional obligations. A GPAI model is presumed to have systemic risk if it was trained using a total computing power of more than 10^25 FLOPs, or if the Commission designates it as such based on criteria including the number of registered end users, the degree of multi-modality, and benchmarks and evaluations of capabilities. Providers of systemic risk GPAI models must, in addition to baseline requirements: perform model evaluations including adversarial testing, assess and mitigate systemic risks, track and report serious incidents, and ensure an adequate level of cybersecurity protection for the model.
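To see what the 10^25 FLOP presumption means in practice, training compute for dense transformer models is often approximated with the community rule of thumb of roughly 6 x parameters x training tokens. The heuristic and the example figures below are our assumptions, not part of the Act.

```python
# Rough training-compute estimate using the common ~6 * N * D heuristic for
# dense transformers (N = parameters, D = training tokens). This heuristic is
# a community rule of thumb, not something the AI Act prescribes.
SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25  # Article 51 presumption

def approx_training_flops(n_params: float, n_tokens: float) -> float:
    return 6.0 * n_params * n_tokens

# Hypothetical model: 70e9 parameters trained on 15e12 tokens -> ~6.3e24 FLOPs
flops = approx_training_flops(70e9, 15e12)
print(f"{flops:.2e} FLOPs, systemic-risk presumption met: {flops >= SYSTEMIC_RISK_THRESHOLD_FLOPS}")
```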
Organisations that deploy GPAI models developed by third parties (e.g., using APIs from foundation model providers) should ensure that their provider has complied with the GPAI obligations. Contractual provisions should require the GPAI provider to share relevant technical documentation and to notify the deployer of material changes to the model. Due diligence on GPAI provider compliance is becoming an essential part of AI procurement processes.
GPAI model requirements under Chapter V became applicable on 2 August 2025. If your organisation uses third-party foundation models, verify with your provider that they have met their Article 53 obligations, including the training data summary and copyright compliance policy.
Implementation Timeline: Phased Application
The EU AI Act follows a phased application timeline, with different provisions becoming enforceable at different dates. This phased approach is designed to give organisations time to adapt while ensuring that the most urgent prohibitions take effect quickly. Understanding the timeline is essential for prioritising compliance efforts.
The first phase took effect on 2 February 2025, six months after the regulation entered into force. This phase activated the prohibitions on unacceptable risk AI practices (Article 5) and the general provisions on AI literacy (Article 4). From this date, all prohibited AI practices must have been discontinued, and organisations must ensure that their staff operating or interacting with AI systems have sufficient AI literacy relevant to their role.
The second phase took effect on 2 August 2025, twelve months after entry into force. This activated the GPAI model requirements (Chapter V), the governance and penalties framework, and the requirements for notified bodies. Providers of GPAI models must now comply with the transparency, documentation, and copyright-related obligations. For models with systemic risk, the additional evaluation and mitigation requirements are also in force.
The third and final phase — full application of the regulation — takes effect on 2 August 2026. This is when the high-risk AI system requirements (Chapter III), the deployer obligations, the conformity assessment procedures, and the full enforcement framework become applicable. Organisations developing or deploying high-risk AI systems must have their risk management systems, data governance frameworks, technical documentation, human oversight mechanisms, and conformity assessments in place by this date. An exception applies to high-risk AI systems used as safety components of products subject to existing EU product safety legislation — these must comply by 2 August 2027.
The timeline makes clear that compliance is not a single-deadline exercise. Organisations should have already addressed prohibited practices, should now be focusing on GPAI and AI literacy compliance, and must use the remaining months before August 2026 to prepare high-risk AI system compliance programmes. Waiting until the final deadline creates significant implementation risk.
Compliance Checklist: Action Items by Phase
This section provides a structured checklist of compliance actions organised by the phased timeline. Organisations should use this as a starting framework and adapt it to their specific AI portfolio and risk profile.
Phase 1 (effective 2 February 2025; immediate action required):
- Conduct a comprehensive audit of all AI systems in use, including third-party services, to identify any prohibited practices under Article 5.
- Discontinue any AI systems deploying social scoring, subliminal manipulation, vulnerability exploitation, or prohibited biometric identification.
- Implement an AI literacy programme for staff who develop, deploy, operate, or interact with AI systems.
- Document all classification decisions and the reasoning behind them.

Phase 2 (effective 2 August 2025; now in force):
- If you are a provider of GPAI models, prepare technical documentation including a training data summary.
- Implement a copyright compliance policy including mechanisms to honour text and data mining opt-outs.
- If your GPAI model is classified as systemic risk, conduct model evaluations, adversarial testing, and systemic risk assessments.
- If you deploy third-party GPAI models, verify your provider's compliance and obtain relevant documentation.

Phase 3 (effective 2 August 2026; preparation required now):
- For each high-risk AI system, implement a risk management system covering the full lifecycle.
- Establish data governance practices for training, validation, and testing datasets.
- Prepare technical documentation meeting Annex IV requirements.
- Implement logging and record-keeping capabilities (a minimal sketch follows this checklist).
- Design and test human oversight mechanisms.
- Conduct accuracy, robustness, and cybersecurity assessments.
- Establish post-market monitoring processes.
- Complete conformity assessments and prepare EU declarations of conformity.
- For deployers: implement fundamental rights impact assessments (Article 27), establish user notification processes, and ensure human oversight of automated decisions.

Ongoing:
- Maintain an AI system register with current risk classifications.
- Review classifications when systems are updated or use contexts change.
- Monitor regulatory guidance from the European AI Office, national authorities, and standards bodies.
- Track emerging harmonised standards under standardisation requests to CEN and CENELEC.
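To make the Phase 3 logging item and the ongoing register concrete, here is a minimal sketch of an append-only JSON-lines event log for a high-risk system. The schema and event names are assumptions; the Act mandates automatic logging for high-risk systems but not this format.

```python
import json
from datetime import datetime, timezone

def log_event(log_path: str, system_name: str, event: str, detail: dict) -> None:
    """Append one record to a JSON-lines event log (illustrative schema)."""
    record = {
        "ts": datetime.now(timezone.utc).isoformat(),
        "system": system_name,
        "event": event,   # e.g. "inference", "human_override", "malfunction"
        "detail": detail,
    }
    with open(log_path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")

log_event("audit.jsonl", "CV screening assistant", "human_override",
          {"by": "hiring manager", "reason": "manual review of borderline score"})
```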
Start your high-risk AI compliance programme now, even though full application is not until August 2026. Risk management systems, data governance frameworks, and technical documentation take months to develop and test. The conformity assessment process itself requires lead time with notified bodies.
When does the EU AI Act fully apply?
The EU AI Act has a phased application. Prohibited practices (Article 5) and AI literacy requirements applied from 2 February 2025. GPAI model requirements applied from 2 August 2025. Full application — including all high-risk AI system requirements, deployer obligations, and conformity assessments — takes effect on 2 August 2026. High-risk AI systems used as safety components in products covered by existing product safety legislation have until 2 August 2027.
How do I determine if my AI system is high-risk?
Check Annex III of the regulation, which lists the areas where AI systems are considered high-risk (critical infrastructure, education, employment, essential services, law enforcement, etc.). Also check whether your AI system serves as a safety component of a product covered by EU product safety legislation listed in Annex I. If your system falls within these areas and makes or materially influences decisions affecting natural persons, it is likely high-risk. Document your classification assessment with supporting reasoning.
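The screening logic in this answer can be sketched as a first-pass filter. The function below is deliberately simplified (it ignores, for example, the Article 6(3) derogation), and a True result means "document a full classification assessment", not a legal conclusion.

```python
def likely_high_risk(in_annex_iii_area: bool,
                     materially_influences_decisions: bool,
                     safety_component_annex_i: bool) -> bool:
    """Simplified first-pass screen; real classification needs legal analysis."""
    if safety_component_annex_i:
        return True
    return in_annex_iii_area and materially_influences_decisions
```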
Does the AI Act apply to AI systems we use but did not develop?
Yes. The AI Act creates obligations for both providers (developers) and deployers (users) of AI systems. As a deployer, you must use high-risk AI systems in accordance with their instructions for use, ensure human oversight, monitor the system's operation, report malfunctions, and conduct fundamental rights impact assessments where required (Article 27). You are also responsible for the transparency obligation of informing affected persons that they are subject to a high-risk AI system.
What is AI literacy and who needs it?
Article 4 requires providers and deployers to ensure that their staff and other persons dealing with AI systems on their behalf have a sufficient level of AI literacy, taking into account the technical knowledge, experience, education, and training of those persons and the context in which the AI systems are used. This is not limited to technical staff — it covers anyone who operates, monitors, interprets, or makes decisions based on AI systems. The obligation has been in force since 2 February 2025.
How does the EU AI Act interact with GDPR?
The AI Act and GDPR are complementary, not mutually exclusive. AI systems processing personal data must comply with GDPR in full, in addition to any AI Act requirements. The AI Act's data governance requirements (Article 10) are additive to GDPR's data protection principles. Fundamental rights impact assessments under Article 27 of the AI Act should incorporate data protection impact assessments (DPIAs) required under GDPR Article 35. The same AI system may trigger obligations under both regulations simultaneously.
Ready to Operationalise This?
Turn this guide into working compliance workflows. Create an account or schedule a personalised demo.