What is the EU AI Act? The World's First Comprehensive AI Regulation
Legislative History and Background
The European Commission published its proposal for a Regulation laying down harmonised rules on artificial intelligence on 21 April 2021, as part of the broader European approach to AI excellence and trust outlined in the 2018 Coordinated Plan on AI and the 2020 White Paper on AI. The proposal was the culmination of years of preparatory work, including the High-Level Expert Group on AI's Ethics Guidelines for Trustworthy AI (April 2019) and the subsequent Assessment List for Trustworthy AI (ALTAI).
The legislative journey was one of the most complex in EU regulatory history. The European Parliament adopted its negotiating position on 14 June 2023, introducing significant amendments including provisions on general-purpose AI models and foundation models that were not in the Commission's original proposal. Trilogue negotiations between the Parliament, Council, and Commission proved particularly challenging, with intense debates over real-time biometric identification, law enforcement exemptions, foundation model obligations, and open-source treatment. Political agreement was reached on 8 December 2023 after a marathon 36-hour negotiating session.
The European Parliament formally adopted the Regulation on 13 March 2024 with an overwhelming majority of 523 votes in favour, 46 against, and 49 abstentions. The Council gave final approval on 21 May 2024. Regulation (EU) 2024/1689 was published in the Official Journal of the European Union on 12 July 2024 and entered into force on 1 August 2024. The Act establishes the world's first comprehensive, legally binding framework for artificial intelligence, setting a potential global benchmark akin to the 'Brussels effect' observed with the GDPR.
The AI Act is complemented by the proposed AI Liability Directive (September 2022) and the revised Product Liability Directive (Directive (EU) 2024/2853, adopted in 2024), creating a comprehensive legal ecosystem for AI governance. Together, these instruments address preventive regulation (AI Act), fault-based civil liability (AI Liability Directive), and strict product liability (Product Liability Directive).
Purpose of the Regulation
Establishes the goal of improving the functioning of the internal market by laying down uniform rules for AI systems.
Entry into Force and Application
The Regulation entered into force on the twentieth day following its publication in the Official Journal, on 1 August 2024, and applies in phases beginning 2 February 2025.
The Risk-Based Approach
The defining architectural principle of the EU AI Act is its risk-based approach to regulation. Rather than applying uniform obligations to all AI systems regardless of their potential impact, the Act calibrates regulatory requirements according to the level of risk an AI system poses to health, safety, and fundamental rights. This approach was inspired by the EU's existing product safety framework (the New Legislative Framework) and reflects the proportionality principle enshrined in Article 5 TEU.
The Act establishes four tiers of risk. At the highest level, certain AI practices are deemed to pose an unacceptable risk and are prohibited outright under Article 5. These include social scoring (by public and private actors alike), exploitation of vulnerabilities, subliminal manipulation, and most forms of real-time remote biometric identification in publicly accessible spaces. Below this, high-risk AI systems — defined in Article 6 and Annex III — face comprehensive obligations including risk management, data governance, technical documentation, transparency, human oversight, and accuracy and robustness requirements (Articles 8-15). The third tier covers limited-risk AI systems that require specific transparency obligations under Article 50, such as chatbot disclosure and AI-generated content labelling. Finally, minimal-risk AI systems — the vast majority of AI applications — face no mandatory obligations, though voluntary codes of conduct are encouraged under Article 95.
This graduated approach was designed to balance innovation with protection. Recital 26 emphasises that the classification should focus on the intended purpose and context of use, not on the underlying technology. The same machine learning model could fall into different risk categories depending on its application: a model used to recommend films (minimal risk) versus the same model architecture used for criminal recidivism prediction (high risk). The European Commission is empowered under Article 7 to update the list of high-risk use cases through delegated acts, ensuring the framework can adapt to evolving AI capabilities and risks.
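To make the tiering concrete, here is a minimal sketch of the four-level logic as a Python decision function. The tier names, example purposes, and the classify helper are illustrative assumptions of this sketch, not the Act's own lists; real classification turns on legal analysis of Article 6 and Annex III, not keyword matching.

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "prohibited outright (Article 5)"
    HIGH = "comprehensive obligations (Articles 8-15)"
    LIMITED = "transparency obligations (Article 50)"
    MINIMAL = "voluntary codes of conduct (Article 95)"

# Illustrative example purposes only; not the Act's actual enumerations.
PROHIBITED = {"social scoring", "subliminal manipulation"}
ANNEX_III = {"recidivism prediction", "credit scoring", "recruitment screening"}
TRANSPARENCY = {"chatbot", "synthetic media generation"}

def classify(intended_purpose: str) -> RiskTier:
    """Assign a tier by intended purpose, checked in order of severity."""
    if intended_purpose in PROHIBITED:
        return RiskTier.UNACCEPTABLE
    if intended_purpose in ANNEX_III:
        return RiskTier.HIGH
    if intended_purpose in TRANSPARENCY:
        return RiskTier.LIMITED
    return RiskTier.MINIMAL

# The same model architecture lands in different tiers depending on purpose:
assert classify("recidivism prediction") is RiskTier.HIGH
assert classify("film recommendation") is RiskTier.MINIMAL
```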
Classification of High-Risk AI Systems
Establishes the two-pathway classification: Annex I product safety legislation (Art. 6(1)) and Annex III standalone high-risk areas (Art. 6(2)).
Amendments to Annex III
Empowers the Commission to amend the high-risk use case list through delegated acts, subject to criteria in Art. 7(2).
ISO/IEC 42001 requires organisations to identify AI-related risks and determine risk treatment. The AI Act's risk classification provides a regulatory floor that ISO/IEC 42001 implementations should incorporate.
The NIST AI Risk Management Framework's 'Map' function aligns with the AI Act's risk classification requirement: both call for contextual risk identification based on intended use and deployment context.
Scope and Key Definitions
The EU AI Act has deliberately broad scope. Under Article 2, it applies to providers who place AI systems on the EU market or put them into service in the EU, regardless of whether those providers are established within the Union or in a third country. It also applies to deployers of AI systems established in the EU, and to providers and deployers located outside the EU where the output produced by the AI system is used in the Union. This extraterritorial reach mirrors the GDPR's approach and ensures that AI systems affecting EU citizens are subject to the Regulation regardless of origin.
Article 3 provides critical definitions that underpin the entire regulatory framework. An 'AI system' is defined as a machine-based system designed to operate with varying levels of autonomy, that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers from the input it receives how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments. This definition was deliberately crafted to be technology-neutral and future-proof, covering machine learning, deep learning, logic-based approaches, statistical methods, and hybrid systems.
The Act distinguishes between several key roles. A 'provider' is any natural or legal person that develops an AI system or GPAI model, or that has an AI system or GPAI model developed, and places it on the market or puts it into service under its own name or trademark. A 'deployer' is any natural or legal person using an AI system under its authority, except where the AI system is used in the course of a personal non-professional activity. An 'importer' brings AI systems from third countries into the EU market, while a 'distributor' makes AI systems available on the EU market without being a provider or importer. Each role carries distinct obligations, with providers bearing the heaviest regulatory burden.
Notably, the Act excludes AI systems used exclusively for military, defence, or national security purposes (Article 2(3)), AI systems and models developed and used solely for scientific research and development (Article 2(6)), and individuals using AI systems for purely personal, non-professional activities. Free and open-source AI systems are also exempt under Article 2(12), unless they are placed on the market or put into service as high-risk systems, fall under the prohibited practices of Article 5, or trigger the transparency obligations of Article 50; open-source GPAI models are subject to the separate regime in Chapter V, discussed below.
Scope
Defines territorial and material scope, including extraterritorial application to non-EU providers and deployers.
AI System Definition
Defines 'AI system' as a machine-based system with varying autonomy that infers from input to generate outputs influencing environments.
Provider Definition
Defines 'provider' as any person developing or commissioning an AI system and placing it on the market under their own name.
Deployer Definition
Defines 'deployer' as any person using an AI system under their authority, excluding personal non-professional use.
The AI Act's extraterritorial scope means that non-EU AI providers whose systems produce outputs used in the EU are subject to the Regulation. This mirrors the GDPR's approach and captures US-based and Asian AI companies serving EU markets.
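As a rough illustration of that scope test, the sketch below encodes the three Article 2 triggers as booleans. The ScopeFacts type and field names are our own shorthand, and the function deliberately ignores the exclusions (military, research, personal use) noted above.

```python
from dataclasses import dataclass

@dataclass
class ScopeFacts:
    places_on_eu_market: bool        # provider places the system on the EU market
    deployer_established_in_eu: bool
    output_used_in_eu: bool          # the system's output is used in the Union

def ai_act_applies(facts: ScopeFacts) -> bool:
    """Simplified Article 2 applicability test: any one trigger suffices."""
    return (facts.places_on_eu_market
            or facts.deployer_established_in_eu
            or facts.output_used_in_eu)

# A US provider with no EU establishment whose system's output is used
# in the Union is still in scope:
assert ai_act_applies(ScopeFacts(False, False, True))
```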
Phased Implementation Timeline
The EU AI Act adopts a phased implementation approach, recognising that different provisions require different lead times for compliance. Article 113 establishes the following timeline, calculated from the entry into force date of 1 August 2024.
6 months (2 February 2025): Prohibitions on unacceptable-risk AI practices under Article 5 become applicable. From this date, social scoring systems, real-time remote biometric identification (with exceptions), subliminal manipulation, and exploitation of vulnerabilities are banned. AI literacy obligations under Article 4 also apply.
12 months (2 August 2025): Obligations for general-purpose AI (GPAI) model providers under Chapter V become applicable, including technical documentation, copyright compliance, and training data summaries. GPAI models with systemic risk face additional obligations including model evaluations and adversarial testing. Rules on notified bodies and governance structure also apply.
24 months (2 August 2026): The main body of the Regulation applies, including all high-risk AI system obligations (risk management, data governance, technical documentation, human oversight, accuracy, robustness), conformity assessment procedures, deployer obligations, transparency requirements for limited-risk systems, market surveillance provisions, and the penalty regime. This is the primary compliance deadline for most organisations.
36 months (2 August 2027): High-risk obligations apply to AI systems that are safety components of products covered by EU harmonisation legislation listed in Annex I Section A (including machinery, toys, medical devices, civil aviation, motor vehicles, and marine equipment). These products already have existing conformity assessment regimes that need to be adapted to incorporate AI Act requirements.
Organisations should note that the phased timeline means compliance work must begin immediately. The prohibited practices are already in force, GPAI obligations apply from August 2025, and the comprehensive high-risk framework — which requires risk management systems, data governance procedures, technical documentation, and conformity assessments — demands substantial preparation well before the August 2026 deadline.
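The milestone dates can be reproduced from the entry-into-force date. Below is a minimal sketch, assuming the day-after-anniversary convention that makes Article 113's fixed dates line up with 1 August 2024; the helper name is ours.

```python
from datetime import date, timedelta

ENTRY_INTO_FORCE = date(2024, 8, 1)  # Regulation (EU) 2024/1689

def applies_from(months: int) -> date:
    """Application date: the day after the month anniversary of entry into
    force (e.g. 1 Aug 2024 + 6 months = 1 Feb 2025, applicable from 2 Feb)."""
    total = ENTRY_INTO_FORCE.month - 1 + months
    anniversary = ENTRY_INTO_FORCE.replace(
        year=ENTRY_INTO_FORCE.year + total // 12,
        month=total % 12 + 1,
    )
    return anniversary + timedelta(days=1)

milestones = {
    6: "Prohibited practices (Art. 5) and AI literacy (Art. 4)",
    12: "GPAI model obligations (Chapter V), notified bodies, governance",
    24: "Main provisions: high-risk rules, transparency, penalties",
    36: "High-risk rules for Annex I Section A products",
}
for m, what in milestones.items():
    print(f"{applies_from(m)} (+{m} months): {what}")
# 2025-02-02, 2025-08-02, 2026-08-02, 2027-08-02
```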
Entry into Force and Application
Establishes the phased application: 6 months for prohibitions, 12 months for GPAI, 24 months for main provisions, 36 months for Annex I products.
Prohibited AI practices (Article 5) have been in force since 2 February 2025. Organisations still deploying social scoring, real-time remote biometric identification (outside narrow exceptions), subliminal manipulation, or vulnerability-exploiting AI systems must cease these practices or face penalties of up to EUR 35 million or 7% of global annual turnover, whichever is higher.
Enforcement and Governance Architecture
The AI Act establishes a multi-layered governance architecture combining EU-level coordination with national enforcement. At the EU level, the European AI Office — established within the European Commission — plays a central role in coordinating enforcement, developing guidance, and overseeing GPAI model compliance. The AI Office has exclusive competence for supervising GPAI model providers and can directly investigate and sanction violations of GPAI obligations.
The European Artificial Intelligence Board, composed of representatives from all Member States, provides a forum for coordination and advisory opinions. It supports the consistent application of the Regulation across the EU, issues recommendations and written opinions, and contributes to the development of harmonised standards, codes of practice, and guidance. An advisory forum of stakeholders — including industry, SMEs, start-ups, civil society, and academia — provides technical expertise to the Board and Commission.
At the national level, each Member State must designate one or more national competent authorities and a market surveillance authority responsible for AI Act enforcement. These authorities have powers to access AI systems, request documentation, conduct inspections, issue corrective measures, and impose administrative fines. For high-risk AI systems, enforcement builds on the existing market surveillance framework established by Regulation (EU) 2019/1020.
The penalty regime under Article 99 establishes three tiers of administrative fines. Violations of prohibited practices (Article 5) attract fines of up to EUR 35 million or 7% of global annual turnover, whichever is higher. Non-compliance with high-risk obligations, GPAI requirements, or other substantive provisions triggers fines of up to EUR 15 million or 3% of turnover. Supplying incorrect or misleading information to authorities can result in fines of up to EUR 7.5 million or 1% of turnover. For SMEs and start-ups, the lower of the absolute amount or the percentage applies, providing proportionate treatment. The Act also mandates that Member States establish AI regulatory sandboxes — controlled environments for testing innovative AI systems under regulatory supervision — to support innovation while ensuring compliance.
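To illustrate how the two ceilings interact with turnover, here is a minimal sketch; the tier labels and the fine_ceiling helper are our own naming, and actual fines are set case by case within these maxima.

```python
# (fixed ceiling in EUR, percentage of worldwide annual turnover)
FINE_TIERS = {
    "prohibited_practices": (35_000_000, 7),    # Article 5 violations
    "other_violations": (15_000_000, 3),        # high-risk, GPAI, etc.
    "misleading_information": (7_500_000, 1),   # incorrect info to authorities
}

def fine_ceiling(tier: str, turnover_eur: float, is_sme: bool = False) -> float:
    """Maximum fine under Article 99: the higher of the two amounts in
    general, but the lower for SMEs and start-ups (Article 99(6))."""
    fixed, percent = FINE_TIERS[tier]
    turnover_based = turnover_eur * percent / 100
    return min(fixed, turnover_based) if is_sme else max(fixed, turnover_based)

# Large provider, EUR 2bn turnover, prohibited practice: 7% exceeds EUR 35M.
assert fine_ceiling("prohibited_practices", 2_000_000_000) == 140_000_000
# SME, EUR 10m turnover, misleading information: 1% (EUR 100k) is the cap.
assert fine_ceiling("misleading_information", 10_000_000, is_sme=True) == 100_000
```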
European AI Office
Establishes the AI Office within the Commission to coordinate enforcement, develop guidance, and supervise GPAI model providers.
European Artificial Intelligence Board
Creates the AI Board composed of Member State representatives to advise and coordinate consistent application.
Administrative Fines
Establishes three tiers: up to 7%/EUR 35M for prohibited practices, 3%/EUR 15M for other violations, 1%/EUR 7.5M for misinformation.
The AI Act's enforcement architecture parallels GDPR's supervisory authority model. Where AI systems process personal data, both AI market surveillance authorities and data protection authorities may have jurisdiction.
AI systems in critical infrastructure sectors may face parallel oversight from AI market surveillance authorities and NIS2 competent authorities, requiring coordinated enforcement.
Frequently Asked Questions
When does the EU AI Act apply?
The AI Act applies in phases. Prohibited practices (Article 5) have been in force since 2 February 2025. GPAI model obligations apply from 2 August 2025. The main provisions — including high-risk AI system requirements, deployer obligations, transparency requirements, and the penalty regime — apply from 2 August 2026. High-risk obligations for AI in products covered by Annex I Section A (machinery, medical devices, etc.) apply from 2 August 2027.
Does the AI Act apply to non-EU companies?
Yes. The AI Act has extraterritorial scope under Article 2. It applies to providers placing AI systems on the EU market or putting them into service in the EU, regardless of establishment. It also applies to deployers established in the EU, and to providers and deployers located outside the EU where the AI system's output is used within the Union. Non-EU providers must designate an authorised representative in the EU for high-risk AI systems.
What is the definition of an AI system under the Act?
Article 3(1) defines an AI system as 'a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.' This technology-neutral definition covers machine learning, deep learning, and statistical approaches.
Are open-source AI systems exempt from the AI Act?
Partially. Article 2(12) provides that free and open-source AI systems are generally exempt from AI Act obligations, except when they are: (a) placed on the market or put into service as high-risk AI systems, (b) AI systems that fall under prohibited practices (Article 5), or (c) AI systems subject to transparency obligations (Article 50). Additionally, open-source GPAI models are exempt from certain documentation obligations under Article 53(2), although the copyright policy and training data summary requirements still apply. The exemption does not extend to models classified as having systemic risk, which is presumed where cumulative training compute exceeds 10^25 FLOPs (Article 51(2)).
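A minimal sketch of that exemption logic follows; the function and flag names are ours, and the 10^25 FLOPs figure is the Article 51(2) presumption threshold, which the Commission may adjust over time.

```python
SYSTEMIC_RISK_FLOPS = 1e25  # Article 51(2) presumption threshold

def open_source_doc_exemption(open_source_licence: bool,
                              cumulative_training_flops: float) -> bool:
    """Whether a GPAI provider can rely on the open-source exemption from
    certain documentation duties: it needs a qualifying free licence and
    no presumed systemic risk. The copyright-policy and training-data
    summary obligations apply regardless of the licence."""
    presumed_systemic = cumulative_training_flops > SYSTEMIC_RISK_FLOPS
    return open_source_licence and not presumed_systemic

assert open_source_doc_exemption(True, 1e24) is True
assert open_source_doc_exemption(True, 3e25) is False  # systemic risk presumed
```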
How does FortisEU help with EU AI Act compliance?
FortisEU provides an integrated AI Act compliance platform featuring automated risk classification assessment to determine whether your AI systems are high-risk, prohibited, or limited-risk; conformity documentation workflows aligned to Articles 8-15; AI system inventory management with lifecycle tracking; cross-framework mapping to GDPR data protection and NIS2 cybersecurity requirements; GPAI model documentation templates; and audit-ready evidence collection — all hosted on sovereign EU infrastructure with EU-based AI assistance for gap analysis.
This content is for informational purposes only and does not constitute legal advice. Consult qualified legal counsel for compliance decisions.
Automate EU AI Act Compliance with FortisEU
Turn regulatory obligations into actionable controls with evidence workflows, real-time dashboards, and EU-sovereign AI assistance.