FORTISEU
EU Regulation: In Force

EU AI Act

The world's first comprehensive AI regulation

The EU AI Act (Regulation (EU) 2024/1689) is the world's first comprehensive legal framework for artificial intelligence, establishing harmonised rules for AI systems placed on the EU market. It introduces a risk-based classification system with four tiers: unacceptable, high, limited, and minimal risk, each carrying proportionate obligations.

Providers and deployers of high-risk AI systems must implement risk management systems, ensure data governance, maintain technical documentation, and enable human oversight. The Act bans certain AI practices outright, including social scoring and real-time remote biometric identification in public spaces (with narrow exceptions). Foundation models and general-purpose AI systems face transparency and systemic risk obligations. Penalties reach up to EUR 35 million or 7% of global annual turnover. The European AI Office coordinates enforcement alongside national market surveillance authorities.

FortisEU operationalises EU AI Act compliance with automated risk classification assessments, conformity documentation workflows, AI system inventory management, and cross-framework mapping to GDPR data protection and NIS2 cybersecurity requirements, all hosted on sovereign EU infrastructure.


Regulation (EU) 2024/1689 laying down harmonised rules on artificial intelligence

4 Risk Tiers: unacceptable, high, limited, and minimal risk (Chapters II-IV, Regulation (EU) 2024/1689)

€35M / 7% Max Fine: Article 99(3) allows fines of up to EUR 35,000,000 or 7% of total worldwide annual turnover for prohibited practices

8 High-Risk Categories: Annex III lists eight areas: biometrics, critical infrastructure, education, employment, essential services, law enforcement, migration, and justice

Applied From Aug 2026: under Article 113, the main provisions apply from 2 August 2026; prohibited practices from 2 February 2025
FAQ

Common Questions

What is the EU AI Act?

The EU AI Act (Regulation (EU) 2024/1689) is the world's first comprehensive legal framework for artificial intelligence, adopted by the European Parliament and Council in June 2024 and published in the Official Journal on 12 July 2024. It establishes harmonised rules for AI systems placed on the EU market using a risk-based approach with four tiers: unacceptable risk (banned outright), high risk (subject to conformity assessment and ongoing obligations), limited risk (transparency obligations), and minimal risk (voluntary codes of practice). The regulation applies to providers, deployers, importers, and distributors of AI systems, regardless of where they are established, if the AI system is placed on the EU market or its output is used in the EU.

What AI practices are banned under the EU AI Act?

Article 5 prohibits AI practices deemed to pose unacceptable risk, including: social scoring systems (by public or private actors), real-time remote biometric identification in publicly accessible spaces for law enforcement (with narrow exceptions for serious crimes, missing persons, and imminent threats), AI systems that exploit vulnerabilities of specific groups due to age, disability, or socioeconomic situation, subliminal manipulation techniques that cause significant harm, emotion recognition systems in workplaces and educational institutions, untargeted scraping of facial images from the internet or CCTV for facial recognition databases, and certain forms of predictive policing based solely on profiling. These prohibitions apply from 2 February 2025.

How does the AI Act's risk classification work?

The AI Act classifies AI systems into four risk tiers. Unacceptable risk (Chapter II, Art. 5): banned practices listed above. High risk (Chapter III, Art. 6, Annex III): AI systems in eight critical areas — biometrics, critical infrastructure, education/vocational training, employment/worker management, essential private/public services, law enforcement, migration/asylum/border control, and administration of justice — plus AI systems that are safety components of products subject to EU harmonisation legislation. Limited risk (Art. 50): systems requiring transparency, such as chatbots, emotion recognition, and AI-generated content. Minimal risk: all other AI systems, subject only to voluntary codes of conduct. Classification depends on the intended purpose and context of deployment.
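The tier logic above can be sketched as a simple decision cascade, checked from most to least restrictive. This is a minimal illustrative sketch only: the function name, category keywords, and inputs are assumptions for demonstration, not the Act's legal test, which turns on intended purpose and deployment context.

```python
# Illustrative (non-exhaustive) category sets per tier; real classification
# requires a case-by-case legal assessment.
PROHIBITED_PRACTICES = {"social_scoring", "untargeted_face_scraping"}
ANNEX_III_AREAS = {
    "biometrics", "critical_infrastructure", "education", "employment",
    "essential_services", "law_enforcement", "migration", "justice",
}
TRANSPARENCY_SYSTEMS = {"chatbot", "emotion_recognition", "ai_generated_content"}

def classify_risk_tier(practice: str, deployment_area: str, system_type: str) -> str:
    """Return the risk tier, checked from most to least restrictive."""
    if practice in PROHIBITED_PRACTICES:
        return "unacceptable"   # Art. 5: banned outright
    if deployment_area in ANNEX_III_AREAS:
        return "high"           # Art. 6 + Annex III: conformity assessment
    if system_type in TRANSPARENCY_SYSTEMS:
        return "limited"        # Art. 50: transparency obligations
    return "minimal"            # voluntary codes of conduct

# A CV-screening tool deployed in employment falls in the high-risk tier:
print(classify_risk_tier("none", "employment", "screening_tool"))  # high
```

The check order matters: a system matching a prohibited practice is banned regardless of where it would be deployed, which is why the cascade tests the most restrictive tier first.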

What are the obligations for foundation models and general-purpose AI?

General-purpose AI (GPAI) models face obligations under Articles 51-56. All GPAI model providers must: maintain up-to-date technical documentation, provide information and documentation to downstream providers, comply with EU copyright law, and publish a sufficiently detailed summary of training data content. GPAI models with systemic risk (those trained with more than 10^25 FLOPs, or designated by the AI Office) face additional obligations: perform model evaluations including adversarial testing, assess and mitigate systemic risks, track and report serious incidents, and ensure adequate cybersecurity protections. These obligations apply from 2 August 2025.
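The systemic-risk trigger described above is a compute threshold with a designation override, which can be sketched as follows. The function name and signature are illustrative assumptions; only the 10^25 FLOPs figure and the AI Office designation route come from the text above.

```python
# Presumption that a GPAI model poses systemic risk: cumulative training
# compute above 10^25 floating-point operations, or designation by the
# European AI Office.
SYSTEMIC_RISK_FLOP_THRESHOLD = 10**25

def presumed_systemic_risk(training_flops: float,
                           designated_by_ai_office: bool = False) -> bool:
    """True if the model is in scope of the systemic-risk obligations."""
    return training_flops > SYSTEMIC_RISK_FLOP_THRESHOLD or designated_by_ai_office

print(presumed_systemic_risk(5e25))   # True: above the compute threshold
print(presumed_systemic_risk(1e24))   # False: below threshold, not designated
```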

How does the EU AI Act relate to GDPR?

The EU AI Act and GDPR are complementary regulations that apply concurrently to AI systems processing personal data. AI providers must comply with GDPR principles — lawfulness, fairness, transparency, purpose limitation, data minimisation, accuracy, storage limitation, and integrity/confidentiality — when developing and deploying AI systems. GDPR Article 22 (automated individual decision-making, including profiling) provides additional safeguards for AI-driven decisions that produce legal or similarly significant effects. High-risk AI conformity assessments under the AI Act must explicitly address GDPR compliance. Data Protection Impact Assessments (DPIAs) under GDPR Article 35 are likely required for most high-risk AI deployments. The AI Act's transparency obligations complement GDPR's information requirements.

What are the penalties under the EU AI Act?

The AI Act establishes three tiers of administrative fines. For prohibited AI practices (Art. 5 violations): up to EUR 35,000,000 or 7% of total worldwide annual turnover, whichever is higher. For non-compliance with high-risk AI obligations, GPAI obligations, or other substantive requirements: up to EUR 15,000,000 or 3% of global turnover. For supplying incorrect, incomplete, or misleading information to authorities: up to EUR 7,500,000 or 1% of global turnover. SMEs and start-ups benefit from proportionate fine ceilings (the lower of the absolute amount or the percentage). Member States must lay down rules on penalties by 2 August 2025. National market surveillance authorities enforce the regulation, coordinated by the European AI Office.
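The fine arithmetic above can be made concrete in a short sketch. The tier amounts and percentages come from the text; the function and tier keys are illustrative assumptions. Note the asymmetry: the standard ceiling is the higher of the absolute amount and the turnover percentage, while for SMEs and start-ups it is the lower of the two.

```python
# Art. 99 fine ceilings: (absolute cap in EUR, share of worldwide turnover)
FINE_TIERS = {
    "prohibited_practice": (35_000_000, 0.07),  # Art. 5 violations
    "substantive_breach":  (15_000_000, 0.03),  # high-risk / GPAI obligations
    "misleading_info":     (7_500_000,  0.01),  # incorrect info to authorities
}

def max_fine(tier: str, worldwide_turnover_eur: float, is_sme: bool = False) -> float:
    """Maximum administrative fine for a given violation tier."""
    absolute, pct = FINE_TIERS[tier]
    turnover_based = worldwide_turnover_eur * pct
    # SMEs/start-ups: the LOWER of the two ceilings applies (Art. 99(6))
    return min(absolute, turnover_based) if is_sme else max(absolute, turnover_based)

# Large provider, EUR 1bn turnover, prohibited practice: 7% = EUR 70m > EUR 35m
print(max_fine("prohibited_practice", 1_000_000_000))
# SME, EUR 10m turnover, same violation: 7% = ~EUR 700k, well below EUR 35m
print(max_fine("prohibited_practice", 10_000_000, is_sme=True))
```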

Operationalise EU AI Act Compliance

Turn EU AI Act requirements into automated workflows, evidence collection, and audit-ready outputs. Create an account or schedule a personalised demo.