
EU AI Act — Frequently Asked Questions

12 min read · Updated 2026-03-12

About This FAQ

The EU AI Act (Regulation (EU) 2024/1689) is a landmark piece of legislation that introduces comprehensive rules for artificial intelligence systems in the European Union. Given the Act's complexity — 113 articles, 180 recitals, and 13 annexes — organisations face many questions about its scope, obligations, timelines, and interactions with existing EU law. This FAQ provides authoritative answers to the 15 most frequently asked questions, drawing on the Act's text, recitals, and official European Commission guidance. All answers are current as of March 2026: the Art. 5 prohibitions have applied since 2 February 2025, the GPAI obligations since 2 August 2025, and the main provisions, including the high-risk AI requirements, apply from 2 August 2026.


Frequently Asked Questions

What is the scope of the EU AI Act?

The AI Act applies to: providers placing AI systems on the EU market or putting them into service, regardless of establishment; deployers of AI systems established in the EU; providers and deployers outside the EU where the AI output is used in the EU; importers and distributors of AI systems. It excludes AI for military/defence/national security (Art. 2(3)), purely scientific R&D (Art. 2(6)), personal non-professional use, and AI systems released under free and open-source licences (unless high-risk, prohibited, or requiring transparency).

How does the AI Act define an 'AI system'?

Article 3(1) defines an AI system as 'a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.' This technology-neutral definition covers machine learning, deep learning, statistical approaches, logic-based and knowledge-based reasoning, and hybrid systems. Simple rule-based automation, traditional statistical software without adaptiveness, and basic database queries generally fall outside this definition.

How do I self-assess whether my AI system is high-risk?

Follow a structured process: (1) Confirm the system meets the Art. 3(1) AI system definition. (2) Check Art. 5 prohibited practices — if one applies, the system cannot be deployed. (3) Assess Pathway 1 (Art. 6(1)): is the system a safety component of a product under Annex I harmonisation legislation requiring third-party conformity assessment? (4) Assess Pathway 2 (Art. 6(2)): does the system's intended purpose fall into one of the eight Annex III categories (biometrics, critical infrastructure, education, employment, essential services, law enforcement, migration, justice)? (5) If Annex III applies, check the Art. 6(3) exceptions: the system performs a narrow procedural task, improves the result of a previously completed human activity, detects decision-making patterns without replacing human assessment, or performs a purely preparatory task. Note that profiling of natural persons is always high-risk. Document your classification (Art. 6(4)) and, where required, register the system in the EU database.
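
To make the sequence concrete, here is a minimal decision-tree sketch of the classification steps above. The field names and category labels are illustrative assumptions for this sketch, not terms defined by the Act.

```python
# Illustrative decision-tree for the Art. 6 classification steps above.
from dataclasses import dataclass

ANNEX_III_CATEGORIES = {
    "biometrics", "critical_infrastructure", "education", "employment",
    "essential_services", "law_enforcement", "migration", "justice",
}

@dataclass
class AISystemProfile:
    meets_art3_definition: bool        # Step 1: Art. 3(1) AI system test
    is_prohibited_practice: bool       # Step 2: Art. 5 screen
    annex_i_safety_component: bool     # Step 3: Pathway 1 (Art. 6(1))
    annex_iii_category: str | None     # Step 4: Pathway 2 (Art. 6(2))
    art6_3_exception_applies: bool     # Step 5: narrow/preparatory task etc.
    involves_profiling: bool           # profiling defeats the Art. 6(3) carve-out

def classify(p: AISystemProfile) -> str:
    if not p.meets_art3_definition:
        return "out_of_scope"              # not an AI system under Art. 3(1)
    if p.is_prohibited_practice:
        return "prohibited"                # cannot be placed on the EU market
    if p.annex_i_safety_component:
        return "high_risk"                 # Pathway 1: Annex I safety component
    if p.annex_iii_category in ANNEX_III_CATEGORIES:
        if p.art6_3_exception_applies and not p.involves_profiling:
            return "not_high_risk"         # document the assessment per Art. 6(4)
        return "high_risk"                 # Pathway 2: Annex III use case
    return "minimal_or_limited_risk"       # check Art. 50 transparency separately
```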

When did prohibited AI practices come into effect?

Article 5 prohibitions became applicable on 2 February 2025 — six months after the Act's entry into force. From this date, social scoring, real-time remote biometric identification in public spaces (with narrow exceptions), subliminal manipulation, exploitation of vulnerabilities, emotion recognition in workplaces and schools (with medical/safety exceptions), untargeted facial recognition scraping, and certain predictive policing practices are banned within the EU. The accelerated timeline reflects the view that these practices pose unacceptable risks that cannot wait for the main compliance deadline.

What does conformity assessment involve for high-risk AI?

For most high-risk AI systems, providers self-assess through the internal control procedure (Annex VI): reviewing the quality management system and verifying the technical documentation against the requirements of Articles 8-15. For biometric AI systems (Annex III, point 1), third-party conformity assessment by a notified body is mandatory unless harmonised standards covering all requirements have been applied. After a successful assessment, providers issue an EU declaration of conformity (Art. 47), affix the CE marking (Art. 48), register in the EU database (Art. 49), and maintain compliance throughout the system's lifecycle through post-market monitoring.
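
The routing between the two procedures can be sketched as a simple check; the function name and flags below are invented for the example.

```python
# Minimal routing sketch for the assessment procedures described above.
def assessment_route(biometric_annex_iii_pt1: bool,
                     harmonised_standards_fully_applied: bool) -> str:
    """Return which conformity assessment procedure applies."""
    if biometric_annex_iii_pt1 and not harmonised_standards_fully_applied:
        return "notified_body_assessment"   # third-party procedure (Annex VII)
    return "internal_control"               # provider self-assessment (Annex VI)
```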

What are the GPAI model obligations?

All GPAI model providers (from 2 August 2025) must: maintain up-to-date technical documentation per Annex XI, provide information to downstream AI system providers, comply with EU copyright law (including opt-out reservations), and publish a detailed training data content summary. GPAI models with systemic risk (>10^25 FLOPs or designated by Commission) face additional requirements: model evaluations with adversarial testing, systemic risk assessment and mitigation, serious incident reporting to the AI Office, and adequate cybersecurity protections. Compliance can be demonstrated through codes of practice (Art. 56) until harmonised standards are available.
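
The tiering amounts to a threshold check. In the sketch below, only the 10^25 FLOP presumption (Art. 51) comes from the Act; the function, variable names, and obligation labels are assumptions for illustration.

```python
# Threshold sketch of the GPAI tiering described above.
SYSTEMIC_RISK_FLOP_THRESHOLD = 1e25

def gpai_obligations(training_compute_flops: float,
                     designated_by_commission: bool = False) -> list[str]:
    obligations = [
        "technical documentation (Annex XI)",
        "information for downstream providers",
        "EU copyright-law policy (incl. opt-out reservations)",
        "public training-data content summary",
    ]
    if training_compute_flops > SYSTEMIC_RISK_FLOP_THRESHOLD or designated_by_commission:
        obligations += [
            "model evaluation with adversarial testing",
            "systemic-risk assessment and mitigation",
            "serious-incident reporting to the AI Office",
            "adequate cybersecurity protections",
        ]
    return obligations

# Example: a model trained with ~5 x 10^25 FLOPs lands in the systemic-risk tier.
print(gpai_obligations(5e25))
```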

How are open-source AI systems treated?

Free and open-source AI systems (publicly accessible weights, architecture, and usage information) receive partial exemptions. They are generally exempt from AI Act obligations unless: the system is high-risk (full Chapter III applies), the system falls under prohibited practices (Art. 5 applies), or the system triggers transparency obligations (Art. 50 applies). Open-source GPAI models are exempt from the technical documentation and downstream-information obligations (Art. 53(2)), but the copyright-policy and training-data summary obligations still apply; models classified as having systemic risk (>10^25 FLOPs) lose the exemption and face the full set of GPAI and systemic-risk obligations. This tiered approach supports open-source innovation while ensuring frontier models face appropriate oversight.
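
The same tiering can be sketched as a small decision function; the boolean inputs are simplifying assumptions, not statutory definitions.

```python
# Decision sketch of the tiered open-source treatment described above.
def open_source_duties(prohibited: bool, high_risk: bool,
                       transparency_triggered: bool,
                       is_gpai: bool, systemic_risk: bool) -> list[str]:
    if prohibited:
        return ["Art. 5 prohibition applies despite open-source release"]
    duties = []
    if high_risk:
        duties.append("full Chapter III high-risk requirements")
    if transparency_triggered:
        duties.append("Art. 50 transparency obligations")
    if is_gpai:
        if systemic_risk:
            # Exemption falls away: full Art. 53 duties plus Art. 55 obligations.
            duties += ["full Art. 53 obligations", "Art. 55 systemic-risk obligations"]
        else:
            # Art. 53(2): documentation duties waived; copyright policy and
            # training-data summary still apply.
            duties += ["copyright-law policy", "training-data content summary"]
    return duties or ["generally exempt"]
```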

What are the penalty tiers under the AI Act?

Three tiers of administrative fines apply: (1) Prohibited practices (Art. 5): up to EUR 35 million or 7% of global annual turnover, whichever is higher. (2) High-risk AI, GPAI, and other substantive violations: up to EUR 15 million or 3% of global turnover, whichever is higher. (3) Supplying incorrect or misleading information to authorities: up to EUR 7.5 million or 1% of turnover, whichever is higher. For SMEs and start-ups, the lower of the absolute amount or the percentage applies. Member States had to establish penalty rules by 2 August 2025. The Commission, acting through the AI Office, can impose fines directly on GPAI model providers (Art. 101).
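
A worked example of the fine ceilings above, assuming the tier amounts as stated; the tier keys and function name are invented for the sketch.

```python
# Worked sketch of the fine ceilings; the Art. 99(6) SME rule flips
# "whichever is higher" to "whichever is lower".
PENALTY_TIERS = {
    "prohibited_practice": (35_000_000, 0.07),    # Art. 99(3)
    "substantive_violation": (15_000_000, 0.03),  # Art. 99(4)
    "misleading_information": (7_500_000, 0.01),  # Art. 99(5)
}

def max_fine(tier: str, worldwide_turnover_eur: float, is_sme: bool = False) -> float:
    absolute_cap, turnover_pct = PENALTY_TIERS[tier]
    pct_amount = turnover_pct * worldwide_turnover_eur
    return min(absolute_cap, pct_amount) if is_sme else max(absolute_cap, pct_amount)

# EUR 2bn turnover, prohibited practice: 7% = EUR 140m ceiling.
print(max_fine("prohibited_practice", 2_000_000_000))               # 140000000.0
# SME with EUR 10m turnover: the lower amount (EUR 700k) applies.
print(max_fine("prohibited_practice", 10_000_000, is_sme=True))     # 700000.0
```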

How does the AI Act interact with GDPR?

The AI Act and GDPR apply concurrently to AI systems processing personal data. Key interactions: (1) GDPR principles (lawfulness, fairness, transparency, data minimisation) apply to AI training and deployment. (2) Art. 22 GDPR (automated decision-making) provides additional safeguards for AI decisions with legal effects. (3) Art. 35 GDPR DPIAs are likely required for most high-risk AI deployments. (4) AI Act Art. 10(5) creates a limited exception allowing special category data processing for bias detection. (5) AI Act transparency obligations complement GDPR's information requirements. (6) Enforcement coordination between data protection and AI market surveillance authorities is required.

How does the AI Act relate to NIS2?

The AI Act and NIS2 are complementary for AI systems deployed in critical infrastructure. Art. 15(5) requires high-risk AI cybersecurity resilience against data poisoning, adversarial examples, and model evasion — aligning with NIS2 Art. 21 cybersecurity risk management measures. Organisations in NIS2-covered sectors (energy, transport, health, digital infrastructure) deploying high-risk AI must satisfy both regimes: AI-specific conformity requirements and broader network/information security obligations. NIS2's 24-hour incident reporting also applies to significant cybersecurity incidents affecting AI systems.

What are national market surveillance authorities' roles?

Each Member State must designate one or more market surveillance authorities responsible for AI Act enforcement. These authorities can: access AI systems for inspection and testing; request access to source code, training data, and technical documentation; order corrective measures (modification, withdrawal, recall); impose administrative fines within the penalty framework; conduct ex officio investigations or respond to complaints. The market surveillance framework builds on Regulation (EU) 2019/1020. For high-risk AI in products, the existing product safety market surveillance infrastructure applies.

What are AI regulatory sandboxes?

Article 57 requires Member States to establish AI regulatory sandboxes — controlled environments set up by national competent authorities for developing, testing, and validating innovative AI systems for a limited time before market placement, under regulatory supervision. Sandboxes give AI providers a structured framework to engage with regulators, receive guidance on compliance, and test their systems before market placement. Priority access is given to SMEs, start-ups, and micro-enterprises. Sandbox participation does not exempt systems from AI Act obligations but provides a supervised environment for iterative compliance development.

What is the European AI Office?

The European AI Office, established within the European Commission under Article 64, is the central EU body for AI Act coordination. Its responsibilities include: supervising GPAI model providers (exclusive competence); coordinating cross-border enforcement; developing guidance, codes of practice, and implementing acts; managing the EU database for high-risk AI systems; supporting the European AI Board; monitoring AI market developments; and engaging with international counterparts. The AI Office has direct supervisory and sanctioning power over GPAI model providers with systemic risk — a unique enforcement mechanism in EU regulatory architecture.

How does the AI Act apply to non-EU providers?

The AI Act has extraterritorial reach under Article 2. Non-EU providers are subject if they: place AI systems on the EU market, put them into service in the EU, or produce output used within the EU. For high-risk AI systems, non-EU providers must appoint an authorised representative established in the EU before market placement (Art. 22). The authorised representative is mandated in writing by the provider to verify that the conformity assessment has been carried out, keep the technical documentation and EU declaration of conformity at the disposal of authorities, and cooperate with market surveillance authorities. Failure to appoint a representative means the system cannot legally be placed on the EU market.

What steps should organisations take now to prepare for compliance?

Organisations should follow a structured preparation plan: (1) AI system inventory — catalogue all AI systems developed, deployed, or procured, including vendor AI. (2) Risk classification — assess each system against Art. 5 prohibitions, Art. 6 high-risk pathways, and Art. 50 transparency triggers. (3) Gap analysis — compare current practices against applicable requirements (risk management, data governance, documentation, human oversight, accuracy, robustness, cybersecurity). (4) Governance framework — establish AI governance roles, policies, and accountability structures. (5) Documentation — begin preparing technical documentation per Annex IV (high-risk) or Annex XI (GPAI). (6) Training — ensure staff involved in AI development, deployment, and oversight understand their obligations (Art. 4 AI literacy). (7) Vendor management — review contracts with AI providers for compliance obligations and conformity documentation. (8) Cross-framework integration — align AI Act compliance with existing GDPR, NIS2, and sector-specific obligations.
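
As a starting point for steps (1) to (3), a minimal inventory-and-gap-analysis record might look like the following sketch; the schema and field names are assumptions, not a prescribed format.

```python
# Minimal inventory-and-gap-analysis record supporting steps (1)-(3) above.
from dataclasses import dataclass, field

@dataclass
class InventoryEntry:
    name: str
    owner: str                                     # accountable team or role
    source: str                                    # "built", "procured", "vendor-embedded"
    risk_class: str = "unclassified"               # prohibited/high/limited/minimal
    transparency_triggered: bool = False           # Art. 50 screen
    gaps: list[str] = field(default_factory=list)  # open items from gap analysis

def gap_analysis(entry: InventoryEntry, required: set[str],
                 in_place: set[str]) -> InventoryEntry:
    """Record which applicable requirements are not yet met."""
    entry.gaps = sorted(required - in_place)
    return entry

# Example: a procured CV-screening tool (Annex III employment category).
entry = InventoryEntry("cv-screener", "HR Ops", "procured", risk_class="high")
required = {"risk management", "data governance", "human oversight", "logging"}
print(gap_analysis(entry, required, {"logging"}).gaps)
# -> ['data governance', 'human oversight', 'risk management']
```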

This content is for informational purposes only and does not constitute legal advice. Consult qualified legal counsel for compliance decisions.

Automate EU-AI-ACT Compliance with FortisEU

Turn regulatory obligations into actionable controls with evidence workflows, real-time dashboards, and EU-sovereign AI assistance.