AI Governance · 12 August 2025 · 9 min read · Attila Bognar

AI Act GPAI Rules Since August 2, 2025: How to Operationalize Compliance Without Chaos

The EU AI Act's general-purpose AI obligations took effect on August 2, 2025. This guide shows how security and compliance teams can turn legal text into an auditable operating model.

EU AI Act · GPAI · AI governance · Compliance operations

August 2, 2025 was not a soft milestone. On that date, the general-purpose AI provisions of the EU AI Act (Regulation 2024/1689) became enforceable. The obligations under Articles 53 through 55 — covering transparency, technical documentation, copyright compliance, and systemic risk management — shifted from future requirements to present-tense law. Organizations still framing their AI Act programs as "preparatory" are already behind in the only dimension that matters: operational execution.

Legal interpretation got most organizations through 2024. It will not get them through an audit in 2025. The gap between understanding what the GPAI rules require and being able to demonstrate compliance under pressure is where most programs are failing right now. And that gap is an operational design problem, not a legal knowledge problem.

What Changed on August 2, 2025

The EU AI Act entered into force on August 1, 2024, but its obligations are phased: the prohibitions on certain AI practices became applicable in February 2025, and the August 2, 2025 milestone activated the GPAI-specific provisions in Chapter V, ahead of the high-risk AI system requirements that follow in August 2026.

For GPAI providers — meaning organizations that develop or place general-purpose AI models on the EU market — Article 53 imposes four core obligations that are now legally binding:

Technical documentation (Art. 53(1)(a)): Providers must draw up and maintain technical documentation of the model, including its training and testing process, and make it available to the AI Office and national competent authorities on request. This is not a one-time deliverable. The documentation must be kept up to date as models evolve.

Downstream transparency (Art. 53(1)(b)): Providers must make information and documentation available to downstream providers — the companies integrating GPAI models into their own AI systems — so those companies can meet their own AI Act obligations. This creates a supply chain transparency requirement that flows through every layer of the AI value chain.

Copyright compliance policy (Art. 53(1)(c)): Providers must implement a policy to comply with EU copyright law, particularly regarding text and data mining under Directive 2019/790. This includes respecting opt-out reservations expressed by rights holders.

Training data summary (Art. 53(1)(d)): Providers must publish a sufficiently detailed summary of the training data used, following a template provided by the AI Office. This summary must be publicly available.

For organizations that consume GPAI models rather than develop them, these provider obligations create indirect but significant operational requirements. You need to know what your providers owe you, verify they are delivering it, and build your own compliance processes on that foundation.
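
To make that verification task concrete, here is a minimal sketch of how a deployer might track receipt of each Art. 53 deliverable per provider. The structure and field names are illustrative assumptions, not an official schema:

```python
from dataclasses import dataclass
from datetime import date

# Illustrative checklist: one record per GPAI provider and model, tracking
# whether each Art. 53 deliverable has been received. Field names are
# assumptions for this sketch, not an official schema.
@dataclass
class Art53Checklist:
    provider: str
    model: str
    technical_docs_received: bool = False         # Art. 53(1)(a)
    downstream_info_received: bool = False        # Art. 53(1)(b)
    copyright_policy_confirmed: bool = False      # Art. 53(1)(c)
    training_data_summary_url: str | None = None  # Art. 53(1)(d), public
    last_reviewed: date | None = None

    def gaps(self) -> list[str]:
        """Return the deliverables still outstanding for this provider."""
        missing = []
        if not self.technical_docs_received:
            missing.append("technical documentation (Art. 53(1)(a))")
        if not self.downstream_info_received:
            missing.append("downstream information (Art. 53(1)(b))")
        if not self.copyright_policy_confirmed:
            missing.append("copyright compliance policy (Art. 53(1)(c))")
        if self.training_data_summary_url is None:
            missing.append("training data summary (Art. 53(1)(d))")
        return missing

# Hypothetical provider and model names.
checklist = Art53Checklist(provider="ExampleAI", model="example-model-v2")
checklist.downstream_info_received = True
print(checklist.gaps())  # three deliverables still outstanding
```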

The Systemic Risk Threshold

Article 55 adds a heavier obligation tier for GPAI models classified as posing systemic risk. The classification trigger is clear: any model trained with cumulative compute exceeding 10^25 floating-point operations (FLOPs) is presumed to pose systemic risk. The Commission can also designate additional models based on other criteria, including capability assessments.
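
As a back-of-the-envelope illustration, the presumption reduces to a single comparison against cumulative training compute; the Commission's designation power is a separate, qualitative path that no threshold check captures. The figures below are hypothetical:

```python
# Art. 51(2) presumption: cumulative training compute above 10^25 FLOPs
# means the model is presumed to pose systemic risk. This check does not
# capture the Commission's separate power to designate models on other
# criteria.
SYSTEMIC_RISK_FLOP_THRESHOLD = 1e25

def presumed_systemic_risk(training_flops: float) -> bool:
    return training_flops > SYSTEMIC_RISK_FLOP_THRESHOLD

# Hypothetical compute figures, for illustration only.
print(presumed_systemic_risk(5e24))  # False: below the presumption threshold
print(presumed_systemic_risk(2e25))  # True: presumed systemic risk
```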

As of August 2025, this threshold captures the largest foundation models in the market. For providers of these models, Art. 55 requires:

  • Performing and documenting model evaluations, including adversarial testing
  • Assessing and mitigating possible systemic risks
  • Tracking, documenting, and reporting serious incidents to the AI Office
  • Ensuring adequate cybersecurity protections for the model and its infrastructure
  • Documenting known or estimated energy consumption (formally an Annex XI technical documentation item under Art. 53 rather than an Art. 55 duty, though systemic risk providers must keep it current)

For enterprise deployers, the systemic risk classification of a model you consume changes your own risk profile. If your operations depend on a model classified under Art. 55, your vendor governance, business continuity planning, and incident response processes must account for the heightened risk posture that classification implies.

Why Most Programs Stall

The pattern is consistent across industries. AI Act programs stall because they are structured as policy projects rather than operational programs. Teams build governance frameworks on slides, circulate responsibility matrices in SharePoint, and debate risk classification taxonomies in committee — while core operational questions remain unanswered:

Which AI systems are in scope right now? Not "which might be in scope after further analysis" — which are definitively in scope today and need active compliance measures?

Who owns each system end-to-end? Not who chairs the AI ethics board, but who is personally accountable for ensuring each in-scope system has current documentation, functioning controls, and audit-ready evidence?

What evidence proves controls are active, not aspirational? Can you produce, today, the information and documentation your GPAI providers are required to give you under Art. 53(1)(b)? Can you demonstrate you have reviewed them?

Without concrete answers to these questions, compliance becomes a quarterly scramble that consumes disproportionate resources and produces unreliable outputs.

Operationalizing GPAI Compliance

The winning approach is straightforward and uncomfortable: run AI governance like change management for critical infrastructure. Not as an innovation initiative. Not as a legal project. As operational discipline with the same rigor you apply to production systems.

This requires five concrete capabilities:

AI system inventory with accountable owners. Every GPAI model in your environment — directly integrated, embedded in vendor products, or accessed through partner channels — must be catalogued with a named owner, a risk classification, and a documented business purpose (a minimal register sketch follows this list). FortisEU's compliance automation platform supports this with structured AI system registers that connect to your broader control framework.

Risk classification with documented rationale. The AI Act establishes risk categories with specific criteria. Your classification decisions need to be recorded, justified, and reviewable. When an auditor asks why you classified a particular AI system as limited risk rather than high risk, "the team discussed it" is not an adequate answer. You need documented analysis against the criteria in Annex III and Art. 6.

Control mapping with evidence standards. Each obligation creates control requirements. Each control needs a defined evidence standard — what artifact proves the control is operating effectively? — and an evidence collection process that generates those artifacts as a byproduct of normal operations, not as an audit preparation exercise.

Vendor governance with escalation rights. For organizations consuming GPAI models, vendor governance under the AI Act intersects with existing third-party risk management obligations under NIS2 and DORA. Your vendor agreements must include provisions for Art. 53(1)(b) information sharing, model change notification, and incident escalation. Your vendor risk management process must verify these provisions are being honored.

Recurring executive review with hard decisions. AI governance is not a set-and-forget program. Models change. Use cases evolve. Risk profiles shift. Executive review cadences — quarterly at minimum, monthly for high-risk deployments — must include real decision points: continue, modify, or retire AI systems based on current risk and compliance posture.
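
To make the first two capabilities concrete, here is a minimal, tool-agnostic sketch of a register entry with an append-only classification history. Every name in it is an illustrative assumption, not a prescribed data model:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Illustrative classification record: the rationale travels with the
# decision, so "why was this limited risk?" is answerable years later.
@dataclass
class ClassificationDecision:
    risk_tier: str   # e.g. "minimal", "limited", "high"
    rationale: str   # documented analysis against Annex III / Art. 6 criteria
    decided_by: str
    decided_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc))

# Illustrative register entry: each field answers an auditor's question
# ("who owns this?", "what is it for?", "which model version?").
@dataclass
class AISystemRecord:
    system_id: str
    owner: str       # the named, personally accountable owner
    business_purpose: str
    provider: str
    model_version: str
    decisions: list[ClassificationDecision] = field(default_factory=list)

    @property
    def current_tier(self) -> str | None:
        # Decisions are append-only: the latest entry is authoritative and
        # the full history remains available as audit evidence.
        return self.decisions[-1].risk_tier if self.decisions else None

# Hypothetical entry.
record = AISystemRecord(
    system_id="ai-042",
    owner="jane.doe@example.com",
    business_purpose="Customer support summarization",
    provider="ExampleAI",
    model_version="example-model-v2",
)
record.decisions.append(ClassificationDecision(
    risk_tier="limited",
    rationale="No Annex III use case; Art. 50 transparency duties apply.",
    decided_by="jane.doe@example.com",
))
print(record.current_tier)  # "limited"
```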

The Model Card and Documentation Challenge

One of the most practically difficult Art. 53 requirements is the technical documentation obligation. For providers, producing and maintaining model cards that satisfy the AI Office's template requirements demands coordination between research teams, engineering, legal, and compliance. For deployers, the challenge is different but equally real: you must consume, review, and act on the documentation your providers produce.

Most organizations have no established process for reviewing model documentation. They have procurement processes that evaluate vendor contracts. They have security review processes that assess technical risk. But the specific task of evaluating whether a GPAI provider's training data summary is "sufficiently detailed" per Art. 53(1)(d) — and whether the information provided under Art. 53(1)(b) is adequate for your own compliance needs — is a new operational capability that most teams need to build.

Practical steps include: designating a review owner (typically within the CISO or compliance officer function), establishing minimum documentation requirements in your vendor onboarding process, creating a structured review template that maps provider documentation against your own obligations, and building a tracking mechanism that flags when documentation becomes stale.
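
The tracking mechanism in particular lends itself to simple automation. A minimal sketch, assuming you record when each provider document was last reviewed; the 180-day window is an arbitrary example, not a regulatory figure:

```python
from datetime import date, timedelta

# Illustrative staleness check: flag provider documentation that has not
# been re-reviewed within a chosen window. The window length and the record
# structure are assumptions for this sketch.
REVIEW_WINDOW = timedelta(days=180)

provider_docs = [
    {"provider": "ExampleAI", "document": "Art. 53(1)(b) information pack",
     "last_reviewed": date(2025, 7, 15)},
    {"provider": "OtherVendor", "document": "training data summary",
     "last_reviewed": date(2024, 11, 3)},
]

def stale_documents(docs, today):
    """Return the documents whose last review falls outside the window."""
    return [d for d in docs if today - d["last_reviewed"] > REVIEW_WINDOW]

for doc in stale_documents(provider_docs, today=date(2025, 8, 12)):
    print(f"STALE: {doc['provider']} / {doc['document']} "
          f"(last reviewed {doc['last_reviewed']})")
```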

The Copyright Compliance Obligation

Art. 53(1)(c) receives less attention than the transparency or systemic risk provisions, but it creates meaningful operational exposure. GPAI providers must implement a policy to comply with EU copyright law, including Directive 2019/790 on copyright in the digital single market.

For deployers, the risk is indirect but real. If a GPAI model you integrate was trained on data that infringes copyright, and if rights holders pursue claims, your organization may face downstream liability depending on how you use the model's outputs. Your vendor agreements should include representations about copyright compliance, indemnification provisions, and the right to receive updates on any copyright-related disputes or policy changes affecting the models you consume.

FortisEU's regulatory intelligence module tracks developments in AI copyright enforcement, including national implementation variations across EU member states, so your compliance team can stay current without manual monitoring.

Intersection with NIS2 and DORA

For organizations subject to NIS2 or DORA, GPAI compliance does not exist in isolation. NIS2 Art. 21(2)(d) requires supply chain security measures that encompass AI supply chains. DORA Art. 28 imposes ICT third-party risk management requirements that apply to GPAI providers serving financial entities.

The operational implication: your AI governance program should be integrated with, not separate from, your broader ICT risk management framework. AI system inventories should feed into your NIS2 asset registers. GPAI provider assessments should be conducted within your DORA third-party risk management process. Incident response procedures for AI-related events should follow your established incident management workflows.

Running parallel governance programs for AI Act, NIS2, and DORA compliance is a resource drain that produces inconsistent results. Integrated governance — where a single control can satisfy overlapping requirements across multiple regulations — is both more efficient and more defensible.
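
As a small illustration of the single-control idea, one operational control can be mapped to clauses in all three regulations, so one piece of evidence serves three audits. The clause pairings below echo this article and are not a complete crosswalk:

```python
# Illustrative control-to-requirement mapping. Control IDs and clause
# selections are assumptions for this sketch, not authoritative mappings.
control_map = {
    "CTRL-AI-01: Review GPAI provider documentation at onboarding": [
        ("AI Act", "Art. 53(1)(b) downstream information"),
        ("NIS2", "Art. 21(2)(d) supply chain security"),
        ("DORA", "Art. 28 ICT third-party risk management"),
    ],
}

def coverage(framework: str) -> list[str]:
    """List the controls contributing evidence toward one framework."""
    return [ctrl for ctrl, reqs in control_map.items()
            if any(fw == framework for fw, _ in reqs)]

# The same control appears in each framework's coverage view.
print(coverage("AI Act"))
print(coverage("DORA"))
```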

Evidence Architecture for Audit Readiness

When the AI Office or a national competent authority requests evidence of GPAI compliance, they will not accept a governance framework document. They will want specific artifacts: your AI system register, the model documentation you received from providers, your risk classification decisions with rationale, evidence of executive review, and records of any incidents or non-conformities.

Building an evidence architecture that produces these artifacts continuously requires:

  • Structured data models for AI system metadata (owner, purpose, risk tier, provider, model version)
  • Automated collection of provider documentation updates
  • Timestamped records of classification decisions and reviews
  • Incident logs that distinguish AI-specific events from general ICT incidents
  • Executive dashboards that demonstrate board-level oversight of AI governance and deployer obligations

This architecture should be designed once and operated continuously, not rebuilt every time an audit request arrives.
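
A minimal sketch of what that looks like in practice: responding to an evidence request becomes a serialization of records the program already maintains, not a reconstruction exercise. The bundle structure and field names are assumptions:

```python
import json
from datetime import datetime, timezone

# Illustrative evidence export: assemble the artifacts an authority is
# likely to request into one timestamped bundle. In a real program these
# would be read from the systems of record, not hard-coded.
def build_evidence_bundle(register, decisions, incidents):
    return {
        "generated_at": datetime.now(timezone.utc).isoformat(),
        "ai_system_register": register,
        "classification_decisions": decisions,
        "ai_incident_log": incidents,
    }

bundle = build_evidence_bundle(
    register=[{"system_id": "ai-042", "owner": "jane.doe@example.com",
               "risk_tier": "limited", "provider": "ExampleAI"}],
    decisions=[{"system_id": "ai-042", "tier": "limited",
                "rationale": "No Annex III use case",
                "decided_at": "2025-08-05"}],
    incidents=[],  # an empty log is itself evidence the process exists
)
print(json.dumps(bundle, indent=2))
```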

Key Takeaways

  • GPAI obligations are live, not future. Art. 53 transparency, documentation, and copyright compliance requirements became enforceable on August 2, 2025. Organizations that are still "preparing" are non-compliant.

  • Operationalize, do not just interpret. The gap between legal understanding and operational compliance is where programs fail. Build AI governance as an operating model with inventories, owners, evidence standards, and review cadences.

  • The 10^25 FLOPs threshold matters for deployers too. If you consume systemic risk models, your vendor governance and business continuity planning must account for the heightened obligation tier under Art. 55.

  • Integrate AI governance with existing frameworks. For NIS2- and DORA-regulated entities, AI Act compliance should be embedded within existing ICT risk management programs, not operated as a standalone initiative.

  • Copyright compliance is a real obligation, not a footnote. Art. 53(1)(c) creates supply chain risk that deployers must address through contractual provisions and ongoing monitoring.

Next Step

Turn guidance into evidence.

If procurement is involved, start with the Trust Center. If you want to see the product, create an account or launch a live demo.