FORTISEU
AI Governance · 8 January 2026 · 12 min read · Attila Bognar

EU AI Act Governance: The Control Model You Actually Need

The EU AI Act is not just a legal framework — it is an operating model problem. Map Art. 9-15 high-risk requirements to practical controls covering risk management, data governance, documentation, transparency, human oversight, and robustness.

Tags: AI Act · AI governance · Risk management · High-risk AI · Compliance controls · EU regulation

Most AI governance programs start with ambition and end with paperwork. The EU AI Act requires more than policies filed and risk categories assigned. It requires an operational control model where every AI system has an owner, every risk classification has documented rationale, and every high-risk system is governed through a lifecycle that extends from design through deployment to decommissioning. Building that model is an engineering and governance challenge, not a legal compliance exercise. Organizations that approach it as the latter will produce documentation that satisfies no one when scrutiny arrives.

Why the AI Act Is an Operating Model Problem

The AI Act (Regulation (EU) 2024/1689) entered into force on August 1, 2024, with obligations phasing in through 2027. The prohibited practices under Article 5 have applied since February 2, 2025. The GPAI model provisions (Articles 51-56) apply from August 2, 2025. The bulk of the high-risk AI system obligations apply from August 2, 2026, with high-risk systems that are safety components of products covered by Annex I harmonisation legislation following on August 2, 2027.

What makes the AI Act distinctive among EU regulations is that it does not merely require organizations to implement security or privacy controls around technology they already operate. It requires organizations to build governance into the AI system lifecycle itself. The controls are not external guardrails added after the system is designed. They are architectural requirements that shape how AI systems are built, validated, deployed, monitored, and retired.

This means compliance cannot be achieved by a legal team writing policies and a compliance team tracking attestations. It requires engineering teams to build governance capabilities into their AI development and deployment processes, risk teams to maintain classification and assessment expertise that is AI-specific, and operations teams to implement monitoring and intervention capabilities that the regulation explicitly mandates.

The control model below translates Articles 9 through 15 of the AI Act (the core high-risk AI system requirements) into practical governance controls that bridge the gap between legal obligation and operational reality.

Article 9: Risk Management System

Article 9 requires providers of high-risk AI systems to establish, implement, document, and maintain a risk management system throughout the AI system's lifecycle. This is not a one-time risk assessment. It is a continuous process that must identify and analyze known and reasonably foreseeable risks, estimate and evaluate risks that may emerge during use, and adopt appropriate risk management measures.

Practical Controls

AI System Inventory. Before you can manage risk, you need to know what you have. Build and maintain a canonical inventory of all AI systems in use, in development, and in procurement evaluation. Each entry should include: system name, business purpose, provider (internal or external), risk classification, data inputs, decision scope, affected persons, and named owner.

The inventory is the foundation. Without it, every subsequent control operates on assumptions. Most organizations that claim to have an AI inventory actually have a project tracker that captures systems in active development but misses embedded AI components in SaaS tools, vendor-provided analytics, and legacy systems with ML-based features.
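The inventory fields above can be made concrete as a small schema. The following is an illustrative sketch, not a prescribed format: the field names, the `RiskClass` enum values, and the `is_governed` rule are assumptions layered on the fields the text lists.

```python
from dataclasses import dataclass
from enum import Enum

class RiskClass(Enum):
    PROHIBITED = "prohibited"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"
    UNCLASSIFIED = "unclassified"

@dataclass
class AISystemRecord:
    """One entry in the canonical AI system inventory."""
    name: str
    business_purpose: str
    provider: str            # internal team or external vendor
    risk_class: RiskClass
    data_inputs: list
    decision_scope: str
    affected_persons: str
    owner: str               # a named individual, not a team alias

    def is_governed(self) -> bool:
        # A record is only actionable with an owner and a classification.
        return bool(self.owner) and self.risk_class is not RiskClass.UNCLASSIFIED

record = AISystemRecord(
    name="credit-scoring-v3",
    business_purpose="Consumer credit decisioning",
    provider="internal",
    risk_class=RiskClass.HIGH,
    data_inputs=["application form", "bureau data"],
    decision_scope="loan approval recommendations",
    affected_persons="loan applicants (EU)",
    owner="jane.doe@example.com",
)
```

A schema like this makes inventory completeness checkable: any record that fails `is_governed()` is a gap, whether it came from a project tracker, a SaaS review, or a legacy system sweep.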

Risk Classification Discipline. Article 6 and Annex III define the high-risk categories. But applying these categories to real AI deployments requires interpretation, and interpretation requires documented rationale. For every AI system in your inventory, record the risk classification decision, the reasoning behind it, the Annex III category it falls under (or why it does not), and who made the decision.

The risk classification is not static. When a system's use case expands, when the population of affected persons changes, or when the deployment context shifts, the classification must be re-evaluated. Build this re-evaluation into your change management process.

Continuous Risk Assessment. Implement a periodic risk review cycle for all high-risk AI systems. Quarterly reviews should examine: changes in the system's performance metrics, new failure modes identified in testing or production, changes in the population of affected persons, and emerging risks identified through incident monitoring or external intelligence. The output of each review should be a documented risk posture update with explicit accept/mitigate/escalate decisions.

Article 10: Data and Data Governance

Article 10 requires that training, validation, and testing datasets for high-risk AI systems be subject to appropriate data governance and management practices. These practices must address training methodologies, data collection processes, data preparation operations, bias examination, and gap identification.

Practical Controls

Data Lineage Documentation. For every high-risk AI system, maintain documentation of the data sources used for training, validation, and testing. This includes the origin of the data, the collection methodology, the date ranges covered, the preprocessing and transformation steps applied, and any filtering or sampling that was performed.

Data lineage is not just a compliance artifact. It is an operational necessity for debugging model behavior, investigating bias concerns, and responding to supervisory inquiries about how the system was built.
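One way to keep lineage records tamper-evident is to content-address them: hash the canonical record so any later edit to the documented sources or steps is detectable. The record fields below mirror the ones listed above; the digest approach itself is an assumption, one of several reasonable designs.

```python
import hashlib
import json

def lineage_entry(source: str, methodology: str,
                  date_range: str, steps: list) -> dict:
    """Build a content-addressed data lineage record."""
    record = {
        "source": source,
        "collection_methodology": methodology,
        "date_range": date_range,
        "preprocessing_steps": steps,
    }
    # Canonical JSON (sorted keys) so the digest is reproducible.
    canonical = json.dumps(record, sort_keys=True).encode()
    record["digest"] = hashlib.sha256(canonical).hexdigest()
    return record

entry = lineage_entry(
    source="credit bureau batch export",
    methodology="monthly full extract",
    date_range="2023-01 to 2023-12",
    steps=["deduplicate", "drop records with missing income"],
)
```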

Bias Detection and Mitigation. Article 10(2)(f) specifically requires examination of data in view of possible biases that are likely to affect health, safety, or fundamental rights. This requires more than running a fairness metrics toolkit on your test set. It requires domain-specific analysis of how the data represents (or fails to represent) the populations the system will affect.

Implement bias assessment as a gate in your AI development process. No high-risk AI system should move from development to production without documented bias analysis and mitigation measures. The analysis should be repeated when training data is updated or when the deployment context changes.
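As one concrete (and deliberately simple) example of such a gate, the sketch below computes the demographic parity gap, the spread in positive-outcome rates across groups, and fails the gate when it exceeds a threshold. This is one metric among many, and the 0.1 threshold is a policy placeholder, not a legal standard; real bias analysis remains domain-specific, as the text stresses.

```python
def demographic_parity_gap(outcomes: dict) -> float:
    """outcomes maps group name -> (positive_count, total_count).
    Returns the max difference in positive-outcome rate across groups."""
    rates = [pos / total for pos, total in outcomes.values()]
    return max(rates) - min(rates)

def bias_gate(outcomes: dict, threshold: float = 0.1) -> bool:
    """True if the system may proceed; threshold is a policy choice."""
    return demographic_parity_gap(outcomes) <= threshold
```

Wired into CI, a failing `bias_gate` blocks promotion to production until the analysis and mitigation are documented, which is the behavior the control describes.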

Data Quality Monitoring. Build continuous monitoring for data quality in production. Training data quality matters at system design time, but production input data quality matters throughout the system's lifecycle. Data drift, missing values, distribution shifts, and upstream data source changes can all degrade system performance in ways that the original testing did not anticipate.
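A common drift measure for production monitoring is the population stability index (PSI) over binned feature distributions; a widely used rule of thumb treats PSI above 0.25 as significant shift, though that cutoff is convention, not regulation. A minimal stdlib sketch:

```python
import math

def population_stability_index(expected: list, actual: list,
                               eps: float = 1e-6) -> float:
    """PSI between two pre-binned count distributions.
    expected: bin counts at training/validation time.
    actual:   bin counts observed in production."""
    e_total, a_total = sum(expected), sum(actual)
    psi = 0.0
    for e, a in zip(expected, actual):
        e_pct = max(e / e_total, eps)   # eps guards against empty bins
        a_pct = max(a / a_total, eps)
        psi += (a_pct - e_pct) * math.log(a_pct / e_pct)
    return psi
```

Run per feature on a schedule; a PSI breach is exactly the kind of distribution shift the original testing did not anticipate and should feed the Article 9 risk review.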

Article 11: Technical Documentation

Article 11 requires that technical documentation be drawn up before a high-risk AI system is placed on the market or put into service, and kept up to date throughout the system's lifecycle. The documentation requirements are specified in Annex IV and are substantial.

Practical Controls

Documentation-as-Code. Treat AI technical documentation as a versioned artifact that lives alongside the system's code, not as a separate document maintained by a compliance team. When the system changes, the documentation should change in the same development cycle. This is the same principle that drives documentation-as-delivery in software engineering: documentation that drifts from reality is worse than no documentation because it creates false confidence.

Annex IV Compliance Template. Create a standardized template that maps to every requirement in Annex IV: general description, detailed description of system elements, monitoring and functioning information, risk management details, changes made throughout the lifecycle, performance metrics, and information about the datasets used. Every high-risk AI system should have a completed template that is reviewed and updated at least quarterly.

Version Control for Model Artifacts. Maintain version control not just for code but for model weights, training data snapshots, configuration parameters, and evaluation results. When a supervisory authority asks about the system's behavior at a specific point in time, you need to be able to reconstruct the state of the system, the data it was trained on, and the performance characteristics it exhibited.
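Reconstructing a past system state is much easier if every release ships with a manifest of artifact digests. The sketch below hashes each release artifact (weights, config, data snapshot, evaluation results); the manifest shape is an assumption, the hashing is standard.

```python
import hashlib
from pathlib import Path

def artifact_manifest(paths: list, model_version: str) -> dict:
    """SHA-256 digest of each release artifact, keyed by file name,
    so a specific deployed state can be identified and reproduced."""
    manifest = {"model_version": model_version, "artifacts": {}}
    for p in map(Path, paths):
        manifest["artifacts"][p.name] = hashlib.sha256(p.read_bytes()).hexdigest()
    return manifest
```

Commit the manifest alongside the Annex IV documentation for the same release; answering "what exactly was running on date X" then reduces to a lookup.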

Article 13: Transparency and Provision of Information to Deployers

Article 13 requires high-risk AI systems to be designed and developed in such a way that their operation is sufficiently transparent to enable deployers to interpret the system's output and use it appropriately.

Practical Controls

Interpretability Framework. For each high-risk AI system, define and document what interpretability means in the specific deployment context. A credit scoring system requires different interpretability than a medical diagnostic support tool. Define the explanations that deployers (the people using the system's output to make decisions) need, and verify that the system provides them.

This is not about deploying a generic explainability toolkit. It is about ensuring that the people who act on the system's outputs understand enough about how those outputs were generated to exercise the human oversight that Article 14 requires.

User-Facing Documentation. Article 13(3) requires that high-risk AI systems are accompanied by instructions for use that include information about the system's intended purpose, level of accuracy, foreseeable misuse, and the human oversight measures built into the system. This is operational documentation, not a legal disclaimer. It must be written for the people who actually use the system, in language they can understand and act on.

Logging and Auditability. Article 12 requires automatic logging of events during the system's operation to the extent appropriate to the intended purpose of the system. Implement logging that captures input data, output decisions, confidence scores, and any human override actions. These logs must be retained for a period appropriate to the intended purpose (at minimum, as long as the system remains in service) and must be accessible to supervisory authorities.
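A log entry capturing those elements might look like the sketch below. The field names are illustrative, and hashing the inputs rather than storing them raw is an assumed design choice for systems processing sensitive data, with the raw inputs retrievable from a separate store when needed.

```python
import json
import time
import uuid

def log_decision(system_id: str, inputs_digest: str, output: str,
                 confidence: float, override_by: str = None) -> str:
    """Serialize one operational event record as a JSON line."""
    entry = {
        "event_id": str(uuid.uuid4()),
        "timestamp": time.time(),
        "system_id": system_id,
        "inputs_sha256": inputs_digest,   # digest, not raw data
        "output": output,
        "confidence": confidence,
        "human_override_by": override_by,  # None if no override occurred
    }
    return json.dumps(entry, sort_keys=True)
```

Append-only JSON lines keep the log machine-readable for supervisory requests and easy to retain for the life of the system.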

Article 14: Human Oversight

Article 14 is perhaps the most operationally demanding provision. It requires that high-risk AI systems be designed so that natural persons can effectively oversee them during the period the system is in use. The persons assigned to oversight must be able to fully understand the system's capacities and limitations, correctly interpret its output, decide not to use the system or to disregard or override its output, and intervene in or interrupt its operation.

Practical Controls

Oversight Role Definition. For each high-risk AI system, define the human oversight role: who is responsible for overseeing the system's operation, what qualifications and training they need, what authority they have to override or stop the system, and what escalation path they follow when they identify concerns.

The oversight role cannot be nominal. Article 14(4) explicitly requires that oversight measures enable the individuals to whom human oversight is assigned to properly fulfill their role. If the system processes thousands of decisions per hour and the oversight person reviews a 1% sample weekly, that is not effective oversight. Design the oversight mechanism to be proportionate to the system's decision volume and impact.
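Proportionality can be checked with arithmetic. The sketch below computes how many reviewers a target sampling rate implies; the sample rate and per-reviewer throughput are placeholders that would be set per system based on decision volume and impact.

```python
import math

def reviewers_needed(decisions_per_hour: int,
                     target_sample_rate: float,
                     reviews_per_reviewer_hour: int) -> int:
    """Reviewers required to sustain the target review sample rate."""
    sampled = decisions_per_hour * target_sample_rate
    return math.ceil(sampled / reviews_per_reviewer_hour)
```

For example, a system making 5,000 decisions per hour with a 5% review target and reviewers handling 30 cases per hour needs 9 reviewers on shift; if the staffing plan says one, the oversight is nominal.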

Override and Intervention Procedures. Document clear procedures for overriding or stopping a high-risk AI system. These procedures must be tested regularly (not just documented) and must be executable without requiring technical expertise that the oversight person may not possess. Include override procedures in the system's incident response plan so that AI-specific incident scenarios are covered.

Oversight Training Program. Build a training program for human oversight personnel that covers: the system's intended purpose and limitations, how to interpret the system's outputs, when and how to override or stop the system, how to escalate concerns, and the regulatory context for their oversight role. Training should be role-specific, not generic AI awareness training.

Article 15: Accuracy, Robustness, and Cybersecurity

Article 15 requires high-risk AI systems to be designed to achieve appropriate levels of accuracy, robustness, and cybersecurity throughout their lifecycle.

Practical Controls

Performance Monitoring. Define and continuously monitor accuracy metrics appropriate to the system's intended purpose. These metrics must be specified in the technical documentation (Article 11) and communicated to deployers (Article 13). When performance degrades below defined thresholds, automated alerts should trigger investigation and potential system intervention.

Performance monitoring is not the same as model monitoring. It must include accuracy in real-world conditions, not just on test sets. A model that performs well on historical test data but degrades on production data is not meeting the Article 15 accuracy requirement.
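A rolling-window accuracy monitor is one minimal implementation of the alerting described above. The window size and threshold below are illustrative; in practice both belong in the Annex IV documentation for the system.

```python
from collections import deque

class AccuracyMonitor:
    """Rolling-window accuracy with a degradation alert."""

    def __init__(self, window: int = 500, threshold: float = 0.9):
        self.outcomes = deque(maxlen=window)  # True = correct decision
        self.threshold = threshold

    def record(self, correct: bool) -> bool:
        """Record a labelled outcome; return True if an alert should fire."""
        self.outcomes.append(correct)
        accuracy = sum(self.outcomes) / len(self.outcomes)
        return accuracy < self.threshold
```

This assumes ground-truth labels arrive for production decisions (often with delay); for systems where they do not, proxy metrics and periodic labelled audits fill the gap.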

Adversarial Robustness Testing. High-risk AI systems must be resilient to errors, faults, and attempts to alter their performance by third parties exploiting system vulnerabilities. This requires adversarial testing that goes beyond standard security testing. It includes input perturbation testing (how does the system respond to adversarial inputs designed to cause misclassification?), data poisoning resistance assessment, and model extraction risk evaluation.

AI-Specific Cybersecurity Controls. Standard cybersecurity controls (access control, encryption, network segmentation) apply to AI systems as they do to any information system. But AI systems also require AI-specific security controls: model access control (who can query the model and at what rate?), training pipeline integrity (how do you ensure training data and code have not been tampered with?), and inference environment isolation (how do you prevent the production model from being exfiltrated or manipulated?).

These controls should be integrated with your broader cybersecurity risk management framework rather than maintained as a separate AI security program.

The AI Governance Team Structure

Operationalizing the control model above requires a cross-functional team structure. The mistake most organizations make is assigning AI governance exclusively to legal/compliance or exclusively to engineering. Both approaches produce incomplete results.

A functional AI governance team includes:

  • AI Risk Lead (reports to CISO or CRO): owns the risk classification process, manages the risk management system under Article 9, and coordinates cross-functional governance activities
  • AI Engineering Lead (reports to CTO): owns technical documentation, performance monitoring, robustness testing, and the engineering practices that build governance into the development lifecycle
  • Data Governance Lead (reports to CDO or DPO): owns data lineage, bias assessment, data quality monitoring, and the intersection between AI Act data requirements and GDPR data protection obligations
  • Human Oversight Coordinator (business function): owns the definition and resourcing of human oversight roles, oversight training, and override procedure testing
  • Legal/Compliance Advisor: provides regulatory interpretation, supports classification decisions, and ensures the control model remains aligned with evolving regulatory guidance and standards

This team does not need to be full-time dedicated to AI governance in most organizations. But it does need defined roles, regular coordination cadence, and clear decision authority. A monthly AI governance forum where this team reviews the inventory, assesses risk changes, and makes classification and oversight decisions is the minimum viable governance rhythm.

Change Management: The Lifecycle Challenge

The AI Act's requirements are not point-in-time obligations. They apply "throughout the lifecycle" of the AI system. This means your control model must integrate with your change management process.

When an AI system is updated (new training data, model retraining, architecture changes, deployment context expansion), the change must trigger reassessment of risk classification, updated technical documentation, re-evaluation of bias analysis, updated performance baselines, and review of human oversight adequacy. If your change management process does not include AI-specific assessment triggers, you will drift out of compliance between major review cycles.

The most effective approach is to build AI governance checkpoints into the CI/CD pipeline for internally developed systems and into the vendor management lifecycle for externally sourced AI systems. Make compliance an automated gate, not a manual review that depends on someone remembering to request it.
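The steps above can be sketched as an automated pipeline gate. The checks and the 90-day staleness limit are assumptions standing in for whatever a given organization's review cadence requires; the point is that the gate compares governance artifact timestamps against the last model change, mechanically.

```python
from datetime import datetime, timedelta

def governance_gate(doc_updated: datetime,
                    bias_reviewed: datetime,
                    model_changed: datetime,
                    max_staleness_days: int = 90) -> list:
    """Return a list of gate failures; an empty list means the pipeline
    may proceed. Thresholds are policy choices, not regulatory values."""
    failures = []
    if doc_updated < model_changed:
        failures.append("technical documentation older than last model change")
    if bias_reviewed < model_changed:
        failures.append("bias analysis older than last model change")
    if datetime.now() - doc_updated > timedelta(days=max_staleness_days):
        failures.append("technical documentation exceeds staleness limit")
    return failures
```

Run as a required CI step, a non-empty failure list blocks the deployment, which makes compliance drift visible at the moment it is cheapest to fix.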

Key Takeaways

  • Build an operational control model, not a policy library. The AI Act requires governance that operates throughout the system lifecycle, not documentation produced once and filed.
  • Map Articles 9-15 to specific, measurable controls with named owners, evidence requirements, and review cadences. Abstract commitments to "responsible AI" do not survive supervisory scrutiny.
  • Treat the AI inventory as the foundation. Every subsequent control depends on knowing what AI systems you have, where they are deployed, who owns them, and how they are classified. Invest in inventory completeness before investing in sophisticated governance processes.
  • Design human oversight to be effective, not nominal. Article 14 requires oversight that can genuinely intervene. If your oversight mechanism cannot keep pace with the system's decision volume, it does not satisfy the requirement.
  • Integrate AI governance into change management. Lifecycle obligations mean compliance is not a state you achieve but a process you maintain. Build AI governance checkpoints into development pipelines and vendor management workflows.

The organizations that will manage AI Act compliance successfully are those that treat it as a governance engineering challenge, building controls into how AI systems are developed and operated rather than wrapping compliance documentation around systems that were built without governance in mind. The control model above provides the architectural blueprint. Execution requires the cross-functional team structure, the change management integration, and the leadership commitment to invest in governance as an ongoing operational capability.

Next Step

Turn guidance into evidence.

If procurement is involved, start with the Trust Center. If you want to see the product, create an account or launch a live demo.