
AI Governance Framework

14 min read · Updated 2026-03-18

Strategic guide to building an AI governance framework aligned with the EU AI Act and ISO 42001, covering risk assessment, AI inventories, human oversight, transparency, and integration with existing GRC programmes.

Key Takeaways
1. AI governance is now a regulatory obligation under the EU AI Act — not a voluntary corporate responsibility exercise.
2. An AI inventory covering all developed, deployed, and procured AI systems is the operational prerequisite for governance; shadow AI is a material risk.
3. Human oversight under Article 14 requires competent overseers with authority, time, and information to exercise meaningful judgment — not just a confirmation button.
4. Integrate AI governance into existing GRC frameworks (ISO 27001, DORA, NIS2) rather than building a parallel structure.
5. ISO 42001 provides the management system foundation for AI governance and aligns well with EU AI Act requirements, though certification is not a substitute for regulatory compliance.

What Is AI Governance and Why It Matters Post-EU AI Act

AI governance is the system of policies, processes, roles, and controls that an organisation establishes to ensure that its development and use of artificial intelligence is responsible, ethical, lawful, and aligned with its strategic objectives. Before the EU AI Act, AI governance was largely voluntary — a matter of corporate responsibility statements and industry best practices. The entry into force of Regulation 2024/1689 has transformed AI governance from a discretionary programme into a regulatory obligation for any organisation placing AI systems on the EU market or deploying them within the EU.

The EU AI Act does not prescribe a specific governance model, but its requirements — risk management, data governance, technical documentation, human oversight, transparency, post-market monitoring — collectively assume that the organisation has a functioning governance apparatus capable of managing AI systems across their entire lifecycle. Without governance, there is no mechanism to ensure that risk classifications are accurate, that prohibited practices are detected and prevented, that high-risk system obligations are met, or that incidents are reported. Governance is the connective tissue that makes compliance operationally viable.

Beyond legal compliance, effective AI governance serves business objectives. It reduces the risk of reputational damage from biased or harmful AI outputs, it builds trust with customers and regulators, it improves the quality and reliability of AI systems through structured oversight, and it creates a defensible record of due diligence if an AI system causes harm. Organisations that treat AI governance as a pure compliance cost are missing the strategic opportunity — the organisations that build governance into their AI development and deployment processes from the outset will move faster and with greater confidence than those that retrofit governance after problems emerge.

AI Risk Assessment and Classification Methodology

The foundation of any AI governance framework is a structured risk assessment methodology that can be applied consistently across all AI systems in the organisation's portfolio. The EU AI Act's four-tier risk classification (unacceptable, high, limited, minimal) provides the regulatory framework, but organisations need an operational methodology that translates those legal categories into practical assessment procedures.

The risk assessment should begin with a use-case analysis: what decision or task does the AI system support, who is affected by the output, what is the potential impact of errors or biases, and what is the degree of human oversight in the process? These questions map to the AI Act's classification criteria but must be answered in the specific context of the organisation's operations. A credit scoring model used to determine loan eligibility is clearly high-risk under Annex III. A meeting transcription tool that suggests action items is likely minimal risk. The nuanced cases — recommendation engines, customer service chatbots, predictive maintenance systems — require careful analysis and documented reasoning.

The methodology should produce a classification decision with supporting evidence, a risk profile that identifies the specific risks associated with the system (bias, accuracy, transparency, security, fundamental rights impact), and a set of required controls proportionate to the risk level. For high-risk systems, the control set will be extensive and will include all Chapter III requirements. For limited-risk systems, the controls focus on transparency. For minimal-risk systems, the organisation may adopt voluntary controls aligned with codes of practice.
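To make classification decisions auditable, many teams capture them as structured records. Below is a minimal sketch in Python; the tier names track the AI Act's four categories, but every field name and the example values are illustrative assumptions, not anything prescribed by the Act.

```python
from dataclasses import dataclass, field
from datetime import date
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"   # prohibited practices (Art. 5)
    HIGH = "high"                   # Annex III or Annex I use cases
    LIMITED = "limited"             # transparency obligations apply
    MINIMAL = "minimal"             # voluntary codes of practice

@dataclass
class ClassificationDecision:
    system_id: str
    tier: RiskTier
    rationale: str                  # documented reasoning behind the tier
    reviewer: str
    decided_on: date
    identified_risks: list[str] = field(default_factory=list)
    required_controls: list[str] = field(default_factory=list)

# Hypothetical example: the credit scoring case discussed above.
decision = ClassificationDecision(
    system_id="credit-scoring-v2",
    tier=RiskTier.HIGH,
    rationale="Creditworthiness evaluation listed in Annex III, point 5(b).",
    reviewer="ai-governance@acme.example",
    decided_on=date(2026, 3, 1),
    identified_risks=["bias", "accuracy", "fundamental rights impact"],
    required_controls=["Art. 9 risk management", "Art. 14 human oversight"],
)
```

Keeping the rationale as a mandatory field forces the documented reasoning that the nuanced cases require.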

Risk assessment is not a one-time exercise. AI systems evolve — through model updates, changes in training data, drift in input distributions, and changes in the deployment context. The governance framework must include triggers for risk reassessment: material model updates, expansion to new use cases or geographies, incidents or near-misses, regulatory guidance that changes the classification interpretation, and periodic scheduled reviews. A system classified as limited-risk today may become high-risk if its deployment context changes.
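Those triggers can be encoded as a simple guard that governance tooling runs against every inventoried system. The sketch below extends the hypothetical ClassificationDecision record above; the trigger names mirror the events just listed, and the annual review interval is an assumption to adjust to your own policy.

```python
from datetime import date, timedelta

REVIEW_INTERVAL = timedelta(days=365)  # assumed annual scheduled review

def reassessment_due(decision, events: set[str], today: date) -> bool:
    """Return True if any trigger requires a fresh risk classification."""
    triggers = {
        "material_model_update",
        "new_use_case_or_geography",
        "incident_or_near_miss",
        "regulatory_guidance_change",
    }
    if events & triggers:
        return True
    # Periodic scheduled review even when nothing has changed.
    return today - decision.decided_on >= REVIEW_INTERVAL
```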

The European Commission's standardisation request to CEN and CENELEC includes development of harmonised standards for AI risk management. Monitor the publication of these standards — once adopted, they will provide presumption of conformity with AI Act requirements and should be incorporated into your risk assessment methodology.

AI Inventory and Model Registry Requirements

An AI inventory — sometimes called an AI register or model registry — is the operational prerequisite for effective AI governance. You cannot govern what you cannot see. The inventory must capture every AI system that the organisation develops, deploys, or procures, along with sufficient metadata to support risk classification, compliance monitoring, and lifecycle management.

The minimum data fields for the AI inventory should include the following (a schema sketch follows the list):

- System name and unique identifier
- Description of the system and its intended purpose
- Provider (internal team or third-party vendor)
- Deployment status (development, testing, production, retired)
- Risk classification under the AI Act, with the date of classification and the reviewer
- Data inputs and outputs
- Model type and architecture
- Training and evaluation data sources
- Identified risks and mitigation measures
- Human oversight mechanisms
- Responsible owner within the organisation
- Date of last review
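Expressed as a registry schema, those fields might look like the following Python dataclass. This is a sketch under the assumption of internally built tooling; adapt the names and enumerations to whatever registry product or database you actually use.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class AIInventoryRecord:
    system_id: str                    # unique identifier
    name: str
    description: str                  # system and intended purpose
    provider: str                     # internal team or third-party vendor
    deployment_status: str            # development | testing | production | retired
    risk_classification: str          # AI Act tier
    classified_on: date
    classified_by: str
    data_inputs: list[str] = field(default_factory=list)
    data_outputs: list[str] = field(default_factory=list)
    model_type: str = ""              # e.g. gradient-boosted trees, LLM
    training_data_sources: list[str] = field(default_factory=list)
    identified_risks: list[str] = field(default_factory=list)
    mitigations: list[str] = field(default_factory=list)
    oversight_mechanisms: list[str] = field(default_factory=list)
    owner: str = ""                   # responsible owner in the organisation
    last_reviewed: date | None = None
```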

For high-risk AI systems, Article 49 of the AI Act requires providers to register the system in the EU database before placing it on the market or putting it into service, and deployers that are public authorities or Union bodies must register their use of Annex III systems before putting them into use. The EU database (established under Article 71) is a publicly accessible register of high-risk AI systems and their providers. These registration obligations mean that the organisation's internal AI inventory must contain sufficient information to complete the EU database registration — and the two must be kept consistent.

Building the AI inventory is a discovery exercise as much as a documentation exercise. AI systems are not always visible to central governance functions. They may be embedded in third-party SaaS tools, built by individual business units, integrated through APIs, or operating as components within larger systems. The inventory process should include surveys of all business units, reviews of procurement records for AI-related services, code repository analysis for ML model deployments, and interviews with technology teams. Shadow AI — systems adopted without central IT or governance approval — is a significant and growing risk that the inventory process must actively hunt for.
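Code repository analysis, one of the discovery tactics above, can be partially automated. The sketch below scans a repository tree for requirements files that declare common ML packages; the package list and file patterns are assumptions, and hits are leads for human follow-up, not proof that an AI system exists.

```python
from pathlib import Path

# Packages whose presence suggests a model may be built or called here.
ML_PACKAGES = {"torch", "tensorflow", "scikit-learn", "transformers",
               "xgboost", "lightgbm", "openai", "anthropic"}

def scan_for_ml_dependencies(repo_root: str) -> dict[str, set[str]]:
    """Map each requirements file to the ML packages it declares."""
    hits: dict[str, set[str]] = {}
    for req in Path(repo_root).rglob("requirements*.txt"):
        declared = {
            line.split("==")[0].split(">=")[0].split("[")[0].strip().lower()
            for line in req.read_text(errors="ignore").splitlines()
            if line.strip() and not line.lstrip().startswith("#")
        }
        found = declared & ML_PACKAGES
        if found:
            hits[str(req)] = found
    return hits
```

Similar passes over procurement exports and API gateway logs catch the SaaS-embedded and API-integrated cases that never touch a code repository.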

Human Oversight and Transparency Obligations

Human oversight is a central pillar of the EU AI Act's approach to high-risk AI systems. Article 14 requires that high-risk AI systems be designed and developed in such a way that they can be effectively overseen by natural persons during the period in which they are in use. This includes the overseer's ability to understand the capabilities and limitations of the system, to monitor its operation properly, to decide not to use the system in a particular situation, to override or reverse the system's output, and to intervene in or stop its operation.

Implementing effective human oversight requires more than adding a "confirm" button to an automated decision workflow. The human overseer must have the competence to evaluate the AI system's output, the authority to override it, the time to exercise meaningful judgment, and access to information that enables informed assessment. Automation bias — the tendency of human operators to defer to automated outputs without critical evaluation — is a recognised risk that governance frameworks must actively mitigate through training, interface design, and organisational culture.

Transparency obligations under the AI Act operate at multiple levels. Providers must ensure that high-risk AI systems are accompanied by instructions for use that are accessible and comprehensible to deployers (Article 13). Deployers must inform natural persons that they are subject to a high-risk AI system (Article 26). For AI systems that interact directly with persons (e.g., chatbots), Article 50 requires disclosure of the AI nature of the interaction. For synthetic content (deepfakes, AI-generated text, images, audio, video), Article 50 requires machine-readable labelling.

The governance framework should establish clear policies on when and how human oversight is implemented for each risk tier, define the qualifications and training required for human overseers, specify the documentation requirements for human oversight decisions (particularly overrides), and establish metrics for monitoring the effectiveness of human oversight (e.g., override rates, decision consistency, time-to-decision). These policies should be reviewed when AI systems are updated or when the deployment context changes.
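Override rates and time-to-decision can be computed directly from oversight decision logs. The record shape below is hypothetical; the point of the sketch is that a near-zero override rate combined with very short review times is a classic signature of the automation bias discussed above and should prompt investigation rather than celebration.

```python
from dataclasses import dataclass
from statistics import mean

@dataclass
class OversightDecision:
    system_id: str
    ai_recommendation: str
    final_decision: str
    review_seconds: float  # time the overseer spent before deciding

def oversight_metrics(log: list[OversightDecision]) -> dict[str, float]:
    """Compute simple effectiveness signals from an oversight decision log."""
    if not log:
        return {"override_rate": 0.0, "mean_review_seconds": 0.0}
    overrides = [d for d in log if d.final_decision != d.ai_recommendation]
    return {
        "override_rate": len(overrides) / len(log),
        "mean_review_seconds": mean(d.review_seconds for d in log),
    }
```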

Integration with Existing GRC Programmes

AI governance should not exist as an isolated programme. Organisations with mature GRC (Governance, Risk, and Compliance) capabilities should integrate AI governance into their existing frameworks rather than building a parallel structure. This integration reduces duplication, leverages established processes and tooling, and ensures that AI risks are assessed alongside other enterprise risks rather than in a silo.

The most natural integration points are with information security management (ISO 27001/27002), data protection (GDPR), and enterprise risk management. ISO 27001 already provides a management system framework for information security that covers many of the same organisational controls needed for AI governance — risk assessment, asset management, access control, incident management, audit, and continuous improvement. AI-specific controls can be layered onto the existing ISMS rather than duplicated in a separate management system.

For organisations subject to DORA or NIS2, the AI governance framework should align with the ICT risk management and third-party risk management requirements of those regulations. AI systems are ICT systems, and AI providers are ICT third-party providers. The risk assessments, contractual arrangements, incident reporting, and business continuity planning required by DORA and NIS2 apply equally to AI systems and providers. A unified approach avoids the fragmentation that arises when each regulation is managed by a separate team with separate processes.

Practically, integration means: using a single risk register that includes AI risks alongside other operational, ICT, and compliance risks; applying the same vendor due diligence and contractual standards to AI providers as to other critical ICT providers; routing AI incident reports through the same incident management workflow as other cybersecurity and operational incidents; including AI systems in the scope of internal audit and assurance programmes; and reporting AI governance metrics alongside other GRC metrics to the management body. The governance framework document should explicitly map its components to existing policies and processes to demonstrate integration and avoid gaps.
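As a concrete illustration of the single-register principle, a governance tool might project each AI inventory record into the same entry format the enterprise risk register already uses. Both schemas below are assumptions made for the sake of the sketch; the inventory record refers to the hypothetical AIInventoryRecord shown earlier.

```python
from dataclasses import dataclass

@dataclass
class RiskRegisterEntry:
    risk_id: str
    category: str          # e.g. "AI", "ICT", "compliance"
    description: str
    owner: str
    controls: list[str]
    review_cycle: str      # reuse the cadence of the existing ERM programme

def to_register_entry(record: "AIInventoryRecord") -> RiskRegisterEntry:
    """Project an AI inventory record into the shared enterprise risk register."""
    return RiskRegisterEntry(
        risk_id=f"AI-{record.system_id}",
        category="AI",
        description=f"{record.name}: {', '.join(record.identified_risks)}",
        owner=record.owner,
        controls=record.mitigations,
        review_cycle="quarterly",  # assumed; align with the existing cycle
    )
```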

If your organisation already has ISO 27001 certification, use the ISMS as the management system foundation for AI governance. The Plan-Do-Check-Act cycle, risk assessment methodology, control framework, and audit programme can all be extended to cover AI-specific requirements without building a parallel structure.

ISO 42001 Alignment: AI Management System Standard

ISO/IEC 42001:2023 is the international standard for AI management systems (AIMS). Published in December 2023, it provides a structured framework for establishing, implementing, maintaining, and continually improving an AI management system within an organisation. For organisations seeking to build a mature AI governance programme — and particularly those that want to demonstrate governance maturity through certification — ISO 42001 provides a comprehensive and internationally recognised framework.

ISO 42001 follows the same high-level structure (Harmonised Structure) as other ISO management system standards, including ISO 27001. This means that organisations with existing ISO management system certifications can integrate ISO 42001 requirements into their existing management system rather than building from scratch. The standard covers context of the organisation, leadership, planning, support, operation, performance evaluation, and improvement — the same lifecycle structure as ISO 27001 and ISO 9001.

The standard's Annex A provides a set of AI-specific controls organised into categories including AI policies, internal organisation, resources for AI, AI system impact assessment, AI data, AI system lifecycle, third-party relationships, and use of AI systems. These controls are designed to be applicable across all types of AI systems and all levels of risk. Annex B provides implementation guidance for each control, and Annex C maps AI-related objectives and risk sources that organisations should consider.

Alignment between ISO 42001 and the EU AI Act is strong but not complete. ISO 42001 provides a management system framework; the AI Act provides regulatory requirements. Implementing ISO 42001 will not automatically achieve AI Act compliance, but it provides the organisational infrastructure — governance, risk management, documentation, monitoring, improvement — within which AI Act compliance can be systematically achieved and maintained. Organisations should treat ISO 42001 as the governance vehicle and the AI Act as the regulatory specification that drives the content of controls.
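For planning purposes, a rough crosswalk between the Annex A control themes named above and the AI Act articles that typically drive their content can be kept alongside the control catalogue. The mapping below is a non-authoritative sketch; verify every entry against the full texts before relying on it.

```python
# Illustrative crosswalk from ISO 42001 Annex A control themes to the
# EU AI Act provisions most relevant for high-risk systems.
ISO42001_TO_AI_ACT = {
    "AI system impact assessment": ["Art. 27 (fundamental rights impact assessment)"],
    "AI data":                     ["Art. 10 (data and data governance)"],
    "AI system lifecycle":         ["Art. 9 (risk management)",
                                    "Art. 11 (technical documentation)",
                                    "Art. 72 (post-market monitoring)"],
    "Third-party relationships":   ["Art. 25 (responsibilities along the value chain)"],
    "Use of AI systems":           ["Art. 14 (human oversight)",
                                    "Art. 26 (deployer obligations)"],
}
```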

Certification to ISO 42001 is offered by accredited certification bodies and is gaining traction among organisations that want to demonstrate AI governance maturity to regulators, customers, and partners. While the EU AI Act does not require ISO 42001 certification, it can serve as supporting evidence of the organisation's commitment to responsible AI management and may become increasingly relevant as harmonised standards are adopted.

Building the Framework: A Practical Roadmap

Implementing an AI governance framework is a multi-quarter programme that should be approached in phases. Phase one — discovery and baseline — focuses on understanding the current state: conducting the AI inventory, identifying existing policies and processes that touch AI, assessing the maturity of current risk management and oversight practices, and mapping the regulatory requirements that apply to the organisation's AI activities. This phase typically takes 6-8 weeks and should involve stakeholders from legal, compliance, IT, data science, business units, and executive leadership.

Phase two — framework design — produces the governance architecture: the AI governance policy, the risk classification methodology, the roles and responsibilities framework (including an AI governance committee or board), the integration plan with existing GRC processes, and the control catalogue. The design should be informed by ISO 42001 structure where the organisation intends to pursue certification, and should be explicitly mapped to EU AI Act requirements. This phase typically takes 8-12 weeks and should produce a set of documented policies and procedures ready for management body approval.

Phase three — implementation — activates the framework: deploying the AI inventory tool, conducting risk classifications for all inventoried systems, implementing required controls for high-risk systems, establishing human oversight mechanisms, developing training programmes for AI overseers and general AI literacy, and operationalising monitoring and reporting. This is the longest phase and runs in parallel with the organisation's AI Act compliance timeline. Priority should be given to systems classified as high-risk, as these carry the heaviest compliance burden and the greatest exposure.

Phase four — assurance and improvement — establishes the ongoing governance cycle: internal audits of AI governance controls, management review of framework effectiveness, performance metrics tracking, incident review and lessons learned integration, and periodic framework updates in response to regulatory developments, organisational changes, and technology evolution. This phase never ends — it is the steady-state operating mode of the AI governance programme. Organisations pursuing ISO 42001 certification should align their internal audit and management review cycles with the certification timeline.

Frequently Asked Questions

Is ISO 42001 certification required for EU AI Act compliance?

No. The EU AI Act does not require any specific certification, including ISO 42001. However, ISO 42001 provides a structured management system framework within which AI Act compliance can be systematically achieved and maintained. Certification can serve as evidence of governance maturity in regulatory dialogues and can build trust with customers and partners. As harmonised standards are developed under the AI Act, the relationship between ISO 42001 and regulatory compliance will become clearer.

How do we handle shadow AI — AI systems adopted without governance approval?

Shadow AI is a significant governance risk. Address it through a combination of discovery (surveying business units, reviewing procurement records, scanning for API integrations), policy (clear acceptable use policies for AI tools, procurement controls that require governance review before adoption), technology (monitoring for unsanctioned AI service usage, API gateway controls), and culture (making the governance process efficient enough that teams prefer to use it rather than work around it). The AI inventory should be a living discovery process, not a one-time census.

Who should own the AI governance programme?

Ownership should sit at a level that spans the functions involved — legal, compliance, IT, data science, and business operations. Common models include a Chief AI Officer, a cross-functional AI governance committee chaired by a senior executive, or integration into the existing CISO or CRO function. The key requirements are: executive sponsorship, authority to set and enforce policies, access to all AI development and deployment activities, and a reporting line to the management body for regulatory accountability.

How does AI governance integrate with data protection (GDPR)?

AI governance and data protection are deeply intertwined. AI systems that process personal data must comply with GDPR principles (lawfulness, fairness, transparency, purpose limitation, data minimisation, accuracy, storage limitation, integrity and confidentiality). Data Protection Impact Assessments (DPIAs) under GDPR Article 35 and fundamental rights impact assessments under AI Act Article 27 can be conducted as a single integrated assessment. The DPO should be involved in AI governance, and GDPR compliance should be a mandatory checkpoint in the AI risk assessment workflow.

What is the relationship between the AI Act and sector-specific regulations like DORA?

AI systems used in the financial sector are subject to both the AI Act and DORA. A high-risk AI system used for credit scoring, for example, must comply with AI Act Chapter III requirements (risk management, data governance, transparency, human oversight) and DORA requirements (ICT risk management, incident reporting, third-party risk management for the AI provider). The governance framework should map each AI system to all applicable regulatory requirements and ensure that compliance is achieved holistically, not in silos.
