A vendor saying "we signed the Code of Practice" is not a control statement. It is a marketing statement. And in February 2026, when the AI Office published its updated signatory list for the General-Purpose AI Code of Practice, procurement teams across Europe discovered that the gap between signing and compliance is wider than anyone advertised.
The Code of Practice for GPAI providers — mandated under Article 56 of the EU AI Act (Regulation 2024/1689) — was designed as a co-regulatory instrument. Providers that sign commit to transparency, safety evaluation, and risk mitigation measures that go well beyond existing voluntary frameworks. But signatory status alone tells you nothing about whether those commitments translate into operational controls your organization can verify, test, or enforce. If your vendor due diligence process ends at "they signed the code," you are importing unmanaged risk into your supply chain.
What the Code of Practice Actually Requires
The GPAI Code of Practice is not a badge. It is a structured compliance mechanism with teeth. Under the EU AI Act, GPAI providers face two tiers of obligation depending on whether their models are classified as standard GPAI (Art. 53) or as posing systemic risk (Art. 55).
For all GPAI providers, the Code of Practice operationalizes Article 53 requirements: maintaining up-to-date technical documentation, implementing a copyright compliance policy, publishing sufficiently detailed model summaries, and cooperating with downstream deployers who need information to meet their own AI Act obligations. For systemic risk models — those exceeding the 10^25 FLOPs training compute threshold or designated by the Commission — the Code adds Article 55 measures: adversarial testing, incident monitoring and reporting to the AI Office, cybersecurity protections, and energy consumption disclosure.
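The two-tier structure reduces to a simple classification rule. The sketch below encodes it; the 10^25 FLOPs threshold and the Commission-designation path come from the Act itself, while the function and field names are purely illustrative, not any official API.

```python
# Sketch: mapping a model's training compute to its AI Act obligation tier.
# Threshold per the AI Act; function and return strings are illustrative.

SYSTEMIC_RISK_FLOPS = 1e25  # cumulative training compute threshold

def obligation_tier(training_flops: float, commission_designated: bool = False) -> str:
    """Return the obligation tier a GPAI model falls under."""
    if commission_designated or training_flops >= SYSTEMIC_RISK_FLOPS:
        # Art. 53 baseline duties plus Art. 55 systemic-risk measures
        return "Art. 53 + Art. 55 (systemic risk)"
    return "Art. 53 (standard GPAI)"

print(obligation_tier(3.2e25))                              # above threshold
print(obligation_tier(8.0e24))                              # below threshold
print(obligation_tier(8.0e24, commission_designated=True))  # designated anyway
```

Note the asymmetry: a model below the compute threshold can still land in the systemic-risk tier by Commission designation, so procurement teams cannot infer tier from published parameter counts alone.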
Signing the Code is supposed to create a presumption of conformity with these obligations. But presumption is not proof. The AI Office has been explicit: signatories must demonstrate ongoing adherence, not just initial commitment. The February 2026 publication included updated signatory rosters alongside implementation guidance that makes clear the Commission expects verifiable evidence, not self-certification.
Who Signed and What That Tells You
The signatory list as of February 2026 includes major GPAI providers operating in the EU market — companies like those behind the largest foundation models deployed in enterprise contexts. But the list also reveals telling gaps. Several prominent providers signed with caveats. Others joined late. A handful of significant players remain unsigned, citing jurisdictional concerns or disagreements over specific measures.
For enterprise procurement teams, the signatory list is a starting point for vendor segmentation, not an endpoint for risk assessment. A signed provider that cannot produce its technical documentation summary on request is a higher operational risk than an unsigned provider with robust transparency practices. The signature tells you about intent. Your due diligence must verify execution.
What matters more than whether a vendor signed is whether that vendor can answer three concrete questions: What specific measures have you implemented under each Code of Practice commitment? How do you evidence ongoing compliance? What happens contractually when you fall short?
The Three-Layer Vendor Governance Model
Enterprises consuming GPAI models need to move beyond questionnaire-based vendor assessment. The AI Act creates a layered accountability model where providers have obligations, deployers have obligations, and the interfaces between them must be formally governed.
Effective AI vendor governance requires three layers of proof:
Design proof covers the vendor's declared controls, scope boundaries, and governance model. This includes the technical documentation required under Art. 53(1)(a), the training content summary published per Art. 53(1)(d), and the copyright policy mandated by Art. 53(1)(c). You should be able to obtain and review these artifacts as part of any procurement process. If a vendor cannot produce them, they are not compliant — regardless of signatory status.
Runtime proof addresses what happens after deployment. How does the vendor monitor model behavior in production? What incident detection and notification mechanisms exist? How are model updates communicated to downstream deployers? Art. 53(1)(b) requires providers to make information and documentation available downstream so that those who integrate the model can meet their own obligations. This is not optional, and it is not satisfied by a changelog buried in release notes.
Accountability proof means named owners, response SLAs, and contractual recourse. When a model change causes downstream impact — a content moderation shift, a classification drift, a bias emergence — who is accountable, how fast do they respond, and what remedies does your contract provide? Without this layer, you have vendor management theatre.
If any one of these layers is missing, your organization is buying promises, not resilience.
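The three layers lend themselves to a structured checklist rather than free-text questionnaire answers. The sketch below models them as a procurement artifact; every field name is an illustrative placeholder to be adapted to your own vendor-assessment tooling, and the Art. 53 references follow the mapping described above.

```python
# Sketch: the three-layer proof model as a vendor-assessment checklist.
# Field names are illustrative, not drawn from any standard schema.
from dataclasses import dataclass

@dataclass
class VendorProof:
    # Design proof: declared controls and Art. 53 artifacts
    technical_documentation: bool = False   # Art. 53(1)(a)
    copyright_policy: bool = False          # Art. 53(1)(c)
    training_content_summary: bool = False  # Art. 53(1)(d)
    # Runtime proof: post-deployment behavior
    production_monitoring: bool = False
    incident_notification: bool = False
    change_communication: bool = False
    # Accountability proof: owners, SLAs, recourse
    named_owner: bool = False
    response_sla: bool = False
    contractual_remedies: bool = False

    def missing_layers(self) -> list:
        """Return the names of layers with any unmet check."""
        layers = {
            "design": [self.technical_documentation, self.copyright_policy,
                       self.training_content_summary],
            "runtime": [self.production_monitoring, self.incident_notification,
                        self.change_communication],
            "accountability": [self.named_owner, self.response_sla,
                               self.contractual_remedies],
        }
        return [name for name, checks in layers.items() if not all(checks)]
```

A non-empty `missing_layers()` result is the machine-readable version of "buying promises, not resilience": the gap identifies which layer of proof to demand before contract signature.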
Why Signatory Status Creates False Assurance
The most dangerous outcome of the Code of Practice publication is that it gives procurement teams a shortcut they should not take. "Is the vendor a signatory?" becomes a checkbox that substitutes for substantive evaluation.
Consider a practical scenario. A regulated lender deploys a GPAI-powered assistant for customer communications. The vendor is a Code of Practice signatory. Documentation looks polished. Procurement signs off quickly based on signatory status and a completed AI vendor questionnaire.
Three months later, the vendor swaps a moderation component. Response behavior drifts. Customer complaint volume rises. Legal asks for change logs and incident evidence. There is none the lender can consume quickly — the vendor's notification obligations were met by an obscure API status page update that nobody in the lender's organization monitors.
The issue is not model quality. The issue is governance latency. The vendor technically complied with notification requirements. The lender had no operational process to consume and act on those notifications. Signatory status did not prevent this failure because signatory status was never designed to replace deployer-side governance.
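Closing that governance-latency gap means pulling vendor notices into an internal queue with an acknowledgment SLA, rather than hoping someone watches a status page. The sketch below shows the shape of that check; the feed format, field names, and 24-hour SLA are all hypothetical assumptions, not any vendor's actual notification schema.

```python
# Sketch: flagging vendor model-change notices that nobody inside the
# organization acknowledged within the SLA. Feed schema is hypothetical.
from datetime import datetime, timedelta, timezone

ACK_SLA = timedelta(hours=24)  # illustrative internal acknowledgment window

def overdue_notices(feed, acknowledged_ids, now):
    """Return model-change notices unacknowledged past the SLA."""
    return [
        n for n in feed
        if n["type"] == "model_change"
        and n["id"] not in acknowledged_ids
        and now - n["published"] > ACK_SLA
    ]
```

In the lender scenario above, the moderation-component swap would have surfaced here as an overdue notice within a day, instead of three months later via complaint volume.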
Contractual Provisions That Actually Matter
When negotiating with GPAI providers, enterprises subject to NIS2, DORA, or sector-specific regulation should embed specific provisions that go beyond standard data processing agreements. The AI Act creates new categories of obligation that existing contract templates do not cover.
Key contractual provisions should include: model change notification with minimum lead times (not just post-hoc disclosure); incident reporting obligations that align with your own regulatory timelines under NIS2 Art. 23 or DORA Art. 19; the right to audit or obtain third-party audit reports on Code of Practice adherence; data handling commitments that address training data provenance and copyright compliance under Art. 53(1)(c); and termination rights triggered by material non-compliance with Code of Practice commitments.
These are not theoretical concerns. The ESAs and national competent authorities are already examining how financial entities and essential service operators manage AI supply chain risk. The intersection of AI Act obligations with DORA vendor risk management requirements means that GPAI provider governance is now a supervisory focus area, not just a procurement best practice.
Building an Operational AI Vendor Oversight Program
Moving from signatory-as-checkbox to operational AI vendor oversight requires investment in four capabilities.
First, AI inventory management. You cannot govern what you have not mapped. Every GPAI model consumed by the organization — whether through direct API integration, embedded in a SaaS product, or accessed through a partner — needs to be inventoried with an accountable owner, a risk classification, and a documented purpose. FortisEU's compliance automation module supports this mapping natively.
Second, continuous evidence collection. The AI Act is not a point-in-time regulation. Art. 53 requires providers to maintain — present tense — technical documentation and transparency measures. Your vendor oversight must match that tempo. Quarterly questionnaires are insufficient. You need automated monitoring of vendor disclosures, model version changes, and incident reports through evidence collection workflows that generate audit-ready artifacts as a byproduct of normal operations.
Third, risk-tiered governance. Not every GPAI integration warrants the same level of oversight. A foundation model powering internal summarization has different risk characteristics than one making credit decisions. Your governance model should allocate scrutiny proportionally, concentrating deep oversight on high-risk deployments while maintaining baseline monitoring across all GPAI consumption.
Fourth, executive reporting. Boards and senior management are accountable for AI governance: the AI literacy duty in Art. 4 and the deployer obligations in Art. 26 both presuppose informed oversight at the top. They need decision-grade dashboards that surface AI vendor risk in business terms — not technical jargon about parameter counts and fine-tuning methods.
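The first and third capabilities combine naturally: each inventory entry carries the attributes that drive its oversight tier. The sketch below is one way to encode that proportionality; the tier names and criteria are illustrative choices, not a prescribed taxonomy.

```python
# Sketch: a minimal AI inventory entry with proportional oversight tiers.
# Tier names and the criteria behind them are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class GPAIEntry:
    model: str
    owner: str               # accountable owner (capability one)
    purpose: str             # documented purpose (capability one)
    customer_facing: bool
    affects_decisions: bool  # e.g. credit, hiring, eligibility outcomes

    @property
    def oversight_tier(self) -> str:
        """Allocate scrutiny proportionally to deployment risk."""
        if self.affects_decisions:
            return "deep"       # full runtime + accountability review
        if self.customer_facing:
            return "enhanced"   # change-notice monitoring, incident drills
        return "baseline"       # inventory + periodic evidence pull
```

An internal summarization assistant lands in the baseline tier; the credit-decision model gets deep oversight — exactly the proportionality the third capability calls for.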
The Regulatory Trajectory
The Code of Practice is not the final state. If the Code proves inadequate, the Commission retains authority under Art. 56(9) to adopt common rules by implementing act. The AI Office can withdraw the presumption of conformity for specific signatories. And the broader AI Act enforcement architecture — with fines of up to 3% of total worldwide annual turnover or EUR 15 million, whichever is higher, for GPAI provider violations under Art. 101 — creates escalating incentive for both providers and deployers to take these obligations seriously.
For CISOs and compliance officers managing AI vendor portfolios, the message is clear: the regulatory environment is tightening, not stabilizing. Organizations that build robust AI vendor governance now will navigate future requirements with incremental adjustments. Those that rely on signatory status as a proxy for compliance will face increasingly expensive remediation as enforcement matures.
Key Takeaways
- Signatory status is a signal, not a control. Treat Code of Practice signing as one data point in vendor evaluation, never as the conclusion. Require design proof, runtime proof, and accountability proof from every GPAI provider.
- Contractual provisions must cover AI-specific risks. Standard DPAs and vendor agreements do not address model change notification, training data provenance, or Code of Practice adherence. Update your contract templates before your next GPAI procurement.
- Deployer obligations exist independently of provider compliance. Even if your vendor is a perfect Code of Practice signatory, your organization must maintain its own AI inventory, risk classification, and evidence collection processes under Art. 26 deployer obligations.
- Build continuous oversight, not periodic review. The AI Act requires ongoing compliance, which means your vendor monitoring must be continuous. Quarterly questionnaires will not survive regulatory scrutiny.
- Connect AI vendor governance to your broader regulatory posture. For organizations subject to NIS2 or DORA, AI vendor risk is a subset of ICT third-party risk management — govern it within the same framework, not as a standalone program.
