
Transparency Obligations and General-Purpose AI Under the EU AI Act

12 min read · Updated 2026-03-12

Transparency Obligations for Limited-Risk AI (Article 50)

Article 50 establishes transparency obligations for AI systems that, while not classified as high-risk, interact with natural persons or generate content in ways that require disclosure to maintain informed autonomy and prevent deception. These obligations represent the third tier of the risk pyramid — more than voluntary codes of conduct for minimal-risk systems, but less than the full conformity assessment regime for high-risk systems.

The fundamental principle underlying Article 50 is that individuals have a right to know when they are interacting with AI or consuming AI-generated content. This right derives from broader principles of transparency, fairness, and protection against manipulation enshrined in the EU Charter of Fundamental Rights and the GDPR. Without disclosure, individuals cannot exercise meaningful judgment about the reliability, potential biases, or limitations of AI-mediated interactions and content.

Article 50 applies from 2 August 2026 (the main provisions application date) and covers four distinct transparency scenarios, each with specific disclosure requirements calibrated to the context and the potential for deception or confusion. The obligations fall primarily on providers and deployers, with the specific responsible party depending on the transparency scenario.

Importantly, these transparency obligations apply regardless of the AI system's risk classification. A high-risk AI system that also interacts with natural persons must comply with both the Article 50 transparency requirements and the full Chapter III high-risk regime. Similarly, a minimal-risk AI system that generates deepfakes must comply with Article 50 even though it faces no other mandatory obligations.

Art. 50
Recital 132

The Four Transparency Scenarios

Article 50 identifies four specific situations where transparency obligations apply, each reflecting a distinct category of AI-human interaction.

1. Human-AI interaction (Art. 50(1)): Providers of AI systems intended to interact directly with natural persons must ensure the system is designed and developed in such a way that the natural person is informed they are interacting with an AI system, unless this is obvious from the circumstances and context of use. This is the 'chatbot disclosure' requirement — when a user interacts with a conversational AI (customer service bot, virtual assistant, AI-powered chat), they must be informed it is AI, not a human. The obligation falls on the provider (system designer), not the deployer, because it must be built into the system's design. The 'unless obvious' exception applies to contexts where the AI nature is self-evident (e.g., a voice assistant on a smart speaker), but organisations should apply this exception narrowly.

2. AI-generated content (Art. 50(2)): Providers of AI systems that generate synthetic audio, image, video, or text content must ensure that the outputs are marked in a machine-readable format and are detectable as artificially generated or manipulated. This is the technical watermarking/labelling requirement. The marking must be effective, interoperable, robust, and reliable to the extent technically feasible. It applies to AI text generators, image synthesisers, video generators, and audio synthesisers. A related obligation under Art. 50(4) requires deployers to disclose when AI-generated or manipulated text is published with the purpose of informing the public on matters of public interest. A minimal marking sketch follows this list.

3. Emotion recognition and biometric categorisation (Art. 50(3)): Deployers of emotion recognition systems or biometric categorisation systems must inform the natural persons exposed to them of the operation of the system and must process their personal data in accordance with the GDPR. This obligation applies even where the system is not classified as high-risk — for example, an emotion recognition system used in a retail setting (not in the workplace or education, where such systems are prohibited) must still inform individuals. The information must be given before any data is processed and must be clear, meaningful, and easily accessible.

4. Deep fakes (Art. 50(4)): Deployers of AI systems that generate or manipulate image, audio, or video content constituting a 'deep fake' must disclose that the content has been artificially generated or manipulated. This disclosure must be made in a clear and distinguishable manner at the latest at the time of the first interaction or exposure. The obligation falls on the deployer (the person using the deepfake tool) rather than the provider. Where the content forms part of an evidently artistic, creative, satirical, or fictional work, the obligation is limited to disclosing the content's artificial origin in an appropriate manner that does not hamper the display or enjoyment of the work.
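The Act does not prescribe a specific marking technology, and harmonised standards are still in development. As a minimal illustration only, the following Python sketch embeds a machine-readable marker in PNG metadata using Pillow. The ai_generated and generator keys are hypothetical, and plain metadata is trivially stripped, so by itself this would not satisfy the robustness criterion of Art. 50(2):

```python
# Illustrative only: embed a machine-readable "AI-generated" marker in PNG
# metadata with Pillow. Key names are hypothetical, not mandated by the Act.
from PIL import Image
from PIL.PngImagePlugin import PngInfo

def save_with_ai_marker(img: Image.Image, path: str, model_id: str) -> None:
    meta = PngInfo()
    meta.add_text("ai_generated", "true")   # machine-readable flag
    meta.add_text("generator", model_id)    # provenance hint
    img.save(path, pnginfo=meta)

def is_marked_ai_generated(path: str) -> bool:
    # Text chunks written above are exposed via the .text mapping on PNGs.
    with Image.open(path) as im:
        return getattr(im, "text", {}).get("ai_generated") == "true"
```

In practice, providers would combine several layers (embedded metadata, provenance manifests such as C2PA, and statistical watermarks) so that the marking survives format conversion and re-encoding.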

These obligations are cumulative: a deepfake video, for example, must be both machine-marked by the provider (Art. 50(2)) and disclosed by the deployer (Art. 50(4)). Similarly, a customer service chatbot that also generates text responses must comply with both the interaction disclosure (Art. 50(1)) and the content marking (Art. 50(2)) requirements.

Art. 50(1)
Art. 50(2)
Art. 50(4)
Note

Transparency obligations are cumulative. A customer service chatbot generating text must comply with both the interaction disclosure requirement (inform users they are talking to AI) and the content marking requirement (machine-readable labelling of AI-generated text). Design both disclosure mechanisms from the outset.
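As an illustration of building both mechanisms in from the outset, here is a minimal Python sketch of a chat handler that surfaces a human-readable notice on the first turn (Art. 50(1)) and attaches a machine-readable label to every generated output (Art. 50(2)). The generate_reply() helper and all field names are hypothetical, not prescribed by the Act:

```python
# Sketch of a chat turn handler with both Art. 50 disclosures designed in.
AI_DISCLOSURE = "You are chatting with an AI assistant, not a human."

def generate_reply(user_message: str) -> str:
    # Stand-in for the actual model call.
    return f"(model output for: {user_message!r})"

def handle_chat_turn(session: dict, user_message: str) -> dict:
    response = {
        "reply": generate_reply(user_message),
        # Art. 50(2): machine-readable label on generated content.
        "content_labels": {"ai_generated": True, "modality": "text"},
    }
    # Art. 50(1): human-readable disclosure at the start of the interaction.
    if not session.get("disclosure_shown"):
        response["notice"] = AI_DISCLOSURE
        session["disclosure_shown"] = True
    return response

session: dict = {}
print(handle_chat_turn(session, "What are your opening hours?"))
```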

General-Purpose AI Model Obligations (Articles 51–53)

Chapter V of the AI Act introduces a novel regulatory category: general-purpose AI (GPAI) models. These are AI models — including large language models — that are trained with a large amount of data using self-supervision at scale, display significant generality, are capable of competently performing a wide range of distinct tasks regardless of the way the model is placed on the market, and can be integrated into a variety of downstream systems or applications. The GPAI provisions were not in the Commission's original April 2021 proposal; they were introduced during trilogue negotiations in response to the rapid emergence of foundation models such as GPT-4, Claude, and Gemini.

Article 53 establishes baseline obligations for all GPAI model providers. These apply from 2 August 2025 (12 months after entry into force) and include: drawing up and keeping up to date the technical documentation of the model, including its training and testing process and the results of its evaluation, containing at minimum the information set out in Annex XI; drawing up and keeping up to date information and documentation for providers of AI systems who intend to integrate the GPAI model into their systems, enabling them to understand the model's capabilities and limitations and to comply with their own obligations; putting in place a policy to comply with EU copyright law, in particular to identify and comply with reservations of rights expressed by rightsholders under Article 4(3) of Directive (EU) 2019/790; and drawing up and making publicly available a sufficiently detailed summary of the content used for training the GPAI model, according to a template provided by the AI Office.

The copyright compliance obligation is particularly significant. GPAI model providers must implement processes to identify and respect opt-out reservations expressed by copyright holders under the text and data mining exceptions in the Digital Single Market Directive. The publicly available training data summary is intended to enable rightsholders to understand whether their content was used in training and to exercise their rights accordingly.
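Neither the AI Act nor Article 4(3) of Directive 2019/790 prescribes a specific opt-out protocol, but robots.txt is one widely used machine-readable signal. As a minimal sketch, assuming a hypothetical crawler name, a training-data pipeline might check it as follows; real pipelines typically honour several signals (ai.txt files, metadata tags) as well:

```python
# Check a site's robots.txt before ingesting a page for training.
from urllib.parse import urlparse
from urllib.robotparser import RobotFileParser

CRAWLER_USER_AGENT = "ExampleTrainingBot"  # hypothetical crawler name

def may_use_for_training(url: str) -> bool:
    parts = urlparse(url)
    rp = RobotFileParser()
    rp.set_url(f"{parts.scheme}://{parts.netloc}/robots.txt")
    try:
        rp.read()  # fetch and parse the site's robots.txt
    except OSError:
        return False  # conservative default: treat an unreadable signal as opt-out
    return rp.can_fetch(CRAWLER_USER_AGENT, url)

# Example: only ingest pages the site has not reserved.
# if may_use_for_training("https://example.com/article"): ingest(...)
```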

Article 53 also sets out how these obligations can be met. GPAI model providers may rely on codes of practice drawn up pursuant to Article 56 to demonstrate compliance until harmonised standards are published. Providers of GPAI models placed on the market before 2 August 2025 must take the necessary steps to comply by 2 August 2027 (Article 111(3)). Free and open-source GPAI models (where model weights, architecture, and usage information are made publicly available) are exempt from the technical documentation and downstream information obligations, unless they are classified as having systemic risk; the copyright policy and the public training content summary apply regardless of licensing. This partial exemption recognises that open-source development should not be disproportionately burdened.

Art. 51
Art. 53
Annex XI

GPAI Models with Systemic Risk (Articles 51–55)

The AI Act introduces heightened obligations for GPAI models classified as posing systemic risk. A GPAI model is classified as having systemic risk if it has high-impact capabilities, evaluated on the basis of appropriate technical tools and methodologies, including indicators and benchmarks. A GPAI model is presumed to have high-impact capabilities when the cumulative amount of computation used for its training, measured in floating point operations (FLOPs), is greater than 10^25. This threshold was calibrated against the state of the art at the time of adoption and captures the largest frontier models while excluding smaller, narrower ones.
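For orientation, training compute is often estimated with the rule of thumb of roughly 6 FLOPs per parameter per training token (the '6ND' approximation from the scaling literature). The Act does not mandate any particular estimation method, and the numbers below are invented for illustration:

```python
# Rough, illustrative estimate of cumulative training compute.
SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25  # Art. 51(2) presumption

def estimate_training_flops(n_parameters: float, n_training_tokens: float) -> float:
    # Common approximation: ~6 FLOPs per parameter per training token.
    return 6.0 * n_parameters * n_training_tokens

# Hypothetical 700B-parameter model trained on 15T tokens: ~6.3e25 FLOPs.
flops = estimate_training_flops(700e9, 15e12)
print(f"{flops:.2e} FLOPs -> presumed high-impact capabilities: "
      f"{flops > SYSTEMIC_RISK_THRESHOLD_FLOPS}")
```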

The European Commission may also designate a GPAI model as having systemic risk based on criteria set out in Annex XIII, even if it does not meet the 10^25 FLOPs threshold. These criteria include: the number of parameters; the quality and size of the training data set; the number of registered business and end users; the model's modalities (text, image, audio, video); benchmarking performance; the model's reach across the internal market; and the number of downstream integrated systems and applications.

GPAI model providers with systemic risk face additional obligations under Article 55, in addition to the baseline Article 51 requirements. These include: performing model evaluations in accordance with standardised protocols and tools, including conducting and documenting adversarial testing to identify and mitigate systemic risks; assessing and mitigating possible systemic risks, including their sources, that may stem from the development, the placing on the market, or the use of GPAI models with systemic risk; tracking, documenting, and reporting serious incidents and possible corrective measures to the AI Office and relevant national competent authorities without undue delay; and ensuring an adequate level of cybersecurity protection for the GPAI model and its physical infrastructure.
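The Act does not define a reporting schema for the serious-incident obligation in Art. 55(1)(c). As a purely hypothetical sketch, a provider's internal tracking record might capture fields like these (all field names are assumptions):

```python
# Illustrative internal record for tracking serious incidents before
# reporting to the AI Office; the schema is invented, not prescribed.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class SeriousIncidentReport:
    model_id: str                      # affected GPAI model
    description: str                   # what happened, observed harm
    systemic_risk_category: str        # e.g. "public health", "critical sector"
    detected_at: datetime
    corrective_measures: list[str] = field(default_factory=list)
    reported_to_ai_office: bool = False

report = SeriousIncidentReport(
    model_id="example-model-v1",
    description="Model output enabled disruption of a critical-sector process.",
    systemic_risk_category="critical sector",
    detected_at=datetime.now(timezone.utc),
    corrective_measures=["output filter patched", "evaluation suite extended"],
)
```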

Systemic risks are defined broadly to include risks that are significant, including in relation to major accidents, disruptions of critical sectors, or serious consequences for public health and safety; actual or reasonably foreseeable negative effects on democratic processes, public and economic security; and the dissemination of illegal, false, or discriminatory content. The systemic risk assessment must consider the general capabilities and potential misuse scenarios of the model, including through downstream integration.

The AI Office has direct supervisory authority over GPAI model providers with systemic risk, including the power to request information, conduct evaluations, and impose corrective measures. This EU-level oversight contrasts with the national enforcement model for other AI Act provisions and reflects the cross-border, systemic nature of frontier AI models.

Art. 51(2)
Art. 55
Annex XIII
Warning

The 10^25 FLOPs threshold currently captures only the largest frontier models (GPT-4 class and above). However, the Commission can designate additional models as having systemic risk based on qualitative criteria in Annex XIII. Providers approaching this threshold should prepare for systemic risk obligations proactively.

Codes of Practice and Harmonised Standards

The AI Act relies on a layered compliance architecture that combines binding legal obligations with co-regulatory instruments. Articles 56 and 40 establish the two primary mechanisms through which the Act's requirements are operationalised: codes of practice for GPAI models and harmonised standards for high-risk AI systems.

Codes of practice (Article 56) are the primary compliance mechanism for GPAI model obligations during the initial implementation period, before harmonised standards are available. The AI Office facilitates the drawing up of codes of practice, with the involvement of GPAI model providers, relevant national competent authorities, and other stakeholders. The codes must cover: detailed rules for compliance with the baseline obligations (Article 51), including technical documentation, downstream provider information, copyright policy, and training data summaries; and for GPAI models with systemic risk, the additional obligations including model evaluation methodologies, adversarial testing protocols, systemic risk assessment frameworks, incident reporting procedures, and cybersecurity measures.

Codes of practice must take into account international approaches and be ready for implementation by 2 May 2025 (nine months after entry into force). If a code of practice cannot be finalised in time, or the AI Office deems an existing code of practice inadequate, the Commission may issue implementing acts providing common rules for the implementation of GPAI obligations. This backstop ensures that regulatory certainty exists even if industry-led self-regulation proves insufficient.

Harmonised standards (Article 40) play a different role: they provide presumption of conformity with the AI Act's requirements. Where a high-risk AI system or GPAI model complies with harmonised standards or parts thereof, the references of which have been published in the Official Journal, it is presumed to conform to the requirements covered by those standards. The European Standardisation Organisations (CEN, CENELEC, ETSI) have been mandated to develop harmonised standards for the AI Act, with initial standards expected from 2025-2026. Until harmonised standards are available, common specifications adopted by the Commission may serve as a compliance reference.

Organisations should actively participate in the development of codes of practice and harmonised standards through their industry associations and national standardisation bodies. These instruments will shape the practical interpretation of the Act's requirements and establish compliance benchmarks that national market surveillance authorities will reference during inspections.

Art. 56
Art. 40
FAQ

Frequently Asked Questions

Do I need to disclose when users interact with a chatbot?

Yes. Article 50(1) requires that AI systems designed to interact directly with natural persons inform the user they are interacting with an AI system, unless this is obvious from the circumstances and context. For customer service chatbots, virtual assistants, and AI-powered messaging, the disclosure must be clear and provided before or at the beginning of the interaction. The provider must build the disclosure into the system's design. The 'unless obvious' exception should be applied narrowly — when in doubt, disclose.

What are the requirements for AI-generated content labelling?

Article 50(2) requires providers of AI systems generating synthetic audio, image, video, or text to ensure outputs are marked in a machine-readable format as artificially generated or manipulated. This marking must be effective, interoperable, robust, and reliable to the extent technically feasible. In addition, deployers must make a clear, distinguishable human-readable disclosure for deepfakes, and for AI-generated or manipulated text published to inform the public on matters of public interest, at the latest at the time of first interaction or exposure (Art. 50(4)).

What is the 10^25 FLOPs threshold for GPAI systemic risk?

GPAI models trained using more than 10^25 floating point operations are presumed to have high-impact capabilities, and are therefore classified as having systemic risk, under Article 51. This threshold currently captures only the largest frontier models. Providers of such models face additional obligations: model evaluations with adversarial testing, systemic risk assessment and mitigation, serious incident reporting to the AI Office, and cybersecurity protections. The Commission can also designate models below this threshold as having systemic risk based on qualitative criteria including parameters, training data size, user reach, and market impact.

Are open-source GPAI models exempt from obligations?

Partially. Free and open-source GPAI models, where model weights, architecture, and usage information are made publicly available, are exempt from the technical documentation and downstream provider information obligations (Article 53(2)). The copyright compliance policy and the public training content summary still apply. Open-source GPAI models classified as having systemic risk (more than 10^25 FLOPs, or designated by the Commission) must additionally comply with all systemic risk obligations, including model evaluations, adversarial testing, and incident reporting. The partial exemption recognises that open-source development benefits from reduced regulatory burden while ensuring frontier-capability models face appropriate oversight regardless of licensing model.

This content is for informational purposes only and does not constitute legal advice. Consult qualified legal counsel for compliance decisions.
