DORA · 14 February 2026 · 14 min read · Attila Bognar

DORA Incident Reporting: Templates, Deadlines, and the Operational Guide to Getting It Right

DORA incident reporting under Art. 17-23 requires staged, structured submissions. This operational guide covers the classification taxonomy, the three-report cycle, ESA templates, common mistakes, and parallel reporting with NIS2.

Tags: DORA, Incident reporting, Operational resilience, Regulatory evidence, ICT incidents

Most teams miss their first high-pressure DORA incident submission for the same reason: coordination collapses faster than anyone expects. Classification, legal review, data collection, and executive signoff all compete in the same narrow window — and the process that looked adequate on paper disintegrates under real-world time pressure. If your reporting quality depends on heroics, the process is not ready. It is a liability waiting to be tested.

DORA incident reporting under Articles 17-23 is the most operationally demanding requirement in the regulation. Not the most conceptually complex — the ICT risk management framework and the register of information are arguably more sprawling — but the most unforgiving in execution. When a major ICT-related incident strikes, you have hours to classify, days to file, and weeks to deliver a comprehensive final report. The quality of that sequence determines how regulators perceive your operational maturity.

This guide covers the DORA-specific operational detail: the classification taxonomy, the three-report submission cycle, the ESA reporting templates, the common mistakes in first submissions, and the parallel reporting requirements for entities subject to both DORA and NIS2. This is not a unified incident response planning guide — for building an IRP that satisfies DORA, NIS2, and GDPR simultaneously, see our separate framework guide. This is the DORA-specific execution manual.

The DORA Incident Classification Taxonomy

DORA Art. 18 establishes the classification framework for ICT-related incidents. The classification determines whether an incident crosses the "major" threshold that triggers regulatory reporting obligations. Not every ICT incident is reportable — only major ICT-related incidents and, on a voluntary basis, significant cyber threats.

The ESAs' Regulatory Technical Standards (RTS) on incident classification specify the criteria and thresholds. A major ICT-related incident is one that meets the materiality thresholds across one or more of the following criteria:

Clients, financial counterparts, and transactions affected. How many clients are impacted? How many financial counterparts? What transaction volume is affected? The RTS specifies numeric thresholds that vary by entity type. A payment institution affecting 10,000 clients crosses a different threshold than a credit institution affecting the same number.

Reputational impact. Has the incident attracted media attention, client complaints, or regulatory inquiry? Reputational impact is assessed qualitatively but carries significant weight in classification decisions.

Duration and service downtime. How long has the ICT service been unavailable or degraded? The RTS establishes time-based thresholds — incidents lasting beyond specified durations are presumed major.

Geographic spread. Does the incident affect services in multiple member states? Cross-border impact elevates classification severity.

Data losses. Has the incident resulted in unauthorized access, loss, or corruption of data? Data loss incidents that affect confidentiality, integrity, or availability of data cross the major threshold more quickly than service-only disruptions.

Criticality of services affected. Does the incident affect critical or important functions as defined in your ICT risk management framework? An incident affecting a critical function has a lower materiality threshold than one affecting a non-critical function.

Economic impact. What are the direct and indirect financial costs? Recovery costs, regulatory fines, client compensation, and reputational damage costs are all considered.

The classification decision must be made quickly — you cannot wait for complete information before determining whether an incident is major. The RTS acknowledges this by requiring classification based on available information, with the expectation that classification may be revised as the situation evolves. This creates a practical challenge: the initial classification must be defensible based on the information available at the time, even if subsequent analysis changes the picture.
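One way to operationalize the criteria above is to encode them as explicit, pre-agreed thresholds that a classification authority evaluates in minutes, not hours. The sketch below is purely illustrative: the threshold values and criterion names are hypothetical placeholders, not the actual RTS figures, which vary by entity type and must be translated into each entity's own context.

```python
from dataclasses import dataclass

@dataclass
class IncidentMetrics:
    clients_affected: int
    downtime_hours: float
    member_states: int          # member states where services are impacted
    critical_function_hit: bool
    data_loss: bool
    direct_cost_eur: float

# Illustrative values only -- the real thresholds come from the RTS
# and must be operationalized per entity type.
THRESHOLDS = {
    "clients_affected": 10_000,
    "downtime_hours": 2.0,
    "member_states": 2,
    "direct_cost_eur": 100_000.0,
}

def classify(m: IncidentMetrics) -> tuple[str, list[str]]:
    """Return ('major' | 'non-major', list of criteria triggered)."""
    triggered = []
    if m.clients_affected >= THRESHOLDS["clients_affected"]:
        triggered.append("clients_affected")
    if m.downtime_hours >= THRESHOLDS["downtime_hours"]:
        triggered.append("duration")
    if m.member_states >= THRESHOLDS["member_states"]:
        triggered.append("geographic_spread")
    if m.critical_function_hit:
        triggered.append("critical_services")
    if m.data_loss:
        triggered.append("data_losses")
    if m.direct_cost_eur >= THRESHOLDS["direct_cost_eur"]:
        triggered.append("economic_impact")
    return ("major" if triggered else "non-major", triggered)
```

Returning the list of triggered criteria, not just the verdict, gives the classification authority a documented rationale that can be attached to the initial notification and defended later if the classification is revised.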

The Three-Report Submission Cycle

DORA Art. 19 establishes a three-stage reporting cycle for major ICT-related incidents. Each stage has a defined purpose, timeline, and content requirement.

Initial Notification

Timeline: Without undue delay, and no later than 4 hours after classifying the incident as major (or 24 hours after the incident has been detected, whichever is earlier). The ESA implementing standards refine this further.

Purpose: Alert the competent authority that a major incident has occurred and is being managed.

Content: The initial notification must include: the identity of the reporting financial entity, the classification of the incident, the date and time of detection, a brief description of the incident, the initial assessment of services and systems affected, and the actions taken or planned.

Key operational requirement: The 4-hour window from classification (or 24 hours from detection) is the critical constraint. This means your classification decision must happen fast — you cannot afford a 12-hour debate about whether the incident meets the major threshold, because that debate consumes the time you need for notification preparation. Pre-defined classification criteria with clear thresholds and designated classification authority (one person or role, not a committee) are essential.
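The dual clock described above (4 hours from classification, capped at 24 hours from detection) reduces to a simple minimum of two timestamps, which is worth computing mechanically rather than debating during an incident. A minimal sketch, assuming UTC-aware timestamps:

```python
from datetime import datetime, timedelta, timezone

def initial_notification_deadline(detected_at: datetime,
                                  classified_at: datetime) -> datetime:
    """Deadline is the earlier of: 4 hours after classification as major,
    or 24 hours after detection."""
    return min(classified_at + timedelta(hours=4),
               detected_at + timedelta(hours=24))

detected = datetime(2026, 2, 14, 8, 0, tzinfo=timezone.utc)
classified = datetime(2026, 2, 14, 10, 0, tzinfo=timezone.utc)
deadline = initial_notification_deadline(detected, classified)
```

Note the consequence: if classification slips past 20 hours after detection, the 24-hour cap binds and the window left for drafting, review, and submission shrinks below 4 hours. This is why classification speed is the real constraint.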

Intermediate Report

Timeline: Without undue delay, and no later than 72 hours after the initial notification.

Purpose: Provide a more complete picture of the incident, including updated impact assessment and response actions.

Content: The intermediate report expands on the initial notification with: updated classification if the situation has evolved, detailed description of the root cause (if known), comprehensive impact assessment including number of clients affected, services degraded, geographic scope, and financial impact estimate, and a description of the remediation actions taken and planned.

Key operational requirement: The 72-hour window requires coordination across multiple teams. Technical teams must provide root cause analysis. Business teams must quantify client and service impact. Legal must review the narrative. Management must approve the submission. All of this must converge within three days of the initial notification.

Final Report

Timeline: No later than one month after the intermediate report.

Purpose: Deliver a comprehensive post-incident report including root cause analysis, total impact assessment, lessons learned, and remediation actions.

Content: The final report is the most comprehensive submission and must include: confirmed root cause analysis, total impact quantification (clients, transactions, financial cost, data loss), complete timeline of the incident and response, assessment of the adequacy of existing controls, remediation actions completed and planned with timelines, and lessons learned.

Key operational requirement: The one-month timeline for the final report seems generous compared to the initial stages, but organizations consistently underestimate the effort. Root cause analysis for complex ICT incidents frequently takes longer than a month, which means the final report may need to acknowledge ongoing analysis while still providing a substantive assessment. The report must also connect to your broader ICT risk management framework — how does this incident change your risk assessment? What control improvements are planned? This cross-referencing requires input from risk management, not just incident response.

ESA Reporting Templates

The ESAs have published Implementing Technical Standards (ITS) specifying the templates and data elements for incident reports. These templates are not suggestions — they define the structured format that competent authorities expect.

The templates include both structured data fields (dropdown selections, numeric values, date/time stamps) and narrative sections. Common structured fields include:

  • Incident identifier (assigned by the reporting entity)
  • Reporting entity identification (LEI, entity name, entity type)
  • Incident detection date and time (UTC)
  • Incident classification (per RTS criteria)
  • Services affected (mapped to the entity's register of critical functions)
  • Impact quantification (clients, transactions, geographic scope)
  • Root cause category (from a defined taxonomy)
  • Remediation status (from a defined set of options)

The narrative sections require clear, factual prose that avoids both technical jargon (regulators are not network engineers) and vague euphemism (regulators recognize obfuscation). The narrative should read like an engineering incident report: what happened, when, what was affected, what was done, what remains to be done.
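In tooling, the structured fields listed above can be modeled as a typed record so that every report stage draws from the same validated data. This is a hypothetical sketch: the field names and option sets are illustrative, not the actual ITS schema.

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from enum import Enum

class RemediationStatus(Enum):   # illustrative option set, not the ITS taxonomy
    ONGOING = "ongoing"
    COMPLETED = "completed"

@dataclass
class IncidentReportFields:
    incident_id: str                 # assigned by the reporting entity
    lei: str                         # reporting entity LEI (20 characters)
    entity_name: str
    detected_at_utc: datetime
    classification: str              # per RTS criteria
    services_affected: list[str]     # mapped to register of critical functions
    clients_affected: int
    root_cause_category: str         # from a defined taxonomy
    remediation_status: RemediationStatus

    def __post_init__(self):
        if len(self.lei) != 20:
            raise ValueError("LEI must be 20 characters")
```

Validating structural constraints (like LEI length) at record creation, rather than at submission time, keeps a malformed field from surfacing minutes before a deadline.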

FortisEU's incident management module maps to the ESA template structure, auto-populating structured fields from your incident data model and providing guided narrative sections that align with regulatory expectations.

Common Mistakes in First Submissions

Based on the first year of DORA reporting experience, several recurring mistakes emerge:

Under-Classification

The most consequential mistake is failing to classify a reportable incident as major. This typically happens when: the classification authority is unclear or distributed across multiple teams; the classification criteria are ambiguous or not operationalized; or the team applies a "wait and see" approach, hoping the incident resolves before it reaches the major threshold.

Under-classification has compounding consequences. When the incident is later reclassified as major, the reporting timeline resets — but the competent authority now sees a delayed initial notification, which raises questions about the entity's classification process maturity. The remedy is a clear classification authority (one designated role), operationalized criteria (not just the RTS text, but translated into your operational context), and a bias toward reporting: when in doubt, classify as major and de-escalate later if warranted.

Inconsistent Narrative Across Report Stages

The three reports must tell a coherent, evolving story. The final report should be a natural extension of the initial and intermediate reports, not a contradictory retelling. Organizations frequently undermine coherence by: having different authors for each stage, not maintaining a single incident chronology that all reports reference, or changing key terminology between reports (calling something a "degradation" in the initial report and a "failure" in the final report without explaining the reclassification).

Maintain a single incident record that accumulates detail over time. Each report stage draws from this single source, adding new information while preserving consistency with previous submissions.
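The single-source-of-truth pattern can be as simple as an accumulating record that each report stage snapshots. A minimal sketch (class and method names are hypothetical, not a FortisEU API):

```python
class IncidentRecord:
    """Single source of truth: facts accumulate over time, and each
    report stage freezes a snapshot so earlier submissions stay auditable."""

    def __init__(self, incident_id: str):
        self.incident_id = incident_id
        self.facts = {}        # current consolidated view
        self.snapshots = {}    # stage name -> frozen copy at filing time

    def update(self, **new_facts):
        self.facts.update(new_facts)

    def snapshot(self, stage: str) -> dict:
        self.snapshots[stage] = dict(self.facts)
        return self.snapshots[stage]
```

Because each stage's snapshot is frozen at filing time, any divergence between what was reported initially and what is reported finally is explicit in the record, which makes the "explain the reclassification" step straightforward rather than forensic.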

Inadequate Impact Quantification

Competent authorities want numbers, not narratives about impact. "A significant number of clients were affected" is not adequate. "Approximately 34,000 retail clients and 120 institutional counterparts experienced service degradation for 6.5 hours" is adequate. This quantification requires real-time coordination with business operations teams — the incident response team typically does not have direct access to client impact data.

Build impact quantification into your incident response workflow from the start. Assign a specific role (business impact analyst) to begin gathering client and transaction impact data as soon as the incident is declared, not as an afterthought when the report deadline approaches.

Missing Connection to ICT Risk Framework

The final report should demonstrate that the incident has been analyzed within the context of your ICT risk management framework (Art. 6) and that lessons learned will feed into framework improvements. Reports that treat the incident as an isolated event — rather than connecting it to risk assessments, control effectiveness, and governance processes — signal to regulators that the ICT risk framework is not operationally integrated.

After every major incident, explicitly document: which controls were tested and whether they performed as designed; what the incident reveals about risk assessment accuracy; and what framework updates are planned as a result.

Failure to Coordinate Parallel Notifications

For entities subject to both DORA and other reporting frameworks (NIS2, GDPR), failing to coordinate parallel notifications creates inconsistency risk. If your DORA report to the competent authority describes the incident differently than your NIS2 notification to the CSIRT or your GDPR Art. 33 notification to the DPA, regulators will notice — and the discrepancy will generate additional scrutiny.

Parallel Reporting: DORA and NIS2

Financial entities subject to both DORA and NIS2 face dual reporting obligations for ICT incidents. While DORA Art. 1(2) establishes DORA as lex specialis — meaning DORA requirements take precedence where they conflict with NIS2 — this does not eliminate NIS2 reporting obligations. It means that where DORA provides specific rules for the financial sector, those rules apply instead of the equivalent NIS2 provisions. But NIS2 may impose additional obligations in areas where DORA is silent.

The practical parallel reporting workflow should address:

Trigger alignment. DORA's "major ICT-related incident" threshold and NIS2's "significant incident" threshold under Art. 23(3) use different criteria. An incident may be major under DORA but not significant under NIS2, or vice versa, or both. Your classification process must evaluate against both thresholds simultaneously to determine which reports are required.

Timeline coordination. DORA's initial notification deadline (4 hours from classification / 24 hours from detection) and NIS2's early warning deadline (24 hours from becoming aware of the significant incident) create overlapping but non-identical timelines. Build a unified timeline that satisfies both requirements by hitting the earliest deadline across all applicable frameworks.

Content harmonization. Where both DORA and NIS2 reports are required, ensure the factual content is consistent. Use a single incident record as the source of truth, with report-specific formatting applied as a final step. FortisEU's compliance automation platform supports this by generating framework-specific reports from a unified incident data model.

Recipient management. DORA reports go to the financial competent authority. NIS2 reports go to the CSIRT and, in some member states, also to the competent authority designated under NIS2. For entities in the financial sector, the NIS2 competent authority may be the same entity as the DORA competent authority, or it may be different. Map recipient requirements per member state and per applicable framework.

For entities that also have GDPR notification obligations — when the incident involves a personal data breach — a third parallel reporting stream activates under GDPR Art. 33 (notification to the DPA within 72 hours) and potentially Art. 34 (communication to data subjects). The DPO function must be integrated into the incident response workflow for incidents with personal data implications.
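Under those rules, the earliest applicable deadline across all three streams can be derived mechanically from a single set of incident timestamps. A simplified sketch under stated assumptions: the GDPR 72-hour clock is modeled from the detection time (in practice it runs from becoming aware of the breach), and framework applicability is passed in as flags determined by the classification process.

```python
from datetime import datetime, timedelta, timezone

def parallel_deadlines(detected_at: datetime, classified_at: datetime,
                       nis2_significant: bool = False,
                       personal_data_breach: bool = False) -> dict:
    """Return the first-notification deadline per applicable framework,
    ordered soonest-first. Simplified model for illustration."""
    deadlines = {
        # DORA: 4h from classification, capped at 24h from detection
        "DORA initial notification": min(classified_at + timedelta(hours=4),
                                         detected_at + timedelta(hours=24)),
    }
    if nis2_significant:
        # NIS2 Art. 23: early warning within 24h of becoming aware
        deadlines["NIS2 early warning"] = detected_at + timedelta(hours=24)
    if personal_data_breach:
        # GDPR Art. 33: DPA notification within 72h (modeled from detection)
        deadlines["GDPR Art. 33 notification"] = detected_at + timedelta(hours=72)
    return dict(sorted(deadlines.items(), key=lambda kv: kv[1]))

streams = parallel_deadlines(
    detected_at=datetime(2026, 2, 14, 8, 0, tzinfo=timezone.utc),
    classified_at=datetime(2026, 2, 14, 10, 0, tzinfo=timezone.utc),
    nis2_significant=True, personal_data_breach=True)
```

Sorting soonest-first makes the unified timeline concrete: the team works to the head of the list, and each later stream's content is harmonized against what was already filed.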

Building the Reporting Pipeline

The organizations that handle DORA incident reporting well treat it as a pre-engineered pipeline, not an ad-hoc process. The pipeline has defined stages, roles, inputs, and quality gates.

Stage 1: Detection and Initial Assessment (0-2 hours). The incident is detected, triaged, and initially assessed for major incident classification. Output: classification decision with documented rationale.

Stage 2: Initial Notification (2-4 hours from classification). The initial notification is drafted, reviewed, approved, and submitted. Output: filed initial notification with acknowledgment from the competent authority.

Stage 3: Response and Investigation (4-72 hours). Technical response continues while the investigation develops root cause understanding, impact quantification is gathered from business operations, and the intermediate report is drafted. Output: filed intermediate report.

Stage 4: Analysis and Remediation (72 hours - 1 month). Root cause analysis is completed, total impact is quantified, lessons learned are documented, remediation plan is defined and tracked, and the final report is drafted and submitted. Output: filed final report.

Stage 5: Post-Incident Integration. Incident findings are fed into the ICT risk management framework. Control improvements are planned and tracked. The register of information is updated if third-party provider involvement is identified. Risk management assessments are revised to reflect new information.

Each stage should have a designated lead, a defined input set (what information is needed from whom), a quality gate (what must be verified before the output is finalized), and an escalation path (what happens when a dependency is not met within the timeline).
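The stage structure above — lead, deadline, quality gate — can be encoded directly, so that during an incident the pipeline itself reports which gate is blocking. A hypothetical sketch; the role names, deadlines, and gate predicates are illustrative placeholders to be replaced with your own RACI and criteria:

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class Stage:
    name: str
    lead_role: str
    deadline_hours: float                 # from incident declaration
    quality_gate: Callable[[dict], bool]  # must pass before output is final

PIPELINE = [
    Stage("Detection and initial assessment", "incident commander", 2,
          lambda rec: "classification" in rec),
    Stage("Initial notification", "regulatory reporting lead", 4,
          lambda rec: rec.get("initial_filed", False)),
    Stage("Response and investigation", "technical lead", 72,
          lambda rec: "root_cause_hypothesis" in rec),
    Stage("Analysis and remediation", "risk manager", 720,  # ~1 month
          lambda rec: rec.get("final_filed", False)),
]

def next_blocked_stage(record: dict) -> Optional[Stage]:
    """Return the first stage whose quality gate is not yet satisfied."""
    for stage in PIPELINE:
        if not stage.quality_gate(record):
            return stage
    return None
```

During a tabletop exercise, measuring elapsed time at each `Stage` against its `deadline_hours` turns "identify bottlenecks" from an impression into a number.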

Preparing for Your First Major Incident Report

For financial entities that have not yet filed a major incident report under DORA, the following preparation steps are essential:

  1. Designate a classification authority. One role (not a committee) who has the authority and responsibility to classify incidents as major. This person needs training on the RTS criteria and access to the data needed for classification decisions.

  2. Operationalize the classification criteria. Translate the RTS thresholds into your operational context. What does "significant number of clients" mean for your entity? What service degradation duration triggers the major threshold? Document these operationalized criteria and train the classification authority.

  3. Build report templates. Pre-populate the ESA templates with static information (entity details, contact information, service registers). Build tooling that auto-fills dynamic fields from your incident management system.

  4. Define the coordination model. Who provides technical analysis? Who quantifies business impact? Who reviews the narrative? Who approves and submits? Document this RACI and ensure all participants know their role before an incident occurs.

  5. Run a tabletop exercise. Simulate a major incident scenario and walk through the full reporting pipeline. Measure elapsed time at each stage. Identify bottlenecks. Refine the process based on exercise findings. FortisEU's Fortis Arena capability supports structured tabletop exercises that test your reporting pipeline against realistic scenarios.

  6. Establish parallel reporting awareness. If your entity is subject to NIS2 or GDPR in addition to DORA, ensure the incident response team understands which reporting streams activate for which incident types and how to coordinate parallel submissions.

Key Takeaways

  • The three-report cycle is a pipeline, not three separate tasks. Initial notification (4 hours), intermediate report (72 hours), and final report (one month) must tell a coherent, evolving story from a single source of truth. Design the pipeline end-to-end before your first major incident.

  • Classification speed determines reporting success. The 4-hour notification window from classification is the binding constraint. A single designated classification authority with operationalized RTS criteria is essential — committee-based classification will miss the deadline.

  • Impact quantification must start immediately, not at report time. Assign a business impact analyst role that begins gathering client, transaction, and service impact data as soon as the incident is declared. Waiting until the intermediate report deadline to quantify impact creates a predictable bottleneck.

  • Parallel DORA-NIS2 reporting requires unified incident data. For entities subject to both, a single incident record feeding framework-specific report generation prevents the inconsistency that draws supervisory scrutiny. Map trigger thresholds, timelines, and recipients for both frameworks.

  • Connect incident reporting to your ICT risk framework. The final report must demonstrate that the incident has been analyzed within your Art. 6 framework and that lessons learned will drive control improvements. Isolated incident reports signal framework immaturity to regulators.

Next Step

Turn guidance into evidence.

If procurement is involved, start with the Trust Center. If you want to see the product, create an account or launch a live demo.