How to Conduct a Cybersecurity Risk Assessment
Detailed guide to conducting cybersecurity risk assessments aligned with NIS2, DORA, and GDPR requirements. Covers threat identification, vulnerability analysis, impact and likelihood matrices, risk scoring, treatment planning, and regulatory documentation expectations.
1. Risk assessment is mandatory under NIS2 Article 21(2)(a), DORA Article 8, and GDPR Article 35 — not a discretionary best practice.
2. Scope assessments by business function rather than individual assets to capture dependencies and align with DORA's function-oriented approach.
3. Calibrate your impact and likelihood scales with specific, measurable criteria to ensure consistent scoring across assessors and assessment cycles.
4. Risk treatment decisions must be documented and approved by the management body, particularly for DORA-obligated financial entities under Article 6(8).
5. Maintain continuous risk monitoring between formal assessment cycles to ensure your risk picture does not become stale.
1. Risk Assessment Foundations
A cybersecurity risk assessment is the systematic process of identifying threats to your information systems, analysing vulnerabilities that those threats could exploit, evaluating the potential impact and likelihood of exploitation, and documenting the results in a form that supports risk treatment decisions. It is not a penetration test (which evaluates specific technical vulnerabilities), not a compliance audit (which evaluates adherence to specific requirements), and not a business impact analysis (which evaluates the consequences of process disruption). A risk assessment subsumes elements of all three but serves a broader purpose: providing the organisation with a comprehensive, prioritised view of its cyber risk landscape.
Under EU regulations, risk assessment is not discretionary. NIS2 Article 21(2)(a) lists risk analysis as the first mandatory measure category for essential and important entities. DORA Article 8 requires financial entities to identify all sources of ICT risk, assess cyber threats and ICT vulnerabilities relevant to their ICT-supported business functions, and assess the potential impact of ICT disruptions. GDPR Article 35 mandates Data Protection Impact Assessments (DPIAs) for processing operations likely to result in high risk to individuals' rights and freedoms. Each regulation has slightly different scoping and documentation requirements, but all share a common expectation: that organisations systematically evaluate the risks they face and make documented decisions about how to address them.
The frequency of risk assessment is also regulated. DORA Article 8(6) requires financial entities to perform ICT risk assessments upon major changes to network and information system infrastructure, processes, or procedures. NIS2, through its transposition into national law, typically requires regular reassessment — with the exact cadence varying by Member State. GDPR DPIAs must be performed before commencing the relevant processing activity and reviewed when there is a change in the risk. Best practice is to conduct a comprehensive organisational risk assessment annually, supplemented by targeted assessments triggered by significant changes, incidents, or new threat intelligence.
ENISA publishes an annual Threat Landscape Report that provides an authoritative overview of the top cyber threats facing EU organisations. Use this as a primary input to your threat identification process — it is the most directly relevant threat intelligence source for EU regulatory risk assessments.
2. Asset Identification and Scoping
You cannot assess risks to assets you do not know you have. The first operational step in any risk assessment is building a comprehensive inventory of the assets within scope. For cybersecurity risk assessments, assets include information assets (databases, file shares, SaaS applications, email systems), technology assets (servers, network devices, endpoints, cloud instances), people (system administrators, developers, third-party personnel with system access), processes (change management, incident response, backup procedures), and physical assets (data centres, network cabinets, office spaces where sensitive data is processed).
DORA Article 8(1) is unusually specific about this requirement: financial entities must identify and document all ICT-supported business functions, roles, and responsibilities; all information and ICT assets supporting those functions, including interconnections and dependencies; and the configuration of ICT assets. This level of documentation goes beyond what most organisations maintain as standard IT asset management. It requires mapping the relationship between business functions and the technology that supports them — not just knowing that you have a database server, but knowing which business functions depend on it, what data it holds, who accesses it, and what other systems it connects to.
Scoping decisions determine the assessment's value and tractability. An assessment scoped too broadly (all IT assets across all business units) becomes unwieldy and produces generic findings. An assessment scoped too narrowly (just the web application) misses systemic risks and dependency chains. The recommended approach is to scope by business function: identify the critical business functions (using your BIA if one exists), map the assets that support each function, and assess risks at the function level. This naturally captures dependencies, avoids the trap of assessing individual servers in isolation, and aligns with DORA's function-oriented approach.
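The function-oriented scoping described above can be sketched as a simple data model. Everything here is illustrative: the function names, asset names, and criticality labels are assumptions, not a prescribed schema.

```python
from dataclasses import dataclass, field

@dataclass
class Asset:
    name: str
    kind: str                                    # e.g. "database", "saas", "endpoint"
    data_classes: list[str] = field(default_factory=list)
    depends_on: list[str] = field(default_factory=list)   # names of other assets

@dataclass
class BusinessFunction:
    name: str
    criticality: str                             # e.g. "critical", "important", "standard"
    assets: list[Asset] = field(default_factory=list)

# Hypothetical example: one critical function and the assets behind it.
payments = BusinessFunction(
    name="Retail payments",
    criticality="critical",
    assets=[
        Asset("payments-db", "database", ["personal", "financial"], depends_on=["core-network"]),
        Asset("payment-gateway", "saas", ["financial"], depends_on=["payments-db"]),
    ],
)

def assets_in_scope(functions, threshold=frozenset({"critical", "important"})):
    """Collect assets supporting functions at or above the scoping threshold."""
    return [a for f in functions if f.criticality in threshold for a in f.assets]

print([a.name for a in assets_in_scope([payments])])  # ['payments-db', 'payment-gateway']
```

In practice this inventory would be populated from a CMDB or asset discovery tooling rather than written by hand, but the function-to-asset linkage is the relationship that scoping by business function is meant to surface.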
Document your scoping decisions and their rationale. If you exclude certain assets or functions from the assessment, explain why — for example, a function that processes no personal data and has no NIS2 or DORA nexus may be legitimately out of scope. Regulatory examiners will challenge unexplained exclusions, particularly if an incident later occurs in an area that was excluded from the assessment.
3. Threat Identification and Vulnerability Analysis
Threat identification answers the question: what could go wrong? A threat is any circumstance or event with the potential to adversely affect an asset through unauthorised access, destruction, disclosure, modification, or denial of service. Threats can be intentional (targeted cyberattacks, insider threats, state-sponsored espionage), unintentional (misconfiguration, accidental data exposure, human error), or environmental (natural disasters, power failures, pandemic-related disruptions). A thorough risk assessment considers all three categories.
For EU organisations in 2026, the threat landscape has several distinctive characteristics. State-sponsored cyber operations targeting EU critical infrastructure have escalated significantly since 2022, with ENISA's Threat Landscape 2025 identifying state-nexus actors as a top-tier threat to energy, transport, and government sectors — all NIS2 Annex I categories. Ransomware remains the most financially impactful threat to EU businesses, with DORA-obligated financial entities being particularly attractive targets due to their perceived willingness to pay to restore operations. Supply chain attacks have matured from theoretical concerns to operational realities, with the exploitation of managed service providers and software dependencies accounting for an increasing proportion of EU incidents. And the EU AI Act's entry into application introduces a new threat vector: adversarial manipulation of AI systems used in high-risk decision-making.
Vulnerability analysis identifies the weaknesses that threats could exploit. Vulnerabilities are not limited to unpatched software (though that remains the most commonly exploited category). They include architectural weaknesses (flat networks without segmentation, single points of failure), procedural weaknesses (lack of multi-factor authentication, inadequate access reviews, missing logging), human weaknesses (susceptibility to social engineering, insufficient security awareness), and supply chain weaknesses (reliance on single vendors, lack of vendor security assessment, unclear contractual security obligations). For each threat-vulnerability pair, assess whether existing controls adequately mitigate the risk or whether residual exposure remains.
Use structured methodologies for threat identification rather than brainstorming sessions. The STRIDE model (Spoofing, Tampering, Repudiation, Information disclosure, Denial of service, Elevation of privilege) provides a systematic way to identify threats to technical systems. ENISA's threat taxonomy provides a comprehensive categorisation of threat types relevant to EU organisations. For DORA-specific assessments, the threat-led penetration testing (TLPT) framework under Article 26 — based on the TIBER-EU framework — provides a threat-intelligence-driven methodology for identifying realistic attack scenarios against financial entities.
Do not limit threat identification to external cyber threats. DORA Article 8(2) explicitly requires consideration of ICT vulnerabilities and threats, including ICT-related incidents and near-misses. Internal threats — accidental or malicious insider actions, process failures, and configuration drift — account for a significant proportion of EU incidents and must be included in your assessment.
4. Impact and Likelihood Analysis
Once threats and vulnerabilities have been identified, each risk scenario must be evaluated for its potential impact and likelihood of occurrence. This is the analytical core of the risk assessment — the step that transforms a list of theoretical concerns into a prioritised risk register that drives resource allocation decisions.
Impact analysis evaluates the consequences of a risk scenario materialising. For cybersecurity risks in an EU regulatory context, impact should be assessed across multiple dimensions: operational impact (disruption to business functions, loss of service availability), financial impact (direct costs of incident response, regulatory fines, contractual penalties, revenue loss), reputational impact (customer trust, market perception, media coverage), legal and regulatory impact (enforcement actions, supervisory measures, personal liability of management body members under NIS2 Article 20), and impact on individuals' rights (data breach consequences under GDPR, particularly relevant where personal data is involved). DORA Article 8(3) requires financial entities to assess the potential impact of ICT disruptions on the financial entity itself, its clients, counterparts, and the financial system — adding systemic impact as a fifth dimension.
Likelihood analysis evaluates the probability of a risk scenario occurring within a defined time horizon (typically one year). Likelihood assessment should be informed by threat intelligence (what are threat actors actually doing, not just what they could theoretically do), vulnerability data (which weaknesses are actually present, based on scanning and assessment), historical incident data (what has happened before, both internally and to comparable organisations), and control effectiveness (how well do existing controls reduce the likelihood of exploitation). Avoid the trap of assessing likelihood in a vacuum — a vulnerability with a public exploit and active exploitation in the wild has a fundamentally different likelihood profile than a theoretical vulnerability with no known exploitation.
Combine impact and likelihood into a risk score using a consistent methodology. A 5x5 matrix (five impact levels from negligible to critical, five likelihood levels from rare to almost certain) is the most common approach and provides sufficient granularity for most organisations. Define each level with specific, measurable criteria — not just 'high impact' but 'financial loss exceeding EUR 5 million' or 'service disruption exceeding 24 hours affecting more than 10,000 users'. This calibration ensures that different assessors reach comparable conclusions and that risk scores are meaningful rather than arbitrary. Document the calibration criteria as part of your risk methodology and review them annually to ensure they remain aligned with your organisation's risk appetite and operating environment.
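A minimal sketch of such a calibrated 5x5 scoring scheme follows. The euro and duration thresholds are placeholders to be replaced with criteria matched to your own risk appetite, and the banding cut-offs (8 and 15) are one common convention, not a regulatory requirement.

```python
# Hypothetical calibration: thresholds are placeholders, not regulatory figures.
IMPACT_LEVELS = {
    1: "negligible (< EUR 10k, no service disruption)",
    2: "minor (< EUR 100k, disruption < 1h)",
    3: "moderate (< EUR 1M, disruption < 4h)",
    4: "major (< EUR 5M, disruption < 24h)",
    5: "critical (>= EUR 5M, or disruption >= 24h affecting > 10,000 users)",
}
LIKELIHOOD_LEVELS = {1: "rare", 2: "unlikely", 3: "possible", 4: "likely", 5: "almost certain"}

def risk_score(impact: int, likelihood: int) -> tuple[int, str]:
    """Multiply calibrated levels and band the product into a rating."""
    if impact not in IMPACT_LEVELS or likelihood not in LIKELIHOOD_LEVELS:
        raise ValueError("impact and likelihood must both be 1-5")
    score = impact * likelihood
    if score >= 15:
        band = "high"
    elif score >= 8:
        band = "medium"
    else:
        band = "low"
    return score, band

print(risk_score(4, 4))  # (16, 'high')
```

Keeping the calibration text next to the numeric levels, as here, makes it harder for assessors to drift back into uncalibrated "gut feel" scoring.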
5. Risk Treatment Planning
Risk treatment is where the assessment translates into action. For each risk above your organisation's risk acceptance threshold, you must select a treatment strategy: mitigate (implement controls to reduce impact or likelihood), transfer (shift the risk to a third party through insurance or contractual arrangements), avoid (cease the activity that generates the risk), or accept (acknowledge the risk and retain it, with documented justification). Under DORA Article 6(8), financial entities must document their ICT risk treatment decisions and obtain approval from their management body — risk acceptance is not a unilateral decision by the IT department.
For each risk to be mitigated, develop a treatment plan that specifies the control or controls to be implemented, the expected risk reduction (how the residual risk compares to the inherent risk after treatment), the implementation timeline, the responsible owner, the required resources, and the success criteria. Treatment plans should be realistic and sequenced — attempting to implement fifty new controls simultaneously will result in none being implemented effectively. Prioritise based on the risk assessment results: address the highest-scoring risks first, and sequence treatments so that foundational controls (access management, logging, network segmentation) are implemented before dependent controls (anomaly detection, automated response).
Risk transfer through cyber insurance is increasingly relevant but also increasingly complex for EU organisations. The cyber insurance market has matured significantly, with insurers now requiring evidence of specific controls (multi-factor authentication, endpoint detection and response, offline backups, incident response plans) before underwriting policies. For DORA-obligated entities, insurance does not satisfy the regulatory obligation to implement ICT risk management measures — you cannot insure your way out of Article 9 protection and prevention requirements. Insurance is a complement to, not a substitute for, operational controls.
Residual risk — the risk remaining after treatment — must be formally evaluated and accepted. Every organisation retains some residual risk; the question is whether the retained risk falls within the organisation's risk appetite. Document residual risk levels in your risk register, obtain management body approval for risk acceptance decisions, and review accepted risks at least annually to determine whether the acceptance rationale remains valid. GDPR Article 35(7)(d) requires DPIAs to describe the measures envisaged to address risks, including safeguards and mechanisms to ensure personal data protection — making risk treatment documentation not just good practice but a legal requirement where personal data is involved.
Maintain a 'risk treatment backlog' — a prioritised list of approved but not-yet-implemented treatments with estimated resource requirements. This gives your leadership team a clear view of the investment needed to close identified gaps and supports budget discussions with concrete, risk-justified line items.
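One way to keep such a backlog concrete is a small record per approved treatment, sorted so the highest inherent risks surface first and ties resolve toward quicker wins. All entries below are invented examples.

```python
from dataclasses import dataclass

@dataclass
class Treatment:
    risk_id: str
    control: str
    inherent_score: int        # from the 5x5 matrix (impact x likelihood)
    expected_residual: int
    effort_days: int
    owner: str

# Hypothetical backlog entries.
backlog = [
    Treatment("R-012", "Deploy MFA for admin access", 20, 8, 15, "IAM team"),
    Treatment("R-007", "Segment flat office network", 16, 6, 40, "Network team"),
    Treatment("R-031", "Quarterly access reviews", 9, 4, 10, "IT ops"),
]

# Work highest inherent risk first; break ties by effort so quick wins surface.
backlog.sort(key=lambda t: (-t.inherent_score, t.effort_days))
for t in backlog:
    print(f"{t.risk_id}: {t.control} (score {t.inherent_score}, ~{t.effort_days}d, {t.owner})")
```

A listing like this, with effort estimates attached, is the "concrete, risk-justified line items" format the budget discussion needs.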
6. Documentation and Regulatory Reporting
A risk assessment that is not documented is a risk assessment that did not happen — at least in the eyes of EU regulators. NIS2 supervisory authorities, DORA competent authorities, and GDPR supervisory authorities all have the power to request risk assessment documentation during examinations, audits, or incident investigations. The quality, completeness, and currency of this documentation directly affect how regulators perceive your organisation's risk management maturity.
Your risk assessment documentation should include: the assessment scope and methodology (including the risk scoring criteria, threat sources consulted, and vulnerability assessment methods used); the asset inventory and classification results; the complete threat and vulnerability analysis with the evidence base for each identification; the impact and likelihood analysis for each risk scenario with supporting rationale; the risk register with inherent risk scores, treatment decisions, control references, residual risk scores, and risk owners; the risk treatment plans with timelines, resource allocations, and success criteria; and the management body approval records showing that leadership has reviewed and accepted the assessment results.
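As an illustration of how those register fields fit together for a single risk, one row might look like the following; every name, score, and approval record is hypothetical.

```python
# Illustrative register row; field names are not a regulatory schema.
register_entry = {
    "risk_id": "R-012",
    "scenario": "Credential theft against administrator accounts",
    "function": "Retail payments",
    "inherent_impact": 5, "inherent_likelihood": 4, "inherent_score": 20,
    "treatment": "mitigate",
    "controls": ["MFA on admin access", "privileged access monitoring"],
    "residual_impact": 4, "residual_likelihood": 2, "residual_score": 8,
    "owner": "Head of IAM",
    "approved_by": "Management body, 2026-03 meeting",  # hypothetical approval record
}

print(f"{register_entry['risk_id']}: inherent {register_entry['inherent_score']}"
      f" -> residual {register_entry['residual_score']}")
```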
For GDPR DPIAs specifically, Article 35(7) requires the documentation to include: a systematic description of the processing operations and their purposes, an assessment of the necessity and proportionality of the processing, an assessment of the risks to individuals' rights and freedoms, and the measures envisaged to address those risks. DPIAs that do not address all four elements will be found insufficient by supervisory authorities.
Risk assessment results should feed into multiple reporting streams. The board or management body should receive a summary of the top risks, material changes since the last assessment, and any risks where treatment is not progressing as planned. The compliance team should receive the full risk register for mapping against regulatory requirements. The IT and security teams should receive the treatment plans for implementation planning. And the internal audit function should receive the assessment methodology and results for independent validation. Avoid the pattern where the risk assessment is produced, filed, and forgotten until the next audit cycle — it should be a living input to ongoing governance, not a periodic deliverable.
7. Continuous Assessment and Improvement
A risk assessment is not a one-time activity but a recurring process that must be refreshed as the threat landscape, business environment, and regulatory context evolve. The question of how often to reassess has both regulatory and practical dimensions. DORA Article 8(6) requires reassessment upon major infrastructure changes. NIS2 Article 21(2)(a) implies ongoing risk analysis. GDPR Article 35 requires review when processing changes. Beyond regulatory mandates, best practice dictates reassessment when new threat intelligence indicates a material change in the threat landscape, when a significant security incident occurs (whether at your organisation or at a comparable entity), when major organisational changes occur (mergers, divestitures, new product launches, geographic expansion), or when the regulatory environment shifts.
Between full reassessments, maintain a continuous risk monitoring capability. This does not require a full assessment cycle — it means that new threats are evaluated against existing vulnerabilities as they emerge, that changes to the control environment are evaluated for their impact on residual risk levels, and that near-miss incidents are analysed for their implications on risk likelihood estimates. Continuous monitoring bridges the gap between periodic assessments and ensures that your risk picture does not become stale between formal review cycles.
Track assessment quality metrics to drive improvement. Useful metrics include: the percentage of risk scenarios that materialised as actual incidents (a high rate suggests your threat identification is accurate but your controls are underperforming; a low rate means either your controls are effective or your threat identification is incomplete), the accuracy of impact and likelihood estimates for materialised risks (how close your estimates were to actual outcomes), the timeliness of risk treatment implementation (whether treatments are being delivered on schedule), and the trend in residual risk levels over time (whether your overall risk posture is improving or deteriorating). These metrics turn risk assessment from a compliance exercise into a management tool.
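The first two of those metrics can be computed directly from the risk register and the incident log. This sketch assumes both use the same 1-5 impact scale and share risk identifiers; the data is invented.

```python
def estimate_accuracy(register, incidents):
    """Compare predicted vs. observed outcomes for risks that materialised.
    `register` maps risk_id -> (predicted_impact, predicted_likelihood);
    `incidents` maps risk_id -> observed_impact on the same 1-5 scale."""
    deltas = [
        abs(register[rid][0] - observed)
        for rid, observed in incidents.items()
        if rid in register
    ]
    materialised_rate = len(deltas) / len(register) if register else 0.0
    mean_impact_error = sum(deltas) / len(deltas) if deltas else None
    return materialised_rate, mean_impact_error

# Hypothetical cycle: three assessed scenarios, one materialised.
register = {"R-007": (4, 3), "R-012": (5, 4), "R-031": (2, 2)}
incidents = {"R-012": 4}   # impact landed one level below the estimate
print(estimate_accuracy(register, incidents))  # (0.3333333333333333, 1.0)
```

Tracked cycle over cycle, a shrinking mean impact error indicates your calibration criteria are converging on reality.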
Finally, solicit feedback from assessment participants and stakeholders after each cycle. Business unit leaders who participated in the assessment can identify whether the process captured the risks they are actually worried about. Technical teams can identify whether the vulnerability analysis was accurate and complete. And leadership can identify whether the assessment results informed their decision-making or sat unused in a shared drive. Use this feedback to refine your methodology, improve your threat sources, and increase the assessment's practical value in each subsequent cycle.
How does a cybersecurity risk assessment differ from a GDPR DPIA?
A cybersecurity risk assessment evaluates threats to information systems and their potential business impact. A GDPR DPIA evaluates the impact of data processing operations on individuals' rights and freedoms. While they overlap (both assess cyber threats to personal data), a DPIA has additional requirements: it must assess the necessity and proportionality of the processing, consider risks from the data subject's perspective, and demonstrate compliance with GDPR principles. Many organisations combine them for efficiency but must ensure the DPIA-specific elements are not lost in the broader cybersecurity assessment.
What risk scoring methodology should we use?
A 5x5 impact-likelihood matrix is the most widely used and provides sufficient granularity for most EU organisations. Define five impact levels (e.g., negligible, minor, moderate, major, critical) and five likelihood levels (e.g., rare, unlikely, possible, likely, almost certain) with specific calibration criteria. For financial entities under DORA, consider extending impact analysis to include systemic impact dimensions. Qualitative scoring is acceptable for most EU regulatory purposes; quantitative methods (like FAIR analysis) add precision but require significantly more data and expertise.
Who should conduct the risk assessment?
Risk assessments should be led by personnel with both cybersecurity expertise and understanding of the business context — the CISO, risk manager, or a dedicated risk assessment team. However, they should not be conducted in isolation. Business unit leaders must participate to validate asset inventories and impact estimates. Technical staff must contribute vulnerability and threat intelligence. And the management body must be involved in scoping decisions and risk acceptance. DORA Article 5(2) makes management body engagement a personal obligation, not a delegation option.
How do we handle risk assessment for cloud and SaaS services?
Cloud and SaaS introduce a shared responsibility model where some risks are managed by the provider and some by the customer. Your risk assessment must cover both: the provider's risk management (evaluated through SOC 2 reports, ISO 27001 certificates, contractual commitments, and DORA Article 28 due diligence for financial entities) and your own configuration and usage risks (access management, data classification, integration security). Do not assume the provider has eliminated all risks — misconfigurations, excessive permissions, and inadequate data protection are customer-side vulnerabilities that the best provider security cannot mitigate.
Can we use automated tools for risk assessment?
Automated tools can accelerate specific assessment steps — asset discovery, vulnerability scanning, threat intelligence aggregation, and control effectiveness evaluation. However, they cannot replace human judgement in scoping decisions, impact calibration, likelihood estimation based on business context, and risk treatment strategy selection. The optimal approach combines automated data collection (to ensure completeness and efficiency) with expert analysis (to ensure accuracy and relevance). For DORA TLPT assessments, Article 26(4) requires that testers have relevant expertise — automation assists but does not satisfy this requirement.