NIS2 Compliance Won’t Fail On Technology. It Will Fail On People.
Published on: 6 Feb 2026
Last modified on: 12 Mar 2026
NIS2 has raised the bar for cyber security across Europe, and for good reason. Threats are more persistent, more sophisticated, and more disruptive than ever before, and regulators are responding by demanding stronger security controls, clearer accountability, and better visibility into how organisations manage risk.

In response, many organisations have taken familiar and sensible steps. They’ve invested in new security tools, strengthened their technical defences, refined policies, and increased risk reporting to leadership. All of this plays an important role in improving security posture. And yet, history shows that these measures alone won’t be enough.
When breaches happen, they rarely begin with a breakdown in technology. They begin with a human decision, often made quickly, under pressure, or without enough context to recognise the risk in that moment. That is where NIS2 success or failure will ultimately be determined.
NIS2 Puts People Firmly in Scope
One of the most common misconceptions around NIS2 is that it’s primarily a technical or IT-led regulation. While it does include requirements around systems, monitoring, incident reporting, and supply chain security, its scope is much broader than that.
NIS2 places clear emphasis on risk management, governance, and organisational resilience. It expects organisations to understand where their real risks exist, how those risks evolve over time, and whether the controls in place are genuinely effective at reducing them. That effectiveness isn’t measured by how many tools are deployed or how comprehensive a policy library looks on paper, but by whether risks are being managed in practice.
Responsibility is also pushed firmly upwards. Senior management are expected to approve risk management measures and oversee their ongoing effectiveness. In that context, human behaviour becomes impossible to ignore. Decisions around access, credential handling, data sharing, and responses under pressure all directly influence whether controls hold up when they’re tested.
NIS2 doesn’t frame this as a secondary or soft issue. It treats human behaviour as a core component of organisational risk.
Most Breaches Still Begin with Everyday Decisions
Despite years of progress in cyber security technology, the most common breach paths remain remarkably consistent. According to the latest Data Breach Investigations Report, around 60 per cent of breaches involve a human element, including phishing, compromised credentials, and routine mistakes, highlighting how much human behaviour still influences risk.
In that same report, stolen or misused credentials were the primary initial access vector in about 22 per cent of cases, with phishing contributing roughly 15 per cent. These figures underline how many incidents begin not because of a flaw in security tooling, but because of everyday decisions made when people are busy, distracted or under pressure.
These situations don’t arise because employees are careless or malicious. They arise because people are trying to do their jobs in fast-moving environments where convenience, urgency, and competing priorities often shape behaviour. Attackers understand this dynamic extremely well, which is why social engineering continues to be such an effective tactic.
Earlier editions of the Verizon DBIR put the human element even higher, at nearly three quarters of breaches, and social engineering remains one of the most consistently successful ways for attackers to gain initial access, precisely because it exploits human decision-making rather than technical flaws.
Technology is designed around defined processes and predictable inputs, but it often assumes people will behave consistently, even when they’re tired, under pressure, or working with incomplete information. From a NIS2 perspective, that gap matters. Regulators aren’t only interested in whether controls exist, but whether they’re resilient enough to withstand real-world conditions.
If a control depends on perfect behaviour in imperfect circumstances, it represents a risk that needs to be understood and managed.
Why Policy and Annual Training Fall Short
Most organisations can demonstrate that they have security policies in place and that employees complete regular awareness training. For a long time, this has been treated as reasonable evidence of due care.
Under NIS2, that assumption becomes harder to defend.
Policies describe how things should work, and annual training explains expected behaviour in theory. What they don’t show is how people actually respond when faced with realistic scenarios that mirror the pressures of their day-to-day roles.
From a regulatory standpoint, this creates a visibility gap. Completion rates and policy acknowledgements demonstrate activity, but they don’t demonstrate effectiveness.
As NIS2 drives a more risk-based and outcomes-focused approach to compliance, organisations will need to show that their awareness programmes influence behaviour in a measurable way.
Behavioural Evidence Matters More Than Attendance
One of the most significant shifts introduced by NIS2 is the focus on ongoing risk management rather than point-in-time compliance.
When it comes to human risk, that means being able to answer practical questions:
- Where do employees struggle most?
- Which behaviours introduce the highest levels of risk?
- How does that risk vary across roles, teams, or locations?
- What evidence shows that learning interventions are actually having an impact?
Behavioural evidence helps answer these questions. Engagement data, responses to realistic scenarios, and patterns in decision-making all provide valuable insight into how people behave when they’re faced with situations that matter.
Attendance and completion metrics on their own can’t provide that level of assurance.
Engagement Is Not a Nice-to-Have
Engagement in security awareness is often discussed in terms of participation or completion, rather than in terms of its impact on how people think and act when faced with risk.
If employees are disengaged, they’re far less likely to absorb guidance, recognise warning signs, or apply learning when it matters most. From a NIS2 perspective, this isn’t a learning design issue; it’s a risk management issue.
Interactive, scenario-based content plays a valuable role here because it mirrors the real decisions people face, rather than abstract rules they are asked to memorise.
This approach aligns closely with regulatory expectations around effectiveness and continuous improvement.
Technology Supports Resilience, People Determine It
Strong technical controls remain essential. Firewalls, monitoring tools, identity systems, and detection capabilities all play a critical role.
What NIS2 also demands is a clear understanding of how those controls interact with human behaviour.
Organisations that approach NIS2 purely as a technology project risk missing this entirely.
Building Defensible NIS2 Alignment
As NIS2 enforcement approaches, organisations will increasingly be asked to demonstrate how they manage risk in practice.
Defensible alignment is built on evidence.
Because ultimately, NIS2 compliance won’t fail because a tool was missing. It will fail when human behaviour is treated as an afterthought rather than a core part of risk management.
Working with MetaCompliance
Meeting NIS2 expectations requires more than demonstrating that training has been delivered.
MetaCompliance helps organisations take a practical and defensible approach to human risk management.
We have identified learning content relevant to NIS2 training requirements; combined with MetaCompliance’s risk-based learning approach, it gives organisations visibility into how people actually behave when faced with risk.
As NIS2 drives greater accountability and scrutiny, organisations that can clearly show how they prepare people will be best placed to meet both regulatory expectations and real-world threats.
Get in touch today to find out how MetaCompliance can support your NIS2 training strategy.
Frequently Asked Questions about NIS2 Compliance
What is NIS2 compliance?
NIS2 compliance refers to meeting the requirements of the EU’s NIS2 Directive, which strengthens cyber security, risk management, and accountability for essential and important organisations across Europe.
Does NIS2 focus only on technology?
No. While technical controls are essential, NIS2 places strong emphasis on governance, risk management, and human behaviour, recognising people as a core part of cyber risk.
Why is human behaviour important under NIS2?
Most cyber incidents begin with human decisions, such as responding to phishing or handling credentials. NIS2 expects organisations to understand and manage this behavioural risk, not just deploy tools.
What evidence do regulators expect for NIS2 human risk management?
Regulators look for behavioural evidence such as engagement data, responses to realistic scenarios, and measurable improvements in decision-making, rather than attendance metrics alone.
How can organisations demonstrate effective NIS2 alignment?
By showing that risks are understood, controls are tested in real-world conditions, behaviours are monitored, and learning programmes are continuously improved based on evidence.