The Compliance Illusion: Why Cybersecurity Regulation Is Producing the Wrong Kind of Security

CIRCIA. NIS2. The EU AI Act. The Cyber Resilience Act. Governments on both sides of the Atlantic are regulating at a scale and speed not seen since Sarbanes-Oxley. They are not necessarily making anyone safer — and in some cases, they are making the problem measurably worse.

By Vladimir Tsakanyan, PhD  ·  Center for Cyber Diplomacy and International Security  ·  cybercenter.space

Ask the security director of a mid-sized American hospital what keeps her up at night in 2026, and you will likely receive two answers. The first is ransomware — the operational threat she has spent years preparing for, briefing her board on, and building defences against. The second, increasingly, is her compliance calendar. CIRCIA requires a 72-hour incident report to CISA, while HIPAA still allows 60 days for breach notification. The timelines are not merely different. They are structurally incompatible, demanding parallel workflows, separate documentation standards, and dual reporting chains for the same underlying event — all while clinical operations continue and patient care cannot pause.
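The incompatibility is simple arithmetic: one discovery date spawns two very different clocks. A minimal sketch, using the article's figures (72 hours under CIRCIA, 60 days under HIPAA) and deliberately simplifying the actual triggering conditions, which differ by rule:

```python
from datetime import datetime, timedelta

# Hypothetical sketch: one incident, two reporting clocks.
# Windows reflect the figures cited in the text; real triggering
# conditions and clock-start rules differ per regulation.
REPORTING_REGIMES = {
    "CIRCIA (CISA)": timedelta(hours=72),
    "HIPAA (HHS/OCR)": timedelta(days=60),
}

def reporting_deadlines(discovered_at: datetime) -> dict[str, datetime]:
    """Return each regime's reporting deadline for a single incident."""
    return {name: discovered_at + window for name, window in REPORTING_REGIMES.items()}

discovered = datetime(2026, 3, 2, 9, 0)
deadlines = reporting_deadlines(discovered)
for regime, due in sorted(deadlines.items(), key=lambda kv: kv[1]):
    print(f"{regime}: report due {due:%Y-%m-%d %H:%M}")
```

For an incident discovered on 2 March 2026, the CISA report is due on 5 March while the HIPAA notification window runs to 1 May — nearly two months apart, each with its own documentation standard.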

This is not an edge case. It is the defining operational reality of cybersecurity governance in 2026: a landscape in which the volume, velocity, and internal contradictions of regulatory obligations have become, in themselves, a security risk.

The argument advanced here is not that regulation is wrong. Mandatory incident reporting, supply chain security requirements, and board-level accountability mandates are, in principle, sound policy instruments. The argument is more precise: that the current architecture of cybersecurity regulation — fragmented across jurisdictions, misaligned in its incentives, and calibrated to the political requirements of legislators rather than the operational realities of defenders — is producing compliance culture at the expense of security culture. These are not the same thing. Conflating them is the central error of the present moment.

The Checkbox Fallacy

The distinction between compliance and security is not new, but it has rarely been more consequential. A compliant organisation is one that satisfies the documented requirements of an applicable regulatory framework. A secure organisation is one that has materially reduced its exposure to the threats most likely to cause it harm. The two conditions frequently overlap. They are not identical, and in a poorly designed regulatory environment, they can actively diverge.

The mechanism of divergence is well understood by practitioners and routinely underweighted by policymakers. When compliance becomes the primary objective — measured by completed audits, filed reports, and checked boxes — organisations rationally allocate their security resources toward meeting documented obligations rather than addressing undocumented risks. Budget cycles, board presentations, and vendor contracts align around the compliance calendar. The threat landscape does not.

A sophisticated attacker does not consult a regulatory framework before selecting a target or a technique. The 2021 Colonial Pipeline breach — the incident that directly catalysed CIRCIA — was not made possible by the absence of incident reporting requirements. It was made possible by an unpatched VPN vulnerability, a compromised credential, and a network architecture that allowed ransomware to propagate from IT systems into operational technology. None of those conditions would have been remediated by the regulation now being written in response to them.

Analyst note

This is not an argument against post-incident reporting requirements. Aggregated incident data, reported consistently across sectors, has genuine intelligence value — it enables pattern recognition, early warning dissemination, and systemic risk assessment that no individual organisation can produce alone. The argument is that reporting requirements are a surveillance instrument, not a defensive one. They tell us what happened. They do not prevent it from happening again. Conflating the two functions — as much of the current regulatory discourse does — produces unrealistic expectations and misallocated resources.

The Regulatory Patchwork and Its Predators

The architecture of cybersecurity regulation in 2026 is best described as a patchwork — a term that sounds neutral but is not. A patchwork does not merely inconvenience those who must navigate it. It creates structural advantages for those sophisticated enough to exploit its seams.

Consider the geography of the problem. A multinational operating in the United States faces CIRCIA’s 72-hour incident reporting requirement to CISA — still, as of this writing, in proposed rulemaking, delayed by a federal appropriations lapse that has pushed the final rule to May 2026 at the earliest, with compliance likely not effective until late 2027 or early 2028. It simultaneously faces NIS2 obligations whose transposition into national law varies substantially across EU member states — as of February 2025, nineteen member states had still not completed full transposition, a failure that has prompted infringement proceedings but has not resolved the underlying compliance uncertainty. Layered atop these are sector-specific requirements: HIPAA for healthcare, DORA for financial services, the Cyber Resilience Act for product manufacturers, and a proliferation of US state-level privacy and breach notification laws whose relationship to federal requirements remains, in the language of regulatory lawyers, “complex.”

This complexity is not equally burdensome to all actors. Large enterprises with dedicated legal, compliance, and security functions can absorb it — expensively, inefficiently, but functionally. Smaller organisations cannot. The compliance burden associated with CIRCIA alone is estimated to fall on more than 300,000 entities across 16 critical infrastructure sectors, a significant proportion of which lack the in-house capability to meet even the basic operational preconditions for compliance: continuous monitoring, a functioning security operations capability, and forensic tooling sufficient to characterise an incident within 72 hours of discovery.

The consequence is a two-tier security environment that regulation was ostensibly designed to flatten. Sophisticated actors — both the large organisations that can absorb compliance costs and the threat actors that study regulatory frameworks to identify their gaps — navigate the patchwork with increasing fluency. Smaller organisations are left to choose between non-compliance and the kind of checkbox security that satisfies an auditor while leaving the actual attack surface largely intact.

The Incentive Structure Nobody Is Discussing

The deepest problem with the current regulatory architecture is not its complexity. Complexity can be reduced through harmonisation, which both CISA and the European Commission have identified as a priority — the “report once, share many” ambition embedded in CIRCIA’s design, the cross-sectoral coherence goals of NIS2, and the ongoing effort to align CIRCIA with parallel federal requirements all represent genuine attempts to address fragmentation. The deeper problem is the incentive structure that compliance requirements create, and the perverse effects that structure generates at the organisational level.

Mandatory incident reporting, for all its intelligence value, creates a disclosure dilemma that regulators have not adequately resolved. An organisation that reports a breach promptly and accurately faces regulatory scrutiny, potential liability, reputational damage, and the resource drain of a formal response process — in addition to the operational damage of the incident itself. An organisation that manages the same breach quietly, remediating without disclosure on the grounds that the incident falls below the reportable threshold, faces none of those costs. The incentive to under-report, to define incidents narrowly, and to invest in the legal infrastructure of non-disclosure rather than the technical infrastructure of detection is structural. No amount of good regulatory intent eliminates it without addressing the liability consequences of disclosure, which no major regulatory framework has yet done adequately.

NIS2’s introduction of direct management accountability — personal liability for boards and CEOs who fail to ensure adequate cybersecurity governance — is the most significant structural innovation in recent European regulatory design, and it deserves more analytical attention than it has received. Finland’s implementation, which entered force in 2025, explicitly holds senior executives personally responsible for approving risk management measures and overseeing compliance. This is not a compliance requirement. It is a governance instrument — one that, if consistently enforced, changes the cost-benefit calculation for under-investment in security in ways that checkbox auditing never can. Whether it will be consistently enforced, against well-resourced corporations with sophisticated legal defences, remains the question that 2026 enforcement actions will begin to answer.

Analyst note

The most instructive parallel is not Sarbanes-Oxley, which is the comparison most often reached for, but the environmental liability regime that emerged in the United States following the passage of the Comprehensive Environmental Response, Compensation, and Liability Act in 1980. CERCLA did not make companies safer by requiring them to report pollution. It made them safer by making senior executives personally and financially liable for the consequences of pollution. The regulatory theory — that compliance follows from accountability, not from documentation — is one that cybersecurity governance is only now beginning to internalise, a generation late.

What Proportionate Governance Would Actually Look Like

The critique advanced here implies a positive agenda, and it would be evasive not to state it. The alternative to the current patchwork is not less regulation — it is better-designed regulation, calibrated to outcome rather than process, proportionate to organisational capacity, and structurally honest about what reporting requirements can and cannot achieve.

Outcome-based regulation — setting enforceable security thresholds rather than prescriptive process requirements — has emerged as the dominant recommendation from both the practitioner community and academic research, and for good reason. A regulation that requires organisations to demonstrate a measurable reduction in mean time to detect and respond to incidents creates fundamentally different incentives than one that requires them to file a report within 72 hours. The former demands security investment. The latter demands documentation infrastructure. Only one of them makes the attacker’s job harder.
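What an outcome-based threshold might actually measure is straightforward to compute. A hypothetical sketch of the two metrics named above — mean time to detect (MTTD) and mean time to respond (MTTR) — over invented incident timestamps; the data and field layout are illustrative, not drawn from any real framework:

```python
from datetime import datetime
from statistics import mean

# Invented incident records for illustration:
# (intrusion began, detected, contained)
incidents = [
    (datetime(2026, 1, 4, 2, 0), datetime(2026, 1, 6, 10, 0), datetime(2026, 1, 7, 1, 0)),
    (datetime(2026, 2, 11, 23, 0), datetime(2026, 2, 12, 8, 0), datetime(2026, 2, 12, 20, 0)),
]

# MTTD: average gap between intrusion and detection, in hours.
mttd_hours = mean((det - begin).total_seconds() / 3600 for begin, det, _ in incidents)
# MTTR: average gap between detection and containment, in hours.
mttr_hours = mean((cont - det).total_seconds() / 3600 for _, det, cont in incidents)

print(f"MTTD: {mttd_hours:.1f} h, MTTR: {mttr_hours:.1f} h")
```

A regulator could set an enforceable ceiling on these figures and leave the means of achieving it to the operator — the inverse of the prescriptive, process-first design that dominates current frameworks.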

Proportionality is the second essential principle that current frameworks apply inconsistently. The small municipal water authority and the national telecommunications provider are not equivalent cybersecurity actors. They do not face equivalent threat environments, they do not possess equivalent defensive capabilities, and they should not face equivalent compliance obligations. Tiered regulatory regimes — distinguishing between critical national infrastructure operators, essential service providers, and smaller covered entities — exist in principle in both CIRCIA and NIS2. Their implementation has not consistently reflected the distinction in practice.

The liability reform question is the hardest, and the one most conspicuously absent from current policy debate. Safe harbour provisions for organisations that report promptly and cooperate fully with regulatory investigations would, at a stroke, change the disclosure calculus that currently penalises transparency. Legislation shielding good-faith reporters from the secondary consequences of mandatory disclosure — civil litigation, regulatory action by other agencies, reputational weaponisation by competitors — has been proposed repeatedly and has not advanced, primarily because the stakeholders who benefit from the current asymmetry have more consistent access to the legislative process than the organisations that are harmed by it.

None of this is technically complicated. The policy instruments are well understood. The obstacle is not analytical. It is political — a familiar condition in cybersecurity governance, and one that the volume of regulatory activity in 2026 does not, by itself, resolve.

Bottom line assessment

The cybersecurity regulatory wave of 2025–2026 represents the most ambitious attempt to govern digital security through law since the post-9/11 era. Its ambition should be acknowledged. So should its structural limitations. Compliance is not security. Documentation is not defence. And a regulatory architecture that imposes incompatible obligations on 300,000-plus entities — while providing no liability protection for those who report transparently, no tiering that reflects actual risk differentials, and no outcome-based metrics that would distinguish a genuinely secure organisation from one that has simply hired better compliance lawyers — will produce compliance culture, not security culture. The two are distinguishable. The current policy moment is not reliably distinguishing them. The organisations that understand the difference will be the ones still standing after the next major breach. The ones that do not will have excellent documentation of exactly how it happened.


