The vulnerabilities in our power grids, water systems, and hospitals are not primarily technical failures. They are political ones — produced by budget cycles that reward deferred risk, regulatory frameworks that tolerate legacy exposure, and a liability architecture that lets the cost of failure fall on everyone except those who made the investment decisions.
By Vladimir Tsakanyan, PhD · Center for Cyber Diplomacy and International Security · cybercenter.space
On 28 February 2026, US airstrikes on Iranian targets triggered what security analysts had long anticipated: a sustained, coordinated cyber campaign against American critical infrastructure. The targets were predictable — power grids, water treatment facilities, hospital networks, industrial control systems across sixteen sectors. What was equally predictable, and considerably more uncomfortable to acknowledge, is that the vulnerabilities being exploited that week had not been created by the conflict. They had been accumulating, silently and systematically, for the better part of two decades. The strikes of late February did not create the threat. They accelerated one that was already there.
This distinction matters enormously, because it shifts the analytical focus from adversary capability — the conventional frame — to domestic governance failure. The question is not only what Iran, China, Russia, or North Korea can do to American infrastructure. It is why American infrastructure, fifteen years after Stuxnet demonstrated with surgical clarity what ICS-targeted cyber operations could achieve, remains so comprehensively exposed to threats that are neither new nor particularly sophisticated.
The answer is political before it is technical. And the politics, examined without diplomatic evasion, reveal a pattern of collective action failure, misaligned incentives, and deferred accountability that no amount of threat intelligence or incident response capability can fully compensate for.
The Political Logic of Deferral
Industrial control systems — the software and hardware that operate power grids, water treatment plants, pipelines, and manufacturing facilities — were designed for a world that no longer exists. They were built for reliability and physical safety in environments that were, by assumption, isolated from external networks. They were not designed for cybersecurity, because cybersecurity was not a design constraint when most of them were commissioned. Many of the ICS devices currently operating across US critical infrastructure are twenty to thirty years old. Their hardware lifecycles exceed the tenure of the political officials responsible for funding their replacement by a factor of three or four.
This creates a structural incentive that is entirely rational from the perspective of any individual budget holder and collectively catastrophic from a national security standpoint. An infrastructure operator who defers an expensive ICS upgrade saves money this fiscal year, avoids the operational disruption of a system transition, and transfers the risk of compromise to a future budget cycle — and, in all likelihood, to a successor who will inherit both the legacy equipment and the accumulated exposure it represents. The successor faces the same calculus. The equipment ages. The vulnerabilities compound. The threat environment deteriorates. And the upgrade, year after year, is deferred.
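The deferral calculus above can be made concrete with a back-of-envelope expected-cost comparison. The sketch below uses entirely hypothetical figures for the upgrade cost and annual breach probability (only the $5.56M industrial breach cost comes from the IBM figure cited later in this piece); it is an illustration of the incentive structure, not an estimate.

```python
# Illustrative sketch of the deferral calculus. All inputs except the
# IBM breach-cost figure are hypothetical assumptions.

def deferral_costs(upgrade_cost, annual_breach_prob, breach_cost, horizon_years):
    """Compare upgrading now vs. deferring for `horizon_years`.

    Deferring saves the upgrade cost today but accumulates compromise
    risk for every year the legacy system stays in service.
    """
    # Probability of at least one compromise over the horizon
    p_breach = 1 - (1 - annual_breach_prob) ** horizon_years
    expected_deferral_cost = p_breach * breach_cost
    return upgrade_cost, expected_deferral_cost

# Hypothetical operator: $2M upgrade, 5% annual breach probability,
# $5.56M average industrial breach cost.
u, d = deferral_costs(2_000_000, 0.05, 5_560_000, 1)
print(f"1-year horizon:  upgrade ${u:,.0f} vs expected loss ${d:,.0f}")

u, d = deferral_costs(2_000_000, 0.05, 5_560_000, 15)
print(f"15-year horizon: upgrade ${u:,.0f} vs expected loss ${d:,.0f}")
```

On a single budget year, deferral looks rational (an expected loss of roughly $278k against a $2M upgrade); over the fifteen-year horizon the threat actually operates on, the expected loss of roughly $3M exceeds the upgrade cost. Each individual budget holder faces only the first comparison.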
Dragos CEO Robert Lee testified before Congress that the fifteen years since Stuxnet had produced a significant and sustained rise in the number of state and non-state actors targeting ICS and OT environments. The capability demonstrated in 2010 — that industrial systems could be physically damaged through software — was not a revelation that remained proprietary. It was a lesson that proliferated. The actors who absorbed it have had fifteen years to develop their own toolkits, map their own target sets, and pre-position within networks that, in some cases, they have been inside for years without detection.
Analyst note
Volt Typhoon — the Chinese state-linked threat actor assessed to have pre-positioned within US critical infrastructure for potential future activation — represents the most consequential manifestation of this dynamic. Its presence is not primarily about intelligence collection. It is about the establishment of persistent access that can be activated to cause disruption at a moment of geopolitical choice. The threat is not incoming. For a significant subset of US infrastructure networks, it is already resident. The political and communications challenge this creates — how to convey the urgency of a threat that has not yet been activated — is one that the US government has not yet resolved with its public or with Congress.
The Budget Cycle vs. The Threat Cycle
The mismatch between political time horizons and security investment requirements is the central structural problem of critical infrastructure protection, and it is one that technical solutions cannot address. A government that operates on two- and four-year electoral cycles, in which infrastructure investment produces no visible return until the moment it prevents a catastrophe — a moment whose causal chain is invisible to voters — will systematically underinvest relative to a threat environment that operates on decade-long preparation cycles.
The numbers make the disparity concrete. The FBI’s 2024 Internet Crime Complaint Center report recorded a nine percent increase in complaints involving critical infrastructure over the prior year, with ransomware identified as the most pervasive threat across all sixteen designated sectors. A data breach in the industrial sector now costs an average of $5.56 million — an eighteen percent increase from 2023, according to IBM’s 2024 analysis. Dual IT/OT attacks, in which threat actors move laterally from enterprise networks into operational technology environments, average $4.56 million per incident. These are the costs of the attacks that are detected, disclosed, and documented. The costs of the pre-positioned access that has not yet been activated — the Volt Typhoon model — are not measurable in current dollars. They are measurable only as future contingent catastrophe.
Against this, the political investment response has been characterised by a recurring pattern: a major incident occurs, producing congressional hearings, executive orders, and funding announcements; the immediate urgency dissipates; the funding is diluted through appropriations processes; and implementation is delayed, modified, or quietly deprioritised as the political salience of the original incident fades. The Colonial Pipeline attack produced a pipeline cybersecurity directive. The directive exists. The pipeline sector’s OT security posture has improved marginally and unevenly. The underlying legacy exposure that made the attack possible — an unpatched VPN, a compromised credential, inadequate IT/OT segmentation — remains widespread across comparable operators.
Attackers know that infrastructure providers are measured on uptime — which makes maintenance windows scarce and patching slow. Once inside, they have the luxury of patience: mapping the environment, identifying the dependencies, and planning a targeted strike at a moment of their choosing. The defender does not have that luxury.
Regulatory Capture and the Tolerance of Known Vulnerability
The regulatory framework governing critical infrastructure cybersecurity in 2026 is, in most sectors, a patchwork of voluntary guidelines, sector-specific mandates of varying stringency, and CISA advisories whose authority is persuasive rather than compulsory. The gap between what CISA’s own guidance recommends and what regulated operators are legally required to implement is, in many sectors, substantial.
CISA’s February 2026 guidance on OT communication security — the document titled, with commendable directness, “Barriers to Secure OT Communication: Why Johnny Can’t Authenticate” — identifies the core problem with precision: strong demand for secure authentication among OT operators, blocked by the cost and complexity of implementing it on industrial protocols that were not designed to carry authentication mechanisms. The barriers are real. They are also, in significant part, a consequence of regulatory frameworks that have allowed manufacturers to ship industrial equipment without security-by-design requirements, and operators to deploy that equipment without mandatory security baseline obligations, for long enough that the insecure baseline has become the entrenched normal.
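The absence CISA describes is visible in the wire format itself. The sketch below builds a raw Modbus/TCP "Write Single Register" request — Modbus being one of the most widely deployed industrial protocols — per the published frame layout. The point is what the frame does not contain: no credential, session token, or signature field exists anywhere in the protocol, so authentication cannot be bolted on without changing the protocol or wrapping it.

```python
import struct

def modbus_write_single_register(transaction_id, unit_id, register, value):
    """Build a raw Modbus/TCP 'Write Single Register' (function 0x06) frame.

    The MBAP header plus PDU below is the *complete* request. Note what
    is absent: no field carries a credential, token, or signature —
    any party with network reach to TCP port 502 can issue writes.
    """
    pdu = struct.pack(">BHH", 0x06, register, value)  # function, address, value
    mbap = struct.pack(">HHHB",
                       transaction_id,   # transaction identifier
                       0x0000,           # protocol identifier (always 0)
                       len(pdu) + 1,     # remaining byte count (unit id + PDU)
                       unit_id)
    return mbap + pdu

frame = modbus_write_single_register(1, 0x11, register=0x0001, value=0x0003)
print(frame.hex())
```

Twelve bytes, fully specified, with no room in the format for an authentication mechanism — which is why the CISA guidance frames the problem as one of protocol design rather than operator negligence.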
The political economy of this outcome involves industry lobbying that has successfully resisted mandatory security standards in sectors where the cost of compliance would be significant, regulatory agencies whose technical capacity to write and enforce OT-specific requirements has historically lagged the complexity of the systems they are tasked with overseeing, and a liability architecture in which the downstream costs of a successful attack on public infrastructure — power outages, water contamination, hospital diversions — are socialised across the population rather than internalised by the operators and vendors whose decisions created the vulnerability.
This is not a failure of intent. Most infrastructure operators understand the threat environment. Most regulators understand the gaps. The failure is structural: a political economy in which the benefits of deferred investment are concentrated and immediate, while the costs of the resulting vulnerability are diffuse and future. No amount of threat briefing resolves a structural incentive problem.
Analyst note
The February 2026 post-strike assessment identified three remediation actions — removing ICS interfaces from the internet, changing default credentials, and blocking exposed industrial protocol ports — that would materially reduce the attack surface available to the threat actors then actively targeting US infrastructure. These are not technically complex interventions. They are inexpensive relative to the risk they mitigate. The fact that a significant proportion of internet-exposed ICS devices in the United States had not had these measures applied before the conflict began is not a technical failure. It is a governance failure — one produced by the same structural dynamics that produce every other instance of preventable, foreseeable, and politically deferred risk.
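To illustrate how inexpensive the third of those measures is to verify, the sketch below checks whether common industrial protocol ports accept connections on a given host, using only the standard library. The port-to-protocol mapping reflects well-known defaults; the target address is hypothetical, and in practice such a check would be run only against assets one owns, from outside the network perimeter.

```python
import socket

# Well-known industrial protocol ports; a perimeter audit would verify
# that none of these are reachable from outside the OT boundary.
ICS_PORTS = {
    502:   "Modbus/TCP",
    20000: "DNP3",
    44818: "EtherNet/IP",
    102:   "Siemens S7 (ISO-TSAP)",
    4840:  "OPC UA",
}

def exposed_ics_ports(host, timeout=1.0):
    """Return the ICS protocol ports that accept a TCP connection on `host`.

    Any hit, when run from outside the perimeter, indicates an interface
    that the remediation guidance says should not be reachable at all.
    """
    open_ports = []
    for port, proto in ICS_PORTS.items():
        try:
            with socket.create_connection((host, port), timeout=timeout):
                open_ports.append((port, proto))
        except OSError:
            pass  # closed, filtered, or unreachable
    return open_ports

# Hypothetical target address for illustration only.
for port, proto in exposed_ics_ports("198.51.100.7"):
    print(f"EXPOSED: {proto} on port {port}")
```

That a check this trivial would still return hits across a significant proportion of US internet-exposed ICS devices is the governance failure the assessment describes.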
What a Serious National Resilience Strategy Would Actually Cost
The honest answer to this question is: more than anyone is currently proposing, less than the alternative. The cost of a serious, sustained programme of critical infrastructure security uplift — accelerated ICS replacement, mandatory security-by-design requirements for new industrial equipment, enforceable baseline standards across all sixteen critical infrastructure sectors, and the workforce development programme required to implement and maintain them — runs into the tens of billions of dollars over a decade. This is not a trivial sum. It is also not a large number relative to the defence budget, the cost of a major infrastructure disruption event, or the economic value of the systems being protected.
The more politically tractable element of a serious strategy — and the one most conspicuously absent from current policy — is liability reform. Infrastructure operators and equipment vendors who benefit commercially from the deployment of insecure systems, and who transfer the cost of the resulting vulnerability to the public, are not currently facing the financial consequences of that transfer. A mandatory cyber incident reporting requirement that attaches financial liability to operators who deploy equipment with known, unpatched vulnerabilities in designated critical sectors would change the investment calculus more directly than any voluntary framework or information-sharing programme.
New York’s 2026 cybersecurity rules for water infrastructure — paired with a $2.5 million grant programme to support smaller utilities — represent the kind of tiered, proportionate approach that the federal framework has not yet produced at scale: mandatory standards for the sector, with funded implementation support for entities that lack the resources to meet them independently. It is a model that the current federal architecture could adopt without new legislative authority in several sectors, and could adopt with relatively modest legislative action in others. The obstacle is not legal. It is political — the same politics of concentrated benefits and diffuse costs that have produced the current exposure in the first place.
The February 2026 campaign against US infrastructure was not, in strategic terms, a catastrophe. The most destructive potential of pre-positioned access was not activated. Critical services were disrupted, not destroyed. But that outcome reflects choices made by the adversary, not capabilities that the defender has reliably blocked. The next campaign may reflect different choices — and the infrastructure it targets will be substantially the same one, with substantially the same exposure, unless the political economy of investment changes in ways that the threat intelligence cycle alone cannot drive.
Bottom line assessment
The vulnerability of US and allied critical infrastructure to cyber attack is neither new nor primarily technical. It is the predictable output of a political economy that systematically defers the cost of security investment onto future budget cycles, future administrations, and the populations who depend on the services at risk. The February 2026 campaign against American infrastructure following the Iran strikes demonstrated, with operational precision, the consequences of that deferral — and the limits of a defensive posture built on threat intelligence and incident response rather than on structural resilience. A serious national resilience strategy requires mandatory security baselines, liability reform that internalises the cost of insecurity to those who create it, and a funding architecture proportionate to the actual scale of the risk. None of these are technically complicated. All of them are politically difficult. The question is whether the political calculus changes before the adversary chooses to activate the access it already has — or after.