By Vladimir Tsakanyan, PhD · Center for Cyber Diplomacy and International Security · cybercenter.space
On the morning of February 28, 2026, as US and Israeli strikes on Iranian nuclear sites were still being confirmed by official sources, social media platforms were already saturated with AI-generated videos purporting to show the aftermath — some depicting devastation far beyond what the strikes had produced, some depicting Iranian missile impacts in Gulf cities that had not yet occurred, some simply recycling footage from earlier conflicts with new captions. By the time authoritative information was available, the fabricated narratives had been viewed hundreds of millions of times. The disinformation did not change the military outcome. It shaped the political environment in which that outcome was interpreted — in Tehran, in Riyadh, in Washington, and in the capitals of states that had not yet decided which side of the resulting crisis they were on.
This is not a story about fake news on social media. It is a story about the weaponisation of cognition as a domain of geopolitical competition — a domain that the WEF’s Global Risks Report 2026 ranked as the second most significant short-term global risk, behind only geoeconomic confrontation and ahead of armed conflict, climate extremes, and societal polarisation. It is a domain in which adversaries have been operating with strategic coherence for over a decade while the institutions responsible for democratic security have been calibrated for a different problem.
The definitional failure is the root of everything that follows. When governments, regulators, and platform companies frame disinformation as a content problem — a question of accurate versus inaccurate claims, of posts that should be labelled or removed — they import a set of assumptions that are appropriate for editorial decisions and catastrophically inappropriate for strategic defence. Content moderation is reactive by design: it responds to what has already been published. Strategic information warfare is proactive by design: it shapes the information environment before the events it is designed to interpret have occurred. A regulatory framework built around the first cannot address the second, regardless of how well resourced or how rapidly it operates.
Reframing the Problem: Cognition as Strategic Terrain
NATO’s Chief Scientist’s Cognitive Warfare report, published in 2025 and widely analysed in 2026, offers the most rigorous publicly available framework for understanding what adversaries have been doing and what allied governments have been failing to match. The report defines cognitive warfare as the contest over cognition itself — targeting how people perceive, make sense of, decide, and act — through synchronised military and non-military action across the full spectrum of competition. It frames cognitive attacks as instruments designed to hinder decision-making, erode national unity, sow societal division, exploit identities and narratives, and undermine the resolve to engage in conflict.
This framing is consequential because it places cognitive warfare outside the category of either information operations or psychological operations in the traditional sense. It is broader than strategic communications, which assumes a sender trying to convey accurate information more effectively. It is broader than propaganda, which assumes the manufacturing of false beliefs. Cognitive warfare, in the NATO framing, targets the sensemaking infrastructure itself — the processes by which individuals and institutions convert raw information into decisions. The objective is not to persuade. It is to degrade the capacity for coherent judgement. An adversary that cannot be persuaded to a particular view can still be left confused, paralysed, and incapable of coordinated response. That condition is, from a strategic standpoint, often sufficient.
Russia and China have each, in their own doctrinal frameworks, arrived at comparable formulations. Russia’s approach — refined through Ukraine, through multiple Western electoral cycles, and through the information operations surrounding the Iran conflict — prioritises the disruption of the OODA loop: the Observe-Orient-Decide-Act cycle that governs military and political decision-making. If an adversary cannot reliably observe what is happening, cannot orient coherently within a shared understanding of events, and cannot decide with confidence that its information reflects reality, the decision-action cycle breaks down without a single kinetic engagement. China’s formulation is, if anything, more comprehensive: cognitive warfare as the use of public opinion, psychological operations, and legal influence — “lawfare” — to achieve strategic objectives that would otherwise require military force.
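The OODA framing lends itself to a simple quantitative illustration. The toy model below is a minimal sketch with invented parameters — it represents no actual doctrine or dataset — treating observation as a noisy binary channel and decision as an evidence threshold. As disinformation pushes the noise level towards pure randomness, the time needed to reach a confident decision grows without bound, and beyond a point the decision never comes at all.

```python
"""Toy model of OODA-loop degradation under observational noise.

Illustrative only: the parameters and decision rule are invented for
this sketch; they do not represent any actual doctrine or dataset.
"""
import math
import random

def steps_to_decide(flip_prob: float, threshold: float = 5.0,
                    max_steps: int = 2_000, seed: int = 0) -> int | None:
    """Observe a binary ground truth through a channel that flips each
    observation with probability `flip_prob` (Observe), accumulate the
    log-likelihood ratio (Orient), and commit once the evidence crosses
    `threshold` (Decide/Act). Returns steps taken, or None if the agent
    never becomes confident enough to act."""
    rng = random.Random(seed)
    # Per-observation evidence weight; shrinks to 0 as flip_prob -> 0.5.
    step_llr = math.log((1 - flip_prob) / flip_prob)
    log_odds = 0.0
    for step in range(1, max_steps + 1):
        observed_true = rng.random() > flip_prob   # noisy observation
        log_odds += step_llr if observed_true else -step_llr
        if abs(log_odds) >= threshold:             # confident enough to act
            return step
    return None                                     # paralysis: no decision

for p in (0.05, 0.20, 0.35, 0.45, 0.49):
    n = steps_to_decide(p)
    print(f"flip_prob={p:.2f} -> decision after {n} observations"
          if n else f"flip_prob={p:.2f} -> no decision (paralysis)")
```

The structural point the sketch makes is that near-random noise reduces the evidential value of each observation towards zero, so the adversary does not need to control what is observed — only to degrade the channel until confident orientation becomes impossible.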
Analyst note
The 2026 NDAA directed the US Secretary of Defense to define cognitive warfare for the Department, relate it to existing doctrine, and identify which organisations have functional responsibility. The fact that this legislative direction was necessary in 2026 — after a decade of Russian electoral interference operations, Chinese cognitive warfare campaigns targeting Taiwan, and the information environment of the Ukraine conflict — is itself a measure of how far Western institutional response has lagged the operational reality. NATO’s Allied Command Transformation has been developing a Cognitive Warfare Concept since 2021. The Pentagon is still being directed, by Congress, to define its terms. The gap between alliance-level conceptual development and national-level operational implementation is precisely the space adversaries have been operating in.
Three Case Studies in Strategic Failure
The cost of the definitional failure can be measured in three case studies that span the decade of inaction and illuminate different dimensions of the same structural problem.
Russia’s information operations surrounding the Ukraine conflict represent the most extensively documented case of strategic information warfare in the modern era. The Kremlin’s approach combined several distinct instruments: the manufacturing of a justificatory narrative for the invasion, deployed before and during the military operation to shape both domestic and international opinion; the targeting of European public support for Ukraine through disinformation about energy costs, refugee burdens, and the human cost of continued resistance; and the systematic exploitation of social divisions in Western democracies to amplify isolationist sentiment and undermine the political coalition sustaining military assistance. Only the third of these instruments attracted significant attention from Western content moderation frameworks. The first two operated primarily through legitimate-appearing media channels, diplomatic messaging, and the organic amplification of content that was technically accurate but strategically curated.
China’s cognitive warfare operations targeting Taiwan are assessed by the Atlantic Council as the most sophisticated currently active disinformation campaign, distinguished by their AI-enabled scale, their continuous rather than episodic character, and their design for deniability. PRC actors blend AI-generated audio, video, and text with human-curated messaging and commercial infrastructure — a combination that produces content which is simultaneously voluminous enough to saturate fact-checking capacity and sophisticated enough to resist automated detection. The operational objective is not to make Taiwanese citizens believe specific falsehoods. It is to erode their confidence in their ability to distinguish truth from fabrication — to produce the epistemic paralysis that precedes political demoralisation. This is cognitive warfare in its most strategically refined form, and it has been operating continuously for years without generating a Western institutional response commensurate with its scale.
The Iran conflict of 2026 provided a live laboratory for the interaction of AI-generated disinformation and kinetic operations in real time. AI deepfakes, video game footage misrepresented as actual warfare, and chatbot-generated fabrications simultaneously represented the US-Israel strikes as either catastrophically destructive or largely ineffective, depending on the narrative interest of the actor distributing them. Iranian state media, non-state proxies, and autonomous bad actors generated content at a volume that overwhelmed verification capacity within hours. The strategic effect was not the propagation of any specific false belief but the creation of a contested information environment in which no actor — including the parties to the conflict — could confidently assess how their actions were being received by the audiences that mattered: the Gulf states deciding whether to facilitate or resist escalation, the European governments deciding whether to condemn or abstain, the US Congress deciding whether to authorise or constrain continued operations.
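The saturation dynamic is worth stating in its simplest form, because it is structural rather than incidental. The sketch below uses entirely invented rates — it models no actual data from the conflict — but captures the asymmetry: verification capacity is roughly fixed, generation capacity compounds, and the unverified backlog therefore explodes within hours regardless of how well resourced the fact-checkers are.

```python
"""Back-of-the-envelope model of verification saturation.

All rates are invented for illustration; the point is structural:
the backlog explodes whenever production outpaces verification.
"""

FACT_CHECK_RATE = 400        # claims verifiable per hour (assumed fixed)
INITIAL_CLAIMS = 200         # fabricated claims appearing in hour 1
GROWTH = 1.8                 # hourly multiplier as AI generation scales

backlog = 0
claims = INITIAL_CLAIMS
for hour in range(1, 13):
    backlog = max(0, backlog + claims - FACT_CHECK_RATE)
    print(f"hour {hour:2d}: new claims {claims:7d}, unverified backlog {backlog:8d}")
    claims = int(claims * GROWTH)
```

Under these assumed numbers the fact-checkers keep pace for the first two hours and are then buried: by hour twelve the backlog runs to six figures. No plausible increase in the fixed verification rate changes the shape of the curve, only the hour at which it is overwhelmed.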
The adversary does not need to make you believe a lie. It needs to make you unsure enough of the truth that you cannot act on it. Paralysis, not persuasion, is the operational objective of modern cognitive warfare.
The Institutional Gap: Three Agencies, Three Different Problems
The structural failure of Western response to information warfare is not a failure of awareness, resources, or goodwill. It is a failure of institutional architecture — a consequence of having distributed responsibility for a single strategic problem across multiple agencies that are each solving a different subset of it without coordination sufficient to address the whole.
Intelligence agencies understand information warfare as a collection and analysis problem: identifying foreign influence operations, attributing them to state actors, and providing decision-makers with assessment of adversary intent and capability. This is valuable. It is also inherently retrospective — it describes what has happened, and it does so in classified channels that cannot be used to inform the public counter-narrative at the speed and scale the problem requires.
Foreign ministries and strategic communications operations understand information warfare as a messaging problem: constructing and distributing accurate narratives that compete with adversary disinformation. This too is valuable, and chronically under-resourced relative to the scale of the challenge. But strategic communications, however well executed, addresses the symptom rather than the cause. It assumes that a better message can outcompete a hostile information environment — an assumption that holds in conditions of good-faith competition and fails in conditions of deliberate epistemic pollution.
Technology regulators and platform companies understand information warfare as a content moderation and transparency problem: identifying false content, labelling it, removing it, and requiring platforms to be more transparent about how their recommendation systems amplify it. The EU AI Act’s Article 50 — requiring labelling of AI-generated content and disclosure of synthetic interactions, enforceable from August 2026 — represents the most ambitious regulatory attempt yet to address the production side of the problem. It is a necessary instrument and an insufficient one. Content labelling assumes users can act on labels. It does not address the cumulative effect of operating in an information environment where the volume of synthetic content has degraded the default assumption of authenticity on which content labelling’s effectiveness depends.
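A minimal sketch makes that structural weakness concrete. The provenance schema below is hypothetical — Article 50 mandates machine-readable marking of synthetic content but does not prescribe a technical format, with C2PA-style manifests one candidate implementation — and the logic shows why labelling degrades once the default assumption of authenticity is gone: an item carrying no label is indistinguishable from an item whose label was stripped.

```python
"""Sketch of Article 50-style machine-readable disclosure checking.

The `provenance` schema here is hypothetical -- the AI Act mandates
machine-readable marking but not a specific format (C2PA-style
manifests are one candidate). The point is the failure mode: once
labels can be stripped, 'no label' no longer implies 'authentic'.
"""
from enum import Enum

class Assessment(Enum):
    DISCLOSED_SYNTHETIC = "labelled as AI-generated"
    CLAIMED_AUTHENTIC = "carries a verifiable provenance claim"
    UNKNOWN = "no usable provenance -- NOT evidence of authenticity"

def assess(provenance: dict | None) -> Assessment:
    """Classify an item from its (hypothetical) provenance metadata."""
    if provenance is None:
        # Stripped metadata and never-attached metadata look identical.
        return Assessment.UNKNOWN
    if provenance.get("ai_generated") is True:
        return Assessment.DISCLOSED_SYNTHETIC
    if provenance.get("signed") and provenance.get("signature_valid"):
        return Assessment.CLAIMED_AUTHENTIC
    return Assessment.UNKNOWN

# A compliant generator labels its output; a hostile one simply doesn't.
print(assess({"ai_generated": True}))                     # compliant actor
print(assess(None))                                       # hostile actor
print(assess({"signed": True, "signature_valid": True}))  # camera-signed item
```

The regulatory instrument, in other words, binds the actors who were never the problem. The hostile actor’s rational move is to emit nothing, and in an environment saturated with unlabelled synthetic content the UNKNOWN category — which is where all hostile output lands — carries no usable signal for the user the label was meant to protect.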
Analyst note
The most consequential insight in NATO’s cognitive warfare framework — and the one most consistently absent from Western policy responses — is that cognitive warfare targets sensemaking infrastructure rather than specific beliefs. An information operation that successfully implants a false belief can be countered by debunking the belief. An information operation that successfully degrades the sensemaking process — the institutional and social mechanisms through which individuals and communities convert information into shared understanding — cannot be countered by any amount of fact-checking. Rebuilding degraded sensemaking infrastructure requires investment in the social and institutional conditions of trust: in journalism, in educational systems, in civic institutions, and in the professional and ethical standards of the information environment. These investments operate on decade-long timescales. Information warfare operates on minute-by-minute timescales. The asymmetry is not incidental. It is the strategy.
Toward a Strategic Response
The path from the current fragmented institutional response to a coherent strategic one requires, first, the acceptance of the cognitive domain as a fifth domain of competition — alongside land, sea, air, and cyber — with equivalent doctrinal development, equivalent resourcing, and equivalent political attention. NATO’s Allied Command Transformation has been working toward this framing since 2021. The 2026 NDAA has directed the Pentagon to begin. The translation of alliance-level conceptual development into national-level operational capability, across every NATO member and partner, remains largely incomplete.
Second, it requires the integration of intelligence, communications, and regulatory functions into a coherent whole-of-government response architecture. This means intelligence about ongoing operations shared at speed and at a classification level that allows it to inform public counter-narrative. It means strategic communications capability funded and staffed at a scale commensurate with the adversary operations it is countering. It means regulatory frameworks that address not just the content of disinformation but the production economics that make it viable — the AI systems that generate it, the commercial infrastructure that distributes it, and the platform incentive structures that amplify it.
Third, and most difficult, it requires an honest reckoning with the domestic political conditions that make information warfare effective. Foreign disinformation does not create the social divisions it exploits. It finds them, amplifies them, and makes them harder to bridge. A strategic response to information warfare that does not address the domestic conditions of social trust, institutional legitimacy, and epistemic resilience — the conditions that determine how much purchase adversary narratives can gain — is a response to the instrument, not to the vulnerability it targets. Building those conditions is not a communications strategy. It is a governance challenge that precedes and underlies every other element of the response.
Bottom line assessment
The cognitive battlefield is not a new domain of conflict. Propaganda, psychological operations, and strategic deception are as old as warfare itself. What is new is the scale, the speed, the deniability, and the precision with which modern information warfare can be conducted — and the degree to which the combination of AI-generated content, algorithmic amplification, and social media distribution has made those operations available to actors who previously lacked the capability and cost-effective for actors who previously lacked the incentive. The WEF’s assessment of disinformation as the second most significant global short-term risk is not alarmism. It reflects an accurate reading of a decade of operational evidence. The institutional response of the states most threatened by this environment has been calibrated for a content moderation problem rather than a strategic domain of competition. Closing that gap — in doctrine, in resourcing, in interagency coordination, and in the domestic conditions of social resilience — is the defining security challenge of the current decade. It is also the one receiving the least commensurate attention.
This is Article 1 of the series “Disinformation & Information Warfare.” Next: The Axis of Narrative — How Russia and China Built the World’s Most Dangerous Disinformation Alliance. All articles available at cybercenter.space.