States are deploying AI-driven weapons with no agreed rules of engagement, no attribution standards, and no treaty framework. The 2026 deadline for international agreement has arrived. The agreement has not.
By Vladimir Tsakanyan, PhD · Center for Cyber Diplomacy and International Security · cybercenter.space
In March 2020, a Turkish-manufactured autonomous drone — the KARGU-2 — reportedly hunted down and engaged a human target in Libya without receiving a command from any human operator. The UN Panel of Experts documented the incident. No government acknowledged it. No legal mechanism was triggered. No one was held responsible. The drone did what it was designed to do, and the world moved on.
That episode is not a warning of what is coming. It is a description of what already exists: autonomous lethal systems operating in active conflicts, in a legal vacuum, under no binding international framework, accountable to no one. And in the years since, rather than closing that vacuum, the major powers have made a quiet, collective decision to widen it.
The diplomatic record of the past decade on autonomous weapons governance is, at its core, a study in deliberate stalling by those who can afford to stall — and the mounting frustration of those who cannot.
A Decade of Deliberation, Zero Binding Rules
Since 2014, the United Nations Convention on Certain Conventional Weapons has convened a Group of Governmental Experts — the CCW GGE — to address the challenge of lethal autonomous weapons systems. The group has met regularly. It has produced principles, summaries, and rolling texts. It has not produced a single binding rule.
In December 2024, the UN General Assembly adopted a resolution on lethal autonomous weapons with 166 votes in favour and three against — Belarus, North Korea, and Russia. The symbolic weight of that margin was considerable. The practical effect was not. A General Assembly resolution is not a treaty. It creates no obligations, imposes no constraints, and generates no enforcement mechanism. In legal terms it is non-binding: soft-power signalling dressed in the vocabulary of consensus.
The 2026 Review Conference in Geneva was presented, for years, as the decisive deadline — the moment when diplomacy would be forced to produce an instrument. What the conference is more likely to produce is a record of a missed opportunity and a mandate for further discussion. The Lieber Institute at West Point assessed, with understated precision, that short of a fundamental shift in the strategic calculus of the UN Security Council’s permanent members, the GGE was highly unlikely to produce a legally binding protocol by its deadline. That assessment has proven correct.
Analyst note
The consensus model of the CCW is not an accident of institutional design — it is a feature, exploited by those with the most to lose from binding restrictions. A single veto suffices to halt progress. The United States, Russia, and China have each, at different moments and through different mechanisms, ensured that the process produces deliberation rather than obligation. This is not diplomatic failure in the conventional sense. It is diplomatic success for those who benefit from the absence of rules.
The Accountability Black Hole
Beneath the procedural debate lies a problem that no amount of diplomatic language has yet resolved: when an autonomous system selects a target and kills the wrong person, who is responsible?
The answer, under current international law, is genuinely unclear. International humanitarian law was constructed around the assumption of human decision-making at the point of lethal force. The principle of distinction — the obligation to differentiate between combatants and civilians — presupposes a mind behind the trigger. Proportionality analysis requires a human commander capable of weighing anticipated military advantage against expected civilian harm. Precautionary obligations assume someone who can pause, reconsider, and abort.
Autonomous systems dissolve these assumptions. When a drone misidentifies a civilian convoy as a military column due to sensor failure or algorithmic bias, accountability fragments across the chain: programmers did not anticipate the operational context; commanders disclaim responsibility for a machine decision; manufacturers invoke technical complexity to evade liability. Scholars have described this as the “accountability black hole” — a structural condition in which violations of the laws of war become, simultaneously, inevitable and unpunishable.
This is not a theoretical concern. The KARGU-2 incident in Libya demonstrated it in practice. The Russia-Ukraine war has provided further evidence, with autonomous and semi-autonomous systems operating at a tempo and scale that renders traditional command-and-control accountability, in many engagements, a legal fiction.
When a machine selects and kills the wrong target, programmers blame the context, commanders blame the machine, and manufacturers blame the complexity. The accountability black hole is not a gap in the rules. It is a feature of the architecture.
The Pre-Proliferation Window Is Closing
There is a concept in arms control that analysts have called the “pre-proliferation window” — the interval between when a new weapons technology becomes operational and when it becomes so widely deployed that regulation is practically impossible. For nuclear weapons, that window closed in 1949. For landmines, it closed before the Ottawa Treaty made any serious attempt to address it. For lethal autonomous systems, the window is closing now, in real time, accelerated by the very powers blocking its governance.
The United States has requested a record $14.2 billion for AI and autonomous systems research in the 2026 fiscal year. Its Replicator programme — designed to fast-track the deployment of thousands of expendable autonomous drones and surface vessels — received $1 billion in 2025 alone. China’s industrial capacity to produce autonomous hardware at scale is unmatched. Russia has integrated loitering munitions with autonomous targeting capabilities across its military doctrine and is actively deploying them in Ukraine.
Each of these investments represents a political commitment that precedes and forecloses regulation. An administration that has committed $14 billion to autonomous systems does not arrive at a treaty negotiation as a neutral party. The financial scale of the current arms race has reached a point where binding restrictions would impose concrete economic and strategic losses on the leading states — losses that no administration, of any ideological stripe, has yet shown willingness to absorb.
Analyst note
November 2025’s UN First Committee resolution — passed by 156 nations, opposed by five states, among them the United States and Russia — reveals the underlying geometry of the impasse with unusual clarity. The majority of the world’s states want rules. A small number of militarily dominant states do not. In any other policy domain, a 156-to-5 margin would constitute overwhelming consensus. In international law, it constitutes nothing, because the five who oppose it are the ones whose compliance would make the rules meaningful.
Attribution, Deniability, and the Cyber Dimension
The governance problem becomes structurally more complex when autonomous systems operate in cyberspace rather than on a physical battlefield. A kinetic autonomous drone leaves wreckage — physical evidence, geographic location, a chain of supply and manufacture that can, in principle, be traced. An autonomous cyber weapon leaves code. It operates through proxy infrastructure, across multiple jurisdictions, and at a speed that renders human oversight not merely insufficient but, in many cases, technically impossible.
The attribution problem in cyber conflict has long been characterised as a barrier to accountability. It is also, for some states, a strategic asset. Deniability is not a regrettable side effect of cyber operations — it is often the operational objective. An autonomous cyber weapon that conducts reconnaissance, selects targets, escalates operations, and covers its tracks faster than any human operator could review its decisions does not merely complicate attribution after the fact. It severs the causal chain between political decision and operational effect in ways that the existing framework of state responsibility was never designed to handle.
The Tallinn Manual — the most comprehensive attempt to apply international law to cyber operations — proceeds from the premise that human decision-making governs cyber conflict. Agentic AI systems, capable of adapting their behaviour to operational feedback without human authorisation at each step, render that premise obsolete at a pace the Manual’s authors did not anticipate and its revision process has not kept up with.
What Meaningful Governance Would Actually Require
The honest answer is that no governance framework capable of constraining autonomous weapons in any meaningful sense will emerge from the current diplomatic architecture. The CCW process is structurally incapable of producing binding obligations over the objection of its most powerful members. The UN General Assembly can express preference but cannot compel compliance. Bilateral agreements between the major powers on cyber and autonomous weapons — the most plausible alternative path — would require a degree of strategic trust that the present geopolitical environment does not support.
What remains is a narrower, more uncomfortable set of possibilities. Meaningful human control requirements — hard legal thresholds specifying the minimum level of human authorisation required before a lethal decision is executed — could be established in national doctrine and exported through alliance commitments. The US Political Declaration on Responsible Military Use of AI, now endorsed by more than thirty nations, represents a partial step in this direction. It remains non-binding, and its signatories notably exclude the states whose compliance matters most.
A two-tier approach — prohibiting fully autonomous systems incapable of IHL compliance while regulating supervised autonomous systems through enforceable standards — has emerged as the most technically and politically tractable framework under active negotiation. Whether it can survive contact with the strategic interests of the major military powers is the central question that the 2026 Review Conference will fail to answer definitively and that subsequent years will be forced to revisit under worse conditions.
The pre-proliferation window has not yet fully closed. But it is no longer meaningfully open. The choice the international community faces is not between a world with autonomous weapons and a world without them. It is between a world where their use is governed by some agreed moral architecture, however imperfect, and a world where it is governed by the indifferent logic of algorithmic efficiency and strategic advantage. Every month that passes without binding rules is a month in which that second world becomes more difficult to walk back from.
Bottom line assessment
The diplomatic vacuum at the heart of autonomous weapons governance is not a temporary condition awaiting a political solution. It is the intended outcome of the states most capable of filling it. The 2026 deadline has passed without a binding treaty, as the Lieber Institute and others predicted it would. The CCW process will continue, producing deliberation in lieu of obligation. Meanwhile, autonomous systems — including those operating in cyberspace — will continue to proliferate, to be deployed in active conflicts, and to make lethal decisions that no existing legal framework can adequately assign to a responsible actor. Analysts and policymakers who treat this as a technical problem awaiting a diplomatic fix misread the situation. The absence of rules is, for the dominant military powers, a strategic position. Changing it requires not better arguments, but a shift in the cost-benefit calculus that no current diplomatic instrument is positioned to deliver.