This Monday and Tuesday, all 193 UN member states meet in New York for the inaugural organisational session of the UN Global Mechanism on Developments in the Field of ICTs in the Context of International Security. After more than two decades of temporary expert groups and time-limited working group processes, which produced important consensus documents and then expired, leaving no institutional memory and no mechanism to hold states accountable, the international community has agreed to a permanent institutional home for cybersecurity diplomacy. That is a genuine achievement. But institutional architecture is not governance. A permanent mechanism can fail just as thoroughly as a temporary one; it will simply fail more visibly and over a longer period of time.
The question that matters in New York this week is not whether the mechanism exists. It is whether the states walking into that room are prepared to make it work. The evidence is not encouraging. The gap between what states have agreed in UN cybersecurity processes and what they actually do in cyberspace has never been wider. Consensus documents have accumulated across three decades. State behaviour has not meaningfully changed. If the new mechanism is to be more than a permanent forum for diplomatic non-commitments, it needs to deliver on a small number of concrete priorities. Here are five of them.
There is a principle running through every major UN cybersecurity consensus document since 2015: states must not knowingly allow their territory to be used for cyber operations that cause internationally wrongful harm. This is not a rule invented by multilateral diplomacy. It is an application of the customary international law obligation of due diligence, the same standard that governs state responsibility when armed groups use a state’s territory to attack a neighbour, or when industrial pollution crosses a border and damages another state’s environment.
The 2021 UN Group of Governmental Experts confirmed explicitly that this obligation applies in cyberspace. Every subsequent consensus process has reaffirmed it. It has never been applied. The most consequential illustration of this failure is not a sophisticated state-sponsored espionage campaign. It is the network of cyber-enabled fraud compounds operating openly across Myanmar, Cambodia and Laos: large-scale, geographically fixed criminal enterprises that have extracted an estimated $75 billion from victims worldwide since 2020, operating with the apparent acquiescence, and in some cases the active protection, of local authorities.
The harms are documented. The locations are known. The legal obligation of the territorial states is clear. Nothing has happened. The December 2024 Hanoi Convention on Cybercrime offers one response, but a conditional one. It operates through mutual legal assistance and voluntary cooperation. Where a state is unwilling to cooperate, or where its own institutions are complicit in the criminal activity, the Convention reaches its limit.
The due diligence obligation does not have that limitation. It is a rule of general international law, not contingent on treaty membership or good-faith cooperation. It places responsibility directly on the state whose territory is being used, regardless of whether that state chooses to engage.
The new mechanism’s first thematic discussion group should produce a practical guidance document on what this obligation concretely requires: what a state must do when it becomes aware that criminal infrastructure is operating from its territory, what threshold of knowledge triggers the obligation, and what constitutes meaningful action to satisfy it. This is not a creative legal project. It is the application of existing law to documented facts. The reason it has not been done yet is political, not legal. The mechanism exists, in part, to change that calculus.
This is the issue that receives the least attention in public commentary on UN cybersecurity processes, and it may be the most structurally important one. Within the United Nations, cybersecurity and cybercrime have developed as entirely separate legal and institutional tracks.
The First Committee of the General Assembly, which deals with disarmament and international security, has been the home of the state responsibility discussion: how international law applies to state behaviour in cyberspace, what responsible conduct looks like, and what obligations states owe each other. The Third Committee, which deals with social, humanitarian and criminal matters, has been the home of the cybercrime discussion: how states should criminalise cyber offences domestically, how they should cooperate in investigations and prosecutions, how mutual legal assistance should work. The new Global Mechanism sits in the First Committee framework. The Hanoi Convention sits in the Third. Neither framework formally acknowledges the other’s existence.
This division had a political logic. In the early years of UN cybersecurity diplomacy, different groups of states had sharply different priorities. Keeping the tracks separate allowed incremental progress on each without forcing premature convergence on questions where geopolitical disagreement was fundamental. That logic has now expired. The costs of maintaining the separation have become prohibitive. And the reason is a structural change in the nature of the threats themselves, a change driven directly by technology.
The categories that once organised the legal response, state actor versus criminal actor, espionage versus theft, security threat versus law enforcement problem, have been dissolving for over a decade. In many of the most consequential cases, they are now meaningless. State-affiliated actors conduct large-scale cryptocurrency theft to finance weapons programmes: conduct that is simultaneously an international security problem, a state responsibility question, and a transnational crime. Criminal networks operate ransomware infrastructure from jurisdictions where prosecution is structurally impossible because local authorities are complicit: conduct that is simultaneously a law enforcement failure and a violation of the territorial state’s international obligations.
The same technical tools, the same malware families, the same exploit frameworks, the same money-laundering infrastructure serve both state strategic objectives and private criminal profit, sometimes simultaneously, sometimes through the same operators wearing different hats at different times of day. This is not a temporary anomaly. It is a structural feature of the threat environment, and it has a clear cause: the internet does not recognise the distinction between a state intelligence operation and an organised crime operation. Both use the same infrastructure. Both exploit the same vulnerabilities. Both move money through the same channels.
The legal architecture we built to address them was designed for a world where those categories were operationally distinct. Technology has made them operationally indistinct. The law has not caught up. The governance consequence is severe. When a state-affiliated actor steals hundreds of millions of dollars in cryptocurrency from a financial platform, which legal framework applies? The First Committee framework asks whether the territorial state met its due diligence obligation and whether the victim state can invoke countermeasures under the law of state responsibility. The Third Committee framework asks whether the conduct is criminalised domestically and whether mutual legal assistance is available. Both questions are legally valid. Both answers matter in practice. The two frameworks barely interact. There is no formal institutional mechanism for them to do so.
The situation is compounded by the attribution problem that both frameworks share. Establishing that a particular cyber operation was conducted by a state actor, or by a criminal network with state knowledge, or by a criminal network with state direction, requires technical and intelligence capabilities that most states do not possess and that those who do possess them are reluctant to share through formal multilateral channels. The result is that both the state responsibility framework and the criminal prosecution framework depend on attribution standards that the international system has no agreed method for producing. Until that problem is addressed, both frameworks will remain partially paralysed in exactly the cases where they are most needed.
The new mechanism does not need to dissolve the two tracks. The political conditions for formal merger do not exist, and forcing premature convergence would likely produce a worse outcome than maintaining the separation. But it must do something that has never been formally done: establish an institutional bridge. This means a designated liaison function between the Global Mechanism and the Hanoi Convention’s implementation bodies. It means the mechanism’s thematic discussions explicitly acknowledging that responsible state behaviour in cyberspace includes maintaining the domestic legal conditions that make criminal justice cooperation possible: a state which shelters criminal infrastructure from prosecution is not merely a law enforcement problem but a state responsibility problem under the law applicable to the First Committee process. And it means beginning the long and politically difficult conversation about shared standards for attribution that both frameworks urgently require.
The separation of these two tracks was a political accommodation to the conditions of twenty years ago. Technology has moved on. The governance architecture needs to follow.
The mandate of the new mechanism’s first thematic discussion group covers, in the language of the founding document, “specific challenges in the sphere of ICT security.” That language is broad enough to encompass the intersection of AI and cybersecurity. The question is whether member states will treat it that way before the technological facts on the ground outpace the normative conversation.
The scale of AI-enabled harm is not speculative. AI-generated child sexual abuse material increased by 1,325 percent between 2023 and 2024. State-affiliated actors are deploying large language models and automated vulnerability discovery tools as components of offensive cyber operations against critical infrastructure. AI-generated content is being used in social engineering attacks against financial institutions and government systems at a scale that manually crafted operations could never achieve. These are not emerging risks. They are current operational realities.
The risk for the mechanism is familiar from every previous wave of technology governance: by the time a multilateral process produces an agreed framework, the practices it seeks to govern have become entrenched, the actors who benefit from them have organised to defend them, and the costs of changing course have multiplied.
The EU AI Act and the proposed revision of the directive on combating child sexual abuse represent serious domestic regulatory responses to parts of this problem. The mechanism is the appropriate forum to begin the work of internationalising those approaches: to ask what responsible state behaviour looks like when AI tools are used for offensive operations, and what the due diligence obligation requires of states whose territory hosts AI-enabled criminal infrastructure.
That work needs to begin in years one and two. The window for getting ahead of this problem is narrow and closing.
The due diligence obligation is meaningful only if states have the technical and institutional capacity to comply with it. Most do not. This is not primarily a question of political will. Many states that consistently fail to meet their international obligations in cyberspace are genuinely trying to do so.
They lack the network monitoring infrastructure to detect that malicious activity is originating from their territory. They lack the domestic legal frameworks to investigate and prosecute it when detected. They lack the trained personnel, the institutional relationships, and the diplomatic resources to respond to international cooperation requests in a timely way.
The obligation exists in law. The capacity to implement it does not exist in practice for a large portion of the UN membership. The mechanism’s capacity-building workstream is the right vehicle to address this, but only if it moves beyond the level of principles that have characterised most UN capacity-building discussions.
That means structuring the work around three sequential priorities: prevention, detection, and response, and explicitly linking each to specific legal obligations. Detection capacity is the most urgent, because it is the prerequisite for everything else. A state cannot prevent its territory from being used for harmful cyber operations if it has no means of knowing such operations are occurring.
Building detection capacity is not a technical assistance programme. It is a prerequisite for the rule of law in cyberspace. Framing it that way, connecting the technical agenda directly to the legal obligation, would transform a capacity-building exercise into a norm implementation programme. That reframing matters both for donor prioritisation and for the political salience of the work.
Cybersecurity governance is unusual among areas of international law in that the infrastructure it seeks to govern is largely not owned or operated by states. The physical cables, the routing protocols, the domain name system, the certificate authorities, the vulnerability databases, the threat intelligence feeds, these are predominantly in private hands, operated according to technical standards set by bodies that states participate in but do not control.
This has a direct implication for how the mechanism must work. Documents produced without structured input from the private sector, technical community, and civil society will not accurately describe the threat environment, will not reflect the operational constraints on implementation, and will not be credible to the actors whose behaviour they seek to influence.
Previous UN cybersecurity processes, particularly the OEWG, made genuine progress in creating space for non-state participation. Poland’s EU Council Presidency in early 2025 played a decisive role in securing those provisions in the OEWG’s final document. That progress is not guaranteed to survive the transition to a permanent institutional format, where procedural conservatism and geopolitical pressure to restrict access will be constants. The first organisational session should treat the protection of multistakeholder participation not as a courtesy to civil society but as a functional requirement for producing outputs worth producing.
Permanent mechanisms are easier to create than to make consequential. The history of international institutions contains many bodies that achieved formal existence and substantive irrelevance simultaneously. The new Global Mechanism will be judged by a single question: in five years, does the application of international law in cyberspace look more like reality, or does it still look like aspiration? That means the due diligence obligation invoked in specific, documented situations. It means states that currently have no detection capacity having built some. It means cybersecurity and cybercrime frameworks in formal institutional dialogue rather than parallel isolation. It means AI governance that addresses current harms rather than hypothetical future ones. None of this is technically complicated. All of it is politically difficult. The permanent mechanism is a beginning, not a conclusion. New York this week writes the first sentence. Whether that sentence leads anywhere is a choice that the states in that room will make, not this week, but in every session that follows.