Mr. Chris McElroy’s CircleID article, “Alignment Between Internet Governance and AI Governance,” is thought-provoking. Its twenty-one questions, posed for readers’ opinions and input, certainly gave me a lot to think about.
As a long-time Internet governance participant and a current analyst of global AI governance trends, I think about these topics a lot. And I know that many of us in the Internet governance world like to think about how we can leverage what we’ve been through and apply those experiences to AI. But I also know that many, if not most, of those in the AI or AI governance world have never given Internet governance a thought, and have never even heard of ICANN.
One of the reasons is that “governance” is a very broad term. Many things can be, or should be, governed, or attempted to be governed. That does not mean these different things can necessarily be meaningfully compared, or can be governed or attempted to be governed in the same ways. The Internet and AI may seem to be similar when policymakers and the public think about the problems of the Internet or AI—inequality, privacy, surveillance, disinformation, authoritarianism, harm to children or other vulnerable communities, you name it. But that does not mean that the Internet and AI are the same thing, or should be or can be governed in the same ways.
I think there are definitely things that can be learned from one to the other. But let’s not oversimplify things and assume that there must be benefits in “aligning” the two.
There is a Chinese saying for “the right time, the right place, and the right people” (天時、地利、人和), and to me, the origin of Internet governance was one great example of such “favorable conditions” converging to result in a unique and viable system to govern a set of unique resources. Internet governance is not perfect, but, hell, it works, to a large extent, I think most would agree.
The “right time” was the circumstance of the commercialization of the early academic and research-oriented Internet. The “right place” was the origin of the Internet—the United States—whose government actually wanted to “spin off” the administration of the key critical Internet resources. And the “right people” were the Internet pioneers, from Vint Cerf and Bob Kahn to Jon Postel and many others, including all those who debated how to run the Internet on numerous mailing lists throughout the 1980s and 1990s, and who created elements of this Internet governance structure, from the Internet Engineering Task Force (IETF) to the Internet Assigned Numbers Authority (IANA) and then ICANN.
What makes this Internet governance model unique is also the fact that many of these critical Internet resources are limited and must be kept and run in a unique manner. Domain names, while virtually unlimited, must be uniquely resolved, which gave rise not only to the registries and registrars but also to the means to _charge_ for each name, providing ICANN with recurrent revenues. And our Internet pioneers had the foresight to ensure these funds sustain not only the functioning of this Internet governance model, but also the bottom-up multi-stakeholder approach and the open participation and capacity building necessary to sustain it.
All these were possible because back then, people aspired to “one world, one Internet.” Today, are there any sane persons who think they want “one world, one AI”?
AI is not like the Internet. To most, AI is but an application that runs on the Internet. You reach your large-language-model (LLM) interface by typing a domain name into a browser on your PC, or through an app on your phone, which accesses the LLM services via some IP addresses. Instead of singularity, most users, and likewise most governments (except perhaps the United States when it comes to the U.S. AI tech stack), want more competition, more alternatives, more choices. When one mentions Internet sovereignty, people tend to think negatively of fragmentation and shutdowns. When people talk about AI sovereignty, most countries see it as something positive that protects their own autonomy, even their own languages and cultures.
But when it comes to regulating AI, policymakers at first worried that they would miss the boat and repeat the “mistakes” made with the Internet: being late and letting Big Tech get away with it. Then came the Trump administration’s going “all in” on AI, kicking off a global race for AI competitiveness that places “innovation” first (translation: no regulations), over safeguards and accountability, at its extreme.
When people talk about AI governance, what are they really talking about? Is it in the context of the three main divergent AI regulatory regimes in the world, being Europe, China, and the United States? Or is it about other governments trying to strike the balance between pro-competition development and regulations, either by a soft-law approach or by a more aggressive regime? Or are we talking about AI summits organized by governments? Or even the sort of self-regulatory proclamation by some AI vendors, such as Claude’s Constitution from Anthropic?
One could answer “all of the above.” But if we are only thinking about those AI summits, believing that they “look the part,” we are completely missing the point. Comparing them with the World Summit on the Information Society (WSIS) is like comparing an apple with a horse. Those summits have become more like a series of roaring roadshows by governments, from the AI Safety Summit 2023 in the United Kingdom, to the AI Seoul Summit 2024 in South Korea, to the AI Action Summit 2025 in France, and just now the AI Action Summit 2026 in India. It’s not even about safety or regulations anymore. By now, the hosts are more keen on landing major investment deals from the biggest Big Tech firms or AI unicorns. That’s not governance.
Mixing up concepts can be unproductive or misleading. Yes, the Internet is based on open standards. But open-source and open-weight models in the AI context are completely different and unrelated concepts from open standards. So, how do you meaningfully compare Internet open standards with closed AI models? I don’t know.
It is also problematic to pose a question loaded with buzzwords and built on a misconception, or on a subjective opinion that may not be substantiated. Some examples:
But there are a few questions that I like and believe can be answered straightforwardly. For instance:
I don’t think we need to try too hard to find analogies, or “alignment,” between governing two rather different things. On the other hand, we don’t need a perfect analogy to carry lessons from one instance of governance to another. The multi-stakeholder model is fundamentally about bottom-up decision-making and the sharing of power. We don’t have that in AI governance, however we try to define what is happening right now. To me, that is what is missing and what we should try to win a voice for. It’s not going to be as easy as it was in the early days of the evolution of Internet governance. Things are different now, I’d say, unfortunately, for the worse.
But if we believe bottom-up decision-making is good, and if we believe the sharing of power away from Big Tech and the Big Governments is good, then we should focus on finding new ways to achieve them in this new environment. How to do it, that is the question. And the path for AI will be different from the way we took for Internet governance. Finding the way is AI’s ongoing challenge, but there is only so much to learn and reference from our last time with the Internet.