
Do We Need Alignment Between Internet Governance and AI Governance?

Mr. Chris McElroy’s CircleID article, “Alignment Between Internet Governance and AI Governance,” is thought-provoking. Its twenty-one questions, posed for readers’ opinions and input, certainly got me thinking.

As a long-time participant in Internet governance and a current analyst of global AI governance trends, I think about these topics often. Many of us in the Internet governance world like to think about how we can leverage what we have been through and apply those experiences to AI. But those in the AI or AI governance world have rarely given Internet governance a thought; many, if not most, of them know nothing about Internet governance, or have never even heard of ICANN.

One of the reasons is that “governance” is a very broad term. Many things can be, or should be, governed, or attempted to be governed. That does not mean these different things can necessarily be meaningfully compared, or can be governed or attempted to be governed in the same ways. The Internet and AI may seem to be similar when policymakers and the public think about the problems of the Internet or AI—inequality, privacy, surveillance, disinformation, authoritarianism, harm to children or other vulnerable communities, you name it. But that does not mean that the Internet and AI are the same thing, or should be or can be governed in the same ways.

I think there are definitely things that each can learn from the other. But let’s not oversimplify and assume that there must be benefits in “aligning” the two.

Getting It Right From the Start

There is a Chinese saying for “the right time, the right place, and the right people” (天時、地利、人和), and to me, the origin of Internet governance was one great example of such “favorable conditions” converging to result in a unique and viable system to govern a set of unique resources. Internet governance is not perfect, but, hell, it works, to a large extent, I think most would agree.

The “right time” was the commercialization of the early academic and research-oriented Internet. The “right place” was the origin of the Internet—the United States—whose government actually wanted to “spin off” the administration of the key critical Internet resources. And the “right people” were the Internet pioneers, from Vint Cerf and Bob Kahn to Jon Postel and many others, including all those who debated how to run the Internet on numerous mailing lists through the 1980s and 1990s, and who created the elements of this Internet governance structure, from the Internet Engineering Task Force (IETF) to the Internet Assigned Numbers Authority (IANA) and then ICANN.

What also makes this Internet governance model unique is the fact that many of these critical Internet resources are limited and must be kept and run in a singular manner. Domain names, while virtually unlimited, must still be uniquely resolved; this gave rise not only to the registries and registrars but also to the means to _charge_ for each name, which means recurrent revenues for ICANN. And our Internet pioneers had the foresight to ensure these funds sustain not only the functioning of this Internet governance model, but also the bottom-up multi-stakeholder approach and the open participation and capacity building necessary to support it.

All this was possible because back then, people aspired to “one world, one Internet.” Today, is there any sane person who wants “one world, one AI”?

AI is not like the Internet. To most, AI is but an application that runs on the Internet. You reach your large-language model (LLM) interface by typing a domain name into a browser on your PC, or through an app on your phone, which accesses the LLM services via some IP addresses. Instead of singularity, most users, and likewise most governments (except perhaps the United States when it comes to the U.S. AI tech stack), want more competition, more alternatives, more choices. When one mentions Internet sovereignty, people tend to think negatively of fragmentation and shutdowns. When people talk about AI sovereignty, most countries see it as something positive that protects their autonomy, even their own languages and cultures.

But when it comes to regulating AI, policymakers at first worried that they would miss the boat and repeat the “mistakes” made with the Internet: being late and letting Big Tech get away with it. Then came the Trump administration’s going “all in” on AI, kicking off a global race for AI competitiveness that places “innovation” first (translation: no regulations), over safeguards and accountability, at its extreme.

What is AI Governance?

When people talk about AI governance, what are they really talking about? Is it the three main divergent AI regulatory regimes in the world, namely Europe, China, and the United States? Or is it about other governments trying to strike a balance between pro-competition development and regulation, whether through a soft-law approach or a more aggressive regime? Or are we talking about AI summits organized by governments? Or even the sort of self-regulatory proclamations by some AI vendors, such as Claude’s Constitution from Anthropic?

My answer could be “all of the above.” But if we are thinking only of those AI summits, believing that they “look the part,” we are completely missing the point. Comparing them with the World Summit on the Information Society (WSIS) is like comparing an apple with a horse. Those summits have become a series of roaring roadshows by governments, from the AI Safety Summit 2023 in the United Kingdom, to the AI Seoul Summit 2024 in South Korea, to the AI Action Summit 2025 in France, and just now the 2026 summit in India. They are not even about safety or regulation anymore. By now, the hosts are more keen on landing major investment deals from the biggest Big Tech firms or AI unicorns. That’s not governance.

Good Questions, Bad Questions

Mixing up concepts can be unproductive or misleading. Yes, the Internet is based on open standards. But open-source and open-weight models in the AI context are concepts completely different from, and unrelated to, open standards. So, how do you meaningfully compare Internet open standards with closed AI models? I don’t know.

It is also problematic to pose a question with a lot of buzzwords based on a misconception or a subjective opinion that may not be substantiated. Some examples:

  • The term “multi-stakeholder” is often viewed as a buzzword that masks corporate or state capture. Do you have any opinion on that? (This is the first time I have heard of this “often” view.)
  • In Internet governance, “civil society” was often a decorative layer. How can AI architects ensure that “alignment” isn’t just a technical term for “corporate compliance”? (Logic: why would AI architects care how civil society is perceived in Internet governance?)
  • Transparency in ICANN meant public meetings; transparency in AI means “model interpretability.” Is it a mistake to treat a social governance problem as a technical black box problem? (An interesting but arguable comparison between the first two items in the opening statement, but what do they have to do with the two elements in the question?)

But there are a few questions that I like and believe can be answered straightforwardly. For instance:

  • ICANN was criticized for being too “U.S.-centric.” With the “compute divide” mirroring the early “digital divide,” how do we prevent AI governance from becoming a “digital colonialism 2.0”? Answer: Don’t “worry” this time around. In AI, we have China.
  • If ICANN’s accountability mechanisms were mostly performative, what concrete, binding mechanism can AI governance use to hold model developers responsible? Answer: While I do not agree with the premise (that ICANN’s accountability, however flawed, should be described as “performative”), more importantly, I ask: should we not think more seriously about holding governments, and their militaries, responsible, rather than the usual villains, played here by the corporate “model developers”?
  • ICANN’s transition from U.S. oversight was meant to be a triumph of globalism. Why did it feel like a transition from state oversight to corporate capture for many, and how does AI avoid that? Answer: The first sentence is in the past tense. That is to say, we thought so. Since then, people have learned a lesson. I get that. How do we avoid that in AI? Well, the starting points are different. With AI, the starting point is already heavy “corporate capture,” along with government-directed “governance” models all over the world. In the U.S., government policies are indeed proxies for _achieving_ corporate capture. In China, government policies are meant to _capture_ the corporates. What do they have in common? Nothing is left for the voice of the people, the civil society.

Finding The New Way

I don’t think we need to try too hard to find analogies, or “alignment,” between governing two rather different things. On the other hand, we don’t need a perfect analogy to carry lessons from one instance of governance to another. The multi-stakeholder model is fundamentally about bottom-up decision-making and the sharing of power. We don’t have that in AI governance, however we try to define what is happening right now. To me, that is what is missing and what we should try to win a voice for. It will not be as easy as it was in the early days of the evolution of Internet governance. Things are different now; I’d say, unfortunately, for the worse.

But if we believe bottom-up decision-making is good, and if we believe shifting power away from Big Tech and big governments is good, then we should focus on finding new ways to achieve them in this new environment. How to do it, that is the question. And the path for AI will be different from the one we took for Internet governance. Finding that way is AI’s ongoing challenge, and there is only so much to learn and reference from our last time around, with the Internet.

By Charles Mok, Research Scholar at the Global Digital Policy Incubator at Stanford University


