Alignment Between Internet Governance and AI Governance

Internet governance grew out of technical coordination (IANA/ICANN), and AI governance is starting with political/ethical summits (AI Safety Summits). I’m wondering what people think about the similarities or differences between the two types of governance. Some of you here on CircleID have been at this for years and have been deeply involved in Internet Governance, and I appreciate your input if you have the time. But I welcome the opinions of everyone.

Excuse me if some of my comparisons seem naïve or idealistic, but in my experience as a member of several working groups and mailing lists over the years related to Internet governance, I see some similarities and maybe even some lessons to be learned and mistakes to be avoided from how Internet governance evolved.

Even if you consider some of the questions non-issues or a flawed comparison, your opinion is welcome.

The Institutional Question (ICANN vs. AI Hubs)

If ICANN represents the “failure” of attempting a single global multistakeholder body for technical coordination, are we repeating history by seeking a unified UN-style “IAEA for AI,” or is a distributed model safer?

ICANN’s mission was strictly narrow (DNS/IP). Can AI governance survive if it attempts to govern both the “technical logical layer” (weights and parameters) and the “social application layer” simultaneously?

The IETF’s “rough consensus and running code” prioritized functionality over fairness. In AI, where “running code” can have immediate societal harm, is a technical-first governance model even possible?

How do we prevent “Mission Creep” in AI bodies when the history of ICANN shows that technical coordination bodies inevitably become proxies for content regulation and political disputes?

Is the current “AI Safety Summit” model merely a recreation of the World Summit on the Information Society (WSIS), and if so, how do we avoid the 20-year deadlock that followed?

Multistakeholderism vs. Reality

The term “multi-stakeholder” is often viewed as a buzzword that masks corporate or state capture. Do you have any opinion on that?

Given the high capital requirements for LLMs, is “multistakeholder” governance even viable, or are we moving toward a “Bilateral” governance era between Big Tech and Big States?

The Internet Community was largely academic and technical; however, the AI Community is largely corporate and proprietary. How does this change or affect the legitimacy of governance?

ICANN was criticized for being too US-centric. With the “Compute Divide” mirroring the early “Digital Divide,” how do we prevent AI governance from becoming a “Digital Colonialism 2.0”?

In Internet governance, “Civil Society” was often a decorative layer. How can AI architects ensure that “Alignment” isn’t just a technical term for “corporate compliance”?

If “Rough Consensus” failed to solve DNS abuse, why do we think it will solve AI bias or existential risk?

Technical Layers and Fragmentation

The Internet is built on standardized protocols (TCP/IP); AI has no equivalent unified standard.

Internet governance relies on a three-layer framework (Infrastructure, Logical, Social). In AI, where the Logical layer, the model, is often a proprietary black box, can it actually be governed without breaking the system?

Does AI need an “IETF for Weights”? If OpenAI, Google, Meta and others don’t need interoperability standards to function, is “Technical Governance” even a relevant concept?

The Internet thrived on open standards. If AI governance moves toward closed, safety-first models, are we effectively “Splinternet-ing” (new word?) the AI stack before it even matures?

ICANN manages a root that must be unique. AI has no root. Does this lack of a central point of failure make AI governance harder or easier than Internet governance?

How do we govern the compute layer (hardware) without triggering the same sovereign tensions that led to the “Sovereign Internet” movement in the 2010s?

Lessons from the ICANN Experiment (I’ll admit a little bias in this part due to my own experience)

If ICANN’s accountability mechanisms were mostly performative, what concrete, binding mechanisms can AI governance use to hold model developers responsible?

ICANN’s transition from US oversight was meant to be a triumph of globalism. Why did it feel like a transition from state oversight to corporate capture for many, and how does AI avoid this?

Is the “Open Source AI” movement the new “Cyber-libertarianism,” and will it be crushed by regulation just as the early “Free Internet” was?

Transparency in ICANN meant public meetings; transparency in AI means “Model Interpretability.” Is it a mistake to treat a social governance problem as a technical black box problem?

Finally, if Internet governance was about connectivity and AI governance is about content/output, are we trying to use a hammer (ICANN’s model) to fix a problem that requires a scalpel: sector-specific regulation?

Thanks in advance for your comments.

By Chris McElroy SEO, Founder, Chris McElroy SEO Agency

Comments

Insights James Görgen  –  Feb 18, 2026 4:59 AM

Excellent insights, Chris. I wrote a small piece trying to address some of these issues.

https://circleid.com/posts/welcome-to-meltnet-a-blueprint-for-digital-sovereignty-in-a-fragmented-world

Regards,
James

Thanks James Chris McElroy SEO  –  Feb 18, 2026 5:33 AM

I liked your article too, at least the parts I understood. :)

This part though…

“The difference between a democratic and a dystopian sovereign Internet lies in the institutional antidotes we incorporate from the start. Every decision affecting digital perimeters must be public and justified, with policy transparency that allows scrutiny. Technical entities without corporate ties regularly auditing trust infrastructures would create layers of accountability. Open records of executive orders, content removals and technical interventions would allow civil society, media and oversight bodies to monitor abuses.”

Exactly how we felt when, and even before, ICANN got started. They had public input through working groups, but it doesn’t matter if they don’t let any of the input get in the way of their agenda. Not sure when it was that there were technical entities without corporate ties. And open records? Yeah, right. Karl Auerbach had to sue to see the books when he was elected to the BoD, from what I remember.

I’ll admit some bias when it comes to talking about ICANN, the same ICANN that abandoned first-come, first-served with domains and basically sold TLDs to companies when others had already created them on other roots. Dot Biz was a great example. The same ICANN whose board was supposed to step down and hold elections, then didn’t.

The same ICANN that said introducing multiple TLDs was a threat to Internet stability, yet here we are. It seems what they meant was introducing the TLDs that we had created in other roots. We expected them to be added to the ICANN root once we proved they worked; instead, ICANN offered others the chance to “apply” for the TLDs we created if they were willing to put up about $180,000, putting it out of reach of most of the creators.

Sorry for the rant, but if we allow any AI governance that goes anywhere near the way the Internet was governed, then we’re all in trouble.

Original sin James Görgen  –  Feb 18, 2026 10:27 AM

Fully agree. Unfortunately, I think we are repeating the same original sin in AI, and without a solution to IG. Let’s see what India’s summit will cook up.

Public vs. private Mark Datysgeld  –  Feb 25, 2026 7:04 PM

Mr. McElroy,

I find it tricky when you say: “The Internet Community was largely academic and technical; however, the AI Community is largely corporate and proprietary. How does this change or affect the legitimacy of governance?”

That framing leans on a very U.S.-centric, visibility-driven read of how LLMs developed and how they’re evolving. A large share of LLM progress is still coming from academic and open research, with major contributions coming from territories such as China. Open-weight models are also catching up quickly to prior generations of closed models; for example, Qwen3.5-397B-A17B is in the same broad capability band as what GPT-4o was being judged against a few months ago.

Casual inspection of the AI/LLM space is risky because it hyperfocuses on a handful of high-visibility firms. In practical terms there is a large open/research ecosystem operating alongside (and sometimes directly competing with) the most visible corporate entities, and posts like yours tend to collapse that distinction.

If the legitimacy question is about “who has decisive control” over frontier training, deployment, and evaluation, that’s a narrower claim than “the AI community is largely corporate and proprietary,” and it’s where the governance analysis should be more precise.

I don't disagree with you Chris McElroy SEO  –  Feb 26, 2026 2:51 AM

There are some very altruistic people working in this space. Doesn't change the similarities I'm pointing out.

In the early days of Internet governance there was a whole slew of people outside of ICANN that were doing their best to create an Internet open to everyone.

Just look at the other roots that were there and how many TLDs were introduced. There was even the TLDA.

That didn't stop the corporate takeover of governance. That didn't stop ICANN from ignoring the first-come, first-served principle of domains and taking what others had created and auctioning it off.

Why do you believe that academic and open research will keep the US-centric corporations from changing the rules as they see fit?

Mark Datysgeld  –  Feb 26, 2026 3:39 AM

I perceive both situations as entirely different. There are much stronger incentives around this field of research and its publications. It is not about kind-hearted people publishing these findings in the open “for the common good”, but rather a cut-throat arms race to demonstrate a lead in technical development.

DNS governance had a unique technical chokepoint that is unlikely to be reproduced: a single canonical root that everyone needs for universal resolvability. It doesn't matter if there are others, you need to pick one. That makes capture structurally feasible. There is no equivalent root (or need for one) in the case of LLMs... you can just fork weights, re-train, fine-tune, deploy independently, and route around policy choices.

So I believe the real question isn’t “will U.S. corporations try to change the rules?” The question is where the chokepoints are (compute supply?, distribution platforms?, compliance regimes?) and whether they become so binding that alternatives can’t route around them, which I find unlikely. This is not to say I didn’t appreciate the article... I did. I’m responding because I found it thought-provoking.

And I really appreciate your responses Chris McElroy SEO  –  Feb 26, 2026 12:22 PM

Definitely not trying to be argumentative and I'll admit I'm a little biased, well, maybe more than a little, about how things went down with ICANN and all that. And I'd hate AI governance to go the same route. I think it's too important.

Your points about choke points and how things can be routed around them are well taken. How do you think governments might regulate things?

Will they favor the big players and make laws that restrict competition?

I'll give you an analogy. I used to be a small contractor. The city of Philadelphia got $60 million from the federal government for revitalizing the old houses in certain areas of town. The stipulation from the feds was that they had to allow anyone to bid on the contracts.

They did. But they added a stipulation of their own: if you got the contract, you had to wait a year to get paid. Of course, none of the small contractors could afford that, effectively giving the contracts to the larger companies.

I'm using that example because governments and large companies seem to always find a way.

Mark Datysgeld  –  Feb 27, 2026 1:29 AM

I’d argue along these lines: as things stand right now, with no additional development, it’s possible to accomplish a very wide range of legal and illegal tasks with existing open models and a 12–16 GB VRAM consumer-grade graphics card. Even if these models stopped evolving, a lot of the slack could be picked up with tooling. This is what OpenClaw does: a wrapper/tooling layer that chains models and automation so you can get more out of them.
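To make the chaining idea concrete, here is a toy sketch of my own (not OpenClaw’s actual code or API, and the model call is stubbed out so it runs anywhere):

```python
# Toy sketch of a "tooling layer" that chains model calls with plain
# automation. The model is stubbed; in a real setup this function
# would invoke a locally hosted open-weight model instead.

def local_model(prompt: str) -> str:
    # Stand-in for local inference on a consumer GPU. It just echoes
    # the prompt so the sketch is self-contained and runnable.
    return f"[model: {prompt}]"

def summarize(text: str) -> str:
    return local_model(f"Summarize: {text}")

def extract_tasks(summary: str) -> str:
    return local_model(f"List the action items in: {summary}")

def pipeline(document: str) -> str:
    # The wrapper, not the model, supplies the workflow: each step's
    # output becomes the next step's input.
    return extract_tasks(summarize(document))

print(pipeline("long meeting notes..."))
```

The point of the sketch is that the workflow lives entirely in ordinary code around the model, which is why capability keeps growing even when the underlying weights don’t change.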

What can governments do or enforce about that? These models aren’t going away... the weights already exist and are replicated across physical hard drives around the planet. What can be regulated in practice? Short of pervasive endpoint control (basically digital totalitarianism), that ship has sailed. The bots plaguing the Internet in 2026 are not running on ChatGPT or Claude. They are local instances running Qwen or DeepSeek.

Governments can still shape the chokepoints they can touch (compute supply, mainstream distribution platforms, procurement, compliance regimes), and yes, that can easily favor big players in exactly the way your Philadelphia example describes. But that mostly determines who can operate at scale through official channels; it doesn’t erase baseline capability, because open weights and local inference create alternative routes around a lot of that.

Bonus: governments don't even know these models exist or that they are different from OpenAI. They will go after what they can see, as presented on a friendly website hosted and registered in the U.S.

I understand your point about alternative routes Chris McElroy SEO  –  Feb 27, 2026 5:58 AM

And funny coincidence using that phrase when we had alternate roots as well.

But alternatives are even being redone in the domain space. Years and years ago there were setups where you could go in and create your own TLD and domain name. The only way anybody would see your website on that TLD would be if they pointed their computer’s DNS at that alternate root.

And they’re back doing it again. There’s a company running a lot of ads on Facebook about registering your domain name for life. They still have to get people to configure their PCs to see those websites.
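A toy illustration of why opt-in matters here (my own sketch; the TLD and addresses are made up): which names resolve at all depends entirely on which root table your resolver consults.

```python
# Toy illustration of alternate DNS roots: resolution depends on which
# "root" your resolver trusts. A real resolver walks the DNS hierarchy;
# this just looks up a table. Names and addresses are invented.
ICANN_ROOT = {"example.com": "93.184.216.34"}
ALT_ROOT = {"example.com": "93.184.216.34", "mysite.web": "203.0.113.7"}

def resolve(name: str, root: dict) -> str:
    return root.get(name, "NXDOMAIN")

print(resolve("mysite.web", ICANN_ROOT))  # default users never see it
print(resolve("mysite.web", ALT_ROOT))    # only those who opted in do
```

Both roots agree on mainstream names, so users have no incentive to switch, which is exactly why alternate roots stayed invisible.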

So of course there’s always going to be edge cases where there are platforms that exist outside mainstream. Mostly I’m talking about how AI governance over mainstream AI is related to internet governance years ago.

