About Those Root Servers

There is an interesting note on the ITU Strategy and Policy Unit Newslog, covering Root Servers, Anycast, DNSSEC, WGIG and WSIS, about a presentation to ICANN’s GAC. (The GAC website appears to be offline or inaccessible today.)

The interesting sentence is this:

Lack of formal relationship with root server operators is a public policy issue relevant to Internet governance. It is stated that this is “wrong” and “not a way to solve the issues about who edits the [root] zone file.”

Let’s look at that lack of a formal relationship.

But before we begin, I’d like to raise the following question: Where does the money come from (and where does it go) to provide DNS root services?

Over the years I’ve put together estimates of what it would take to deliver root services, and I’ve probably always undershot the actual costs. The raw hardware for a root server site isn’t all that expensive: server computers, firewalls, load balancers, network switches/routers, and power distribution gear come to a few tens of thousands of dollars in capital and installation costs per site, depending on the desired capacity and the availability of reliable power. But there are recurring costs that can be rather higher, particularly bandwidth, funds for replacement and upgrade, and maintenance. If one wants physical security beyond that found in a typical high-quality shared facility, or dedicated links to multiple providers, the one-time and recurring costs rise further.

And that’s just one site. Today, because of the use of anycast to replicate many of the 13 legacy servers, there are more than 100 root server sites spread around the world. Compared to the cost of an aircraft carrier, the total isn’t that much. But we’re still talking about a system that in total costs several million dollars per year.
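
As a back-of-envelope check on that figure, here is a minimal sketch; the per-site numbers are hypothetical placeholders consistent with the rough ranges above, not actual operator budgets.

```python
# Back-of-envelope only: the per-site figures are hypothetical placeholders,
# not actual operator budgets.
sites = 100                      # "more than 100 root server sites"
capex_per_site = 30_000          # "a few tens of thousands of dollars" (assumed)
annual_opex_per_site = 40_000    # bandwidth, maintenance, refresh (assumed)

print("one-time capital:    $", sites * capex_per_site)        # about $3 million
print("recurring, per year: $", sites * annual_opex_per_site)  # about $4 million
```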

So where does the money to operate this system come from?

Much of it is donated. But donated money is fickle. And it often comes with hidden strings. Unfortunately the root server operators have been very secretive about such matters.

We know that some of the root server operators are run by for-profit commercial corporations that are answerable to their stockholders and that may be acquired on the open stock exchanges. And some root operators are operated by the United States military establishment - which is ultimately obligated to protect the United States; any obligation to others is subordinate. There are root servers operated by university and non-profit entities. In the case of the former, there is little to guarantee that the trustees of the university will continue to want to expend money to provide DNS root services as educational costs continue to rise and educational budgets become ever more difficult to balance. As for the latter, they are under the control of boards of trustees that may have insular points of view or subtle biases in favor of certain industry segments they consider to be “stakeholders”.

In this whole system the flows of cash, the fiscal constraints and pressures, the ultimate allegiances, the chains of authority, and the hierarchies of authority are as unclear and vague as the flow of water through a Louisiana bayou.

All in all we can see that the root server operators are like a herd of cats - they may act in concert today but they could scatter to the four winds tomorrow as each responds to the pressures it feels and the attractions it sees.

There is no denying that to date the root server operators have done a job that deserves great praise.

But the internet community is building its future on nothing more than faith that the status quo will endure.

Suppose a root server operator found itself in a tough financial situation. There are ways it could use its position to raise money:

  • An operator could charge for root services, or adopt the more subtle method of charging for preferred root server access and relegating the rest of us to fight over the left-overs.
  • An operator could mine the incoming query stream for marketing data. The full domain name being resolved is visible in the queries that go to the root servers, and even though the number of queries that reaches those servers represents a fraction of the total number of queries made by users, it still forms a stream of raw data that can be mined using statistical techniques to form a rich lode of data about what domains are of interest and to whom. (A sketch of what such mining might look like follows this list.)
  • An operator could sell response rates, much like a search engine sells words, so that queries for sponsored names are given priority over queries for names that are unsponsored.
  • An operator could skimp on protection, backups, and recovery planning. This is like skipping payments on an insurance policy - it feels like a good idea as long as nothing bad happens.
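
To make the data-mining scenario concrete, here is a minimal sketch in Python. The log file name and its one-query-name-per-line format are assumptions for illustration, not any operator’s actual practice.

```python
# Hypothetical illustration only: tally the names seen in a root server's
# incoming query stream. The file name and log format are assumptions.
from collections import Counter

counts = Counter()
with open("root-queries.log") as log:          # assumed: one query name per line
    for line in log:
        qname = line.strip().lower().rstrip(".")
        if not qname:
            continue
        tld = qname.rsplit(".", 1)[-1]         # the label the root actually delegates
        counts[(tld, qname)] += 1

# The per-name tallies are exactly the kind of marketing data described above.
for (tld, qname), n in counts.most_common(10):
    print(f"{n:8d}  .{tld:<8}  {qname}")
```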

Or suppose one of the military root server operators received a command from its government - say, because that government had declared itself to be at war with some country or some group of people. That root server operator would find itself in a position to observe enough of “the enemy’s” queries to generate intelligence data. And that operator would also be in a position to poison the responses to those queries so that, for example, some portion of “enemy” VOIP or web traffic was vectored through a man-in-the-middle that observed that traffic.

Some may consider these scenarios to be hyperbole and unlikely. But those same people cannot deny that what I have said above is possible.

And all of us have observed the unlikely turn into reality. Take for example the Pacific Lumber Company.

The Pacific Lumber Company is in the business of growing and producing redwood lumber. The best of this lumber comes from old-growth trees. The Pacific Lumber Company held a large inventory of such trees and protected that inventory and its market value. The company cut just enough trees to satisfy the demand of the upper tier of the market. As a result the company had a good balance sheet with good long-term prospects and a very good reputation for environmental protection. However, the company was acquired via a leveraged buyout - a technique in which the company’s own assets are used to pay much of the purchase price. The Pacific Lumber Company suddenly found itself having to liquidate its assets to pay for the buyout. The company swiftly switched from careful conservation to massive clear cutting. Assets that would have lasted decades or longer and brought top dollar were liquidated as fast as the loggers could cut and sold into a glutted market.

There is no reason to believe that the commercial root server operators are immune to the kind of involuntary reversal of character that the Pacific Lumber Company suffered.

And there is no reason to believe that the US military won’t decide that the US should use all of its weapons, including its root servers, in its wars.

So the question we need to ask is this: How do we institutionalize root server operations so that the community of internet users has the assurance that it will be able to obtain root server services continuously, equitably, and without its activities being observed (or manipulated) for commercial or other purposes?

It seems to me that contracts - clearly enforceable and clearly binding contracts - are the appropriate vehicle. The notion of contract is, with only relatively minor variations, recognized by every nation on the planet.

We know that in the extreme we can never contractually bind sovereign national governments - or their military operations. And that may mean that it is time to thank and excuse the military root server operators and replace them with providers who are willing to enter into enforceable agreements.

What should these agreements require, with whom should they be made, and who should be allowed to demand that the obligations be enforced?

I will address these in reverse order:

We want to make the right to require enforcement as broad as possible. Far too frequently, people who are affected by a contract obligation find themselves locked out because they lack standing. For this reason any root server contracts should explicitly recognize that the users of the internet are third-party beneficiaries with explicit powers to require that the parties to the contract live up to their obligations. There is, of course, a danger that some people could use this right to become nuisances in order to obtain unwarranted settlements. So some careful thought would be needed when crafting this third-party right.

There needs to be some body with whom the root server operators make these contracts. I have no clear idea who or what this body is, but I do feel that this body will also need to hold the strings over the contents of the root zone file that the root servers will be obligated to publish. This linkage to the root zone file is necessary so that the oversight body can exercise final authority over who is and who is not a root server for its root zone file. My own personal feeling is that ICANN has disqualified itself from consideration for this role.

And finally - what should be the terms in those agreements? My list is found below. Most of the obligations in that list are things that the root server operators do already; they would have no effect on current operations. Rather, they ensure that the status quo remains the status quo into the future. I’ve listed these obligations in qualitative terms; in practice they should be restated as quantitative service level agreements (an illustrative sketch follows the list).

  • Servers must be operated to ensure high availability of individual servers, of anycast server clusters, and of network access paths.
  • Root zone changes should be propagated reasonably quickly as they become available.
  • User query packets should be answered with dispatch, but without prejudice to the operator’s ability to protect itself against ill-formed queries or queries that are obviously intended to cause harm or overload.
  • User query packets should be answered accurately and without manipulation that interferes with the user’s right to enjoy the end-to-end principle and to be free from the undesired introduction of intermediary proxies or man-in-the-middle systems.
  • Operators should coordinate with one another to ensure reasonably consistent responses to queries made to different root servers at approximately the same time.
  • There should be no discrimination either for or against any query source.
  • Queries should be given equal priority no matter what name the query is seeking to resolve.
  • There should be no ancillary data mining (e.g. using the queries to generate marketing data) except for purposes of root service capacity planning and protection.
  • The operator must operate its service to be reasonably robust against threats, both natural and human.
  • The operator must demonstrate at reasonable intervals that it has adequate backup and recovery plans. Part of this demonstration ought to require that the plans have been realistically tested.
  • The operator must demonstrate at reasonable intervals that it has adequate financial reserves and human resources so that should an ill event occur the operator has the capacity (and obligation) to recover.
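
Purely as an illustration of what restating these obligations quantitatively might look like, here is a sketch. Every threshold below is a hypothetical placeholder chosen for illustration, not a proposed or actual value.

```python
# Hypothetical thresholds only - placeholders for illustration, not proposals.
example_root_server_sla = {
    "availability":        "each anycast cluster reachable 99.99% of each month",
    "zone_propagation":    "new root zone serial served by all instances within 60 minutes",
    "query_latency":       "95th percentile response time under 100 ms at each instance",
    "response_integrity":  "answers byte-identical to the published root zone data",
    "non_discrimination":  "no per-source or per-name prioritization, verified by external probes",
    "data_retention":      "query data kept only as long as needed for capacity planning and protection",
    "recovery_exercise":   "full backup and recovery test at least once per year",
    "financial_assurance": "audited reserves sufficient for 12 months of operation",
}
```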

Obligations go both ways. The oversight body should ensure that there is wide and free dissemination of the root zone file so that people, entities, and local communities can cache the data and, when necessary, create local temporary DNS roots during times of emergency when those local communities are cut off from the larger part of the internet.

---
Originally published on CaveBear Weblog.

By Karl Auerbach, Chief Technical Officer at InterWorking Labs


Comments

JFC Morfin  –  Aug 2, 2005 1:33 PM

You may recall that nearly two years ago I raised the question of the vulnerability of nations to the Internet. I was then preparing national security meetings, after having carried the “dot-root” test bed for two years along with the ICANN ICP-3 (Part 5) document. I documented several points at the IETF. Vint Cerf then stated that ICANN’s obligation to the stability of the DNS was better fulfilled by NOT interfering with the RSSAC (which I do believe). This was confirmed by John Klensin. The debriefing of the AFNIC incident, where Verisign took over the root to correct a misconfiguration of the root file resulting from a bug in handling IANA requests for AFNIC, shows that the procedure is not the one usually documented.

RIPE recently documented (in a printed publication) the number of requests to the root servers. It explains that the root file is updated 90 times a year (I maintain at http://nicso.org/intldate.org the list of all the IANA file changes). The delays in getting these updates are often several weeks or months.

If _ALL_ the Internet users were delivered _once_a_month_ a _non_compressed_ copy of the root file, the global load of the root related traffic would _decrease_ by 90%. We know that 97.5% of the root requests are illegitimate.

A compressed copy of the root file is 15K.

I would also like to underline, to kill what has been dubbed the “bluff of the century”, that ICANN was the first (in ICP-3) to consider the end of the single authoritative root; that today there are hundreds of variations of the root file in use (people having got a copy of the rs.internic.org file at various times, including when it was wrong with the AFNIC/EDU mix); and that the real root zone is not exactly what the NTIA believes it is: there is TLD pollution introducing new nameservers and even more :-)

DNS is a very imprecise rocket science. And this is just the beginning of it; now we face pressure for multilingualism.

What I fail to understand is why you want an oversight body. The oversight body is all of us. In the real world it is named democracy; in the network it is named efficiency. Believe me, with one billion people to check the root’s consistency, bugs will be quickly documented :-)

All we need is for the TLDs to document a few sites under national jurisdiction where they store their db.file, and a simple routine in private users’ resolvers or at ISPs to call them once in a while. If 2/3 of the listed files for a TLD match, the file is used. This accounts for hacks, wars, revolutions, catastrophes, etc.
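
A minimal sketch of that “2/3 of the listed files match” check, in Python; the URLs are hypothetical placeholders, and comparing digests rather than the raw files is my own simplification for illustration.

```python
# Sketch of the quorum check described above: fetch the published copies of a
# TLD's zone file and accept one only if at least 2/3 of the copies agree.
# The URLs are hypothetical placeholders.
import hashlib
import urllib.request
from collections import Counter

SOURCES = [
    "http://registry-a.example/db.tld",
    "http://registry-b.example/db.tld",
    "http://registry-c.example/db.tld",
]

def fetch_digest(url):
    with urllib.request.urlopen(url) as resp:
        return hashlib.sha256(resp.read()).hexdigest()

digests = [fetch_digest(url) for url in SOURCES]
digest, votes = Counter(digests).most_common(1)[0]

if votes * 3 >= len(SOURCES) * 2:      # at least 2/3 agreement
    print("zone file accepted, digest:", digest)
else:
    print("no 2/3 agreement - keep the previously accepted copy")
```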

BTW, this way a TLD can produce several versions. You can have the Cape Verde zone without the adult sites which marred it, a KIDs’ root, etc.

Obviously you can easily have other TLDs. What is the problem? They will never overload the root file :-)

The Famous Brett Watson  –  Aug 2, 2005 4:23 PM

I’d like to thank JFC Morfin for that comment. I’m afraid that I started to lose my grasp on what was being communicated towards the end, but the earlier part about the nature of root server traffic and the possibility of disseminating that zone more widely are truly fascinating. In hindsight, I’m slapping myself for not considering this sooner.

The issue of “junk” traffic at the root servers is the kicker here. Rather than query the root servers for all this chaff, it would seem to make sense for the typical DNS server to *mirror* the root zone, as a stealth secondary nameserver for the root. All the “junk” queries would then be given a negative response at this point, rather than being forwarded to the root servers. Come to think of it, all the *valid* queries would also be answered at this point too. The actual burden on the root servers would be reduced to the occasional AXFR (from fresh servers with no existing copy of the root), and infrequent polling to watch for updates.
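
For what it’s worth, here is a minimal sketch of what such a “stealth secondary” arrangement might look like in a BIND-style named.conf. The master address is a placeholder, and whether any particular source actually permits AXFR of the root zone is a separate operational question.

```
// Sketch only: serve a local copy of the root zone as a stealth secondary.
// 192.0.2.1 is a placeholder address; use a source that actually permits
// zone transfer (AXFR) of the root zone.
zone "." {
    type slave;
    file "root.zone";
    masters { 192.0.2.1; };
    notify no;
};
```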

At first glance this seems like such a sensible way to do things that I can’t imagine why it’s not being done already. Maybe it is, here and there—it’s purely a matter of practice, requiring no change to existing protocols. Perhaps someone can point out to me why this won’t work—or maybe it will come to me in the morning. Whatever the case, it’s the most intriguing idea I’ve encountered today.

JFC Morfin  –  Aug 2, 2005 4:37 PM

The root IS mirrored in every nameserver. The joke is to put it in the cache with a TTL, when it should be in the root.ca file (you know, the file where you only have the root servers and not the TLDs). The reason why is to make sure you cannot add other TLDs on your own…

Now, you can download it from a friend or you can build it from the TLD managers’ data. Actually you can do that for everything. You recall the Hosts.txt file? When I want to access ICANN, I just go to my browser and enter “nuts”. I swear I do not use an alt-root. It works perfectly.

The Famous Brett Watson  –  Aug 3, 2005 12:03 AM

You are only partly right when you say that the root is already mirrored in all nameservers. A DNS server requires a root-hint file containing information about the root zone nameservers, but those hint records form only a rather small portion of the total root zone. In normal operation, a DNS server will hold a substantial portion of the root zone in cache—plus the possibility of creating “negative” records to cache root misses. These factors make acting as a stealth secondary for the root quite a reasonable course of action, whereas it would be sheer madness for most sites to do the same for the “.com” zone.

I am also familiar with hosts-file tricks, having used my fair share, but I don’t see them as entirely relevant to the matter at hand.

JFC Morfin  –  Aug 3, 2005 1:36 AM

Yes. This is what I say: instead of being in the cache, the root file should be in the hint file. And the hint file (root.ca) should be updated with the IPv6 data. To get the root file into the cache, just call a DN of each TLD. The Hosts.txt approach is much in line with this if you think about the way to manage it… to create many TLDs, etc., until you replace it with a local resolver… even under Windows someone should know how to do it.

Jothan Frakes  –  Aug 4, 2005 3:48 PM

We’re straying away from Karl’s article, but I just have to comment here, because there was a statement made that reminded me of the movie Soylent Green…

>“The oversight body is us all.”

Jeffsey, I admire your optimism, and I think there are a lot of people who use the internet with whom this could work. They would not be the issue. There are others out there with whom it wouldn’t work.

How many alternative root operators want to offer .web, .game, .shop, and other certain-to-be-popular gTLD extensions? How many will put them in place?

There are already quite a few alternative root namespace providers out there, and there is already some overlap of their choices of tld extensions.

That happens now, with as many alternative roots as there are. The net product is that a root operator gets to play god with its downstream consumers (the people who consume its DNS root responses).

Yet the internet experience is such that my visit to a website, say icann.org, should be the same if I type in the domain name in New York, London, Paris, or Munich.

Alternative roots could potentially have a whole different .org zone.  If the internet cafe I might use in London was a consumer of an alternative root that operated a different .org namespace, I might not go to the same website.

Worse yet, as we roll the calendar ahead and our internet evolves, there are some other issues. Today, alternative roots have .web and .shop and other TLDs. As the ICANN process starts to release such proposals into the root, this will exacerbate the issue.

The solution at the root operator level is to either stitch the TLD zones together, or choose between the ‘authoritative’ sources for that TLD.

The net product is namespace collision.  We must avoid this.

Namespace collision in alternative roots is a bad user experience.  There needs to be central orchestration to ensure that this does not happen.

How does someone keep the root(s) organized to avoid namespace collision?
ICANN can issue an ICP. http://www.icann.org/icp/icp-3.htm

One can proclaim it here on CircleID. http://www.circleid.com/article/1127_0_1_0_C/

Government options - A government can, say, seal off its country from this issue by creating its own root server system and introducing its own TLDs.

Now I’ll stray us back onto topic.

From the point of view that a consistent experience for internet users is important, I’ve come to the conclusion that having contracts with the root server operators (the IANA root, the root-servers.net hosts) is a good thing.

Having some relationship between an operator and the central coordination body helps keep that set of root servers organized, consistent, and predictable.

John Palmer  –  Aug 4, 2005 5:12 PM

Jothan - Your collision argument is a straw man for a problem that does not exist. Apart from several problem-child roots, which have no traction as far as usage goes, the Inclusive Namespace community DOES NOT create collisions. This is another ICANN FUD tactic that you have swallowed hook, line and sinker. NO ONE has cloned .ORG, nor will anyone, and if they did, they would be laughed out of existence - no one would use them.

If you oppose the Inclusive Namespace, just say so and give valid reasons, rather than this nonsense argument about those “evil alternative roots creating colliding TLDs”. Remember, the only major collision in internet history was created when ICANN stole .BIZ from Leah Gallegos/AtlanticRoot.

Jothan Frakes  –  Aug 5, 2005 4:23 PM

Wow John, I think I’ve been misunderstood.

I don’t think alternative roots are evil, quite the contrary.  In fact, I think they are a great way to let the internet public choose what their experience is.

I used .ORG as an arbitrary example TLD, and I attended the meeting in Marina Del Rey where .biz was selected through… well, I honestly could not label that ‘process’ by which the 7 TLDs were chosen that day.

What happened in Leah’s case with .biz (and unfortunately may happen to Chris Ambler [gawd I hope IODesign would get .web]) was a consequence of somebody not taking ‘everything’ into consideration.

My position is that I would like to see organization amongst the various players to keep that stuff from happening.

Those of us who have been around this industry for as long as you and I have, we know that the term “namespace collision” absolutely does not mean that there is a screeching sound while your internet device shatters into a million bits as it collides with another device.

We know that it just means that the results of a lookup may prove unpredictable for users in cases of identical delegations in non-identical roots. The term might be better labelled ‘namespace ambiguity’.

As I mentioned in an earlier comment on Paul Vixie’s article http://www.circleid.com/article/1127_0_1_0_C/ we’ll get an opportunity to see what happens with Earthlink and Wannado and other large ISPs who were on board with the new.net experience once a .travel or .xxx joins the root.

It puts those ISPs in a position to have to select one, or stitch the results, or choose other options to resolve the ‘namespace ambiguity’.

Jothan Frakes  –  Aug 5, 2005 4:25 PM

Ah, and .travel is already in the root.  http://blog.lextext.com/blog/_archives/2005/7/27/1082547.html

Simon Waters  –  Aug 10, 2005 5:27 PM

Brett, I’m surprised you hadn’t pondered using secondaries at big ISPs for the root zone; I’ve been espousing its virtues off and on for a while.

I think it is a more satisfactory system than creating many single points of failure with no contractual obligations to the end users: your ISP arranges to get the root zone from an IANA source, and if you are a big ISP buying transit you can get it from your “upstream” ISP or from IANA.

However, I don’t think “contracts” are a simple answer for the current root servers. And excluding the military for having other obligations ignores the fact that companies will often choose to fail contractual obligations if there is a more profitable avenue. The UK government discovered this when privatising its trains, ending up with some companies not running services because it was cheaper to do so and be fined.

Similar questions affect other aspects of Internet life, especially routing; ultimately networks only work if enough people (including the important players) want them to, and cooperate to make it so. That is life.

McTim  –  Aug 10, 2005 9:24 PM

What is in it for the rootops to sign an MoU or contract? What is their incentive to commit themselves contractually to obligations they currently don’t have? What is the upside for them?

Karl Auerbach  –  Aug 11, 2005 6:57 AM

Response to McTim:

Why should the root-ops enter into a contract? Because if they don’t, the author of the root zone file can point the name/IP address of the delegation to a more willing operator.

Yes, it becomes a tug-of-war between the root ops and “the” root zone file author for buy-in from users or their agents/ISPs.

But at the end of the day the root op who refuses to sign risks acquiring the negative reputation of being unwilling to make a firm, binding commitment to the community of internet users to provide fair and reliable services. The current crop of root ops deserve medals, not bad reputations. But I’d suspect that they’d want to avoid the risk of tarnishing those virtual medals.

Your comment reminded me of one other point - it is unclear to me whether the IP addresses (and the right to advertise those address/prefixes into the global routing protocols) should belong to the root zone author or the root server operator.

To Simon Waters:

You make a good point that sometimes it is easier and less troublesome to walk away from contractual obligations. For that reason it might be reasonable to couple these contracts to substantial performance bonds. I suggest this merely for purposes of discussion - my gut feeling is that, at least among the current operators, there is a strong commitment to protect the net and thus to live up to the contractual promises. On the other hand, over the long run it is better to build infrastructures on institutions rather than on individual people.

Also related to the question of who might own the IP addresses - a contractual term might cause ownership of the server’s IP address to in some way spring back to the root zone author should a root server operator fail to live up to its obligations. (And yes, I recognize that there is a lot of uncertainty about what “ownership” of an IP address might mean in light of the hierarchy of delegations of addresses.)

There are a lot of devils in these details.

JFC Morfin  –  Aug 11, 2005 9:46 AM

The main reason why the root server system (a different matter from the root file) is not acceptable is its cost to operators, because this cost must be justified or paid by someone. This cost could be drastically reduced by fighting the illegitimate calls (reports say 97.5%). But with the development of the Internet there is always a risk that the cost becomes too much. The problem with the cost is also that if one operator cannot bear it anymore, what is the procedure to replace it? There are root system operators all over the planet; anyone can be one. But - as with the root file - only 13 (s)elected ones are in the file. What is the procedure?

But most of all, the root operations cost must be justified. It must have a return. I accept that some have pride in it. I accept that RIRs are non-profit organisations serving their members (but I do not want an Internet submitted to RIRs only: there are 6 /3 blocks in IPv6; RIRs have a disorganised one, ITU has accepted one and wants to organise it for common-interest services, Govs, etc., and UN agencies could have one for common-interest infrastructures (alarms, geodesy, atomic control, e-health, water and weather reports, R&D, etc.)).

But I am sure that the two US military servers are not here for pride nor for the world’s common interest. I “suspect” they are here for intelligence gathering through the logs.

And this is something the 191 other sovereign States of the world cannot accept. I suggest that we find a technical solution before the UN General Assembly starts a root server loggers debate similar to the veto debate.

Because, again and again, we _do_not_need_the_root_server_system_. The root of TLDs is a centralised-network fossil. A distributed-network DNS can only be based upon a TLD forest.

Karl Auerbach  –  Aug 12, 2005 2:07 AM

Reply to JFC Morfin:

I have long suggested that because the root zone file is so small - about 15K to 20K bytes compressed, i.e. smaller than many of the cutesy buttons and gifs that decorate web pages - the entire root zone could be disseminated directly to end users (or their ISPs) via multicast, BitTorrent, or other mass distribution techniques (and maybe even by non-internet mechanisms).

Message digests could be widely posted so that those who want to check the legitimacy of what they receive could do so.
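
As a small illustration of that digest check, here is a sketch in Python; the file name, the placeholder digest, and the choice of SHA-256 are assumptions for illustration.

```python
# Sketch: verify a downloaded copy of the root zone against a digest published
# through some independent channel. File name, placeholder digest, and the
# choice of SHA-256 are illustrative assumptions.
import hashlib

PUBLISHED_DIGEST = "paste-the-widely-published-digest-here"   # placeholder

with open("root.zone", "rb") as f:
    local_digest = hashlib.sha256(f.read()).hexdigest()

if local_digest == PUBLISHED_DIGEST:
    print("root zone copy matches the published digest")
else:
    print("mismatch - discard this copy and fetch it again")
```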

I have no personal objection to the “everyone (or his/her ISP) is his/her own root” approach.  I ran my systems that way for about a year and observed no problems.  And such an approach does remove a lot of the “single point of attack” that I suspect is attracting a lot of the ill traffic that is hitting the root and TLD servers.

I don’t know how this would interact with DNSSEC.

JFC Morfin  –  Aug 12, 2005 4:21 PM

To Karl Auerbach,

Yes. It seems there is here an increasing gap, a divide, even an opposition between what the “Internet Governance” thinks and what the network is or tends to be. This is most probably due to the current commercial leadership of R&D. RFC 3869 is to be read for the evaluation the IAB gives of this leadership. But at the same time RFC 3869 seems to be out of the picture, with a “network-centric” culture of interest to sophisticated operators. It seems that the IETF has reached its “X.500” days.

Let’s get real. The secure and stable architecture we need to disseminate is user-centric and must be generalised/global for a simple reason of cost: the system I use at home must transparently work everywhere in the world because I may be anywhere in the world tomorrow. It should be based on three layered visions:

- a “network continuity” infrastructure: a seriously structured addressing scheme built on a national basis, ensuring universal, equal bandwidth access to all, whatever the medium and the protocols, at the lowest cost and greatest simplicity. This is the job of the ITU NGN.

- a “network of networks” structure made of SNHNs (small/stand-alone networks/home networks): a standalone smart gateway acting as the keep (donjon) of the network. This is the distro we all need and no one wants to develop: firewall, gateway to the local network, apache, mail services, named, service crons [stats, updates, back-ups], etc. It should be built as a basic Linux, BSD or QNX distro, on a personalised bootable CD sent every month to the users by their “Intelligent Services Provider” (along with the users’ individual requirements). It should include the latest updates, locales, DNS locales, etc. This is the job of the Internet community: ICANN can certainly contribute by supporting the real namespace (names, numbers, tags, etc.). The IETF should replace RFCs with operational service slots everyone can plug in as network building blocks. Simple, neat, updateable. If they do not do it, why don’t we do it?

- the “networks within the networks” metastructure: a myriad of externets (virtual external-network look-alikes); people/CPUs may freely choose to participate in one or thousands of them, providing specialised spaces of common references (languages, cultures, nations, trade, communities, services, corporations, cities, etc.) you may choose to adapt to your own context. This is to be achieved through the intergovernance (an extension of Plato’s paradigm the WGIG should learn “fissa”) of the externets’ governances, using - among others - a dramatic extension of the IANA concept of a common reference centre.

Coherence and stability, like everything else in distribution, must be by the people, for the people; otherwise they vote with their feet. The WGIG tries to sort it out with difficulty because, while they made the correct evaluation that Government, Economy and Civil Society must share in the Governance, they did not define what the Internet is and forgot the users. They also believed the Hollywood fairy tale and believe the Internet technology is a US God’s gift which will never change: for example, that the problematic root file is of any use other than helping the NSA to collect real-time information on the status of the world, through the DoD root server loggers. They forgot that the internet governance constitution is in the source code and that the code is now written by a technical society (many SDOs) of which some of the users are the real leaders, as NATs, VoIP, P2P, Grids, etc. show.

The situation today is rather simple. Every user has the communication, processing and on-line nuisance capacity of a full country 20 years ago.

These “countries” (old and new ones) have no ITU to organise their standards together, nor the many network partitionings they need (national development and security, stability, cultural empowerment, etc., and first of all multilingualism). This results in the technical balkanisation (which the WGIG totally ignores): first the alt-root experiment, now China, and actually the USA removing themselves from the good-guys game. At a time when we need to achieve this partitioning as a compartmentalisation, not as this lethal balkanisation (the most complex move for a technology, which Robert Tréhin and Joe Rinde succeeded at in 1977/78 and OSI partially solved), guess what they forget? The technical pole.

The rest in another comment ... the system does not want it ....
