|
After blogging about ICANN’s new gTLD policy or lack thereof [also featured on CircleID], I’ve had several people ask me why I care so much about ICANN and new top-level domains. Domain names barely matter in a world of search and hyperlinks, I’m told, and new domains would amount to little more than a cash transfer to new registries from those trying to protect their names and brands. While I agree that type-in site-location is less and less relevant, and we haven’t yet seen much end-user focused innovation in the use of domain names, I’m not ready to throw in the towel. I think ICANN is still in a position to do affirmative harm to Internet innovation.
You see, I don’t concede that we know all the things the Internet will be used for, or all the things that could be done on top of and through its domain name system. I certainly don’t claim that I do, and I don’t believe that the intelligence gathered in ICANN would make that claim either.
Yet that’s what it’s doing by bureaucratizing the addition of new domain names: Asserting that no further experiments are possible; that the “show me the code” mode that built the Internet can no longer build enhancements to it. ICANN is unnecessarily ossifying the Internet’s DNS at version 1.0, setting in stone a cumbersome model of registries and registrars, pay-per-database-listing pricing, semantic attachments to character strings, and limited competition for the lot. This structure is fixed in place by the GNSO constituency listing: Those who have interests in the existing setup are unlikely to welcome a new set of competitors bearing disruptions to their established business models. The “PDP” in the headline, ICANN’s over-complex “Policy Development Process” (not the early DEC computer), gives any holdout too easy a veto.
Meanwhile, we lose the chance to see what else could be done: whether it’s making domain names so abundant that every blogger could have a meaningful set on a business card and every school child one for each different face of youthful experimentation, using the DNS hierarchy to store simple data or different kinds of pointers, spawning new services with new naming conventions, or something else entirely.
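Some of the plumbing for that already exists in modest form: TXT records can hang small pieces of data off any name in a zone, and record types such as SRV act as pointers to other services, so a zone with abundant, cheap names could already put one on every business card. As a rough sketch only, here is how such data might be read with the dnspython library; the zone (example-notes.test) and the labels under it are invented for illustration.

# A minimal sketch, assuming the dnspython package is installed and that the
# operator of a hypothetical zone "example-notes.test" publishes small pieces
# of data (or pointers to other services) in TXT records under per-user labels.
import dns.resolver


def read_txt(name: str) -> list[str]:
    """Return the TXT strings published at a DNS name."""
    answers = dns.resolver.resolve(name, "TXT")
    texts = []
    for rdata in answers:
        # A TXT record is a sequence of byte strings; join and decode them.
        texts.append(b"".join(rdata.strings).decode("utf-8", errors="replace"))
    return texts


if __name__ == "__main__":
    # Hypothetical per-person labels, each carrying its own snippet of data.
    for label in ("alice.projects", "blog.alice"):
        name = f"{label}.example-notes.test"
        try:
            print(name, "->", read_txt(name))
        except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
            print(name, "-> nothing published at this name")

Nothing in that sketch needs a new protocol; it only needs names cheap and plentiful enough that handing them out this way makes sense.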
I don’t know if any of these individually will “add value.” Historically, however, we leave that question to the market where there’s someone willing to give it a shot. Amazingly, after years of delay, there are still plenty of people waiting in ICANN queues to give new gTLDs a try. The collective value in letting them experiment and new services develop is indisputably greater than anything the top-down imaginings of the few on the ICANN board and councils can produce, as shown by their inability to pronounce .iii.
“How do you get an answer from the web?” the joke goes: “Put your guess into Wikipedia, then wait for the edits.” While Wikipedians might prefer you at least source your guess, the joke isn’t far from the mark. The lesson of Web 2.0 has been one of user-driven innovation, of launching services in beta and improving them by public experimentation. When your users know more than you or the regulators, the best you can do is often to give them a platform and support their efforts. Plan for the first try to break, and be ready to learn from the experience.
To trust the market, ICANN must be willing to let new TLDs fail. Instead of insisting that every new business have a 100-year plan, we should prepare the businesses and their stakeholders for contingency. Ensuring the “stable and secure operation of the Internet’s unique identifier systems” should mean developing predictable responses to failure, not demanding impracticable guarantees of perpetual success. Escrow, clear consumer information, streamlined processes, and flexible responses to the expected unanticipated, can all protect the end-users better than the dubious foresight of ICANN’s central regulators. These same regulators, bear in mind, didn’t foresee that a five-day add-grace period would swell the ranks of domains with “tasters” gaming the loophole with ad-based parking pages.
At ten years old, we don’t think of our mistakes as precedent, but as experience. Kids learn by doing; the ten-year-old ICANN needs to do the same. Instead of believing it can stabilize the Internet against change, ICANN needs to streamline for unpredictability. Expect the unexpected and be able to act quickly in response. Prepare to get some things wrong, at first, and so be ready to acknowledge mistakes and change course.
I anticipate the counter-argument here that I’m focused on the wrong level, that stasis in the core DNS enhances innovative development on top, but I don’t think I’m suggesting anything that would destabilize established resources. Verisign is contractually bound to keep .com open for registrations and resolving as it has in the past, even if .foo comes along with a different model. But until Verisign has real competition for .com, stability on its terms thwarts rather than fosters development. I think we can still accommodate change on both levels.
The Internet is too young to be turned into a utility, settled against further innovation. Even for mature layers, ICANN doesn’t have the regulatory competence to protect the end-user in the absence of market competition, while preventing change locks out potential competitive models. Instead, we should focus on protecting principles such as interoperability that have already proved their worth, to enhance user-focused innovation at all levels. A thin ICANN should merely coordinate, not regulate.
“Historically, however, we leave that question to the market where there’s someone willing to give it a shot.”
True, but TLDs are a very restricted market. The bureaucracy of creating and managing one isn’t the main problem. The real stumbling block is that there are no (at least no effective) guidelines for what happens if a TLD, by whatever means, is deemed a failure and should be closed down. There have been a few cases of not-yet-widely-used ccTLDs that were abandoned or transitioned to a new ccTLD, yet even moderately clear situations like .SU aren’t handled. A market where failures (like .COOP and, to some extent, .MUSEUM) aren’t removed is a very unhealthy one. You only seem to treat the failure of the entity managing the TLD. While that is an important aspect, failures of complete TLDs are a vital ingredient of market-driven innovation.
An interesting article, and I agree that things in the DNS world need to become more dynamic if genuine innovation in usage is to occur. But arguably the problem isn’t the policies imposed by ICANN so much as the existence of the organisation in its current monopolistic incarnation. ICANN as one root amongst many would be the sober elder statesman, providing a conservative option for those who need it, instead of playing its current role as a Saturnine brake on progress.
I worked on some exciting DNS technology at Telnic (the people behind the dotTel TLD) only to have the plug pulled because of fears over diverging too far from existing TLDs - even though for ICANN that must have been an attractive part of the proposition. As a consequence I’m of the opinion that things will not improve until the market in DNS roots is deregulated, not only allowing new TLDs to select between competing root providers (or even establish their own root) but also encouraging a greater number of domain start-ups with clever provisioning models geared towards non-traditional applications.
Deregulation would also establish a credible free market in trust. Forget hacks like OpenID: with a credible multiplicity of DNS roots and the consumer-oriented tools to support them, application developers and internet users will finally be able to take proper charge of their online identities, maybe even choose to be their own TLDs. Internet 2.0 is just over the horizon: driven from the grassroots; dynamically provisioned; rich with short-lived and randomised domain names; featuring many different DNS provisioning models. The question is, are these traits compatible with any model of DNS understood by ICANN?