
The Site Finder Reprise

I have been attending to the report [PDF] from the Security & Stability Advisory Committee regarding Site Finder.

In reading the committee’s report, I discovered what I believe is an incredible breakdown in logic and, as a consequence, a very mistaken, or at least confused, set of conclusions. So why do I say that?

VeriSign’s Site Finder service effectively took a large number of non-existent domain names and “turned them on” through the use of a wildcard in the .com and .net zones. Instead of “domain name not found”, the names were treated as valid domain names and (for those protocols not dealt with by VeriSign) applications seeking to bind to the relevant protocol received protocol-level errors.
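To make the mechanics concrete, here is a minimal sketch, assuming the third-party dnspython library, of what the wildcard changed from a client resolver’s point of view; the query name is a made-up, presumably unregistered domain, and the behavior described in the comments is illustrative rather than a live measurement:

```python
# Sketch: how a nonexistent .com name looks to a resolver with and
# without a registry wildcard. Requires the third-party dnspython package.
import dns.resolver

name = "no-such-name-xyzzy-12345.com"  # hypothetical, presumably unregistered

try:
    answer = dns.resolver.resolve(name, "A")
    # With a wildcard in the .com zone (the Site Finder era), this branch
    # runs: the name "exists" and resolves to the wildcard's address.
    print("NOERROR:", [rr.address for rr in answer])
except dns.resolver.NXDOMAIN:
    # Without the wildcard, a nonexistent name ends here: the servers
    # return RCODE 3 (NXDOMAIN) and the application knows immediately
    # that the domain does not exist.
    print("NXDOMAIN: domain does not exist")
```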

The security and stability report claims that domain names that are active, but for which many protocols are not live, break the end-to-end principle of the Internet.

So here is my point. The 63 million owned domains rarely have active support for most protocols. Most do not even have a web site using the HTTP protocol. All that VeriSign’s Site Finder did was to turn many more domains into the equivalent of “live domains”, in other words ones that behave like many of the 63 million domains already active. Just like real domains, they became live but did not support all protocols.

If Site Finder breaks the Internet in any way, it certainly is the case that normal domain name practice also does this.

The proof of the fallacy in the Security and Stability report is best given through an example. Let’s say I could buy the error logs from the root servers and discover the domains that I would need to buy in order to recreate the equivalent of Site Finder. I then bought those domains and pointed them all to my new search engine, but left all other protocols inactive. I would have legitimate ownership of the names, but all of the criticisms of Site Finder’s negative consequences for applications would still be true. The same end result, but through purchase rather than through a wildcard. Nobody could stop me buying those domains and doing as I like with them. But nothing would have improved in terms of the security and stability of the Internet compared to a wildcard implementation.

What does this all mean?

It means that Site Finder doesn’t break anything that is not already broken by normal practice among domain name owners today. To single Site Finder out, and not also criticize all domains that do not enable all protocols, is a very obvious error. There is actually no difference between the two from the point of view of the application. Normal domains do not return “domain does not exist”. If they are in the DNS but not running protocols, then the application gets an error at the protocol API level due to a failure to bind to the required protocol, just as with Site Finder. Arguably, because of VeriSign’s efforts to deal with some of the more popular protocols, Site Finder was a rather more stable environment than normal domain names, which often only implement HTTP.
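As a rough illustration of this point, here is a minimal sketch, using a hypothetical placeholder host name, of what an application actually sees when a name is in the DNS but the protocol it wants is not served; the outcome is the same whether the name belongs to a parked registration or was synthesized by a registry wildcard:

```python
# Sketch: the "protocol-level error" described above. The host name is a
# placeholder for a registered-but-mostly-idle domain; results will vary
# with whatever that name actually points at.
import socket

host = "parked-example.com"  # hypothetical domain with no SMTP service

try:
    addr = socket.gethostbyname(host)          # succeeds: the name is in the DNS
    with socket.create_connection((addr, 25), timeout=5):
        print("SMTP is actually served here")
except socket.gaierror:
    print("name resolution failed: the pre-wildcard outcome for a nonexistent name")
except OSError as exc:
    # Connection refused or timed out: the error applications saw both for
    # parked domains and for names answered by the Site Finder wildcard.
    print("name exists, but the protocol is not served:", exc)
```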

Hope that clarifies. I’m pleased to say that Steve Crocker told me afterwards that “I get points for figuring this out”. It seems to be a rather enormous discovery given the fuss Site Finder caused. Steve Crocker wouldn’t, I’m sure, agree with this, but I think it entirely invalidates the committee’s findings.

By Keith Teare, Principal


Comments

George Kirikos  –  Jul 23, 2004 5:04 AM

Security isn’t an absolute concept—it’s a relative one.

I’m entirely happy with a system that would cost on the order of $6*(37)^60 per year (or more) to “break”, just as I’d be happy with a $10 bicycle lock protecting a $500 bike that would cost $75,000 for an attacker to cut through. Indeed, I’d be happy to see someone go ahead and spend the money to prove Keith right. The fact that they don’t proves that his point is invalid, as economics is part of the security equation. I’m sure Bruce Schneier has made the same point on numerous occasions in his newsletters (that’s where I’m sure I picked up the concept years ago), but it’s a bit too late tonight for me to find a reference.

SiteFinder is bad for a large number of other reasons besides security, as one can read in the almost 20,000 petition signatures at Stop VeriSign DNS Abuse. One reason that’s important to people like myself who are concerned about competition is that VeriSign is getting those domains for free under their dream scenario, giving it an advantage solely due to its monopoly. That’s an abuse of its monopoly, and outside the scope of the Registry agreement. As one person noted, it would be as if the company contracted to clean the highways decided to earn some extra money by putting up signs along the highway too, without authorization. SiteFinder is worse, because of all the cost-shifting it causes on third parties.

Another concern is the massive typosquatting that SiteFinder is taking advantage of, diluting the value of PAYING domain registrants’ names. Furthermore, because there is no WHOIS for those domains, under VeriSign’s dream scenario they’d not be liable under the UDRP, etc. Since a lot of the damage is of the nature of “death by a thousand cuts” (i.e. $2/yr here, $1/yr there, $0.06/yr elsewhere), the individual damage from each domain that is typosquatted on is small, but added up (on the order of 37^60), it becomes hundreds of millions of dollars worth of damage to the worldwide community (i.e. enough traffic to vault SiteFinder into the top 20 internet sites in the world is a lot of typos). The non-zero price of domain registrations prevents a lot of typosquatting, because it would become uneconomic. As any economist will tell you, a lot of bad things can and will happen if something is made “free”. “Limit”-case solutions can be ugly.

Keith is still listed on the board of directors of SnapNames, which has an interest in keeping favourable relations with VeriSign, due to WLS. I remember the good old days of SnapNames and the WLS debate, when SnapNames employees were visiting my website almost every day. I don’t see many hits from them anymore, as they’ve really scaled back to a skeleton crew and reduced their product offerings while they take a beating from Pool.com, eNom, Namewinner and other competitors. RealNames was another disappointment. Join the winning team, Keith, against SiteFinder; it’s three strikes and you’re out, and so far the count isn’t looking too good. :)

The Famous Brett Watson  –  Jul 23, 2004 4:06 PM

Keith says, “The security and stability report claims that domain names that are active, but for which many protocols are not live, break the end-to-end principle of the Internet.” Based on the reference to “the end-to-end principle”, I can only assume that this is his interpretation of finding (2), which is as follows.

“Finding (2): The changes violated fundamental Internet engineering principles by blurring the well-defined boundary between architectural layers. VeriSign targeted the Site Finder service at Web browsers, using the HTTP protocol, whereas the DNS protocol, in fact, makes no assumptions - and is neutral - regarding the protocols of the queries to it. As a consequence, VeriSign directed traffic operating under many protocols to the Site Finder service for further action, and thus, more control was moved toward the center and away from the periphery, violating the long-held end-to-end design principle.”

In my opinion, Keith’s characterisation of this finding misses the mark.

The Site Finder service was intended, first and foremost, to be a web service. If it had been designed as a web service, utilising the end-to-end principle, then it would have been a browser plug-in, HTTP proxy service, or similar. In other words, it would have operated at a point near the network edge, and had no specific impact on any other kind of service. Changes to the DNS are simply not necessary to create this service—except in the case that you want to foist it on the entire Internet without consent. The point is that introducing a protocol-specific service or feature by adjusting core infrastructure (in general use by all protocols) is architecturally unwise. Whatever protocols are live (or not) at various domains has nothing to do with it.
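For contrast with the registry-level wildcard, here is a minimal sketch of the kind of edge-based fallback described above, where the user’s own software, not the registry, decides what a nonexistent name should do; the search URL is a hypothetical, user-chosen placeholder:

```python
# Sketch of an "edge" Site-Finder-equivalent: handle NXDOMAIN in the
# user's own client (or a browser extension / local proxy) rather than
# in the registry's zone. The fallback URL is a hypothetical placeholder.
import socket
import webbrowser

FALLBACK_SEARCH = "https://search.example/?q={name}"  # user-chosen suggestion service

def open_site(name: str) -> None:
    try:
        socket.gethostbyname(name)             # name exists: browse it normally
        webbrowser.open(f"http://{name}/")
    except socket.gaierror:
        # NXDOMAIN handled at the edge: only this user, and only the web,
        # is affected; every other protocol still sees "no such domain".
        webbrowser.open(FALLBACK_SEARCH.format(name=name))

open_site("no-such-name-xyzzy-12345.com")      # hypothetical nonexistent name
```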

I’d venture that VeriSign made the change not because it was an architecturally sound idea to do so, but because the means and opportunity to do so were (uniquely for them) within easy reach. The appropriate way to go about it would have involved marketing effort and voluntary participation from interested users, whereas changing the DNS eliminated the marketing problem by delivering the “service” to everyone, everywhere, whether they wanted it or not. Significant unilateral changes to the DNS are not good practice, but they achieved VeriSign’s desired ends in the most expedient manner. VeriSign created for themselves, by fiat, a very nice little advertising cash cow; the expense caused by unexpected breakage in other protocols was our problem, not theirs.

As to Keith’s argument that an equivalent of Site Finder could be implemented in DNS without resorting to a DNS wildcard, it neglects several important facts. As has been pointed out, any person attempting to do this would face significant logistic, economic, and legal hurdles, but even if they cleared those hurdles, the process of registering and disseminating the necessary domain information would take a long time. Gradual change in the DNS is situation normal; drastic change is not. The misguided soul who attempted to go about Site Finder the hard way would still not be able to do what VeriSign did: cause the concept of a non-existent “.com” domain to disappear overnight.

Jane Clinton  –  Jul 23, 2004 10:24 PM

I smell sophistry.

If somebody bought up all possible domain names and decided not to turn them on, they would not be “breaking” the end to end principle but simply failing to connect up the respective piece of the “edge” belonging to each domain name. Anything that is “broken” by that failure is broken only for that domain name.

End-to-end was broken by Site Finder at the center, the only place it can actually be “broken”, where it affects everybody.

Buying up all possible domain names and not using them would, unsurprisingly, render the Internet significantly less useful than it is today, but so what? If you want to think of ways to mess up the Internet there are plenty, and not all of them break the end to end principle.

Jothan Frakes  –  Jul 24, 2004 12:21 AM

Keith makes very strong points, in that nothing becomes truly broken by Site Finder that is not already implicitly broken by design.

The real question is: were the things that get “broken” by a wildcard in the root of a TLD zone designed around an expected behavior of the system, and was that expectation of behavior ever actually defined to begin with?

The SecSac report is clearly structured against the future inclusion of a Site Finder-style service, and it is unfortunate that it is so skewed, because that, IMHO, detracts from the credibility of many respected members of the community.

Although Verisign is a popular ‘bad guy’ target because of their sheer velocity and size, I can state that I don’t think that the inclusion of a wildcard in the .com or .net zone is a bad idea.

I pose this:  In our everyday internet lives, HTTP and HTML Frames get used in a creative way every day to nest content or cloak the domain that one actually visits.  Was this by an original design?  Probably not.  Was this an innovation based upon creatively meshing the behaviors of two protocols in a way that created better service to domain name registrants, and improved the experience of web users?  Probably so.
Does this lend itself to abuse?  Sometimes.

Yet in many circumstances the cloaked forwarding that occurs in essence ‘hijacks’ the user to a location other than the one they had intended to visit. And these individual ‘perpetrators’ also carry advertising in many cases. Oh, my!

An individual’s choice to URL-forward a domain also hampers the average internet user’s ability to report abuse or a security issue that might be related to a spammer advertising the forwarding domain, or to the content of the nested/cloaked site.

Yet URL forwarding/cloaking is commonly done by domain registrants.

Because the masses have done this at individual levels, it would be hard to enforce some form of moratorium on this type of behavior (and I am not arguing that URL Forwarding is a bad thing).

With SiteFinder, there was one convenient target to focus on, a large corporation.  With URL Forwarding, the masses are individually participating.

I venture to say that singling out Verisign for doing something somewhat creative might seem like a great idea for the internet Jedi knights to pull together on, yet is this really being thought through?

What future types of offerings can no longer see the light of day because of the outcome of the SecSac findings? 

As the number of TLDs expands, I can only see that this obtuse decision to deem wildcards in a TLD bad severely diminishes the ways in which TLDs could differentiate their value when being added to the root.

As for my ties to Verisign, my career with them ended about a month ago, and I have a pretty neutral Verisign stance.  This post is my own opinion.

I feel that the outcome of the SecSac findings was targeted more at smiting Verisign and nailing shut the issue of including a wildcard in the root, as opposed to showing forethought about the benefit to the internet community.

There are years of innovation that could have been built upon leveraging the DNS in unimagined ways, which this precedent may have impacted, and it is unfortunate to show resistance to change in a manner that might stifle future innovators.

Jane Clinton  –  Jul 24, 2004 1:38 AM

>nothing becomes truly broken by Site Finder
>that is not already implicitly broken by design

I can’t figure out what something looks like that is “implicitly broken by design”. I can make sense of it by taking it to mean that the thing never worked in the first place. But the basic infrastructure that Site Finder broke was the end-to-end principle, and it works just fine right now, thank you very much. It is not broken, not implicitly, not by design - not broken. It was broken when my request for yaddayaddaboom.com was answered by SiteFinder with a “200 OK”. It is not broken now.

If I want my browser to tell me, “Gee I’m sorry Jane but we can’t seem to find yaddayaddaboom.com, honey, why don’t you try yaddayadda.com”, I can arrange that. I couldn’t arrange that (without a lot of palaver and no guarantee that I wouldn’t have to do something completely different the next day) while SiteFinder was working.

You can say it doesn’t matter that Site Finder broke end-to-end, you can say it is good that Site Finder broke the end-to-end principle, but you can’t say that SiteFinder doesn’t break it. Nor can you say, as Keith Teare has tried to do, that failing to use all the available protocols on a domain breaks the end to end principle. His thought experiment only results in something that looks like Site Finder but is implemented at the edges not in the center. That doesn’t break end to end. It may break the same apps that Site Finder broke, but it doesn’t break end to end. And there are plenty of other ways of breaking those apps, but that proves nothing.

The simple, undeniable, fact of Site Finder is that - while it was running - there was no way to get past it.

You can argue that SiteFinder doesn’t violate any of the RFCs that define DNS. You can argue that VeriSign didn’t violate its registry agreement by deploying SiteFinder, but you can’t deny that Site Finder introduced complexity at the center where there was transparency before.

Jothan Frakes  –  Jul 25, 2004 12:18 AM

So basically, my point was that there was no documentation to illustrate abend or error conditions for how a .com or .net domain must behave before or after wildcard was added.

Granted, because it behaved a particular way for so long, it seems reasonable that one can assume a particular default behavior and develop based upon it.  This is clearly what happened.

Now, I am not stating that Site Finder was a utopian solution, nor am I stating that the concerns that people raise are valid or invalid.

George Kirikos and Jane Clinton have good points.

The point I was trying to make is that nailing shut any changes to wildcard behavior in TLDs seems a bit of a drastic move.

There is a lot of potential and ability to differentiate service offerings in a TLD, and Wildcard use can be a massive boost to the value of service that TLDs offer to their user base.

The consequence that I am raising attention to is that innovation in TLDs is going to be slowed by the SecSac driven ICANN posture on Wildcard.

I feel that we are focusing on the battle and losing sight of the war…  Because we lose sight of the fact that this precedent may completely dilute the value of TLDs for future stakeholders in new TLDs in the future.

Jane Clinton  –  Jul 25, 2004 11:45 PM

>So basically, my point was that there was
>no documentation to illustrate abend or error
>conditions for how a .com or .net domain must
>behave before or after wildcard was added.

>Granted, because it behaved a particular
>way for so long,  it seems reasonable that
>one can assume a particular default
>behavior and develop based upon it. This is
>clearly what happened.

I don’t think it’s as arbitrary as you make it sound. The end-to-end principle is well articulated and was articulated early on as a governing principle of the Internet. See http://en.wikipedia.org/wiki/End-to-end_principle for a short history.  With that as part of the accepted context, a situation which is not defined in detail has to be handled in a way which is consistent with previously defined and accepted principles.

>The point I was trying to make is that
>nailing shut any changes to wildcard behavior
>in TLDs seems a bit of a drastic move.

What was nailed shut by Site Finder was any other way of handling non-existent domains. Before Site Finder, there were various good and bad ways of handling that, which the market, in its inimitable way, is sorting through. SiteFinder put an end to that, nailing shut any other way of handling it but the VeriSign way - because the VeriSign way was implemented at that unique point in the network controlled by VeriSign.

>There is a lot of potential and ability to
>differentiate service offerings
>in a TLD, and Wildcard use can be
>a massive boost to the value
>of service that TLDs offer to their
>user base.

SiteFinder didn’t offer anything that users didn’t already have.  MS, AOL and others were showing users screens like the SF screen in similar circumstances. The unique thing about SF was that it knocked all the other alternatives out of the running by intervening at VeriSign’s chokepoint, disabling anybody else’s response.

>The consequence that I am raising attention
>to is that innovation in TLDs is going to be
>slowed by the SecSac driven ICANN posture on
>Wildcard.

>I feel that we are focusing on the battle
>and losing sight of the war. Because we lose
>sight of the fact that this precedent may
>completely dilute the value of TLDs for future
>stakeholders in new TLDs in the future.

The end-to-end principle does of course limit the innovation that can take place at the center, because the simplicity you get from doing that supports innovation at the edges of the network. Likewise, complexity at the center inhibits innovation at the edges.

So any argument in defence of Site Finder inevitably becomes an argument in favor of complexity at the center of the network. And any defence of complexity at the center has to overcome the well known and widely accepted fact that complexity at the center inhibits innovation at the edges. And the edges are where useful things like email and the World Wide Web came about.

What we need from the registries is stability and predictability. If that makes the registry business insufficiently profitable for VeriSign then VeriSign may wish to get out of the registry business.

New TLDs with different characteristics aren’t being ruled out, as far as I’ve seen. .aero is run like one big website with lots of servers, which I think is a model that can go places and be really useful. I expect they will make it so eventually.

Jothan Frakes  –  Jul 26, 2004 8:35 PM

The end-to-end principle is indeed well documented. 

It applies to low-level functions such as TCP/IP, and makes complete sense to technical folks.

Thanks for the link to the definition.

I still opine, as someone who actually ran more than one root-listed TLD (with active wildcards), that seeing empirically how the wildcard actually functions in practice makes a great deal of difference in perspective, versus merely offering up quotations, links, hyperbole, and conjecture.

DNS is far more like an application protocol (such as SMTP or HTTP) in its basic behavior than, say, a core technology like TCP/IP.

I agree that the end-to-end rule does apply in DNS in the format and nomenclature of an A record response, but not in the content of what is returned within that response.

But lets go ahead and say that the end-to-end principle is truly applicable.

Stating that the end-to-end principle is applicable to .COM and .NET in essence implies that it is applicable in all cases with all TLDs.

The trouble I find with applying that end-to-end argument logic, is that one can not simply single out Verisign for their use of TLD wildcards lest they be guilty of selective enforcement, so the approach ICANN took was to freeze wildcard in all TLDs.

Again, I am neither pro nor con Verisign on this matter, and neither pro nor con Site Finder.

I simply feel that the net outcome was subutopian, as it impacts TLDs beyond just those that Verisign operates.

While the end-to-end principle argument is organic enough to apply to SF, the net product of applying it is that it is so organic an argument that it either cripples other innovation in TLDs, OR it amounts, arguably, to selective enforcement against Verisign.

The populism here is that Verisign seems to be targeted as the ‘big bad villain’ for putting forth SF, and that they broke the end-to-end principle in doing so.

Certainly, the vocal minority (who I would venture may have had a predisposition against Verisign) have spoken volumes elsewhere about the pros and the cons of SF and the approach taken in launching it.

I still feel that we find ourselves no better off with the net product of the SecSac results (though I deeply respect them), as there were many potential uses of wildcard technology that will have been restricted or euthanized by the rules that ICANN put in place as a result.

Daniel R. Tobias  –  Jul 27, 2004 3:13 PM

The proper address of ICANN, a noncommercial organization, is www.icann.org, not icann.com as mistakenly linked in this article.

Jane Clinton  –  Jul 27, 2004 8:07 PM

>>But lets go ahead and say that the end-to-end
>>principle is truly applicable.

Thank you. I think this is a productive way of looking at it. The Saltzer-Reed-Clark paper describes the end-to-end principle as providing “a rationale for moving function upward in a layered system, closer to the application that uses the function.” That suggests it is applicable here. You could also look at it as applying the economic principle of avoiding unnecessary monopoly - which a wildcard in the registry is bound to be.

>>The trouble I find with applying that end-to-end argument
>>logic, is that one can not simply single out Verisign for
>>their use of TLD wildcards lest they be guilty of selective
>>enforcement, so the approach ICANN took was to freeze
>>wildcard in all TLDs.

Yes, I was overly generous in talking about .aero. There is a temptation for me to say, “do what you want with a closed group like .aero as long as you leave the Internet that I use alone!”.  On the other hand, maybe it is true that you could apply different principles in a smaller and/or closed system than you would want to apply in the Internet at large (.com, .net and the other gTLDs) and still be consistent.

>>I simply feel that the net outcome was subutopian,
>>as it impacts TLDs beyond just those that Verisign
>>operates.

Well, I think life is subutopian so maybe the outcome was realistic. But maybe it was optimal as well.

However, let’s look at the possibility that it’s not. I wonder if these current arguments in favor of SF don’t come down to the assertion that unless we allow potential new registries to innovate at the center they are not going to have the dynamism needed to grow new TLD businesses. If so, I can see the logic to that. I am not in the position of an entrepreneur thinking about launching a new business model like that, nor a VeriSign thinking about expanding its TLD registry business, but I can certainly see how - for the registry operator - unfettered innovation is preferable to more constraints in those circumstances.

So it may be true that keeping open the really large and varied possibilities for innovation at the vast edges of the Internet—by continuing to follow the end-to-end principle—may mean that we have to forego a completely unfettered expansion of the DNS. If I had to choose between those two possibilities I would opt for keeping open the vast possibilities for innovation at the edges of the Internet even if it meant slower growth in TLDs.

Jothan Frakes  –  Aug 2, 2004 5:07 PM

There is an attempt to document the misbehavior of legacy applications and the queries that hit the TLD name servers and the root itself, and it is being drafted by Matt Larson of Verisign.

Though tangential to the discussion of Site Finder and the wildcard, this is a more appropriate manner of approaching solutions, IMHO.

I think it is important to recognize that this is pragmatic and vital work to put parameters around appropriate messaging and negative responses in the DNS.

This is a far better approach to identifying what is and is not supposed to be sent in the DNS, and what expectations SHOULD be, than coming at it from an ideological, ivory-tower approach that singles out a particular TLD.

Matt has a ton of experience in the Domain Name system, and I have had the opportunity to work with him and have high regard for his contributions to the Internet naming system.

http://www.ietf.org/internet-drafts/draft-ietf-dnsop-bad-dns-res-02.txt

This merits reading in addition to the links to the ‘end-to-end’ ideology in earlier posts on this thread.

-Jothan
