|
It seems that DNSSEC is being subjected to what an old boss of mine used to call the “fatal flaw seeking missiles” - arguments which try to explain the technical reasons that DNSSEC is not being implemented. First it was zone walking, then the complexity of Proof of Non-Existence (PNE); next week ... one shudders to think. While there is still some modest technical work outstanding on DNSSEC - NSEC3 and the mechanics of key rollover being examples - that work, of itself, does not explain the stunning lack of implementation or aggressive planning being undertaken within the DNS community. Perhaps we need to review, in a wider context, the incentives to implement DNSSEC - for registry operators and domain owners - given that it is not a trivial process.
The negative incentives are clear, scary and unacceptable. First, we have the minor problem of not knowing whether our browsers are really taking us to ourfinancialinstitution.com or weregonnastealyourpasswords.com. Second, more work is being done and published on DNS exploits and cache poisoning - the nascent DNS hacking community is getting a thorough and on-going education. Third, more organizations are running with nice short TTL values to help the wannabe attackers get in some serious poisoning practice. Fourth, we are creating caches in stub resolvers and browsers all over the place such that, in the event bad data gets in there, we can be absolutely sure its pernicious effect will last for a long time. Given this non-exhaustive list, DNS administrators or owners who are not scared witless lack both imagination and professionalism.
With all the acknowledged weaknesses of the current DNS hierarchy, and assuming that domain owners are not stupid, the question is: why is DNSSEC not being implemented as fast as possible? And, in passing, let’s dismiss the “we are just waiting for feature X or Y or Z and then…” assertion. DNSSEC is being pushed by the technical community, not pulled by users.
The simple answer is this: given the current DNS operational infrastructure, even if we get all the technical details right - and they are largely right now - there is still no compelling incentive to implement DNSSEC.
Registry operators, who would have the responsibility for doing serious DNSSEC work - and hence incur a burden of cost - cannot see a way to make additional revenue. Why? Because domain owners who sign their zones have no guarantee that an end user will receive the data that was sent from the authoritative name server. If a user is going to pay filthy lucre for something, perhaps they want it to work. Period. Not “downhill with a following wind”, or “most of the time”, or “on every third Sunday in the month”. They want a deterministic, guaranteed solution.
In the current DNS architecture most end-users have at least two levels of intermediate DNS functionality, only one of which they have some limited control over, between user access software, say a browser, and the authoritative DNS records.
This DNS infrastructure has evolved pragmatically and functions perfectly in a world where all DNS access routes are equally insecure. But even if the target zone is signed and the caching nameserver security-aware - highly unlikely - the communication leg from the caching nameserver to the user application is still wide open to abuse. With the current DNS infrastructure we have not achieved end-to-end security - even with a DNSSEC implementation. And arguably never can.
Accentuate the Positive
So now it’s time to take away the “fatal flaw seeking missile” targets and get positive.
Those fiendishly clever guys in the IETF DNSEXT working group have provided a solution in which all the weaknesses inherent in the current infrastructure can be removed - it just needs a few lines of code here and there to make it all work!
The current DNSSEC standards define a security-aware (stub) resolver that would be located on the user’s PC and which can indicate to a security-aware intermediate nameserver that it will perform its own DNSSEC validation by setting the Checking Disabled (CD) flag in the DNS query header (and requesting the DNSSEC records via the EDNS0 DNSSEC OK (DO) bit). This inhibits validation at the security-aware nameserver and causes all the records necessary for validation to be supplied to the resolver, which then performs the security check itself. The net result is that we have achieved end-to-end security. The signed domain owner can be assured, with this architecture, that all the hard work and pain involved in implementing DNSSEC will generate the predictable and desired result.
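As a minimal sketch of the query such a resolver would issue (using the Python dnspython library; the nameserver address 192.0.2.53 is only a placeholder for a suitably configured security-aware nameserver, not a real service):

# Build the query a validating stub resolver would send: CD=1 tells the
# upstream nameserver not to validate on our behalf, and the EDNS0 DO bit
# asks it to include the DNSSEC records (RRSIGs) we need to validate locally.
import dns.flags
import dns.message
import dns.query
import dns.rdatatype

query = dns.message.make_query("example.com", dns.rdatatype.A,
                               want_dnssec=True)   # sets the EDNS0 DO bit
query.flags |= dns.flags.CD                        # Checking Disabled

response = dns.query.udp(query, "192.0.2.53", timeout=5)

# The answer section now carries the A records plus their RRSIGs, which the
# stub resolver validates against its own trust anchors.
for rrset in response.answer:
    print(rrset)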
So it remains to consider the tactical details of how we could make all this happen. Could we make this a commercial service? So here, in an attempt to start the discussion, is one “straw-man” solution to make the DNS world a safer place.
The security-aware stub resolver could either replace the existing stub resolver on the PC or be embedded into the browser. The latter method would clearly be relatively trivial with an Open Source browser - and would perhaps do wonders for the marketing of Mozilla - but would have the disadvantage of not making the service available to all PC applications, for example, a mail client. The former method has a problem in that standard library calls to a local stub resolver have no means of returning an indication that the security check failed. However, it should be noted in passing that there are already serious problems with this interface, best illustrated by MSIE’s browser-based cache, which keeps resolved names for 30 minutes (thus rendering useless all those short TTLs) simply because the interface also has no method of returning TTLs. So perhaps this interface needs overhauling in any case.
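To illustrate what an overhauled interface might look like - purely a hypothetical sketch, with invented names (secure_lookup, SecureAnswer) building on the dnspython query shown above, and with the actual RRSIG validation omitted for brevity - a security-aware lookup would need to hand back the addresses, their TTL and a validation verdict, none of which the classic gethostbyname-style call can express:

from dataclasses import dataclass
from typing import List

import dns.flags
import dns.message
import dns.query
import dns.rdatatype

@dataclass
class SecureAnswer:
    addresses: List[str]  # resolved A records
    ttl: int              # remaining TTL, so callers can cache honestly
    secure: bool          # outcome of the DNSSEC check

def secure_lookup(name: str, server: str = "192.0.2.53") -> SecureAnswer:
    """Resolve `name`, returning addresses, TTL and a validation verdict.

    The RRSIG validation itself is not shown; a real implementation would
    verify the signature chain against its configured trust anchors.
    """
    query = dns.message.make_query(name, dns.rdatatype.A, want_dnssec=True)
    query.flags |= dns.flags.CD
    response = dns.query.udp(query, server, timeout=5)

    a_rrsets = [r for r in response.answer if r.rdtype == dns.rdatatype.A]
    rrsigs = [r for r in response.answer if r.rdtype == dns.rdatatype.RRSIG]

    addresses = [item.address for r in a_rrsets for item in r]
    ttl = min((r.ttl for r in a_rrsets), default=0)
    validated = bool(a_rrsets and rrsigs)  # placeholder for real validation

    return SecureAnswer(addresses=addresses, ttl=ttl, secure=validated)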
And what about building that mythical security-aware stub resolver? Well, it exists (UNBOUND), at least in architectural and prototype form, due to the insight and support of VeriSign, Inc. and USC/ISI, and is currently being ported to C by NLNETLABS.NL.
The security-aware resolver needs a security-aware nameserver to do the heavy lifting of resolving DNS queries. While not vital, it nevertheless seems foolish to bypass this useful level of caching. In the classic architecture this function is typically performed by a service provider’s caching nameserver, which cannot be guaranteed to be configured to be security-aware. We need a means for our security-aware stub resolver to reach a security-aware nameserver that is guaranteed to be suitably configured. The obvious way to do this is for the security-aware stub resolver simply to be ‘pre-configured’ with the (anycast) addresses of suitable nameservers. Such a nameserver could either be used for every query, or only if a test query to the default nameservers failed to find a DNSSEC service with the appropriate trust anchors.
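A sketch of that fallback test, again in Python with dnspython; the anycast addresses here are placeholders, and the signed .se zone is used as a probe only because it is known to be signed:

import dns.flags
import dns.message
import dns.query
import dns.rdatatype

# Hypothetical pre-configured anycast addresses of security-aware nameservers.
PRECONFIGURED_SECURE_SERVERS = ["192.0.2.53", "198.51.100.53"]

def supports_dnssec(server: str, probe_zone: str = "se.") -> bool:
    """Return True if `server` hands back RRSIGs for a known signed zone."""
    query = dns.message.make_query(probe_zone, dns.rdatatype.SOA,
                                   want_dnssec=True)
    try:
        response = dns.query.udp(query, server, timeout=3)
    except Exception:
        return False
    records = response.answer + response.authority
    return any(r.rdtype == dns.rdatatype.RRSIG for r in records)

def choose_nameserver(default_server: str) -> str:
    """Prefer the ISP's caching nameserver, but only if it is security-aware."""
    if supports_dnssec(default_server):
        return default_server
    return PRECONFIGURED_SECURE_SERVERS[0]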
So finally we are left with a minor hole in the architecture. The current gTLDs, ccTLDs (with the honorable exception of Sweden) and sTLDs - and, just in passing, the root - are not secure.
There are two solutions to this problem. First - wait, perhaps forever. Second - bypass the normal DNS hierarchy when validating DNSSEC. Here again there is a solution which, depending on your point of view, is either skullduggery or inspired - DNSSEC Lookaside Validation (DLV). One of the main criticisms leveled against DLV is that it does not scale. By this is normally meant that we cannot have hundreds of possible DLV zones, each requiring a trust anchor (the same, by the way, is true of any “island of security” strategy). But perhaps there is no need for scaling. Suppose two or three vendors were to offer the services outlined here - the Mastercard and Visa analogy springs to mind - zone owners would select their chosen supplier for DNSSEC services, and these vendors would provide browser or security-aware stub-resolver enhancements and security-aware caching nameserver services. There is no need for scaling.
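The lookaside step itself is mechanically simple: when a zone’s parent is unsigned, the validator rewrites the zone name under the DLV registry’s domain, asks for a DLV record there, and treats the answer as it would a DS record from the parent. A sketch, where the registry name dlv.example.net stands in for whichever vendor the zone owner has chosen:

import dns.message
import dns.name
import dns.query
import dns.rdatatype

# Placeholder lookaside registry operated by the chosen vendor.
DLV_REGISTRY = dns.name.from_text("dlv.example.net.")

def dlv_lookup(zone: str, server: str = "192.0.2.53"):
    """Fetch the DLV records covering `zone` from the lookaside registry."""
    # example.com becomes example.com.dlv.example.net.
    target = dns.name.from_text(zone).relativize(dns.name.root).concatenate(DLV_REGISTRY)
    query = dns.message.make_query(target, dns.rdatatype.DLV, want_dnssec=True)
    response = dns.query.udp(query, server, timeout=5)
    return [r for r in response.answer if r.rdtype == dns.rdatatype.DLV]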
The solution outlined is not the only possible one, but it tries to be faithful to the spirit of DNSSEC and could be attractive to critical infrastructure, financial and revenue-earning domain owners, who might even be persuaded to part with modest sums to let their DNS administrators sleep at night.
Perhaps the bottom line here is this - if the registry operators do not provide the appropriate DNSSEC end-to-end services someone is going to eat their lunch. The depressing question that would then follow is, once this alternate architecture is in place (driven by user demand), is there any residual value-added left for the registry operators?
FYI, the DNSSEC-Tools project (http://www.dnssec-tools.org) already has a security-aware browser (Firefox), email reader and other applications…
Very interesting article. I think that for many of the reasons you list, Paul Vixie and ISC were working on DLV.
But recently the F Root/NeuStar announcement came out. By inserting authoritative servers inside major ISPs - the DNS Shield - don’t you get some of the benefits of DNSSEC? Not for everyone of course, but for the customers of NeuStar/UltraDNS?
Ron, I manage a registry, and must share with you that the reason suggested above is not accurate in the least. Implementing DNSSEC is a fundamental security initiative, not a revenue initiative. In my view, registries manage a public trust, and had better work to implement appropriate measures to secure core infrastructure.
The problem we have faced is complete apathy from both network operators and registrars, who together occupy a significant part of the value chain. They, in turn, say that there is no perceived demand from the end-users.
That is why I am a proponent of branding DNSSEC into something recognizable and accessible to a non-techie. Many years ago, someone clever decided to market the “lock” on the browser, not SSL - and it seems to have worked, becoming a mainstream demand.
-Ram
Ram:
It seems to me that we may be in danger of violent agreement!
To simplify egregiously, there are two registry operator models - the “act of faith” model (to which Sweden, RIPE and others belong) and what we might call the user-responsive model (cynics may term it the commercial model), whose operators will implement DNSSEC if they perceive a user demand - which I agree is entirely lacking at this time.
I tried in the article to address why, given the real and present dangers, user demand is not present, and concluded that - given the current DNS infrastructure landscape - domain owners cannot guarantee end-user integrity of their domain data, and that until this issue is addressed domain-owner demand will not follow. For sure there are other issues, such as domain-owner education, which will contribute to the total equation.
The standards as currently written do allow for end-user domain data integrity, but achieving this objective may require a fairly radical overhaul of the way DNS data is delivered to an end-user application (say, a browser). The tools and packaging being developed and experimented with at UNBOUND and http://www.dnssec-tools.org (thanks to Wes for the link) point, I think, the way forward.
However, the packaged DNSSEC delivery vehicle is, as you point out, best accomplished by a simple confidence-inspiring button or symbol easily recognised by an end-user.
If the current DNS hierarchy does not embrace the end-user problem some other organization(s) will. Those organization(s) will become very powerful, by being sited in a controlling position, from which they can leverage all kinds of benefits - one of which may be to remove any possibility of future value-added from the current operators.
Ron, just to give you a little hope that things are starting to move (slowly).
The .cz and .0.2.4.e164.arpa registry is going DNSSEC next year (most probably Q1-Q2). Unfortunately, we are too busy this year.
Ondrej