On Tuesday, July 8, CERT/CC published advisory #800113, referring to a DNS cache poisoning vulnerability discovered by Dan Kaminsky that will be fully disclosed on August 7 at the Black Hat conference. While the long-term fix for this attack, and all attacks like it, is Secure DNS, we know we can’t get the root zone signed, or the .COM zone signed, or the registrar/registry system to carry zone keys, soon enough. So, as a temporary workaround, the affected vendors are recommending that Dan Bernstein’s UDP port randomization technique be universally deployed.
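To make the workaround concrete, here is a minimal Python sketch of per-query source-port randomization (illustrative only, not any vendor’s actual patch): instead of sending every outgoing query from one fixed port, the resolver binds a fresh, randomly chosen port for each query, so an off-path attacker must guess the port as well as the 16-bit query ID.

```python
import random
import socket

def make_randomized_query_socket():
    """Bind a UDP socket to a randomly chosen ephemeral port.

    Pre-patch resolvers often used a single fixed source port for
    every outgoing query, so a spoofer only had to guess the 16-bit
    query ID. Choosing a fresh random port per query multiplies the
    attacker's search space by the size of the port range.
    """
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    while True:
        port = random.randint(1024, 65535)
        try:
            # Retry if the randomly chosen port is already in use.
            sock.bind(("0.0.0.0", port))
            return sock, port
        except OSError:
            continue
```

A caching NAT/PAT device in front of this socket can undo the benefit by rewriting the port on the way out, which is exactly the problem discussed below.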
Reactions have been mixed, but overall, negative. As the coordinator of the combined vendor response, I’ve heard plenty of complaints, and I’ve watched as Dan Kaminsky has been called an idiot for how he managed the disclosure. Let me try to respond a little here, without verging into taking any of this personally.
Q: “This is the same attack as <X> described way back in <Y>.”
A: No, it’s not.
Q: “You’re just fear-mongering, we already knew DNS was terribly insecure.”
A: Everything we thought we knew was wrong.
Q: “I think Dan’s new attack is <Z>.”
A: If you guess right, you can control the schedule, is that what you want?
Q: “I think Dan should have just come right out and described the attack.”
A: Do you mind if we patch the important parts of the infrastructure first?
Q: “Why wasn’t I brought into the loop?”
A: Management of trusted communications is hard. No offense was intended.
Now for a news bulletin: Tom Cross of ISS-XForce correctly pointed out that if your recursive nameserver is behind most forms of NAT/PAT device, the patch won’t do you any good since your port numbers will be rewritten on the way out, often using some pretty nonrandom looking substitute port numbers. Dan and I are working with CERT/CC on a derivative vulnerability announcement since it appears that most of the NAT/PAT industry does indeed have this problem. The obvious workaround is, move your recursive DNS to be outside your NAT/PAT perimeter, or enable your NAT/PAT device to be an ALG, or use TSIG-secured DNS forwarding when passing through your perimeter.
Please do the following. First, take the advisory seriously—we’re not just a bunch of n00b alarmists, if we tell you your DNS house is on fire, and we hand you a fire hose, take it. Second, take Secure DNS seriously, even though there are intractable problems in its business and governance model—deploy it locally and push on your vendors for the tools and services you need. Third, stop complaining, we’ve all got a lot of work to do by August 7 and it’s a little silly to spend any time arguing when we need to be patching.
Just to be clear, would regular clients (Macs, Windows, Linux) behind a NAT firewall (e.g. a Netgear, Linksys or similar routers used for sharing a network connection) be vulnerable if they have DNS set to a non-vulnerable external server in their local client settings? (i.e. does the NAT make them now vulnerable?)
Would most routers used for sharing a network connection in a LAN need firmware updates due to this issue, if the routers themselves are DNS servers?
I think folks do take the advisory seriously, but want a “no brainer” guide “for dummies” to make sure they’ve done all they can, especially given that many vendors haven’t made an official statement yet.
yes and no. if you have a de-randomizing NAT/PAT device in front of your randomizing DNS questioner, then you will be less safe as a result of the de-randomization. however, non-caching (end-user; client only) DNS questioners are inherently less vulnerable to spoofing since they won’t save or re-use any bad data that someone might be able to fool them with. so while there is a danger, it’s not quite as dramatic. most non-caching DNS questioners will never be upgraded to randomize their UDP ports, so most of the time a de-randomizing NAT/PAT device will not make them even less safe.
almost certainly, but you’ll have to contact your router vendor to be sure. note, if you can do a “dig porttest.dns-oarc.net in txt” from a client of that router or any other DNS server, you can learn a lot about how random its UDP ports are. a rating of FAIR or GOOD means you have no worries. any other rating means you need to call your vendor and set their hair on fire.
for early notification we mostly picked on the large enterprise DNS vendors since a small number of patches and upgrades would protect a large number of endpoints. also we knew all of them personally :-). for the router vendors who embed a DNS server in their product, we were mostly told “it’s hopeless, noone ever installs our firmware updates, but we will try.” so, call your vendor, large or small, and tell them you need a fix to CERT #800113.
Carl, standard deviation is a measure of variation of a set of numbers and has nothing in particular to do with the normal distribution. All distributions have a standard deviation; the normal distribution is just one among infinitely many. The use of the standard deviation in statistics does not necessarily imply that the numbers are random, and even a normally distributed sample might not be random at all.

However, the point is certainly correct that the std dev does not tell you whether the sample of 26 UDP source ports is random. Randomness tests involve rather deep analysis which can get quite philosophical. As I understand it, the purpose of the UDP source port std dev measure is just to distinguish between the extremes of the expected kinds of UDP source port behaviours: e.g. between a constant port number (std dev = 0), a simple linear sequence (std dev = about 7.5) and a uniform distribution over 64,000 values (std dev = 32,000/sqrt(3) = about 18,000). So if the distribution is uniform, you can infer the range of the sample space (the range of possible ports) from the std dev. But like all statistical estimation, you have to start with some sort of model.

Testing whether a sample does or does not seem to be consistent with a particular statistical model is probably too deep to put in a DNS server. Perhaps another idea would be to return a TXT field containing the complete list of 26 ports and let the client do the stats, but then this might assist a bad guy in attacking some DNS resource using the explicit port numbers. The std dev anonymizes the mean, minimum and maximum of the UDP port distribution. It certainly is enormously useful for the present purposes. Not that many routers map UDP source ports to a bi-modal distribution, in my experience.
Carl and Alan, I'm the author of the porttest tool. Thanks for your comments about how standard deviation doesn't necessarily relate to randomness. While writing the tool I spent some time researching other ways to report randomness, but became overwhelmed by it all. As Alan said (better than I), a simple standard deviation calculation seems to be a very strong indicator of port randomness in this case. I am working on a new web-based tool that will also report the actual ports (and query IDs) seen, so stay tuned for that. Duane W.
Thanks for posting the test, Paul. I appreciate that it’s nice and straightforward, as many other people’s descriptions have been very technical and less than clear. That’s something that everyone can easily run from the command line, to see if they’re vulnerable:
I just ran it, and got the result:
which is obviously a failure of the test (you might want to make the language of the failure even more obvious, e.g. DANGER!! YOU ARE LEAST SAFE!!). I suspect I’m not alone! (probably millions of people with consumer-grade cable/DSL routers face the same issue). I already put in a call to my cable/DSL router manufacturer (before I posted the initial question even) and await a reply.
Once again, for the “for dummies” crowd, if the vendor doesn’t do anything, are there any countermeasures we can easily employ today? I noticed OpenDNS.com suggested using their nameservers, and then firewalling off any other DNS responses. Is that a possible solution that consumers can deploy? (I assume we’d need to put their nameservers into the router’s DNS configuration, as well as the DNS settings of all the clients? Or should we point all clients to the DNS of the router, and then point the router’s DNS to OpenDNS? And then figure out what access rules to employ on the router’s firewall to ensure that poisoning can’t occur?)
Oddly, it was the 4.2.2.1 and 4.2.2.2 nameservers that I had been using previously (as they have a great reputation for speed), and which gave me the "poor" result, before I switched to OpenDNS. I just switched back to them on my Mac (and left the OpenDNS servers on the router), and got back the following:
If you do a WHOIS of 209.244.4.18 it appears to be the same owner as 4.2.2.1. The Doxpara.com test also says I'm "vulnerable" with those nameservers on my Mac client behind the NAT firewall/router, although it reports the IP address as 209.244.4.24. Repeating the test again after switching the router's DNS to 4.2.2.1 and .2 (and keeping the Mac at 4.2.2.1 and .2 too) yields a similar failure. Switching everything back to OpenDNS again, I get the "GOOD" result. It must be something happening in the router, if indeed 4.2.2.1 and 4.2.2.2 are secure for everyone else. Has everyone else reported that 4.2.2.1 and .2 are secure? (Maybe it depends on one's geographical location?)

I've learned from Level(3), operators of 4.2.2.1 and 4.2.2.2, that they will be rolling out UDP port randomization well in advance of Dan's August 6 BlackHat talk, but it's not an overnight process owing to the size of their anycast cloud. Something about doing the odd numbered servers first. Note: they also said they would eventually restrict 4.2.2.1 and 4.2.2.2 to customer access only, so if you're not a Level(3) customer, you probably need to find another solution. Almost every ISP has recursive name servers, and if yours is honest -- sends you an error rather than advertising if you type in a nonexistent domain name -- you should be using it. If your ISP is dishonest, then you should consider OpenDNS or Neustar's DNS Advantage, or do what I do: run your own RDNS. I use BIND, but I've also heard good things about PowerDNS and Unbound. There are also many non-free RDNS servers.
It appears anycasted, but the one I tested against failed.
You're right, I just tested 4.2.2.1 and .2 from one of my dedicated servers that had been using those in its configuration, and only got a "FAIR" result, thus it appears that not all of their (anycasted) systems have been updated accordingly, or something.
I am not an expert, but.... I tried the dig test a few times and sometimes got the opposite of the truth.

Case 1. I ran dig @patched.dns.server porttest.dns-oarc.net in txt and got POOR. It turned out that my patched DNS server was connected on a 192.168.1.0/24 subnet to the ADSL modem, which then duly de-randomized the UDP source ports on the way out. So the dig test gave me a POOR. Then I reconfigured the ADSL modem to only S-NAT the source IP address, and got a GOOD. This is a case where running a packet trace does not reveal the truth, because the harm is done further up the path by the ADSL modem. So the dig test is very valuable here to expose this problem.

Case 2. This might be more serious. I pointed dig at the ADSL router (on different premises in a different city), as in dig @adsl.router porttest.dns-oarc.net in txt, and got a GOOD. That surprised me, because I was sure this modem was using fixed UDP source ports for DNS requests from the built-in resolver. It turned out that the ADSL router's DNS server was getting DNS translations from the two DNS resolvers of the ISP, and the ISP's DNS requests were using randomized ports already. So the dig test gave GOOD, although the DNS server was definitely not good at all.

Now, if I have vaguely understood the attack mode at all, it seems to me that a DNS server inside an ADSL modem using fixed UDP source ports to request translations from the ISP is going to be vulnerable if the source IP address of the ADSL modem is the same as that used for DNS requests. In other words, this very, very typical DNS set-up is going to be as vulnerable as the worst case, because the application host at the destination IP address will know the IP address of the DNS server which is using fixed UDP source ports.

Conclusion: You get GOOD from the dig test, but the DNS server is bad, apparently.

Question: Has anyone clarified this anywhere? Case 2 gives a false sense of security, apparently, and it is a very common scenario.
And the ADSL modem needs a firmware upgrade to fix it. And most people don't want to buy an extra box to do DNS cached resolving within their LAN, even if they knew how to do it.
Does securing zone transfers with TSIG eliminate the need for the BIND upgrade? Or does TSIG just mitigate the NAT issue after upgrading?
Just a quick followup, I changed the DNS settings on both my cable/DSL router and on my LAN clients behind its NAT firewall to OpenDNS, and now I get:
which appears to be a lot more secure (although it would be nice if DNSSEC or an alternative were available for even greater security). I did not need to alter my firewall rules on the cable/DSL router.
That's great George. I'm glad we were able to help!
why does opendns not provide DNSSEC?
a multitude of reasons, though I do think we can do something to encourage adoption...
I also know that ICANN and USG lack the political will to sign the root zone with Secure DNS,
While I obviously cannot speak for the USG (and I don’t speak for ICANN), stating that ICANN lacks the political will to sign the root is obviously wrong, see the IANA signed root zone demo. But Paul knows this, having been asked long ago if ISC would be interested in discussing providing secondary service for said signed demo root (which, for the record he indicated he would, but the demo effort got derailed by other politics).
While ISC is totally ready and willing to cooperate with ICANN on the root zone demo David mentioned, that demo is of technology, not political will, and is therefore somewhat off-topic. I want the real root zone signed with a real key so that TLDs who sign themselves will have a secure place to publish their TLD keys. I have searched the world and I've busted into every smoke filled room on it, and I have still not found the person who can say "yes" to that, nor have I found the people who are currently saying "no". What I do know is that if ICANN and USG had the political will to make this happen, it would happen.
that demo is of technology, not political will, and is therefore somewhat off-topic.
One of the points of the demo was to indicate IANA was undertaking to be in a position to sign the root, even at non-trivial cost (you think multiple FIPS 140-3 hardware security modules come free?). I’m not sure how that cannot be a demonstration of political will on the part of ICANN to see that the root gets signed, but it is actually irrelevant. Even if the root were signed today, it would be essentially meaningless to address this particular vulnerability in the foreseeable future since:
a) last I checked, a total of 4 TLDs are currently signed (SE, PR, BR, and BG);
b) infinitesimally few caching servers are configured to validate responses and a goodly portion of the caching servers that people use either do not now support DNSSEC (e.g., Microsoft’s DNS server, PowerDNS, OpenDNS, etc.) or will never (according to the author) support DNSSEC (e.g., djbdns);
c) even if every zone on the planet were signed and trust anchors were appropriately configured and maintained, the mechanism by which validation failure is returned to the end user is indistinguishable from a variety of network problems for the vast majority of applications. As a result, an ISP turning DNSSEC on will likely be subject to a flood of expensive support calls, greatly encouraging that ISP to turn DNSSEC off.
That is not to say that I wish to discourage you from tilting at that particular windmill (after all, any journey starts with a single step and all of the above can be fixed with sufficient effort), but there is a lot more to seeing DNSSEC usefully deployed than “signing the root”. Further, as you well know, the shorthand “sign the root” means quite a bit more than running dnssec-signzone over the root zone data and it is simply silly to assume ICANN is or even should be in a position to undertake the steps to “sign the root” unilaterally.
What I do know is that if ICANN and USG had the political will to make this happen, it would happen.
While I know in some circles it is considered a fun sport to bash ICANN, asserting ICANN doesn’t have the political will to see the root signed is both wrong as well as somewhat insulting to the folks at IANA and ICANN who have spent considerable amount of time, resources, and energy to see forward motion.
Our windows admin claims they are recently updated. I’ve tested the AD server providing caching DNS service and zone transfer from our bind, and it responds to the test like so:
dig +short porttest.dns-oarc.net in txt @adc1
z.y.x.w.v.u.t.s.r.q.p.o.n.m.l.k.j.i.h.g.f.e.d.c.b.a.pt.dns-oarc.net.
There is no GOOD or POOR. What does this mean?
Hard to say exactly what it means. Perhaps the reply was blocked for some reason. Or maybe the test timed out. You can try 'dig' without the '+short' option. Also you can try our new web-based test: https://www.dns-oarc.net/oarc/services/dnsentropy
Since July 9, I’ve been trying to get clarity from many people on this question, and have tried to infer the facts from the publicly available information. But after more than 8 days, I just really have to ask this question here directly. Either no one is answering it because the answer is too obvious, or maybe because it would cause mass panic. (If it would cause mass panic, please delete this e-mail!!)
My question:
Could you please tell me whether Dan’s DNS protocol vulnerability applies to the following scenario?
Suppose we have these nodes in a DNS request path.
1. A desktop computer (on a LAN) with a typical internet application, like a browser.
2. An ADSL (or other kind of) router, which contains a DNS server which fetches DNS translations for the computers on the LAN.
3. Two IP addresses supplied by the ISP for domain name resolution. The customer premises router gets its translations by sending requests to these two nodes.
4. An IP host belonging to the ISP which runs a nicely patched DNS server which uses random UDP source ports to request DNS translations from the world-wide DNS.
5. The world-wide domain name system.
In this set-up, typically the ISP’s node 4 is fine. (This is the DNS server whose IP address shows up in the dig porttest.dns-oarc.net test when dig is run against the ADSL router, node 2, on the LAN-facing interface.)
It seems to me that since the application software in node 1 will use the S-NAT-translated source IP address of node 2 (the router) for its application requests (like on TCP port 80), and node 2 contains a DNS server which is not using randomized UDP source ports, then the router must surely be fully vulnerable to the worst of whatever Dan has in store for us. Even though the DNS requests from this router normally only go to the ISP’s two hosts (nodes 3), surely the attacker can mess around with that router to the maximum extent permitted by the vulnerability. The attacker can send DNS traffic to the router’s internet-facing interface.
In other words, even though node 1 is using safe node 4 for the last hop of the DNS request chain before hitting the wide open DNS (nodes 5), the intermediate node 2 can be poisoned, and the ISP’s virtuous patching will have been to no avail.
Question 2:
If the above scenario is correct, does this imply that hundreds of millions of routers of users at home or in small offices will be toast on 7 August?
Just curious…
yes and no. the worst case is when you not only believe something that’s not true, but also remember it and pass it on for a while until you eventually forget it. in the scenario you’re outlining, the ADSL router isn’t a full DNS server, it just SNATs to the ISP’s fully-randomizing DNS server. so, yes, your ADSL router is vulnerable. but since it has no cache, it’s not a very fat target.
“toast” isn’t exactly the right word for most of them. try “garlic bread.” however, some of them are running RDNS (patched or not) inside their SNAT, which will de-randomize their UDP port numbers, and they are toast. Others are running RDNS (with caching) in unpatchable SOHO routers, and they are also toast. but i’d say hundreds of thousands, not hundreds of millions, since those configurations aren’t all that common.
So, is it possible that one might pass the above “dig” port test (i.e. receive a “GOOD” or “FAIR” result), yet have a false sense of security because one is still vulnerable to the DNS cache poisoning attack?
Or is passing that test sufficient to sleep reasonably well at night? If it’s not a sufficient test (i.e. it catches only a subset of all vulnerable people), perhaps a more exhaustive test is required?
George, yes, unfortunately, the false sense of security is a possibility, for three reasons:

1) Your DNS queries may go through more than one resolver. All resolvers in the path should be patched to be safe(r) from this vulnerability.

2) Your DNS queries may go through a NAT device that does not preserve source port randomness.

3) The porttest server calculates standard deviation, which does not necessarily equate to randomness. We think it is a good indicator, but it is not perfect.

For 1) and 2), the porttest response will tell you what IP address it received queries from. So if that address doesn't match where you sent queries to, you know that they are going through either NAT or another resolver. For 3), you can also check out https://www.dns-oarc.net/oarc/services/dnsentropy, which will present the results graphically so that you can see the actual port distribution.
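The first two checks are easy to automate. Here is a small Python sketch (assuming the TXT reply format shown in the dig transcripts on this page, e.g. "71.252.0.38 is POOR: 26 queries ...") that extracts the rating and flags a NAT or intermediate resolver when the reported source address differs from the server you queried:

```python
def check_porttest_reply(queried_ip: str, txt_reply: str):
    """Parse a porttest TXT reply and flag middleboxes.

    txt_reply is expected to look like:
      "71.252.0.38 is POOR: 26 queries in 2.2 seconds from 21 ports ..."
    Returns (ip_seen_by_server, rating, behind_middlebox).
    """
    seen_ip, _, rest = txt_reply.partition(" is ")
    rating = rest.split(":", 1)[0]
    # If the server saw queries from a different address than the one
    # we queried, a NAT or another resolver sits in the path.
    behind_middlebox = (seen_ip != queried_ip)
    return seen_ip, rating, behind_middlebox
```

For example, querying 71.252.0.12 but getting a reply naming 71.252.0.38 (as in the Verizon transcript below) means the answer came via another hop, so the rating may not describe the box you think it does.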
My question is basically the flip of some that have been asked (if I understand correctly). If I run the test on my own nameserver (CentOS 5, bind-9.3.4-6.0.2.P1.el5_2, 12-Jul-2008 12:46), I get a GOOD, which is good:
dig @72.83.159.115 porttest.dns-oarc.net in txt
porttest.dns-oarc.net. 60 IN CNAME z.y.x.w.v.u.t.s.r.q.p.o.n.m.l.k.j.i.h.g.f.e.d.c.b.a.pt.dns-oarc.net.
z.y.x.w.v.u.t.s.r.q.p.o.n.m.l.k.j.i.h.g.f.e.d.c.b.a.pt.dns-oarc.net. 60 IN TXT “72.83.159.115 is GOOD: 26 queries in 2.3 seconds from 26 ports with std dev 14019.46”
But if I use my provider (Verizon) nameserver I get POOR, which isn’t good:
dig @71.252.0.12 porttest.dns-oarc.net in txt
porttest.dns-oarc.net. 60 IN CNAME z.y.x.w.v.u.t.s.r.q.p.o.n.m.l.k.j.i.h.g.f.e.d.c.b.a.pt.dns-oarc.net.
z.y.x.w.v.u.t.s.r.q.p.o.n.m.l.k.j.i.h.g.f.e.d.c.b.a.pt.dns-oarc.net. 60 IN TXT “71.252.0.38 is POOR: 26 queries in 2.2 seconds from 21 ports with std dev 17.55”
Does that mean I’ve done all I can to my machine but I need to bug Verizon? I’d think this is the sort of thing Verizon would care about and would deal with pretty quickly.
Henry, your analysis is correct. The nameserver at 71.252.0.12 needs to be upgraded/patched. Hopefully they will get it done soon. If not you may be able to configure your systems to use your local nameserver and bypass the ISP nameserver entirely.
So, I have 2 name servers, both running bind 9.5.0-p1, yet they both are rated “POOR: 26 queries in 3.0 seconds from 1 ports with std dev 0.00”... What now? I ran the test from the machines themselves…. Thanks.
If you upgraded your BIND9 to one of the -P1's and you're still seeing "POOR" from "dig porttest.dns-oarc.net in txt" then it's either because you still have "query-source" set to a single port in your named.conf (in which case your syslog should be warning you about this) or because you are behind a de-randomizing NAT of some kind.
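For reference, the named.conf setting in question looks like the first fragment below; removing the fixed port clause (or the whole directive) lets a patched BIND 9 choose a random port per query. (A sketch of the two configurations, not a complete named.conf.)

```conf
// BAD: pins every outgoing query to one source port,
// defeating the -P1 randomization patch.
options {
    query-source address * port 53;
};

// GOOD: omit the port clause so patched BIND 9 picks a
// random source port for each query.
options {
    query-source address *;
};
```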
Thanks, Paul. the 'query-source' is not set (I understand that default is random) and I don't *think* I'm behind a NATting device, but I'll check... Thanks again... Steve
Steve, If you use http://entropy.dns-oarc.net/test/ it will show you the actual port numbers received by the server. DW
Thanks, Duane. BTW... what do you mean by 'use' the link? Sorry, sometimes things have to be spelled out in order to make sense... Thanks...
I mean enter the URL into your web browser Location bar
That's what I thought, but wanted to be sure... I've tried that numerous times but I'm guessing it's extremely busy... I can't get to it... Thanks again, Duane.. I appreciate your time... Steve