Previous posts (Part 1 and Part 2) offer background on DNS amplification attacks being observed around the world. These attacks continue to evolve. Early attacks focused on authoritative servers using "ANY" queries for domains that were well known to offer good amplification. Response Rate Limiting (RRL) was developed to respond to these early attacks. RRL, as the name suggests, is deployed on authoritative servers to rate limit identical responses sent toward the same apparent source, blunting the amplification these attacks depend on.
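To make that concrete, here is a minimal sketch of how RRL can be switched on in BIND's named.conf, assuming BIND 9.9.4 or later, where the rate-limit block is built in; the values shown are illustrative starting points, not tuned recommendations:

```
options {
    rate-limit {
        responses-per-second 5;  // identical responses allowed per second, per client netblock
        window 5;                // seconds over which the rate is averaged
        slip 2;                  // every 2nd dropped response becomes a truncated
                                 // reply, so legitimate clients can retry over TCP
    };
};
```

The slip setting is the key design trade-off: instead of silently dropping everything over the limit, the server occasionally answers with a truncated response, which a real client will follow up over TCP while a spoofed victim simply ignores it.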
There are some real problems in DNS, related to the general absence of Source Address Validation (SAV) on many networks connected to the Internet. The core of the Internet is aware of destinations but blind to sources. If an attacker on ISP A wants to forge the source IP address of someone at University B when transmitting a packet toward Company C, that packet is likely to be delivered complete and intact, including its forged IP source address. Many otherwise sensible people spend a lot of time and airline miles trying to improve this situation... The problems created for the Domain Name System (DNS) by the general lack of SAV are simply hellish.
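To make the SAV idea concrete, here is a toy Python sketch of the BCP 38-style ingress check an edge network could apply. This is not any real router's implementation, and the prefix and addresses are documentation ranges used purely for illustration:

```python
import ipaddress

# A toy model of Source Address Validation (BCP 38) at a network edge:
# only forward packets whose source address falls inside the prefixes
# delegated to the attached customer network.
CUSTOMER_PREFIXES = [ipaddress.ip_network("203.0.113.0/24")]  # illustrative prefix

def passes_sav(source_ip: str) -> bool:
    """Return True if the packet's source address is one the customer
    network is actually entitled to use."""
    addr = ipaddress.ip_address(source_ip)
    return any(addr in net for net in CUSTOMER_PREFIXES)

print(passes_sav("203.0.113.7"))   # True: legitimate customer source
print(passes_sav("192.0.2.99"))    # False: forged source, dropped at the edge
```

If every edge network applied a check like this, a forged DNS query could never leave its origin network and reflection attacks would lose their fuel.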
Last week, The New York Times website domain was hacked by the "Syrian Electronic Army". Other famous websites faced the same kind of attack in 2012 from the hacker group "UGNazi" and in 2011 from Turkish hackers. Basically, it seems that no registrar on the Internet is safe from attack, but the launch of new gTLDs may offer new ways to mitigate these attacks.
This post follows an earlier post about DNS amplification attacks being observed around the world. DNS amplification attacks are occurring regularly, and even though they aren't generating headlines, targets have to deal with floods of traffic and ISP infrastructure is needlessly stressed -- load balancers fail, network links get saturated, and servers get overloaded. And far more intense attacks can be launched at any time.
Since Verisign published its second SSR report a few weeks back, recently updated with revision 1.1, we've been taking a deeper look at queries to the root servers that elicit "Name Error" (NXDomain) responses, and we figured we'd share some preliminary results. Not surprisingly, promptly after publication of the Interisle Consulting Group's Name Collision in the DNS [PDF] report, a small number of the many who are impacted are aiming to discredit the report.
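As a concrete illustration of what such a "Name Error" looks like from a resolver's point of view, here is a small sketch assuming dnspython is installed; the queried name is deliberately fictitious, standing in for the kind of internal-style suffix that has no delegation in the root zone:

```python
import dns.resolver

# A name under a suffix that is not delegated in the root zone produces
# the "Name Error" (NXDomain) response code studied in the report.
try:
    dns.resolver.resolve("intranet.example-internal.", "A")
except dns.resolver.NXDOMAIN:
    print("NXDomain: the public DNS has no such name")
```

Queries like this leak from private networks to the root servers constantly, which is exactly why the volume and origin of NXDomain traffic matters when judging name-collision risk for new gTLDs.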
Geoff Huston's recent post about the rise of DNS amplification attacks offers excellent perspective on the issue. Major incidents like the Spamhaus attack Geoff mentions at the beginning of his post make headlines, but even small attacks create noticeable floods of traffic. These attacks are easy to launch and effective even with relatively modest resources, and we see evidence that they're occurring regularly. Although DNS servers are not usually the target of these attacks, the increase in traffic and larger response sizes typically stress DNS infrastructure and require attention from operations teams.
One of the most prominent denial-of-service attacks in recent months was one that occurred in March 2013 between Cloudflare and Spamhaus... How did the attackers generate such massive volumes of attack traffic? The answer lies in the Domain Name System (DNS). The attackers asked about domain names, and the DNS system answered. Something we all do all of the time on the Internet. So how can a conventional activity of translating a domain name into an IP address be turned into a massive attack?
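The leverage comes from asymmetry: a query of a few dozen bytes can elicit a response of several kilobytes, and with a forged source address all of that response traffic lands on the victim. A short sketch, assuming dnspython and a server you operate (the server IP below is a placeholder), shows how to compare query and response sizes for a single lookup:

```python
import dns.message
import dns.query
import dns.rdatatype

# Measure the amplification factor of one query/response pair.
# 192.0.2.53 is a placeholder; only test servers you operate.
query = dns.message.make_query("example.com", dns.rdatatype.ANY, want_dnssec=True)
query.use_edns(0, payload=4096)  # EDNS0 permits large UDP responses

response = dns.query.udp(query, "192.0.2.53", timeout=2)

qlen, rlen = len(query.to_wire()), len(response.to_wire())
print(f"query: {qlen} bytes, response: {rlen} bytes, amplification: x{rlen / qlen:.1f}")
```

A query of roughly 64 bytes that draws a DNSSEC-signed response of around 3,000 bytes is close to a 50x multiplier, which is why a modest botnet sending spoofed queries can produce attack floods in the hundreds of gigabits per second.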
When the domain name system (DNS) was first designed, security was an afterthought. Threats simply weren't a consideration at a time when merely carrying out a function - routing Internet users to websites - was the core objective. As the weaknesses of the protocol became evident, engineers began to apply a patchwork of fixes. After several decades, it is now apparent that this reactive approach to DNS security has caused some unintended consequences and challenges.
Throughout this series of blog posts we've discussed a number of issues related to the security, stability, and resilience of the DNS ecosystem, particularly as we approach the rollout of new gTLDs. We also highlighted a number of issues that we believe are outstanding and need to be resolved before new gTLDs can be introduced safely, and we tried to provide some context as to why, noting throughout that nearly all of these unresolved recommendations have come from parties in addition to Verisign over the last several years.
ICANN continues to flail, pointlessly. The latest in a series of missteps that could easily have been avoided is its set of recommendations on what to do about a report on the potential for confusion and misaddressing when an organization's internal network names match the name of a new gTLD, and its routers and/or DNS are misconfigured to the extent that someone typing in a new gTLD name might end up in the middle of someone else's network.
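The mechanics are easy to sketch. Here is a hypothetical Python fragment showing how an unqualified internal hostname, combined with a configured DNS search suffix, produces exactly the kind of query that collides once the matching string is delegated as a gTLD; "corp" is used purely as an example string:

```python
# Resolvers append a configured search suffix to unqualified names, so a
# short internal hostname becomes a fully qualified query. If the suffix
# is later delegated as a public gTLD, leaked queries land in that zone.
SEARCH_SUFFIX = "corp"  # illustrative internal suffix

def qualify(name: str) -> str:
    """Mimic search-list processing for single-label names."""
    return name if "." in name else f"{name}.{SEARCH_SUFFIX}"

print(qualify("mail"))  # "mail.corp": answered internally today, but sent
                        # to the public .corp zone if that string is delegated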