When the domain name system (DNS) was first designed, security was an afterthought. Threats simply weren’t a consideration at a time when merely carrying out a function—routing Internet users to websites—was the core objective. As the weaknesses of the protocol became evident, engineers began to apply a patchwork of fixes. After several decades, it is now apparent that this reactive approach to DNS security has caused some unintended consequences and challenges.
Steps to Secure DNS Data
The first improvements to DNS security aimed to make the data safer, such as early efforts to add more information about data sources. Later, DNSSEC (DNS Security Extensions) was developed to make forging data much more difficult. One unintended consequence, which wasn’t a concern at the time, is that responses to DNS queries now contain more data than ever before. The disparity between response size and query size became considerable.
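To make that disparity concrete, here is a minimal sketch, assuming the third-party dnspython package and the public resolver 8.8.8.8 (both illustrative choices, not part of the original discussion). It compares the wire size of a small DNSSEC-enabled query against the wire size of the answer it triggers:

```python
import dns.message
import dns.query

# Ask for example.com's DNSKEY records with EDNS0 enabled (so the server
# may send a large UDP response) and the DO bit set to request signatures.
query = dns.message.make_query(
    "example.com", "DNSKEY", use_edns=0, payload=4096, want_dnssec=True
)
response = dns.query.udp(query, "8.8.8.8", timeout=5)

query_size = len(query.to_wire())
response_size = len(response.to_wire())
print(f"query: {query_size} bytes, response: {response_size} bytes")
print(f"amplification: {response_size / query_size:.1f}x")
```

Against a zone with large keys and signatures, the response can be many times larger than the query that produced it.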
Steps to Secure the System
While initial steps were being taken to secure DNS data, the system itself gained greater capacity. That is, it was over-provisioned to ensure availability and resilience. The cost of computers and bandwidth fell, enabling organizations to deploy more and more servers, and the servers themselves were becoming more powerful. This helped prevent servers from being flooded with queries, whether from legitimate users or from attackers launching DDoS attacks.
Progress was being made. But…
The Solutions Become a Problem
Recently, DDoS attackers have capitalized on the way DNSSEC amplifies the size of query responses and have used the system’s enormous capacity to focus packet floods on targets. In these attacks, known as reflection attacks, an attacker sends a small DNS query whose source address has been forged to be the victim’s; the server answers with a large, DNSSEC-fueled response that goes not to the attacker but to the falsified return address. Repeated across many DNS servers in parallel, this produces a traffic pattern that is barely noticeable to any one DNS server but has severe consequences for the victim. The victim’s servers are overwhelmed and websites go offline, causing revenues, customer service and brand reputation to suffer.
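A rough back-of-the-envelope calculation shows why this asymmetry is so attractive to attackers. The numbers below are illustrative assumptions, not measurements:

```python
# Illustrative figures: a small spoofed query, a large DNSSEC-signed
# response, and an attacker controlling a modest amount of bandwidth.
query_size = 64          # bytes sent by the attacker per query
response_size = 3_200    # bytes the DNS server sends to the victim
attacker_bps = 100e6     # attacker's upstream capacity: 100 Mbit/s

amplification = response_size / query_size
victim_bps = attacker_bps * amplification
print(f"amplification: {amplification:.0f}x")
print(f"traffic arriving at victim: {victim_bps / 1e9:.0f} Gbit/s")
```

With numbers like these, 100 Mbit/s of spoofed queries becomes several Gbit/s of traffic at the victim, while each reflecting server sees only a trickle.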
No Good Deed Goes Unpunished
To recap: while the DNS has been hardened for security, it has also become a global, high-capacity, highly reliable utility for generating attack traffic. Much like an electric utility generates power for cities, the DNS can generate immense amounts of traffic to flood victims.
So, What’s Next?
We can’t undo the improvements to DNS security. If we did, the DNS would once again be an untrustworthy and unreliable source of data. To move forward, we must figure out a way to eliminate reflection attacks while continuing our security strategies. Or rethink those strategies altogether.
On the horizon: expanding the role TCP (Transmission Control Protocol, one of the core protocols of the Internet) plays in the DNS. Historically, expansion has been a taboo subject because TCP doesn’t make much sense for DNS, but it removes most of the effectiveness of reflection attacks: TCP’s handshake requires the client to respond from the address it claims, so a query with a forged source address never completes the connection and the large answer is never delivered to a victim. While there are reasons not to use TCP—another topic for another blog post—the fact is that DNS is already defined to operate over TCP, despite years of building devices that assume it doesn’t. This is a subject worth experimenting with, but it’s not a certain solution.
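For comparison with the UDP sketch earlier, here is the same query carried over TCP, again assuming dnspython and the illustrative resolver 8.8.8.8:

```python
import dns.message
import dns.query

query = dns.message.make_query(
    "example.com", "DNSKEY", use_edns=0, payload=4096, want_dnssec=True
)
# The TCP three-way handshake happens inside dns.query.tcp. A client
# spoofing someone else's source address would never get past it, so
# the large response cannot be reflected onto a victim.
response = dns.query.tcp(query, "8.8.8.8", timeout=5)
print(f"response over TCP: {len(response.to_wire())} bytes")
```

The response is just as large, but it only ever flows back to the host that actually completed the connection.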
Other options exist, but it is too early to even begin to describe them. New topologies and new arrangements are in the works. Other changes to the protocol are being considered. Where we go next is anyone’s guess.
One of the biggest things that could be done is for ISPs to filter outbound traffic. DNS reflection attacks only work if you can forge the source of the query to appear to be the victim. It’s impractical for the DNS server’s network to filter queries for validity, but your ISP knows which IP network numbers it operates. It can apply a standard outbound filter: drop all packets with a source address not in a network the ISP operates. This is complex to do for interior networks that handle transit for many other networks, but it’s much more feasible at the edges, where ISPs serve mostly either their own networks or clients whose networks don’t carry transit traffic. Applying sanity filters like that would go a long way toward eliminating the nastiest attacks.
Similarly, inbound filtering should be standard. Again, it’s too complex to apply on interior interfaces, but near the edges there are many interfaces to non-transit ISPs (or ones that handle transit only for a known set of non-transit clients). Even if such an ISP doesn’t apply an outbound filter, the network it connects to can apply an inbound filter that drops any traffic from the ISP that doesn’t originate from a network the ISP is responsible for. This helps deal with rogue networks that don’t filter outbound. A sketch of the address check both directions rely on follows below.
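As a toy illustration of that check, here is a sketch using Python’s standard ipaddress module. The prefixes are placeholder documentation ranges, and a real ISP would implement this in router hardware rather than in code like this:

```python
from ipaddress import ip_address, ip_network

# Prefixes this ISP actually operates or provides transit for
# (placeholder documentation ranges, purely illustrative).
OPERATED_PREFIXES = [
    ip_network("198.51.100.0/24"),
    ip_network("203.0.113.0/24"),
]

def permit_outbound(source_ip: str) -> bool:
    """Permit a packet only if its source address belongs to the ISP."""
    addr = ip_address(source_ip)
    return any(addr in net for net in OPERATED_PREFIXES)

print(permit_outbound("198.51.100.7"))  # True: legitimate customer traffic
print(permit_outbound("192.0.2.55"))    # False: forged source, dropped
```

The inbound case is the same test applied from the other side of the link: the neighboring network drops anything arriving from the ISP whose source address falls outside the prefixes the ISP is responsible for.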
This filtering has been promoted for a long time by a lot of people, without much to show for their efforts. Yes, ISPs should, but they don’t. That leaves DNS operators with a choice: keep shouting “deploy BCP 38,” or (at least try to) do something within their own control.