
Can We Stop IP Spoofing? A New Whitepaper Explores the Issues

In March 2013, Spamhaus was hit by a significant DDoS attack that made its services unavailable. The attack traffic reportedly peaked at 300Gbps, with hundreds of millions of packets hitting network equipment along the way. In Q1 2015, Arbor Networks reported a 334Gbps attack targeting a network operator in Asia. In the same quarter they also saw 25 attacks larger than 100Gbps globally.

What is really frightening is that such attacks were relatively easy to mount. Two things made these attacks possible: bots with the ability to spoof the source IP address (setting it to the IP address of a victim) and “reflectors”—usually open DNS resolvers. A well-selected DNS query can offer 100-fold amplification, meaning that an attacker needs to generate queries totalling only 3Gbps to create a merged flow of 300Gbps. A relatively small set of clients can accomplish this.
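
To put the amplification arithmetic in concrete terms, here is a minimal back-of-the-envelope sketch in Python. The 100-fold factor is the figure quoted above; real amplification factors vary with query type and resolver configuration.

```python
# Back-of-the-envelope illustration of reflection/amplification arithmetic.
# The 100x factor is the rough figure quoted above; real factors vary by
# query type and resolver configuration.

def attacker_bandwidth_needed(victim_gbps: float, amplification: float = 100.0) -> float:
    """Raw query bandwidth (in Gbps) an attacker must generate to produce
    victim_gbps of reflected traffic at the victim."""
    return victim_gbps / amplification

for victim_gbps in (100, 300, 334):
    needed = attacker_bandwidth_needed(victim_gbps)
    print(f"A {victim_gbps} Gbps flood needs only ~{needed:.1f} Gbps of spoofed queries")
```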

Of course there are DDoS attacks that do not use these two components; they hit the victim directly from many globally distributed points. But they are traceable and two orders of magnitude more difficult and expensive to accomplish.

Mitigating the reflection component of the attack is one way of addressing the problem. As reported by the Open Resolver Project, in the last two years the number of open DNS resolvers has dropped by almost half—from 29 million to 15 million. However, there are other types of amplifying reflectors, NTP and SSDP among them, and even TCP-based servers (such as web or FTP servers) can reflect and amplify traffic.

And reflectors are just the accomplices. The root cause of the reflection attacks lies in the ability to falsify, or spoof, the source IP address of outgoing packets. As Paul Vixie put it, “Nowhere in the basic architecture of the Internet is there a more hideous flaw than in the lack of enforcement of simple SAV (source-address validation) by most gateways.”
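
As a rough illustration of what source-address validation amounts to, here is a minimal sketch in Python of the BCP38-style check a network's edge would apply to outgoing packets. The prefix 192.0.2.0/24 is a documentation range used purely as a stand-in for "the addresses this network is authorised to use"; real deployments implement this in router ACLs or unicast RPF, not application code.

```python
import ipaddress

# Stand-in for the prefixes actually assigned to this (hypothetical) stub network.
LEGITIMATE_PREFIXES = [ipaddress.ip_network("192.0.2.0/24")]

def permit_outbound(source_ip: str) -> bool:
    """Return True only if an outgoing packet carries a source address
    this network is authorised to use (BCP38-style egress filtering)."""
    addr = ipaddress.ip_address(source_ip)
    return any(addr in prefix for prefix in LEGITIMATE_PREFIXES)

assert permit_outbound("192.0.2.42")          # our own address: forward it
assert not permit_outbound("198.51.100.7")    # spoofed victim address: drop it
```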

Tackling this problem is hard. The lack of deployment of anti-spoofing measures is aggravated by the fact that their implementation is incentive-misaligned: a network that deploys them mainly helps other networks, not itself. There are also real costs and risks in implementing anti-spoofing measures.

In February 2015, a group of network operators, security experts, researchers and vendors met at a roundtable meeting organized by the Internet Society with the goal of identifying the factors that aggravate or help solve the problem, and paths to improve the situation going forward.

The main conclusion is that there is no silver bullet for this problem: if we want to make substantive progress, it has to be addressed from many angles. BCP38, which is sometimes promoted as *the solution* to the problem, is just one tool, and one that is not effective in some cases. Measurements, traceability, deployment scenarios and guidance, as well as possible incentives, communication and awareness, are among the areas identified by the group where a positive impact can be made.

For example, measurements and statistically representative data are very important; if we want to make progress, we need to be able to measure it—an ability that is currently missing to a great extent.

Another recommendation that came out of this meeting was the possibility of anti-spoofing by default. The only place where anti-spoofing can really be done is the edge of the network (or as close to it as possible). Addressing the challenge at the edge seems possible only with automation and with anti-spoofing measures switched on by default.

Read more about this in a whitepaper that contains the main takeaways from that discussion and articulates possible elements of a comprehensive strategy for addressing the source IP address spoofing challenge.

I ask you to read the whitepaper and, ultimately, to deploy these anti-spoofing technologies in your own network. Can you do your part to prevent DDoS attacks? And if you are willing to do your part, how about signing on to the Mutually Agreed Norms for Routing Security (MANRS) and joining other members of the industry to make a more secure Internet?

This post was previously published on the Internet Society’s blog.

By Andrei Robachevsky, Senior Technology Programme Manager at Internet Society


Comments

Todd Knarr  –  Sep 10, 2015 6:27 PM

Ever since I configured my first always-on Internet connection with a firewall, I’ve felt that ingress and egress filtering like this should be standard where it’s technically feasible. It benefits my network directly in that I don’t have to handle irate calls/emails from other network admins about infected machines on my network causing trouble, and the task of cleaning up the infections (and dealing with the users responsible, who inevitably think they can’t possibly be responsible) doesn’t come out of the blue stamped with an emergency/ASAP priority. For me that’s worth it.

It really ought to be part of the interconnection contract too, to protect the central networks that carry so much transit traffic it’s not technically feasible to filter. The problem seems to be that network operators are allergic to disconnecting other networks for fear of impacting people who aren’t responsible for the problem. But that impact’s what gets the attention of network operators who’d otherwise ignore the problems on their networks. With many consumers using routers supplied by their ISP, it should be straightforward for the ISP to manage the filtering for them.

Not what you think C Drake  –  Sep 10, 2015 10:33 PM

The Spamhaus “attack” is almost all media hype. Their providers published their MRTG logs to back up their claim that the attack was in fact so small it did not register on any graphs at all. It merely interrupted the Spamhaus *web servers* for a few hours - that’s it. Spamhaus itself was never unavailable (a fact they “proudly” hyped up at the time as well - since you’re suggesting they were taken out, it looks like this original fake news story has begun its own false evolution as well!).

Unfortunately, fake claims like “biggest ever DDoS” are pure marketing and publicity gold, so this bogus one will live on forever.

Andrei Robachevsky  –  Sep 11, 2015 1:30 PM

Yes, sometimes media report such incidents too enthusiastically, and the measurement methodologies may not always be totally accurate. The point of providing this example, however, was to highlight how easy it is to generate traffic exceeding several tens of Gbps due to source IP spoofing.

The point is that such DDoS attacks are EASY to mount Dan York  –  Sep 11, 2015 1:39 PM

C Drake - I think the point is not so much that the Spamhaus attack was big, but rather that attacks like that one and the other one mentioned are so easy to mount because IP address spoofing is possible. If we can get more source address validation happening on the gateways of local networks, we'd take away fuel that feeds these online fires.

Christopher Parente  –  Sep 16, 2015 8:46 PM

I’m not an engineer, but I have worked in this vertical for a long time. I remember an SSAC report from early 2014 that laid out steps to take. Beyond a call to action, which is certainly needed, does this whitepaper add substantially to those suggestions?
SSAC Report

What is the added value? Andrei Robachevsky  –  Sep 17, 2015 10:16 AM

Thank you, Christopher, for pointing to this useful report. While the paper does not present the solution to the problem, it does outline areas where impact can be made beyond the BCP38 and uRPF technologies: specifically, credible data allowing us to measure progress, and anti-spoofing by default, which facilitates the deployment of these solutions at the edge.

Thanks for reply Christopher Parente  –  Sep 17, 2015 12:46 PM

You bet, Andrei, and happy to share your post with my followers.

Retro-fitting source verification into IP The Famous Brett Watson  –  Sep 20, 2015 12:46 PM

I looked into the problems arising from forged source addresses as part of my PhD thesis a few years back. In section 7.3, I suggested that it may be a good idea to invest in a “verified internet protocol”: a layer on top of IP which provides the same kind of service, but uses a cookie exchange (in the style of Photuris, SYN Cookies, and other protocols that have since adopted the practice) to verify that the sender can receive packets at its alleged source address. At the moment this kind of verification is done on a protocol-by-protocol basis at higher layers, and it makes sense to factor it out into a lower layer. I didn’t pursue it any further at the time, because new protocols of that sort are notoriously hard to push into adoption, and I had to get busy earning an income again in any case.

It strikes me, though, that the same effect could be achieved in a more incremental manner via ICMP and an IP option. Speaking in terms of IPv4, a cookie exchange could be performed between two IP end-points using an ICMP request/reply pair in the style of echo request/reply or timestamp request/reply: the initiating host sends a cookie request, and the target host responds with a cookie reply containing some octets specific to that IP source. On receipt of this, the initiating host can start transmitting IP packets with those octets in a cookie option header field. The target host can then discriminate between incoming IP packets which are source-verified (contain a valid cookie option) and those which aren’t.

This behaviour can be applied opportunistically: a host which implements the cookie mechanism need not wait for the ICMP response before transmitting IP packets—it can simply start adding the option field if and when it receives an expected cookie reply. Cookies would be subject to a TTL in the style of DNS records, and the sub-system responsible for maintaining cookies can request a new cookie if the current one is about to expire, and there is still active packet exchange between the hosts. Only the requesting party needs to bear a memory cost for cookie storage: the responder can compute the cookie using a message authentication code (MAC). Verification need not be performed in both directions: given that the verification status is of most importance on initial contact between hosts, only session-establishing kinds of actions need to trigger cookie exchange; reply-like actions may continue to operate as they do now.
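
For illustration, a minimal sketch of the stateless cookie computation described above, assuming HMAC-SHA-256 truncated to six octets as the MAC and a periodically rotated local secret; the ICMP request/reply exchange and the IP option carrying the cookie are left out.

```python
import hmac, hashlib, os, ipaddress

# Short-lived local secret; rotated well within the cookie TTL so a leaked or
# derived cookie has limited value.
SECRET = os.urandom(16)

def make_cookie(source_ip: str) -> bytes:
    """Cookie the responder would return in the (hypothetical) ICMP cookie reply.
    No per-peer state: it is recomputed from the claimed source address."""
    packed = ipaddress.ip_address(source_ip).packed
    return hmac.new(SECRET, packed, hashlib.sha256).digest()[:6]

def verify_cookie(source_ip: str, cookie: bytes) -> bool:
    """Check the cookie option on a packet claiming to come from source_ip."""
    return hmac.compare_digest(make_cookie(source_ip), cookie)

# A host that really receives traffic at 192.0.2.10 can echo its cookie back;
# a spoofer that never sees the reply has to guess.
cookie = make_cookie("192.0.2.10")
assert verify_cookie("192.0.2.10", cookie)
assert not verify_cookie("192.0.2.10", b"\x00" * 6)
```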

It would take some analysis to determine exactly how useful a system like this could be, applied opportunistically. It’s not going to be a silver bullet for anything, unfortunately. New protocols, however, could make use of the facility by requiring that certain packets only be sent when a target IP cookie is available (with the recipient granted latitude to ignore any non-compliant message). That could be a proper solution to source IP forgery in the longer term.

Any takers?

Todd Knarr  –  Sep 20, 2015 5:37 PM

No. The most common protocol for DDoS is UDP, and requiring a round-trip exchange for the cookie before the UDP request can be sent would double the overhead instantly. That makes it infeasible right off.

In addition it opens up yet another path for DDoS: sending large numbers of cookie requests causing the spoofed target to receive a large number of cookie responses. That it can discard them isn’t useful because they still use up incoming bandwidth. And if the cookie can be computed by the receiver from the contents of the packet, then it becomes trivial for the spoofing host to forge valid cookies because it knows the content of the packets it’s sending. You’d need the receiving host to remember cookies (e.g. the way it works for HTTP, where the server maintains the list of which cookie is valid for every client), which would open up yet another way for a malicious sender to target a host: sending a large number of spoofed cookie requests from a large number of different sources, causing the receiver to exhaust its cookie storage.

A defense that opens up more avenues of attack is useless; the malicious parties don’t care how or why they tie up a target’s resources as long as they tie up enough of them to make the target unable to operate normally. It becomes more and more difficult to handle things correctly the further away from the source you get when you’re dealing with malicious behavior from a host. Source address filtering has the virtue of being applied at a point where it’s possible to positively determine the packets can’t be legitimate and to stop them before any other host has to deal with them. Hosts and networks other than the source don’t have to take on any additional overhead nor handle issues related to packet loss during the verification process.

Give me a little credit, please The Famous Brett Watson  –  Sep 21, 2015 10:53 AM

I think your response is a little knee-jerky, as I did anticipate objections like these. Allow me to point out the bits you clearly glossed over on a first reading and explain them further.

“The most common protocol for DDoS is UDP, and requiring a round-trip exchange for the cookie before the UDP request can be sent would double the overhead instantly.”
It's opportunistic, so no round trip delay is involved. Note that I said, "a host which implements the cookie mechanism need not wait for the ICMP response before transmitting IP packets — it can simply start adding the option field if and when it receives an expected cookie reply." This behaviour is necessary in the general case, as you won't know whether the peer even implements the option. As such, unless the host already has a cookie, the first IP packet sent isn't going to have a validation cookie. In most cases, this shouldn't be a problem: after all, that's how all packets are sent now. However, if the target server is throttling requests due to a suspected attack, then this request might be dropped and require retransmission. By then, the cookie response will probably have arrived, and the next packet can be sent with verification in place. The server can reply to such a packet with confidence that it's not part of a reflector attack -- different grades of service for different packets, you see.
“In addition it opens up yet another path for DDoS: sending large numbers of cookie requests causing the spoofed target to receive a large number of cookie responses.”
You can do that with any protocol which solicits a response, which is just about all of them. It only becomes a significant problem when the response exceeds the request size, in which case the protocol becomes a packet amplifier. This is why DNS is such a juicy target: it's relatively easy to craft a small request which solicits a large response. In the case of this protocol, even a naive implementation is only going to have a response size which exceeds the request size by the cookie length, and something like six octets should suffice for that. If you wanted to ensure that the reply was no bigger than the request, you could have mandatory padding bytes in the request to cover the maximum cookie length, at which point the protocol can provide no packet amplification.
“You’d need the receiving host to remember cookies (eg. the way it works for HTTP, where the server maintains the list of which cookie is valid for every client)...”
No: as I said, "only the requesting party needs to bear a memory cost for cookie storage: the responder can compute the cookie using a message authentication code (MAC)." That is, if you ask for a cookie, you bear the cost of remembering it. The host issuing the cookie, on the other hand, can perform a quick one-way hash between the requester's IP address and a local secret, returning the result as the cookie. This can be verified on subsequent packets by performing the computation again: there is no per-peer storage cost, and the computation need only be performed in cases where the server actually cares about verification. The server's secret should be long enough and short-lived enough that an attacker can't derive it in the space of the cookie TTL (by which point the server should have randomly generated a new secret).
“Source address filtering has the virtue ...”
This isn't a replacement for source address filtering: it's a complement to it. We don't have a silver bullet, but we can throw a number of small rocks at the problem. This protocol is just another rock for the sling.
