The recent Internet outages caused by the DDoS attack on Dyn's infrastructure highlight deep architectural issues that need resolution. Security and performance are intertwined, and both need fundamental upgrades.
A few days ago I was working at a friend’s house. He likes to have Magic FM on during the day. They regurgitate the same playlist of inoffensive 70s, 80s and 90s pop music, with live drive-time shows. Later in the day I heard the DJ sputter how their Twitter access had gone wonky, so you couldn’t expect to interact with them via that channel. I thought little of it.
Many of you will have seen news stories that explained what was going on: a huge DDoS attack on the infrastructure of Dyn had taken down access to many large websites like Twitter. A great deal of digital ink has since been spilled in the mainstream press on the insecurity of the Internet of Things, as a botnet of webcams was being used.
Here are some additional issues that might get missed in the resulting discussion.
An unfit-for-purpose security model
The Internet’s security model is completely unsuitable for these connected devices. The default is that anyone can route to anyone, and that all routes are always active. This is completely backwards. The default ought to be that nobody can route to anybody until some routing policy is established that is suitable for that device.
This process is called “association”, and it precedes the “connection” that is done by protocols like TCP. The camera needed to be on its own virtual network that should be isolated from websites like Twitter. This is a fundamental architecture issue, and one that cannot be fixed by tinkering around with DDoS mitigation code in routers.
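To make the distinction concrete, here is a minimal sketch (in Python, with invented class and endpoint names, not any real stack or API) of what "associate before you connect" means: by default nothing routes, and packets only flow once an association has been explicitly established.

```python
# Minimal sketch of "associate before you connect": a default-deny forwarding
# check.  Class, method and endpoint names are invented for illustration only.
class Fabric:
    def __init__(self):
        self._associations = set()   # default: empty, so nothing can route

    def associate(self, src, dst):
        """Establish an association between two endpoints under some policy;
        only after this step can packets flow between them."""
        self._associations.add((src, dst))

    def forward(self, src, dst, payload):
        if (src, dst) not in self._associations:
            return "DROP"            # no association -> no route, by default
        return f"FORWARD {len(payload)} bytes {src} -> {dst}"

fabric = Fabric()
print(fabric.forward("camera-1", "twitter.example", b"..."))          # DROP: never associated
fabric.associate("camera-1", "firmware.vendor.example")
print(fabric.forward("camera-1", "firmware.vendor.example", b"..."))  # forwarded
```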
The present Internet has been likened to running MS-DOS. It has a single address space, and doesn’t have any real concept of “multitasking”. We now have to move to the Windows or Unix level of sophistication, where different concurrent users and uses exist, but are suitably isolated from one another in terms of network resource access.
This issue highlights why investment in new modern architectures like RINA is essential. TCP/IP is just the prototype, and lacks the necessary association functions for future demands!
Weak technical contracts on demand
The very nature of a DDoS attack is to aggregate lots of small, innocuous flows into a large and dangerous one. The essential aim of the attack is to overload the resources of the target. This means we need to master a new skill: managing networks (and networks of networks) in overload.
This is a problem faced by the military, since their networks are under active attack by an enemy. Part of the solution is to have clear technical “performance contracts” between supply and demand at ingress and traffic exchange points. These not only specify a floor on the supply quality, but also impose a ceiling on demand.
With the present Internet we typically have weak contracts at those points, which don’t set a supply quality floor or demand ceiling, or do so in a fashion that can’t sufficiently contain problems. A DDoS attack is merely a special case of performance management in overload, and the real issue is broader than security management.
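As a rough illustration of a demand ceiling enforced at an ingress point, the sketch below uses a simple token bucket; the class name, rates and burst sizes are invented example values, not a proposal for any particular contract.

```python
import time

class DemandCeiling:
    """Token bucket enforcing a contracted demand ceiling at an ingress point.
    Rates and burst sizes here are arbitrary example values."""
    def __init__(self, rate_bps, burst_bytes):
        self.rate = rate_bps          # contracted ceiling, bytes per second
        self.capacity = burst_bytes   # permitted short-term burst
        self.tokens = burst_bytes
        self.last = time.monotonic()

    def admit(self, packet_bytes):
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if packet_bytes <= self.tokens:
            self.tokens -= packet_bytes
            return True     # within contract: carry it downstream
        return False        # breach of ceiling: shed at the edge

ingress = DemandCeiling(rate_bps=1_000_000, burst_bytes=100_000)
print(ingress.admit(1500))        # typical packet, admitted
print(ingress.admit(10_000_000))  # grossly over ceiling, dropped at ingress
```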
The Internet needs an upgrade to be able to manage quality issues.
Lack of economic incentives
My final point is that we don't have good long-run feedback mechanisms to prevent this problem from getting worse. It's a kind of "environmental pollution" issue, where the cost of insecure devices and poor operational practices is not borne by those who designed and deployed them. There has to be a way of putting more "skin in the game".
That could partly come from resolving the above two technical issues. Breach of the technical contract on the demand ceiling would result in some kind of commercial penalty for overloading downstream resources. In the extreme case it should be possible to end the association, so that it becomes impossible to route to the destination that is being overloaded.
Ultimately the knowledge of which devices are involved in attacks versus legitimate interactions is distributed at the network edge. If a user is willing to pay for the additional resources to raise the contracted quality when the network is stressed, then the traffic probably isn’t a denial of service attack, as the costs don’t scale.
These attacks are exploiting economic arbitrage opportunities of mispriced resources. A solution to DDoS attacks will come from a wider re-thinking of the economic model for the Internet. We need one that favours price signals and market feedback over “net neutrality” style rationing and government diktat.
People demand a better living environment as they get older and richer, and today’s Internet is a shanty town next to a festering garbage dump, built from many ramshackle structures. Now it is time to clean up the neighbourhood and modernise our architecture and engineering.
By definition, a DDoS attack means the devices involved were lying (over UDP) about their source addresses, which means they were detectable at the source. Thus their "environmental pollution" could have been eliminated by the source network without any changes to current protocols.
>This process is called “association”
Yes, the packet was not “associated” with the sender, current protocols allow this to be obvious at the source network.
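For illustration, the check I am describing is trivial to express. In this sketch (prefixes and function name invented for the example) the access network simply refuses to forward any outgoing packet whose source address is not from one of its own assigned prefixes, which is essentially the BCP 38 ingress filtering idea.

```python
# Sketch of source-address validation at the access network edge
# (the idea behind BCP 38 / RFC 2827).  Prefixes are example values only.
from ipaddress import ip_address, ip_network

ASSIGNED_PREFIXES = [ip_network("203.0.113.0/24"),   # prefixes this access
                     ip_network("198.51.100.0/25")]  # network actually owns

def egress_allowed(src_addr: str) -> bool:
    """Only forward packets whose source address belongs to this network."""
    src = ip_address(src_addr)
    return any(src in prefix for prefix in ASSIGNED_PREFIXES)

print(egress_allowed("203.0.113.42"))   # True: legitimate customer address
print(egress_allowed("192.0.2.7"))      # False: spoofed source, drop at edge
```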
>“today’s Internet is a shanty town next to a festering garbage dump”
wow ....
Your comment suggests you do not know what association control is. The ability to "fake" the sender exists because there is no association management defined in TCP/IP. It should be architecturally impossible, not left to the whim of operational configuration and hope.
I am referring to the partitioning of the source IP address at the "wire" from which the packets originate. I am pointing out that the concept of "association" IS useful and does not need a complete rewrite of decades of code to implement. A much simpler and cheaper solution exists now. If ISPs accept responsibility for the source IP address then DDoS could end, especially for cameras located in homes, where there is the lowest level of packet responsibility on the part of the device owner (versus, say, colocation or various styles of server leases).

> The ability to "fake" the sender exists because there is no association management defined in TCP/IP.

DDoS is implemented using UDP, not TCP. It uses the asymmetry of a simple request to generate a much larger response targeted at an unrelated third party (the "lie" of the source address). TCP source address spoofing serves zero purpose, as the connection could never be established; there is thus no ability to multiply packet size as there is in UDP, nor to inflict that response packet on an unrelated third party. Filtering of TCP source address lies will have no impact on legitimate communication. UDP spoofing can serve a purpose, as there is no connection "associated" with the communication; however, an "increased cost" to do so may make sense. That is, the service defaults to no UDP spoofing allowed, and you pay an added fee to remove that barrier.

So I fail to see your point as to why the simple-to-solve problem of UDP (source spoofing for packet size multiplication) is justification to replace the unrelated TCP protocol, which does not support such undesired behaviour.
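A back-of-envelope sketch of that asymmetry (the byte counts and reflector names are made up for illustration, not measurements): a small spoofed request draws a much larger response, and the reflector addresses it to whatever source the request claimed.

```python
# Back-of-envelope sketch of reflection/amplification.
# Sizes are illustrative, not measurements of any real resolver.
REQUEST_BYTES  = 60      # small spoofed UDP query
RESPONSE_BYTES = 3000    # large answer (e.g. a padded DNS response)

def reflect(request_src, reflector):
    """The reflector answers to the *claimed* source, i.e. the victim."""
    return {"from": reflector, "to": request_src, "bytes": RESPONSE_BYTES}

victim = "192.0.2.10"                       # address the attacker spoofs
packets = [reflect(victim, f"open-resolver-{i}") for i in range(1000)]

sent      = 1000 * REQUEST_BYTES              # attacker's outlay
delivered = sum(p["bytes"] for p in packets)  # what lands on the victim
print(f"amplification factor: {delivered / sent:.0f}x")   # ~50x in this example
```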
Now I know for sure you don't know what association management is :) Please go read up on the RINA architecture: http://pouzinsociety.org/education/highlights?_ga=1.107318346.934011447.1468509500

I am very familiar with how the present Internet works. No need to regurgitate it. The transport protocols (UDP or TCP) exist on top of a routing fabric that has no concept of establishing secured associations and routes. The world is moving on - do join us!
We had that. IP networks won out because they didn't require association and thus didn't allow network operators to control what services were created and offered on the networks. I'll have to agree with Charles's contention that your idea of association management isn't needed. The problem you describe isn't that there isn't any association; it's that the origin networks themselves permit their hosts to lie about their origin network. That problem can be and has been solved for IP networks; the solution simply needs to be applied by network operators.

The same issue applies to your position on technical QoS contracts. Again, this is a solved problem. The most recent attack would never have exceeded any QoS limitations on the source end, and at the target end it wouldn't matter, because incoming traffic exceeding the QoS limit and causing throttling would have produced exactly the same result as traffic overload did: legitimate DNS queries would fail on a massive scale, because the malicious traffic outweighs the legitimate traffic by such a large amount. That's why it is called "distributed denial-of-service" as opposed to the more general "denial-of-service".

Your last point, however, is spot-on. The fundamental problem is that the people who can do something about the problem don't bear any significant costs associated with it (and in fact would probably suffer some losses if they took effective action), and the people who bear the costs aren't in a position to do anything about it. Allowing and encouraging the shifting of those costs back to the origin of the problem (or at least of the malicious traffic) would probably see the required technical measures taken post-haste. Unfortunately that shifting can't be done by technical means; it requires contractual and/or legal measures.
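A toy calculation (all figures invented) shows why throttling at the target doesn't rescue the legitimate traffic: when the flood outweighs the legitimate load by orders of magnitude, indiscriminate rate-limiting serves only a sliver of the real queries.

```python
# Toy calculation: why rate-limiting at the *target* doesn't save a DDoS victim.
# All figures are invented for illustration.
capacity_qps = 100_000        # what the target can answer per second
legit_qps    = 80_000         # normal legitimate load
attack_qps   = 10_000_000     # flood arriving from the botnet

total = legit_qps + attack_qps
served_fraction = min(1.0, capacity_qps / total)   # indiscriminate throttling
legit_served = legit_qps * served_fraction
print(f"legitimate queries answered: {legit_served:,.0f}/s "
      f"({100 * served_fraction:.1f}% of them)")   # ~0.8k/s, ~1% in this example
```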
> Unfortunately that shifting can't be done by technical means, it requires contractual and/or legal measures.

Name and shame might be useful. That is, for the masses to know there is a solution and that it is being ignored.

> The fundamental problem is that the people who can do something about the problem don't bear any significant costs associated with the problem

Users can't expect or demand what they do not know is an option. And on the flip side, how many might appreciate their ISP letting them know they own a device that is compromised? Is that compromised device just performing DDoS attacks, or is it trying to steal their banking information, with the DDoS merely being easier to detect? In turn that potentially does damage to the brand whose device was hacked, and so the brand either accepts more responsibility for its designs and implementations, or the brand dies.

Almost 10 years ago I allowed a friend to connect his laptop to my network; it's the only time I have ever had a problem. His laptop infected a server and my ISP (Qwest) disconnected me. I was upset at the loss of access, and appreciated the notice. It was my fault, I cleaned it up, and then implemented a completely separate guest network in my home. The point is, for some issues ISPs clearly CAN and WILL implement filtering and total disconnection, and that was many years ago. Why not for UDP address spoofing? That has never made sense to me.

As for cost, I now have Comcast Business service at my home. That is not a choice, as Comcast residential has so many blocks and filters as to make my normal work literally impossible. One such filter is on whois queries. So Comcast "protects" registrar/registry whois servers from scraping, and yet "DDoS filtering" is not an industry SOP? Again, I am left scratching my head. With Business class there are no blocks, and I pay 3 to 4 times what residential service costs for half the connection speed. So they have done well to "shift a cost" to my bank account. My personal experiences do not reconcile with address spoofing being off the list of ISP filtering. Something is wrong with this picture.

Being the cynic that I am, and to draw an analogy: no doctor ever made money from a healthy person. Problems sometimes translate into profit in less than desirable ways. Back to educating the masses that the recent Dyn experience could have been largely avoided: shine a brighter light on the facts and tools we have today.
It would have sufficed to run DNS over SCTP, but it was found that firewalls block it...
>The world is moving on - do join us!
Shall I take that to mean you feel ISPs have no responsibility for the source addresses of the packets they place onto the Internet?
And I did take a cursory look at RINA before my first post, and then applied Occam's Razor.
Sometimes present-moment awareness is better than living in a future that does not yet exist, especially when there is a solution available at this moment and it is being ignored.
I’m very interested in discussing the design of protocols, and how the ones we have may be sub-optimal (or even quite bad) in many ways, but having followed the trail of links into the “TCP/IP vs RINA” world a little, I’m struck by the research team’s apparent desire to offend and alienate everyone associated with the development of TCP/IP. Or, if that wasn’t the intended effect of a presentation like “How in the Heck Do You Lose a Layer?”, then I’m struck by the astounding lack of diplomacy—and I say this as someone who is naturally somewhat deficient in that quality myself.
I’d love to engage in a sober analysis of these attacks and the protocol design weaknesses which facilitate them (this kind of thing was the bread and butter of my PhD thesis), but the intemperate tone of this article and the linked material makes me think that I will be disappointed if I seek it via this avenue.
My perspective carries no special weight, so feel free to ignore it, but there it is for what it’s worth. I’d rather be discussing protocol design than the social aspects of research, but the latter really is a glaring issue here, in my view.