|
Comcast’s furtive and undisclosed traffic manipulation reminds me of a curious red herring asserted by some incumbent carriers and their sponsored researchers: that without complete freedom to integrate vertically and horizontally, carriers would lose synergies and efficiencies and be relegated to operating “dumb pipes.” For example, see Adam Thierer, Are “Dumb Pipe” Mandates Smart Public Policy? Vertical Integration, Net Neutrality, and the Network Layers Model, 3 Journal on Telecommunications & High Technology Law 275 (2005).
Constructing and operating the pipes instead of creating the stuff that traverses them gets a bad rap. It may not be sexy, but it probably carries less risk. Of course, with less risk comes less reward, and suddenly no one in the telecommunications business is content with that. So incumbent carriers assert that convergence and competitive necessity require them to add “value” to the pipes.
Put another way, they would assert that any limitation on a carrier’s “right” to add value is an unconstitutional taking. Of course we used to have common carriers that operated as neutral conduits carrying the content produced by someone else, but apparently that is an anachronism now.
The dumb pipe argument comes across to me as disingenuous. Would anyone buy an argument from an electricity carrier that it should not have to provide a neutral conduit for the carriage of electricity? It would seem that everyone makes more money and has more fun using the electricity to make something more valuable than just carrying electrons.
So it appears with Comcast. Hellbent to cash in on convergence, or at least generate greater returns for its pipe investment, Comcast wants to operate a non-neutral network with all sorts of intelligent packet sniffers ready to prioritize or degrade traffic. And I thought consumers would beat a path to Comcast instead of Verizon, because Comcast offered faster and better service. Who would want that when they can have a smart pipeline whose genius owners stand ready to delay and drop packets according to some secret and real smart plan?
There are so many errors and false assumptions in this post I don’t know where to begin criticizing. For one, Verizon offers a faster service than Comcast: nominally, 15 Mb/s downstream and 5 Mb/s upstream compared to 4 Mb/s and 350 Kb/s respectively.
Secondly, Comcast’s (and everybody else’s) traffic shaping makes the network more responsive for most customers, not less. That’s why they do it.
Thirdly, the common carrier analogy doesn’t fit this scenario because we’re talking about the roads, not the trucks. Residential networks carry three kinds of traffic: TV, voice, and Internet, and they’re all prioritized relative to each other and always have been. Internet traffic is further prioritized based on type on common wire networks, and should be. Comcast’s TV service isn’t affected by the traffic on its Internet service, but Internet customers affect each other.
In America, we have traditionally paid for infrastructure through the sale of services. You don’t pay a separate fee for the wire that carries your electricity, you simply pay for the electricity (or water, or gas, or sewage as appropriate.) If we want to promote the development of multiple residential information network options, we do have to allow investors to make money from them, and services seems like the best bet.
Internet access is subsidized by voice and TV on residential networks accordingly, and until it starts paying its own way nothing much is going to change.
Critics of net neutrality regulations aren’t necessarily paid by phone companies, any more than proponents are paid by Moveon.org and George Soros. We just happen to know the score.
Richard Bennett said:
I always appreciate constructive and substantive criticism, something not always practiced in the network neutrality debate or by some of the blog operators listed on Mr. Bennett’s site.
For one, Verizon offers a faster service than Comcast: nominally, 15 Mb/s downstream and 5 Mb/s upstream compared to 4 Mb/s and 350 Kb/s respectively.
Fair enough, but what I wrote could have been read as comparing Verizon’s DSL service vis-à-vis cable modem service. At any rate, this strikes me as a quibble rather than the best example of an error or false assumption.
I have absolutely no problem with traffic shaping and management designed to reduce congestion and conserve network resources. I do have a problem when an ISP deliberately causes the functional equivalent of congestion to reduce the volume of a specific type of traffic (peer-to-peer networking) that at the time may not have caused actual congestion. See http://telefrieden.blogspot.com/2007/11/response-to-two-columns-on-comcasts.html.
I do NOT see forging TCP reset packets as your garden variety of legitimate traffic management or “shaping.”
As to the third point, I have written extensively on network neutrality issues and do not have a problem with prioritizing traffic provided it’s transparent, offered to anyone on similar terms and does not trigger punitive artificial congestion, e.g., dropped packets, for regular service subscribers.
Lastly, I would have a problem with undisclosed, sponsored researchers on either side of the network neutrality debate. Mr. Bennett’s site contains links to a number of folks who do not disclose the financial support they receive. For the record, I receive nothing for my posts and call the issues as I see them.
As far as I know, I don’t link to a single site that gets undisclosed financial support, and if you have a specific instance I’d like to know about it. Otherwise, kindly correct your false charge.
Your argument against Comcast rests on your belief that TCP RST spoofing isn’t a “garden variety” traffic management technique, which would depend on what kind of garden one has. In the FTTH and DSL gardens it’s not necessary, as each user has a dedicated pipe to a CO where packet-dropping can take place. On the cable modem network, however, the upstream pipe is shared and excess packets need to be dropped before they hit it or it will become unstable for everyone. TCP RST accomplishes this goal.
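For readers unfamiliar with the mechanics, forging a reset is trivial for a middlebox that can observe a connection’s addresses, ports, and sequence numbers. Here is a minimal, hypothetical sketch in Python of what such a forged segment looks like on the wire; the endpoints and ports are illustrative inventions, not any vendor’s actual implementation.

```python
import struct

def tcp_checksum(src_ip, dst_ip, tcp_header):
    """Internet checksum over the IPv4 pseudo-header plus the TCP segment."""
    pseudo = struct.pack("!4s4sBBH", src_ip, dst_ip, 0, 6, len(tcp_header))
    data = pseudo + tcp_header
    if len(data) % 2:
        data += b"\x00"
    total = sum(struct.unpack("!%dH" % (len(data) // 2), data))
    while total >> 16:
        total = (total & 0xFFFF) + (total >> 16)
    return ~total & 0xFFFF

def forge_rst(src_ip, dst_ip, src_port, dst_port, seq):
    """Build a bare 20-byte TCP segment with only the RST flag set."""
    offset_flags = (5 << 12) | 0x0004      # data offset = 5 words; RST bit
    header = struct.pack("!HHIIHHHH", src_port, dst_port, seq, 0,
                         offset_flags, 0, 0, 0)   # checksum field zeroed
    csum = tcp_checksum(src_ip, dst_ip, header)
    return header[:16] + struct.pack("!H", csum) + header[18:]

# Illustrative, made-up endpoints: 10.0.0.1:6881 -> 10.0.0.2:51413.
segment = forge_rst(b"\x0a\x00\x00\x01", b"\x0a\x00\x00\x02", 6881, 51413, 1000)
flags = struct.unpack("!H", segment[12:14])[0] & 0x003F
assert flags == 0x0004   # RST is the only flag set
```

A device that injects such a segment toward each endpoint, with an in-window sequence number, makes both sides believe the other has aborted the connection.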
This apparently bothers you because it feels like pre-emptive invasion or something of the sort; you say that traffic management must be reactive. This is an extremely picky point in network engineering, one that isn’t made by the professionals. I suspect that many traffic management principles have unattractive aesthetics for civilians, as their primary goal is to ensure that the trains run on time. Packets aren’t people, and don’t need to be treated as if they were.
Comcast sells two tiers of service, residential and commercial, and doesn’t place restrictions on file servers on commercial accounts. If you want to seed files with BitTorrent, the commercial account is the one you need.
Richard Bennett said:
So Hands Off the Internet is a fully transparent organization that fully discloses its funders? And Scott Cleland operates a blog in the public interest.
Of course there’s the plausible deniability argument. Someone at AEI, for example, just happens to write about matters that are of great interest to a contributor to AEI. The contributor has no direct link to an AEI affiliate or employee, but curiously the affiliate or employee just happens to find time in his or her research agenda to come up with work that serves the funder’s interest.
That’s what I call undisclosed, sponsored research.
And yes Mr. Bennett several of the sites you link to engage in that sort of practice.
Hands Off the Internet’s member page lists the companies which support it, as does Scott Cleland’s Net Competition member page. That’s full disclosure in any rational person’s book.
You’re busted.
From a technical perspective, TCP RST packets are not a reasonable way for an intermediate party to enforce bandwidth policies. In fact, I can’t think of a legitimate reason why an intermediate third party would ever generate this signal. For bandwidth control of TCP, it is far more reasonable to drop packets, as though the network were congested and engaging in Random Early Detection. It also strikes me that they could just apply an unfavourable queueing policy to this kind of traffic. Admittedly, I’m not a network engineer, but this is the kind of thing that I learnt as an undergraduate, and I’m pretty sure that my suggestions are more reasonable than the RST approach.
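To make the comparison concrete, the RED drop decision alluded to above is easy to sketch: below a minimum queue threshold nothing is dropped, between the thresholds the drop probability rises linearly toward a maximum, and above the maximum threshold everything is dropped. A simplified, hypothetical version in Python (threshold values are illustrative):

```python
import random

def red_drop(queue_len, min_th=5, max_th=15, max_p=0.1):
    """Classic RED drop decision on (smoothed) queue length.

    Below min_th nothing is dropped; between the thresholds the drop
    probability climbs linearly to max_p; above max_th everything drops.
    """
    if queue_len < min_th:
        return False
    if queue_len >= max_th:
        return True
    p = max_p * (queue_len - min_th) / (max_th - min_th)
    return random.random() < p

# An empty queue never drops; a badly congested one always does.
assert red_drop(0) is False
assert red_drop(20) is True
```

The point of the probabilistic middle range is that drops begin early and hit heavy senders more often, nudging their TCPs to back off before the queue overflows.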
This would seem to leave us with three possibilities.
1. There is a compelling reason to be using TCP RST packets for bandwidth control, and I’m an ignoramus because I’m unaware of it. (In which case I’d like my ignorance to be cured, so please provide an explanation.)
2. The network engineers at Comcast have less understanding of network management than I do, which seems very unlikely.
3. It is not Comcast’s intention to throttle this particular TCP usage, so much as sabotage it. When carriers selectively sabotage customer data, we call that “non-neutrality”.
Pick one, or offer one I’ve missed.
The validity of RSTs has been discussed to death in several venues, perhaps most intelligently by George Ou.
In essence, Comcast isn’t concerned with controlling congestion on the Internet (which is what RED and other forms of packet drop are about), it’s concerned with controlling it on their internal network, and RST is a very expedient way to do that. Or so I believe, at any rate.
Richard Bennett said:
I will assume you’re just an interested and talented individual with something to contribute.
People like Scott Cleland and the dozens of “astroturf organizations” that participate in the network neutrality discussion never fully disclose their agenda, or their benefactors. The “disclosure” sites you reference raise two questions: 1) is the disclosure complete, or representative? and 2) who is behind those noble-sounding organizations that funnel money into the site?
You seem to channel the Cleland mode: personally attack opponents; use hyperbole; reframe the issues. For any instance where you offer constructive criticism, you add a special sauce of snarkiness and arrogance.
Aside from you, I don’t hear from my academic and business colleagues that my insights are rife with errors and “false assumptions.” On the network neutrality issue I engage people like Chris Yoo and Alfred Kahn in a constructive dialogue.
Excuse me? I provided you with links to the Hands Off and Cleland funding disclosures that you claimed were non-existent, and this is all you have to say, repeating the same old misrepresentations?
Professor, that’s downright sad. Everybody in the network regulation debate knows that the phone companies lobby, and everybody knows who their lobbyists are. And everybody knows that on the other side Save the Internet is a front group for Free Press with an anti-capitalist agenda, etc.
None of that is substantial.
Richard, the source you cited includes the following explanation.
The part I’ve italicised seems questionable, if by that he means using TCP RST segments. Reducing the number of open TCP connections doesn’t necessarily reduce the packet rate, since TCP normally adapts to the available bandwidth between the endpoints. If you just kill off one TCP, the others will speed up to consume the freed bandwidth unless the bottleneck lies elsewhere. Random early detection, on the other hand, works because TCP backs off its transmission rate when it encounters data loss of this sort, thus reducing the packet rate, which is the desired effect.
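The claim that surviving connections absorb the freed bandwidth follows directly from TCP’s additive-increase/multiplicative-decrease behaviour, and a toy fluid-model simulation makes it visible (all numbers are hypothetical, and real TCP dynamics are far messier):

```python
def aimd(flows, capacity, rounds, kill_at=None):
    """Fluid-model AIMD: every round each live flow adds 1 unit of rate;
    whenever the shared link is overloaded, all flows halve (loss event)."""
    rates = [1.0] * flows
    for r in range(rounds):
        if r == kill_at:
            rates[0] = 0.0            # the RST'd connection disappears
        rates = [x + 1.0 if x > 0 else 0.0 for x in rates]
        if sum(rates) > capacity:     # shared bottleneck congests
            rates = [x / 2.0 for x in rates]
    return rates

rates = aimd(flows=4, capacity=40.0, rounds=200, kill_at=100)
# The killed flow stays at zero, but the survivors grow to fill most of
# the link: aggregate demand on the upstream barely changes.
assert rates[0] == 0.0
assert sum(rates) > 15.0
```

In other words, killing one of several competing flows mostly reshuffles who gets the bandwidth; it does not durably lower the load on the bottleneck.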
The RST technique would probably have the desired effect on an implementation of the Torrent protocol that limited bandwidth per TCP connection, and maybe that’s a common thing—I don’t know. When I use a Torrent implementation (to download perfectly legal Ubuntu ISO images, thank you) I configure it to limit the overall upload rate (not the number of TCP connections) because my web browsing becomes insufferably choppy if I saturate the meagre upstream bandwidth of the cable modem. (The downloading TCPs can’t send acknowledgements in a timely manner when the uplink is saturated.)
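The overall-rate cap described above is commonly implemented as a token bucket. A minimal, hypothetical sketch (the class and numbers are illustrative; no real sockets are involved):

```python
class TokenBucket:
    """Minimal token-bucket rate limiter of the sort a BitTorrent client
    might use to cap overall upload rate (hypothetical, simplified)."""

    def __init__(self, rate_bytes_per_s, burst):
        self.rate = float(rate_bytes_per_s)
        self.burst = float(burst)
        self.tokens = float(burst)

    def tick(self, seconds):
        """Refill tokens as time passes, never beyond the burst size."""
        self.tokens = min(self.burst, self.tokens + self.rate * seconds)

    def try_send(self, nbytes):
        """Spend tokens for a send; refuse when the budget is exhausted."""
        if self.tokens >= nbytes:
            self.tokens -= nbytes
            return True
        return False

bucket = TokenBucket(rate_bytes_per_s=32_000, burst=16_000)  # ~256 kb/s cap
assert bucket.try_send(16_000)   # burst allowance spends immediately
assert not bucket.try_send(1)    # then the budget is exhausted
bucket.tick(0.5)                 # half a second refills 16 kB
assert bucket.try_send(16_000)
```

Capping the aggregate this way leaves upstream headroom for ACKs and interactive traffic regardless of how many TCP connections the client opens.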
Those forms of congestion control work because they influence the behaviour of the endpoints—persuading them to reduce their rate of transmission. That’s exactly the effect that Comcast claims to be after, so the technique is relevant. This is a network congestion problem, and it makes no difference that it’s on their own network rather than the Internet at large, or that it’s produced by contention in a shared access medium rather than buffer exhaustion in a router. The idea is to throttle back the endpoints to a rate that the network can support, and there are more polite, reasonable, and generally appropriate ways to do this than using RST segments.
But as I say, I’m not a network engineer—this is textbook stuff. I’d appreciate it if someone with both theoretical grounding and practical experience would pass comment on whether my theory works in practice.
RED and packet dropping affect the rate at which TCP hosts transmit data over an established connection. For these to be effective in a multi-connection system like BT, we have to drop packets on all open connections.
What these techniques don’t address is the network load created by connection requests themselves (TCP SYNs). In the DoS scenario for DOCSIS, all that’s necessary is a boatload of SYNs, because each elicits an ACK or an RST, and each of these causes a DOCSIS-level RTS. RTS collisions on the DOCSIS network are the culprit.
This isn’t an inter-networking problem per se, it’s a MAC-level problem in DOCSIS that happens to be aggravated by BitTorrent’s swarming behavior.
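The contention argument comes down to slotted-contention arithmetic: if each of N stations independently picks one of S request slots, the chance that a given request escapes collision falls off geometrically in N. A back-of-the-envelope sketch (slot and station counts are hypothetical, and real DOCSIS backoff is more elaborate):

```python
def success_prob(requesters, slots):
    """Probability a given station's request suffers no collision when each
    of `requesters` stations independently picks one of `slots` slots."""
    return (1.0 - 1.0 / slots) ** (requesters - 1)

# A handful of chatty hosts is tolerable; swarming connection setups are not.
few = success_prob(requesters=5, slots=8)
many = success_prob(requesters=40, slots=8)
assert few > 0.5
assert many < 0.01
```

Collisions also trigger retries, which add still more requests to the contention window, so throughput collapses faster in practice than the single-round figure suggests.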
TCP RST is therefore the best solution, as the expert* quoted above declares.
*Me
Quite. And RST abnormally terminates an established connection (cf. normal termination with FIN), or indicates that a socket is not listening for connections. In this case, as I understand it, the RSTs are being used to interrupt TCP sessions which have already reached the established state.
Well, not all open connections. Just the ones that are generating problematic amounts of traffic. Not every TCP connection is a resource hog. You don’t even have to pick on BitTorrent specifically.
The RST approach to interrupting established connections doesn’t address this either, precisely because it deals with established connections—ones which have already been through the whole SYN/ACK three-way handshake. In fact, the RST approach seems likely to aggravate this aspect of the problem, since many applications will attempt to reestablish a connection that is abnormally terminated in this manner.
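The reconnection point is worth making concrete: a client whose connection is reset typically retries on an exponential-backoff schedule, generating a fresh SYN with each attempt. A hypothetical sketch of such a schedule (the base and cap are illustrative, not any particular client’s values):

```python
def reconnect_delays(attempts, base=1.0, cap=60.0):
    """Exponential-backoff reconnect schedule after an aborted connection:
    each failed attempt roughly doubles the wait, up to a ceiling."""
    return [min(cap, base * (2 ** n)) for n in range(attempts)]

# An RST'd peer doesn't vanish; it comes back with a new SYN each time.
assert reconnect_delays(4) == [1.0, 2.0, 4.0, 8.0]
assert reconnect_delays(8)[-1] == 60.0
```

So resetting an established connection can convert one steady flow into a stream of fresh connection attempts, which is exactly the SYN-side load the RST technique was supposed to relieve.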
The RST technique is still not making any technical sense in light of the problem as described. In fact, the problem as described is not getting any clearer. I’m confused as to why you raise TCP SYNs as an issue at all: they don’t seem relevant to the discussion at hand. I can see that the infrastructure is vulnerable to certain kinds of DoS attack, one of which might be effected with a flood of SYNs (soliciting RST responses), but I have yet to see how BitTorrent produces this kind of attack relative to other applications, let alone how injecting RSTs helps in any way.
In short, injecting TCP RSTs midstream doesn’t seem to be an effective way of addressing the problem as described, and it constitutes a troubling violation of network layering principles. Selectively dropping packets is the way this kind of problem is traditionally managed. It’s fair to drop an incoming packet which is expected to solicit a response if the outgoing medium is congested. Routers are entitled to drop IP packets as a form of traffic management, and the endpoints are (by and large) designed to deal with it appropriately.
Brett, you’re repeating yourself.
We know empirically that RST works as a traffic shaping mechanism on the Comcast network; its effectiveness has been observed by multiple parties. If you need to know why it works, perhaps you can re-read some of the commentary here.
It “works” like a hammer works on a screw. It manages traffic by effecting a DoS attack on the connections established by the endpoints until they give up trying to establish them. I’m repeating myself because I’m still trying to elicit a reasoned response as to why this butcherous approach is better than shaping traffic by dropping packets. At this point I can only assume that no explanation is forthcoming because you have none to offer, so I’ll let it rest.
You seem confused about TCP, making the common error that it’s a network protocol. It is, in fact, an *INTERnetwork* protocol, good for moving data between one network and another. It has an ancillary function, protecting the Internetwork from congestion. It does those two things reasonably well, but Comcast has a different problem.
Comcast, you see, is the operator of a network, not just a network with an Internet connection, but a network in which people also make phone calls and get TV programs. Comcast has to manage this network in such a way that its customers get good service. The way it’s chosen to do this, for the time being, is with a system called DOCSIS 1.1 that uses a couple of TV channels for Internet access. The downstream channel works pretty well because the only piece of equipment that transmits on it is owned by Comcast and it can do pretty much whatever it wants. The upstream channel is a lot more troublesome because it has 150 potential transmitters who have to coordinate with each other. This coordination function has some problems, mainly that it was sized around the assumption that very few users would require access to it at any given time. That was fine until File-Sharing came along. Ten file-sharing servers can bring a DOCSIS 1.1 network to its knees, and Comcast would prefer that didn’t happen.
They would like to optimize this upstream channel for fairness, which would mean that customers would start out with quick access to the channel, but would have slower and slower access the more data they offer. This would allow the normal interactive user to have what he wants without being swamped by the file-sharing bandwidth hogs. The astute reader will notice that TCP has a mechanism, the sliding window protocol, that does the exact opposite of what Comcast wants: the more of a hog you are with sliding window, the more of a hog you’re allowed to be.
So Comcast supplements TCP management for Internets and uses RST to slam the window shut on customers who use more than their fair share of upstream bandwidth. If these streams are re-started, their window size is initially small, as it should be.
Of course, dropping packets would accomplish the same goal, but RST does something extra: it discourages the leecher from coming back to the Comcast seeder until he’s exhausted all other options for file sharing. And this is what Comcast needs to do: prevent people all over the Internet from overwhelming their private network with file sharing requests. Packet drop doesn’t do that.
It’s a *Network* problem, not an *Internet* problem, hence addressing it with a mechanism that alters connection rate as well as window size is cool.
I’ll admit a certain degree of bemusement as to that distinction. As far as I’m concerned, when you connect networks together, you get a bigger network. The concept of an inter-network protocol was an interesting concept back when most protocols were somewhat tightly coupled to a particular kind of network and/or computer hardware, but the Internet Protocol has been with us so long now that it doesn’t seem special anymore. But perhaps that’s not what you mean. Feel free to educate me.
Yes, I understand this. The downstream channel (towards the customer) is contention free, but the upstream channel is not. Consequently the upstream channel can suffer collisions, and the effective utilisation of that channel can drop with high collision rates. This particular kind of asymmetry makes DOCSIS pretty much an accident waiting to happen: utilisation of the downstream link scales well, but the upstream link becomes congested.
(The full nitty-gritty of DOCSIS upstream contention is quite Byzantine in its complexity. I found that the introduction of this paper gives a nice simplified overview. Also, this paper (PDF) describes the DoS vulnerability, and makes it clear that “bandwidth hogs” are not the only problem here.)
I don’t think that’s at all an accurate characterisation of a sliding window protocol: a sliding window is an end-to-end flow control mechanism which limits the amount of data that can be sent. In terms of bandwidth utilisation, TCP has a rate-limiting discipline involving slow start, a threshold, and a congestion window, which tries to grab all the bandwidth it can without creating congestion. The intervening routers can influence this rate of transmission by dropping packets (giving the appearance of congestion), so it’s not at all clear why you consider this a problem.
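The distinction between the flow-control window and the congestion window can be made concrete with a toy trace of slow start, congestion avoidance, and the halving that a router-induced drop triggers (segment counts and drop timing are hypothetical, and the model is deliberately simplified):

```python
def cwnd_trace(rounds, ssthresh, drop_rounds):
    """Toy TCP congestion window (in segments): slow start doubles cwnd up
    to ssthresh, congestion avoidance adds 1 per round, a drop halves it."""
    cwnd, trace = 1, []
    for r in range(rounds):
        if r in drop_rounds:
            ssthresh = max(2, cwnd // 2)   # fast-recovery style halving
            cwnd = ssthresh
        elif cwnd < ssthresh:
            cwnd *= 2                      # slow start
        else:
            cwnd += 1                      # congestion avoidance
        trace.append(cwnd)
    return trace

trace = cwnd_trace(rounds=20, ssthresh=16, drop_rounds={10})
assert trace[1] == 2 * trace[0]       # exponential growth in slow start
assert trace[10] == trace[9] // 2     # a single drop halves the window
```

This is why dropping a packet is the conventional lever: it directly shrinks the congestion window, while the receiver’s advertised (sliding) window is untouched.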
RST slams the whole connection shut—it looks like the other end of the connection crashed in some way. If your aim were simply to throttle back the packet rate by influencing the congestion window, then dropping packets would suffice. Heck, an ICMP Source Quench might do the trick.
Well, now, I’m glad we managed to agree on that point.
No, their problem as you have described it (in relation to DOCSIS) is rather more specific than that: they need to keep the upstream packet rate from their customers within the range that DOCSIS can handle efficiently, given their particular adjustments of all that is variable in DOCSIS. Arbitrarily killing off the BitTorrent sessions is the “hammering a screw” way of reaching that end. A proper engineering approach involves managing the data flows so that each customer gets a fair allocation of bandwidth to use in whatever manner he desires.
I think I’ve reached an understanding of what’s happened here, at last! The problem really is DOCSIS upstream contention, and Comcast have opted to solve the problem by “hammering a screw” (jamming BitTorrent traffic) because they were able to obtain hammers (whatever devices are generating the RSTs) off the shelf, and jamming BitTorrent happens to solve the problem at this point in time. A properly engineered solution would have required a lot of work, translating to money and delay, so they’ve taken the ugly, easy approach, justifying it by declaring the alternative “impossible” (which translates to, “not worth our while to try when we can just use the hammer”).
New rule: do not attribute to malice or stupidity that which is adequately explained by a shameless desire for cheapness and expedience!