Late last week, Comcast officially disclosed to the FCC details of its network management practices which have been a subject of considerable discussion here on CircleID. (My thanks to Threat Level from Wired.com for providing a convenient copy of Comcast’s “Attachment A” in which this disclosure is made.) There’s not a lot of startling disclosure in this document, but it does provide some useful concrete facts and figures. I’ll quote the more interesting parts of the document here, and offer comment on it. All citations refer to “Attachment A: Comcast Corporation Description of Current Network Management Practices” unless otherwise specified.
Comcast has approximately 3300 CMTSes deployed throughout our network, serving our 14.4 million HSI subscribers. [p.2]
These figures yield an average of approximately 4360 subscribers per CMTS.
Comcast’s current congestion management practices focus solely on a subset of upstream traffic. [p.3]
More specifically, they focus on the “upload” channels of five particular file-sharing protocols, discussed later.
[I]n order to mitigate congestion, Comcast determined that it should manage only those protocols that placed excessive burdens on the network, and that it should manage those protocols in a minimally intrusive way utilizing the technology available at the time. More specifically, in an effort to avoid upstream congestion, Comcast established thresholds for the number of simultaneous unidirectional uploads that can be initiated for each of the managed protocols in any given geographic area; when the number of simultaneous sessions remains below those thresholds, uploads are not managed. [p.3-4]
By “protocol”, the document here specifically means “application protocol”. Comcast’s approach to network management was thus to determine which applications were responsible for the most network load, and then manage those applications. The document offers nothing in the way of rationale for this approach: one might ask why they did not determine which customers were responsible for the most network load, and manage those customers.
The specific equipment Comcast uses to effectuate its network management practices is a device known as the Sandvine Policy Traffic Switch 8210 (“Sandvine PTS 8210”). [p.4]
Perhaps the decision to manage applications was born of an overriding business decision to use a particular vendor’s appliances, and the scope of possible technical approaches was thus limited a priori. The document does not describe how Comcast came to choose this appliance, given the range of alternatives that exist.
On Comcast’s network, the Sandvine PTS 8210 is deployed “out-of-line” (that is, out of the regular traffic flow) and is located adjacent to the CMTS. ... A “mirror” replicates the traffic flow that is heading upstream from the CMTS without otherwise delaying it and sends it to the Sandvine PTS 8210… [p.5]
There is one PTS 8210 per CMTS, except in some cases where “two small CMTSes located near each other may be managed by a single Sandvine PTS 8210.” [p.5] The average number of subscribers per PTS 8210 is thus somewhere between one and two times the average number of subscribers per CMTS, or between approximately 4360 and 8730. These figures are significant, because the per-protocol limits on simultaneous upload sessions apply across the entire customer pool served by each such appliance.
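For reference, the arithmetic behind these estimates, using the subscriber and CMTS counts quoted above and assuming subscribers are spread roughly evenly across CMTSes:

\[
\frac{14{,}400{,}000 \text{ subscribers}}{3{,}300 \text{ CMTSes}} \approx 4{,}364 \text{ per CMTS}, \qquad 4{,}364 \text{ to } 2 \times 4{,}364 \approx 8{,}727 \text{ per PTS 8210}.
\]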
[T]he Sandvine PTS uses technology that processes the addressing, protocol, and header information of a particular packet to determine the session type. [p.7]
Note that “header” information includes application layer elements, as clarified by Diagram 3 [p.8], so this is “deep packet inspection”. Roughly speaking, protocol control messages are subject to scrutiny, but the bulk data so transported is not. Such a distinction between data and metadata is a relative one, and Comcast’s cut-off point for analysis is a little hazy in parts. For example, Diagram 3 [p.8] notes that an “SMTP address” is subject to scrutiny, but “email body” is not. It’s not immediately clear whether “email body” includes all message data, or just the part beyond the header fields.
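To make the control-message/payload distinction concrete, here is a minimal sketch of the kind of classification Attachment A describes: inspect only the opening handshake of a TCP session and ignore the bulk data that follows. It is purely illustrative (the signatures are the well-known handshake prefixes of BitTorrent, Gnutella, and eDonkey), not a description of Sandvine’s actual classifier:

    # Illustrative only: classify a TCP session from its opening bytes
    # ("header" information) without examining the bulk payload that follows.
    def classify_session(first_payload: bytes) -> str:
        # BitTorrent peer handshake: a length byte of 19 followed by the protocol name.
        if first_payload.startswith(b"\x13BitTorrent protocol"):
            return "BitTorrent"
        # Gnutella 0.6 connections open with a plain-text greeting.
        if first_payload.startswith(b"GNUTELLA CONNECT/"):
            return "Gnutella"
        # eDonkey TCP messages begin with the 0xE3 protocol marker byte.
        if first_payload[:1] == b"\xe3":
            return "eDonkey"
        return "unclassified"

    # Only the handshake is inspected; the file data transferred afterwards is not.
    print(classify_session(b"\x13BitTorrent protocol" + b"\x00" * 8))  # BitTorrent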
Deep packet inspection of the sort used here is not “minimally intrusive” [p.4] compared to some other approaches, but it may have been the least intrusive method of management available given a sufficient number of other arbitrary constraints.
[F]ive P2P protocols were identified to be managed: Ares, BitTorrent, eDonkey, FastTrack, and Gnutella. [p.8]
Note that Ares was a late entry (November 2007) [p.8], whereas management of the others commenced at roll-out in 2006 [p.5].
For each of the protocols, a session threshold is in place that is intended to provide for equivalently fair access between the protocols, but still mitigate the likelihood of congestion that could cause service degradation for our customers. [p.8-9]
Thresholds differ significantly between applications due to their inherently varied characteristics. See Table 1 [p.10] (but note a possible typo: the ratio for eDonkey is given as “~.3:1”, but the actual ratio as computed from the other columns is “~1.3:1”). BitTorrent unidirectional flows have the lowest threshold, permitting only eight per PTS 8210. Bear in mind that each such device is managing thousands of customers, but that relatively few BitTorrent flows are unidirectional uploads (according to Table 1 [p.10]).
When the number of unidirectional upload sessions for any of the managed P2P protocols for a particular Sandvine PTS reaches the pre-determined session threshold, the Sandvine PTS issues instructions called “reset packets” that delay unidirectional uploads for that particular P2P protocol in the geographic area managed by that Sandvine PTS. The “reset” is a flag in the packet header used to communicate an error condition in communication between two computers on the Internet. As used in our current congestion management practices, the reset packet is used to convey that the system cannot, at that moment, process additional high-resource demands without creating risk of congestion. [p.10]
The above may be a true representation of Comcast’s network management intentions in sending these reset segments, but the practice is in conflict with the TCP protocol specification. For one thing, only the two TCP endpoints (neither of which is the Sandvine PTS in this case) are considered to be participants in the protocol. If that isn’t decisive in and of itself, the TCP specification has the following simple remarks on the subject of resets.
As a general rule, reset (RST) must be sent whenever a segment arrives which apparently is not intended for the current connection. A reset must not be sent if it is not clear that this is the case. [RFC 793, p.36]
Put simply, the TCP RST flag is not and was never intended to be a means of managing congestion. It is intended to convey a specific error condition, and the Sandvine appliances are issuing the message inappropriately so as to produce the side effects of this error condition as a means to influence application behaviour. The network management practices are thus in direct violation of basic Internet standards, which is distinctly unwelcome behaviour. It might be an understandable (if inelegant) strategy in an environment where the network provider sets policy as to what applications are permitted, such as a corporate network, but it is inappropriate for a general Internet Service Provider. This was the basis for many of the howls of protest when Comcast’s network management practices were first discovered empirically.
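For the curious, the mechanics involved are not sophisticated. The following is a conceptual sketch, using the scapy packet library and invented addresses, ports, and sequence number, of how any device that can observe a connection can tear it down by forging a segment with the RST flag set. It is not a description of the Sandvine appliance’s internals; it simply illustrates why the endpoints treat the forged segment as a genuine error from their peer:

    # Conceptual sketch only: forging a TCP RST for a connection observed on the
    # wire. The addresses, ports and sequence number are invented; a real
    # injector copies them from the traffic it is monitoring.
    from scapy.all import IP, TCP, send

    def forge_reset(src_ip, dst_ip, sport, dport, seq):
        # Build a segment that appears to come from src_ip:sport.
        rst = IP(src=src_ip, dst=dst_ip) / TCP(
            sport=sport, dport=dport,
            flags="R",   # the reset flag RFC 793 reserves for signalling errors
            seq=seq,     # must fall within the receiver's window to be accepted
        )
        send(rst, verbose=False)

    # The receiving endpoint cannot tell this apart from a reset sent by its
    # actual peer, so it aborts the connection.
    forge_reset("192.0.2.10", "198.51.100.20", 6881, 51413, seq=123456789)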
[A]s Comcast previously stated and as the Order now requires, Comcast will end these protocol-specific congestion management practices throughout its network by the end of 2008. [p.11]
I hope that their future network practices will, as a first priority, aim to give each customer a fair share of the available network resources, without discriminating on the basis of the applications that the customer chooses to use. I also hope that these practices will uphold long-standing principles of Internet traffic management, rather than use inelegant side effects of lower-layer protocol control flags to manipulate specific applications.
Analyzing Attachment A is basically just for historical interest. As you note, they intend now to move to a user threshold model.
So how do they do that? Let’s assume that upstream bandwidth hogging is still the major issue and they can determine, irrespective of the protocol, that a particular user is exceeding some quantitative threshold.
Can the typical CMTS or Sandvine PTS do something other than send RST packets to slow them down? Is there other equipment they would need to obtain? Or perhaps they are limited simply to shutting the user down? I haven’t seen anybody address this.
In this document @ http://downloads.comcast.net/docs/Attachment_B_Future_Practices.pdf
“I hope that their future network practices will, as a first priority, aim to give each customer a fair share of the available network resources, without discriminating on the basis of the applications that the customer chooses to use. I also hope that these practices will uphold long-standing principles of Internet traffic management, rather than use inelegant side effects of lower-layer protocol control flags to manipulate specific applications.”
[JL] That is exactly what they do. And you can see the details in our filing from Friday at http://www.comcast.net/networkmanagement/
[JL] The new system is “Protocol-Agnostic” – the system does not make decisions based upon the applications being used by customers. It does NOT target specific applications / protocols; the system does not look at what applications are being used. It does NOT examine the contents of customer communications; does not look in the contents of packets. It does NOT require or use DPI; it uses current network conditions and recent upstream and/or downstream traffic. It does NOT throttle down user traffic to a pre-set speed; when the bandwidth is available, even users in a managed state can burst to their maximum provisioned speeds.
Jason
Comcast
National Engineering & Technical Operations
All it really means is that Comcast is falling back to a more traditional, and apparently more socially acceptable, means of providing QoS on a network (albeit a step above pure Diffserv-based QoS). Most QoS devices such as Packeteer and probably Sandvine can partition user sessions into whatever groupings you want. In this case, they will likely do it simply by subscriber IP address, take a look at what an individual user is doing over a period of time, and then put them into the “penalty box” for a period of time if they have violated the policy. What that threshold is, I am not sure. I’d have to look at their policy document to see if it is stated. They have stated the monthly 250GB threshold, so presumably it is somewhere in the vicinity of what that means on a per-Mbps basis, either averaged out over a month to determine what it means per-hour, or maybe varied to reflect the distribution of traffic across peaks and lulls.
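As a rough back-of-the-envelope figure (assuming the 250GB is spread evenly over a 30-day month), the cap corresponds to a fairly modest sustained rate:

\[
\frac{250 \times 8{,}000 \text{ Mb}}{30 \times 86{,}400 \text{ s}} \approx 0.77 \text{ Mbps, around the clock}.
\]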
And yes, the way it is done is basic queuing and policing for the most part – dynamically move the offending subscriber into a lower priority queue that only guarantees a minimum amount of bandwidth and enforces that by putting their packets behind others in the queue and, eventually, when there is congestion, dropping some of their packets (which is probably every bit as disruptive as RST packets.)
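Conceptually, the per-subscriber logic is simple enough to sketch in a few lines of Python. This is a toy illustration only; the threshold and the sustain/penalty periods below are made up, not taken from anything Comcast has published:

    # Toy sketch of per-subscriber management: measure a subscriber's recent
    # rate, and demote them to a low-priority queue if it stays high too long.
    # All thresholds and timings below are invented for illustration.
    import time

    RATE_THRESHOLD_MBPS = 5.0   # sustained rate that triggers management (made up)
    SUSTAIN_SECONDS = 15 * 60   # how long the rate must be sustained (made up)
    PENALTY_SECONDS = 15 * 60   # how long the subscriber stays demoted (made up)

    class Subscriber:
        def __init__(self, ip):
            self.ip = ip
            self.over_since = None     # when the subscriber first exceeded the threshold
            self.penalty_until = 0.0   # end of the current "penalty box" period

        def observe(self, rate_mbps, now=None):
            """Update state from a recent throughput sample; return queue assignment."""
            now = time.time() if now is None else now
            if rate_mbps >= RATE_THRESHOLD_MBPS:
                if self.over_since is None:
                    self.over_since = now
                if now - self.over_since >= SUSTAIN_SECONDS:
                    self.penalty_until = now + PENALTY_SECONDS
            else:
                self.over_since = None
            # While demoted, the subscriber's packets sit behind everyone else's
            # and are dropped first under congestion.
            return "low-priority" if now < self.penalty_until else "best-effort"

    sub = Subscriber("192.0.2.10")
    print(sub.observe(8.0, now=0))      # best-effort (over threshold, not yet sustained)
    print(sub.observe(8.0, now=1000))   # low-priority (sustained for over 15 minutes)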
Incidentally, I don’t know if many file-sharers realize this or not, but the new policy actually works out worse for them. Before, only their file sharing traffic would be throttled. Other traffic presumably was ok, e.g., basic HTTP sessions, VoIP, FTP, etc. Even YouTube and other streaming video services were untouched. Now, a file sharer’s entire Internet session will end up being penalized and throttled, first in the short term based on an excessive per-Mbps rate, then later on by a warning from Comcast or a service disconnect if they go over the 250GB threshold. Comcast chose wisely on the monthly 250GB threshold, as that is a lot of data and will affect (apparently from some testing) less than 1/3 of 1 percent of subscribers. Some other broadband providers have lower thresholds. It really won’t touch you unless you are using file sharing software or spend waaaaaaaaaay too much time on YouTube or watching other video. Similarly, it looks like about the same small number of subscribers would be affected by the per-Mbps throttling policy, whatever that threshold is. That works out well for non-file sharers, as there will actually be more bandwidth available, but maybe not so good for file sharers. (I wonder if the penalized rate is low enough that VoIP would fail if they were using a G.711 codec, no compression. Presumably while in the penalty box, your video would be pretty bad if viewable at all.)
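For scale, an uncompressed G.711 call carries 64 kbps of audio; assuming the usual 20 ms packetization, that is 50 packets per second with about 40 bytes of RTP/UDP/IP headers each, so roughly:

\[
64 \text{ kbps} + 50 \times 40 \times 8 \text{ bps} = 80 \text{ kbps per direction at the IP layer},
\]

and more once layer-2 overhead is counted, so it would not take much of a squeeze to break it.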
So, maybe net neutrality purists view this as a victory as Comcast will no longer be looking into packets further than the IP header and the broadband ISPs are back to providing application-agnostic service. But rather than using the technology that is available to be more precise, we take a step back to throttling a subscriber based purely on their sustained transmission rate without any regard to what application caused the policy violation. This, at least for the time being, will be viewed as acceptable, or at least more acceptable than an application-specific approach by those who feel that is against NN principles, privacy principles or just generally unfair. But soon, when the complaints start rolling in about the effects of the new policy from subscribers who have been cut off, we’ll start the debate all over again.
Either way, it works for me. For now.
Thanks for the recap post, definitely helped as I sifted through the documentation.
Dan brings up an interesting side topic to all of this, which is that because all these service providers (Comcast isn’t the only one) are moving to more ‘politically correct’ policies, we may be hurt in the long run. It’s nice to say that application-agnostic should be the way to go, but in reality, if we have the ability to monitor applications and determine the ones that are abusing the system, this may be more helpful than targeting individuals, who make up a small minority. I also wonder about the long-run security implications if these ISPs aren’t monitoring application traffic at a more sophisticated level.
It’s all about recognizing the line between use of a technology and abuse of a technology. Are we throwing the baby out with the bathwater here?
Kyle
BreakingPoint Labs
Larry Seltzer said:
I agree. It doesn’t disclose much that we hadn’t already guessed, and it won’t be current practice for much longer, but it is particularly interesting in light of the accompanying propaganda war. Up until now, Comcast apologists have been able to claim somewhat vague but authoritative-sounding reasons for the absolute necessity of these measures. That particular line of rhetoric is now a thing of the past: the man behind the curtain is fully exposed, and must change tactics. Instead, this document portrays the management practices in the most favourable possible light, trying to make them seem as reasonable as possible (the weasel-words surrounding the use of TCP RSTs being a case in point), while also back-pedalling and dismissing the whole thing as unimportant since they were planning to do something else anyhow. This whole episode has been at least as much about managing public perception (and keeping regulators at bay) as it has been about managing networks!
Jason Livingood said:
That’s good to hear. If I get the time, I will examine the Future Practices document. Thanks for posting the link.
Dan Campbell said:
No, Dan, dropping packets is the appropriate thing to do under these conditions. TCP implementations are designed to detect and deal with this situation. A significant amount of research is invested in TCP congestion avoidance algorithms even now. Real congestion results in packet loss anyhow, so selectively dropping the biggest bandwidth consumers at near-congestion is a reasonable approach. Furthermore, dropping packets is the only strategy that provides the right kind of motivation for a network hog that won’t back off his transmission rates on the basis of hints. RST packets can potentially be ignored, and the Comcast incident spurred some research in that direction from P2P application designers.
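As a crude illustration of the standard endpoint reaction (a toy model of the textbook additive-increase/multiplicative-decrease behaviour that congestion avoidance algorithms are built around, not any particular TCP stack):

    # Toy model of TCP's additive-increase/multiplicative-decrease reaction to loss.
    # Real stacks (Reno, CUBIC, etc.) are considerably more elaborate.
    def next_cwnd(cwnd_segments: float, loss_detected: bool) -> float:
        if loss_detected:
            return max(cwnd_segments / 2.0, 1.0)  # multiplicative decrease on loss
        return cwnd_segments + 1.0                # additive increase per round trip

    cwnd = 32.0
    for rtt, loss in enumerate([False, False, True, False, False]):
        cwnd = next_cwnd(cwnd, loss)
        print(f"RTT {rtt}: cwnd = {cwnd:g} segments")  # 33, 34, 17, 18, 19

The point is that packet loss is a signal TCP endpoints already understand and respond to; a forged RST is not.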
I’d have to look at the Future Practices document in detail to determine whether you are right here. I assume, however, that you haven’t read it either, and your reasoning is simply, “you’re better off with just a few applications throttled, rather than your entire connection”. There may be something in this: Comcast’s present application-based interference is only applied to the upstream data channel, and anything new that imposes restrictions on the downstream channel may be disadvantageous to the customer. The devil is in the detail, and I haven’t read the details yet. It’s basically good news for the upstream channel, though: the customer can now set his own policies as to what packets get priority, and if he wants to dedicate his share of the channel to P2P, he can do so without interference (within the bounds of available bandwidth).
Kyle Flaherty said:
At the most basic level, providing network service means getting packets from A to B in an efficient, timely, and reliable manner. At this level of abstraction, the only possible kind of abuse to be considered is that of consuming more than one’s fair share of the network bandwidth. This has nothing at all to do with applications. At higher levels of abstraction, there are considerations such as Acceptable Usage Policies, and computer-related laws. These have nothing to do with network management in the low level, bandwidth-management sense. You could use some form of monitoring (not necessarily DPI) to aid in detection of these higher-level abuses, and arguably should do so in many cases, but it’s important to keep the issues as independent as possible. Such lines of demarcation are the only things that make large, complex systems sustainably manageable at all.
Well, there are other protocols besides TCP, and they don’t recover from dropped packets. VoIP and streaming video are getting more and more popular. The former is a direct threat to legacy telephony, and the latter, while eventually becoming a threat to legacy TV, drives up bandwidth and general network performance requirements dramatically. Any QoS scheme that does not prioritize by application but instead just drops packets during periods of congestion based on FIFO may affect these apps as well. TCP apps will recover of course, although they will be slower (the whole point), but real-time apps will suffer to the point where they can become unusable, and I suspect downloaders are also big Internet users who probably do a good bit of video as well. That’s why I say the P2P folks are worse off. Fine by me - as you say, it may be the incentive to cause the abusers to change their practices.

Comcast is probably laughing about all this. They’ve come up with a “network management” scheme that is apparently, at least for the time being, acceptable to net neutralists, privacy advocates, lawyers and law professors, maybe the FCC (and maybe even the file sharers!) Their application-agnostic approach should in theory have the NN folks who cry “don’t discriminate by application” off their backs, at least until the NN folks change their viewpoint to “don’t drop ANY of my packets”, which would be all the more amusing. Comcast’s policy should have the privacy folks off their backs since they are not doing (gasp!) that awful DPI and looking into packet payload. (Perhaps the privacy folks will turn their attention to the numerous other deployments of DPI in service provider networks.) With Comcast’s new policy, we are back to good old-fashioned rate shaping and policing, now by individual subscriber instead of by application. Wow.

What will be interesting to find out are the details of the overall policy. It appears to be at least two-fold:

- a monthly cap of 250GB, enforced first by a warning and then by a “temporary” suspension of 6 months, which for most people would mean “permanent” and would force them to switch providers if they can (and Comcast would have no problem losing their $40/month since they are effectively losing money on such subscribers)

- real-time throttling of subscribers based on what Comcast considers to be an abusive sustained transmission rate (whatever that may be) over some period of time, limiting the subscriber to less bandwidth (again, whatever that may be or what the effects are) for some period of time

I had been assuming the 250GB cap was for downloads, but yes, come to think of it, since the bigger part of the P2P issue for cable networks is on the upstream, perhaps the cap is a cumulative bi-directional summary? I have seen this before in how some Internet transit providers calculate committed data rates from which they bill customers. Does anyone know…is this clarified in Comcast’s policy documents? Even so, it would still only affect a small percentage of subscribers, but it would make a difference to the P2P folks if the cap included uploads and downloads.

Regarding their traffic shaping policy, it will be interesting to find out if that too is only applied in one direction (and which direction?) or is bi-directional, and either way what the parameters are that govern packet drops. Depending on the numbers, such a policy could affect more than just the P2P folks. Does Comcast’s policy document clarify this as well?

Basically, Comcast has covered themselves for now.
They have instituted a new policy that should alleviate the P2P issue and gain some level of, let’s say, “temporary tolerance” rather than acceptance from the NN, privacy, and other folks outraged at their soon-to-be-former practice. Don’t be too concerned about their ability to survive a media or legal frenzy or bad press – they’re a cable company. It goes with the territory, they are used to it and are pros at it, and their image and reputation were built a long time ago. But this is all just temporary. Wait for the screams and lawsuits to begin based on their new policy. We’ve only scratched the surface.