In many ways, the emotionally charged debate over Network Neutrality (NN) has been a lot like hunting unicorns. The hunt for the mythical horse might be filled with adrenalin and emotion, and would likely be quite entertaining, but the prize would ultimately prove elusive. As a myth, it is entertaining; but when myths become accepted as reality, all bets are off. The public and private debate over Network Neutrality has been filled with more emotion than rational discussion, and in its wake a number of myths have come to be accepted as reality. Unfortunately, public policy, consumer broadband services, and the survival of service providers' businesses hang in the balance.
Myth 1: The Internet can be “neutral” towards all types of applications
A neutral network implies that every packet and every application is treated the same, even under conditions of congestion. Let the network be agnostic and randomly drop packets. Let the network treat business class customer traffic exactly the same as residential traffic. Let non real-time traffic impact the service of real-time applications like voice.
The fact is that not all applications are the same; different applications have different tolerances for neutrality. A voice application is much more sensitive to packet latency, jitter, and packet loss than a file-sharing application is, because the latter can adapt its rate of transmission and recover lost packets. Applications like VoIP and gaming demand real-time priority because they require real-time interaction. Even interactive applications like web browsing, while not strictly real-time, can benefit from prioritization. Do you ever wonder why your web browsing right after dinnertime seems a bit sluggish? It has a lot more to do with network congestion than it does with the meal you just ate.
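To make those tolerances concrete, here is a minimal sketch in Python. The thresholds are illustrative assumptions (the voice latency figure follows the widely cited ITU-T G.114 guideline of roughly 150 ms one-way delay), not normative values:

```python
# Illustrative, rule-of-thumb tolerances per application class. The numbers
# are assumptions for this sketch; real deployments tune them per service.
TOLERANCES = {
    "voip":         {"latency_ms": 150,  "jitter_ms": 30,   "loss_pct": 1.0},
    "gaming":       {"latency_ms": 100,  "jitter_ms": 50,   "loss_pct": 2.0},
    "web":          {"latency_ms": 500,  "jitter_ms": 200,  "loss_pct": 5.0},
    "file_sharing": {"latency_ms": 5000, "jitter_ms": 1000, "loss_pct": 20.0},
}

def path_ok(app, latency_ms, jitter_ms, loss_pct):
    """Return True if the measured path is adequate for the application class."""
    t = TOLERANCES[app]
    return (latency_ms <= t["latency_ms"]
            and jitter_ms <= t["jitter_ms"]
            and loss_pct <= t["loss_pct"])

# The same congested after-dinner path: fine for a download, hopeless for a call.
print(path_ok("file_sharing", 400, 120, 3.0))  # True
print(path_ok("voip", 400, 120, 3.0))          # False
```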
Myth 2: Network management is unfair
The words “fairness” and “freedom” have been bandied about a great deal by the proponents of Network Neutrality. But if managing the network is somehow deemed “unfair,” we will see an era of true unfairness, not to mention unhappiness, with dire consequences for the future of our economy.
Unmanaged networks result in serious degradation of service availability and quality for all users. They also mean that customers will pay more for less, as providers are forced to continually build out their networks to stay ahead of massive growth in bandwidth consumption. Case in point: before it began managing network traffic, one broadband service provider had to double its access network capacity every six weeks just to keep up with bandwidth demand, with no new subscriber growth (a pace that, if sustained, compounds to several-hundred-fold growth in a year). Capacity costs with no new subscriber revenue will ultimately be passed on to users.
But managing the network does not mean taking away the “freedom” to access content and applications of your choice. It just means freedom within fairness. The best of both worlds. Nor does it mean closing subscriber accounts.
Myth 3: Network management violates privacy
When service providers deploy technology to manage their networks to improve capacity and quality of service, all they care about is the type of application—video streaming, gaming, web, or email, for example—not the content itself.
More importantly, managing the network in this fashion does not use or require the content of subscribers' communications or any personally identifiable information.
And lastly, managing the network in this fashion does not install or require any specific software on user machines.
Myth 4: DPI is just a P2P “Throttling” Technology
DPI is more than just a peer-to-peer management tool. Unfortunately, the particularly “blunt means” used by Comcast have led to some serious misrepresentation and misunderstanding of how the technology is actually used in today's networks. Few who have been following this debate realize that DPI is at the heart of ensuring fairness on the network.
DPI is a critical network element that provides information on how the network is being used, when it is used, and optionally, by which applications and groups of subscribers.
On a tactical level, this information supports decisions about capacity planning, investments in access networks and peering networks, and how to improve service quality, especially during peak hours when the network may be congested. On a strategic level, it provides the ability for service providers to transform their business models and their service brands, by offering a variety of service tiers and consumption-based billing models.
Overall, DPI provides the tools necessary to manage the network to ensure fairness, reduce costs and optimize revenues.
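As a rough illustration of those tiered, consumption-based billing models, here is a minimal sketch. The tier names, quotas, and prices are hypothetical, and the per-application counters stand in for the kind of usage data a DPI element can export:

```python
# Hypothetical tier table: (monthly_quota_gb, base_price_usd, overage_usd_per_gb)
TIERS = {
    "lite":    (20,  20.00, 2.00),
    "family":  (100, 35.00, 1.00),
    "extreme": (250, 60.00, 0.50),
}

def monthly_bill(tier, usage_by_app_gb):
    """Turn per-application usage counters into a consumption-based bill."""
    quota, base, overage_rate = TIERS[tier]
    total_gb = sum(usage_by_app_gb.values())
    overage_gb = max(0.0, total_gb - quota)
    return base + overage_gb * overage_rate

usage = {"web": 12.4, "video_streaming": 55.0, "p2p": 80.2, "email": 0.3}
print(monthly_bill("family", usage))  # $35 base plus 47.9 GB of overage at $1/GB
```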
Myth 1: The Internet can be “neutral” towards all types of applications
Status: true! The Internet can be and usually is neutral towards applications. No matter which ISP you use, most of the Internet is someone else, and although your immediate ISP may have particular policies in relation to specific applications, you can be pretty sure that every set of intermediate routers between you and any given random end-point won’t share that policy. Neutrality is the rule, not the exception, simply because the network is so diverse. Rules are expressed as divergences from neutrality.
Some applications have different preferences as to the kind of service they would like, and IP was originally designed with a “Type of Service” field to address this. Note that placing any particular value in this field has never guaranteed that a router would handle it differently—it's just another data point on which a router can base its handling, should it choose to do so. Since then, there have been standardisation efforts towards redefining this field for “Differentiated Services” [RFC 2474, RFC 2475]—a more complex embodiment of the same basic idea—and also “Explicit Congestion Notification” [RFC 3168].
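For the curious, an application can ask for such treatment by setting the DS field on its own sockets; whether any router honours the marking is, as noted, entirely up to the router. A minimal sketch in Python:

```python
import socket

# DSCP 46 ("Expedited Forwarding", the value conventionally used for voice)
# occupies the top six bits of the old ToS byte; the bottom two bits are ECN.
EF_DSCP = 46
tos_byte = EF_DSCP << 2  # 0xB8

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, tos_byte)

# Datagrams sent on this socket now carry the EF marking; any router along
# the way may honour it, ignore it, or re-mark it in transit.
sock.sendto(b"rtp-ish payload", ("192.0.2.1", 5004))
```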
Ultimately, however, applications should not rely on the network to provide any kind of preferential service. The Internet is a “best effort” delivery system, and applications must be designed on that basis.
Myth 2: Network management is unfair
Status: false! Network management is not unfair in and of itself. It is easy to devise extremely unfair policies and apply them to a network, but it’s also possible to facilitate fair allocation of resources this way, given that some participants might be greedy. “Fairness” is a pretty nebulous term, though, and there are a lot of differences of opinion as to what constitutes “fair”. In a commercial environment, where service is being bought and sold, “fair” has connotations of “fair trading”, meaning that “fair” can depend a great deal on what the customer reasonably expects to obtain in the way of service.
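One concrete notion of "fair" that network engineers often reach for is max-min fairness: every flow whose demand is modest gets that demand in full, and the greedy flows split whatever remains equally. A minimal sketch:

```python
def max_min_fair(capacity, demands):
    """Allocate capacity so no flow can gain without taking from a smaller one."""
    alloc = {}
    remaining = dict(demands)
    while remaining:
        share = capacity / len(remaining)
        modest = {f: d for f, d in remaining.items() if d <= share}
        if not modest:
            for f in remaining:      # everyone left is greedy:
                alloc[f] = share     # split the rest equally
            break
        for f, d in modest.items():
            alloc[f] = d             # modest flows get exactly what they asked for
            capacity -= d
            del remaining[f]
    return alloc

# One greedy P2P flow and two modest ones on a 10 Mbit/s link:
print(max_min_fair(10.0, {"voip": 0.1, "web": 2.0, "p2p": 50.0}))
# {'voip': 0.1, 'web': 2.0, 'p2p': 7.9}
```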
Myth 3: Network management violates privacy
Status: false! Like Myth #2 above, network management does not violate privacy in and of itself. That's not to say that it can't. If you are responsible for managing a network and want to do so in a way that does not violate privacy, you must be at least a little careful in your choice of mechanisms. After all, there's a lot of data flying about that could easily be used to violate privacy, particularly given the ISP's knowledge of the subscriber's actual identity.
Myth 4: DPI is just a P2P “Throttling” Technology
Status: false! It should come as no surprise that Deep Packet Inspection has more than one use. The notorious “Great Firewall of China” reputedly uses it, for example. Many network diagnostic tools also make use of it. There are also arguments against using it in particular contexts: for example, routers which base their behaviour on the specifics of higher protocol layers make for a brittle, unpredictable, and ultimately unmanageable network. If a router needs to know something about a packet, that data should be in the IP header.
Yes, true for the operator. But what is fair? The operator can reduce costs by limiting bandwidth for ‘unwanted’ traffic such as BitTorrent (because it uses a lot of capacity) or Skype (because it competes with the operator's services), but is this ‘fair’?
Allowing operators to shape traffic is decidedly good for the operator, as it is in control, and possibly a disaster for a free and open Internet. If DPI had been widely deployed and unregulated, new applications such as BitTorrent and Skype would have been killed in their infancy.
What adult expects the world to be fair? It's not, never has been, never will be. The Internet is no exception. The Internet is not (no longer) a Government-subsidized service provided for free. When it was run by the Gov't, it didn't benefit the masses like it has since it was commercialized. Not even close. The Internet is run by commercial companies who invest their own (or their shareholders') dollars in infrastructure and service offerings. If it were your investment, what would you want them to do? To a degree, they are limited by technology and the price points that are acceptable in the retail market. They need a return on the investment, else why do it?

Services are built around those restrictions, as well as the laws that play into it. There have to be some rules, policies, and limitations, and some are passed on to the subscriber base (if not, they get passed on as costs somewhere). Every technology that has ever been commercially deployed has its technical or business-related limitations. Even if the service providers do a poor job of alerting customers to how their service works and what the restrictions are, no one necessarily has an absolute right to do whatever they want. BitTorrent and other P2P apps wreak havoc with broadband service, and a few users ruin it for the majority who aren't doing P2P.
Look at it this way (apologies in advance to those who recently said how much they hate simplifying analogies): if you rent a car and the model you get can theoretically go 150 mph, does that imply that the rental car company is giving you express permission to (a) break the law, (b) endanger yourself or others around you, and (c) risk damage to their asset? Whether or not the rental contract spells them out (well or poorly), there are always limitations, and common sense and compromise have to play a part.
Good job on Brett Watson for seeing it right!
Myth #1 is obviously TRUE and has been through most of the history of the Internet. Only recently did hardware makers come up with service-provider-scale DPI devices cheap enough for large ISPs to use, and then the ISPs started “shaping” the network while claiming how necessary it all is. Well, we've had great success on the Internet without DPI.
And shame on the article for making up Myths #2, #3, and #4 all by itself and then debunking them as if it just proved something. Nobody thinks that network management is unnecessary, nobody thinks that network management itself violates privacy, or that DPI is just good for throttling.
Hardware-based DPI is at least 10 years old, Robb; flow managers, accelerators, and caches have been part of the middleware for a long time. People use them because they eliminate some of the waste and redundancy that's epidemic in IP-based network systems. The Internet should not be "neutral" toward all applications because all applications don't have the same costs, value to the user, and requirements. Rather, it should try to satisfy a broad range of user requirements by delivering VoIP quickly, peer-to-peer cheaply, and web traffic at a good compromise of low latency and low cost. Many of the more zealous neuts have condemned all uses of DPI as privacy-invading; David Reed is a good example.
Your puppet masters at Free Press, the big rich lobbying group in DC, have said all of those things.
And I do wish more of the “NN Squad” types actually knew what they were talking about, or knew packets from a hole in the ground for that matter…
1. Yes, there are patently unfair or even wrong ways in which DPI, per-application traffic shaping, etc., can be used.
2. It is perfectly possible to use them fairly, to provide an acceptable user experience to all your customers (and, say, prevent 1% of your users from leeching 90% of your bandwidth; see the token-bucket sketch after this list).
3. All-you-can-eat / unlimited broadband connections are dumb marketing, because you are banking on the pipes remaining underutilized. Try getting a guaranteed 2 Mbit pipe (i.e., a full T1 instead of cheap ADSL) and see how much that costs.
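As promised in point 2, here is a minimal sketch of such a per-subscriber limiter: a token bucket caps sustained throughput without caring which application the bytes belong to. The rates are made-up examples:

```python
import time

class TokenBucket:
    """Per-subscriber limiter: 'rate' bytes/s sustained, 'burst' bytes of headroom."""
    def __init__(self, rate, burst):
        self.rate, self.burst = rate, burst
        self.tokens = burst
        self.last = time.monotonic()

    def allow(self, nbytes):
        now = time.monotonic()
        # Refill tokens for the elapsed time, capped at the bucket size.
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if nbytes <= self.tokens:
            self.tokens -= nbytes
            return True
        return False  # over the limit: drop, queue, or de-prioritize the packet

# Cap a heavy user at ~1 MB/s sustained with a 4 MB burst allowance.
heavy_user = TokenBucket(rate=1_000_000, burst=4_000_000)
print(heavy_user.allow(1500))  # a typical packet passes while tokens remain
```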
Blame specific practices. Don't blame the tools used for them, or the business rationales that motivate them to be used wrongly, or paint them as evil. That kind of tactic is plain FUD and mudslinging rather than rational debate. Not to start a long, long debate, but it's starting to sound like gun control (ban the guns; "guns are tools, it is the human behind them that pulls the trigger"). DPI doesn't kill people, but the rest of the debate is just as emotionally charged as the gun debate. And it is attracting much the same kind of extremist arguments.
Partly true. The problem is that we have two parties, the customer and the operator, and it is the operator that holds the gun (you started it :) If DPI is implemented in a fair way, no problem, but what is fair? VoIP needs to be prioritised over P2P, but which VoIP services? (Answer: the one owned by or affiliated with the operator.) What will happen to 'good' services that are mistaken for 'bad' ones, e.g. new applications using P2P-type protocols? One of the major drivers of new Internet technologies and applications is that the network does not discriminate between kinds of traffic. If it does (and differently in different parts of the network), this will limit innovation. What would have happened if operators had deployed DPI with, say, P2P limitations, and this new app 'Skype' came along and was 'mistakenly' given low priority? Would operators give Skype higher priority to make it work in their networks and compete with their own VoIP service? No way: the operator must protect its revenue, and if it is allowed to use DPI to limit competition, the operator is obliged to do so. DPI is good for the operator but, in practice, not good for society. Just my 2 cents.
If the whole Comcast / Net Neutrality argument is purely based on stopping something early before it becomes the slippery slope, then fine. But you are bringing up things that aren't the case right now. Comcast uses DPI-based P2P throttling products, but they don't touch VoIP or Skype (I use both over Comcast with no issues). The courts have already played out the issue of a broadband provider, particularly a legacy telco, blocking or placing other VoIP traffic besides its own at a lower priority. They aren't allowed to do it.

And don't drink the Kool-Aid on the nonsense that there is so much legitimate P2P out there. Maybe one day that will be the case. Right now, upwards of 95% of BitTorrent and KaZaa traffic is the ILLEGAL STEALING OF COPYRIGHT "PROTECTED" MATERIAL, predominantly MP3s but also movies. If Comcast or other ISPs have to shut down their "network management" capabilities and let it all go, doesn't anyone have the fear that, while the few BitTorrent users dominate the available bandwidth, it will stifle the innovation for pretty much every other type of application? I know that if that happens and it affects my Skype and Vonage, I'll be forced to purchase voice service from, guess who, either the legacy LEC (Verizon) or the legacy cable provider (Comcast). Wouldn't that be stifling the innovation of Vonage and Skype, two companies that have made considerable headway in providing low-price, high-quality alternatives to legacy services?
Dan, I realise that you like to address every conceivable aspect of a problem simultaneously [example], but I find that this muddies the waters. It's therefore somewhat exasperating to me that you've introduced the matter of "ILLEGAL STEALING OF COPYRIGHT "PROTECTED" MATERIAL" (sic) into this discussion. It's entirely irrelevant to the matter at hand, and I would expect even Comcast to agree with me on that point. Comcast's war on BitTorrent has nothing to do with the legality of any particular instance of its use, and nor should it.

There are numerous aspects to the Neutrality issue, but they centre around one main theme: the question of whether "network management" can be used as a legitimate pretext for behaviour which is unreasonably discriminatory. The discrimination may be on the basis of the application type (such as a war on BitTorrent), remote network destination (such as a threat to de-prioritise traffic to Google unless they grease some palms), or other vested interest (such as a desire to sabotage a competing VoIP offering). ISPs, by merit of the fact that they sell Internet access as a service (and in some cases sell services which compete with Internet applications), should not have carte blanche to declare unilaterally any and all network management practices "reasonable". Ideally, no regulatory intervention would be required, since customers would simply change ISPs, but customer choice is fairly limited on average.

It so happens that the Comcast war on BitTorrent has utilised Deep Packet Inspection, and this is why DPI is also on the table at the moment. There is not only a question of whether the war on BitTorrent is reasonable in and of itself; there is also the question of whether DPI is a good network management technology to be using at all, even if it is used to effect a reasonable policy. The latter concern is not just a technical issue: DPI is, by nature, a technology which facilitates discrimination on the basis of application, and condoning it in certain specific instances tends to invite the abuses as well.

My present take on the matter is that DPI is not appropriate for a service provider. ISPs should simply aim to supply each customer with a reasonable share of bandwidth without regard to the application. This feels like a case of stating the entirely bleeding obvious, but the Telco and Cableco incumbents in question seem to have a deep reluctance to pursue that ideal.

DPI is entirely reasonable at the boundary of a private network, however, where the network owner may wish to limit the use of the network for whatever reason. For example, a BitTorrent-killer could be quite useful at the Internet boundary of an office network where use of the application is administratively prohibited.
Brett, I typically ignore your responses because they tend not to offer much more than sarcasm and criticism, particularly directed towards Richard Bennett, to whom you seem to show a particular disdain (and frankly, whose comments are typically on the money from what I've seen). Why you do this, I don't know, but there are plenty of other blogs out there where inane, childish chatter is the norm.

But since we are on the subject, I noticed that you have over 170 comments yet only 3 original posts and nothing in 2 years. Clearly you have ample time on your hands to read through the posts and make comments, yet such little original material? There's a reason there are more critics in this world than artists and creative people. It's because it is much easier to sit back and wait for someone else to put out original thoughts and then criticize them than to create something original yourself. Critics are a dime a dozen, and usually that is too much to pay. Then again, I guess someone who not only calls himself "The Famous Brett Watson" but actually abbreviates it TFBW is begging everyone not to take him seriously.

This is a complex debate. It can't be completely simplified to just the technical matter. The lawyers and law professors are muddying the debate by focusing only on the legal side, and most of the time they only focus on a portion of that aspect, or their arguments are inconsistent or incomplete. You want to simplify it to just the technical aspect, what is fair network management, and you constantly call on others like Richard to defend using the Sandvine / Cisco approach instead of other techniques. How about if you actually research it and add something to the conversation?

The file sharing issue is indeed relevant. I see a lot of NN folks putting out misleading information about how the ISPs are blocking all of this legitimate P2P traffic. The reality is that the majority of P2P is indeed illegal file sharing, and if that were not going on we would not be having this debate right now. I've seen the statistics. I couldn't care less about the music industry, but the problem right now is with a bunch of spoiled children who not only think you shouldn't need to pay for the creations of others, but have no regard for the other side effects it causes, like causing severe degradation to their neighbors' broadband service or, worse yet, possibly leading us all towards a usage-based model. No, Comcast doesn't care about saving the music industry, but it is the file sharing of music and movies that they are fighting, don't kid yourself. The products that some ISPs are using to throttle P2P were not necessary before P2P file sharing came on the scene.

A problem with the arguments I see is that many of the NN folks suggest the ISPs should just leave all traffic untouched without ANY suggestion as to what to do about the service degradation that WILL occur, or how to structure a viable business model that continues to allow for inexpensive, flat-rate broadband service without going out of business on infrastructure costs. The best you get might be the completely misguided statement like, "well, the ISPs should just upgrade their networks," without any thought to the costs or technical feasibility.

In order to properly debate this, the best I can do to boil it down is to three areas: legal, technical, and business model. Most people seem to pick only one area. It's not that simple. They all must be considered simultaneously.
Dan, the trouble with drawing the copyright aspect of the issue into the argument is that it downplays the damage being done to the legitimate users of the application. By analogy, just because ~95% of email is spam doesn't mean we should block all email. Nobody would stand for that kind of heavy-handed "remedy", because legitimate non-spam email is still a popular use of the application. Similarly, even if we dismiss the complaints of the "spoiled children" who use BitTorrent only to infringe copyrights, that leaves us with a significant number of users who suffer collateral damage in the war on BitTorrent. I appreciate that there is a need for network management so as to avoid congestion, but sabotaging specific applications is no way for a respectable network service provider to go about it. This remains true even when the application counts for the bulk of traffic and has a reputation of widespread illegitimate use.
That’s well on the way to becoming one of the classic logical fallacies.
The email spam analogy is interesting but ironically can work both ways. I know what you are saying, but spam has become such a problem that ISPs and businesses are using invasive tactics to thwart it. Spam filters often cause collateral damage to legitimate email, which at best might end up in your junk mail folder or at worst just never arrive. But your point is not lost. I merely wanted to point out the root of the issue, because I see a lot of disinformation spread about legitimate P2P, which right now is not the majority. Just because the tactics used to thwart an issue are questionable, illegal, or deserving of criticism, it doesn't absolve the root cause of the problem. If photocopying books and other copyright-protected material were as easy as it is to duplicate and trade music, software, and movies, maybe the courts would have taken a stronger stance 30 (??) years ago on Xerox and photocopiers. But the difficulty of standing at a copier photocopying each page of a 2-inch-thick, 1000-page Stephen King novel was its own discipline! In the digital age, it is too easy.
When companies like Sandvine and P-Cube developed their products, it was in direct response to Napster, BitTorrent, KaZaa, etc., a clear market demand. When Cisco, the biggest network vendor of all and one who often acquires companies, decided to buy their way into the market (which resulted in the P-Cube acquisition 3 or 4 years ago), they weren’t sitting around a boardroom speculating on some future need; they were dealing with immediate demand because of media file sharing.
If the whole Comcast/FCC thing leads to usage-based services, and some ISPs are already testing this model, there will be a lot of collateral damage to innocent bystanders in the form of bigger bills. At a minimum it will detract from the user's Internet experience when the amount of time they spend online and the size of the data they download preys on their minds. It would probably stifle innovation as well, for similar reasons.
Major ISPs like Comcast will probably tread lightly here, knowing the backlash that will occur, given that this has already played out in other services. The backlash would first be a significant increase in calls to their call centers, with subscribers initially angry only about their bills but then becoming increasingly agitated as they find themselves on hold for an hour, a la the AOL scandal in 1997. Eventually it will cause churn, and the bigger ISPs know this.
What may also happen is that someone will create a freeware application you can load on your PC that measures your traffic and creates a simple end-of-month report that you can use to challenge your Internet bill, pointing out the unsolicited traffic your Internet link received (port scans, attacks, etc.) or viruses that generated outbound traffic and should have been taken care of by the ISP's anti-spam filter. Your ISP may say you owe X, but you can say no, I didn't ask for that traffic, I should only have to pay Y. The application could even be clever enough to compare the size of downloaded files with the actual amount of data transmitted and, if there were retransmissions because of network errors or congestion, suggest to the ISP that that is not the subscriber's problem either and we aren't paying for the extra traffic. So the arguments would continue. ISPs would have to find that area between the majority of their subscriber base and the download junkies and set the usage maximum there, to penalize just the abusers.
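The core accounting for such a tool is simple enough to sketch. Here is a minimal example; the flow-record format is invented for illustration, standing in for whatever the application would capture from the wire:

```python
def usage_report(flows, my_ip):
    """Split inbound bytes into traffic the subscriber initiated ("billable")
    and unsolicited traffic such as port scans and attacks ("disputed")."""
    billable = disputed = 0
    for f in flows:
        if f["dst"] != my_ip:
            continue  # only inbound traffic counts here
        if f["initiator"] == my_ip:
            billable += f["bytes"]
        else:
            disputed += f["bytes"]
    return {"billable": billable, "disputed": disputed}

flows = [
    {"dst": "10.0.0.5", "initiator": "10.0.0.5",     "bytes": 1_200_000},  # a requested download
    {"dst": "10.0.0.5", "initiator": "198.51.100.9", "bytes": 4_096},      # an inbound port scan
]
print(usage_report(flows, "10.0.0.5"))  # {'billable': 1200000, 'disputed': 4096}
```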
But ISPs may have to consider offering anti-spam filters, anti-virus, and IDS/IPS as part of the service. Some already offer these as a value-added service or simply to protect their own networks, but it may become critical in a usage-based service. And, of course, spam filters, AV, and IDS/IPS are significantly more intrusive, at least from the privacy perspective, than are bandwidth management appliances, even if the BMAs apply DPI in a more brute-force manner (as one earlier post put it). BMAs don't look into the content for the purpose of surveillance, and certainly not to store it for later retrieval, as some blog posts try to lead people to believe (could you imagine the storage capacity you would need? Only Google could do it). So, the Net Neutrality purists who preach the privacy side of the argument will have another dilemma to face. And round and round we go.
Ultimately, until the day comes when we all have fiber to the home and bandwidth really is “free”, we will need to deploy a variety of tools in service provider networks to ensure service quality and reasonable prices. Whether or not those tools are “fair” depends on what perspective you take.
You make it sound like it’s a choice between metered access and unlimited access. As I’ve pointed out elsewhere, the bulk of broadband services in Australia are neither metered nor unlimited, but rather limited on a gigabytes-per-month basis, and throttled back to dial-up speeds for the remainder of the period if the quota is exceeded. The price per month is generally fixed, but a higher price is paid for a higher bandwidth cap. They generally include spam-filtered email in the service, of course.
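The plan logic is trivial to state. A minimal sketch, with hypothetical speeds and quota rather than any real ISP's tariff:

```python
def current_speed_kbps(bytes_used_this_month, quota_gb,
                       full_speed_kbps=8_000, throttled_kbps=64):
    """Fixed-price, quota-capped plan: full speed until the monthly quota is
    consumed, then dial-up-like speed for the rest of the period. The bill
    never changes; only the speed does."""
    if bytes_used_this_month / 1e9 < quota_gb:
        return full_speed_kbps
    return throttled_kbps

print(current_speed_kbps(12e9, quota_gb=25))  # under quota: 8000 kbps
print(current_speed_kbps(30e9, quota_gb=25))  # over quota: 64 kbps
```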
If you want to learn how decent broadband services can be provided without charging by the byte or sabotaging particular applications, look into the Australian broadband market. I don’t say that it’s perfect, but it’s clear that the US market can learn a thing or two from it—assuming that you’re looking for answers, rather than excuses, of course.
I use the terms “metered” or “usage-based” liberally to sum up anything that is not flat-rate, no-usage-maximum, “all you can eat” service, rather than try to come up with terms for the 50 different permutations of, well, “usage-based” service. And I have been a part of, or at least have seen, many pricing models and technical strategies for Internet service deployed globally, not just in the US.
The language you’ve used assumes variable pricing for limited services: specifically, you introduce the “bigger bills” bugbear, and all the associated fuss over whether traffic was solicited or not. For plans which are fixed in price but bandwidth-capped, these are not noteworthy issues, and a light user can potentially save money relative to an “unlimited” plan. Your summary of usage-based plans seems to have treated the whole as though it were a particular subset: the subset of variably-priced plans. That subset is not representative of the whole.
Let me see if I understand you correctly. The impression that you’ve been giving me here is that a single one-size-fits-all unlimited plan is better than any tiered, usage-based system, including usage-based variants such as fixed-price plans that engage a throttle when a quota is exceeded. After all, you have predicted bad consequences if the FCC ruling results in a shift to usage-based services. Correct me if I misunderstand you.
If this is a fair description of your stance, then I disagree with it. Speaking for myself, I’m glad that I have a choice of fixed-cost usage-based plans rather than a one-size-fits-all unlimited plan. More objectively, the passing on of some bandwidth costs to the user in this way encourages moderation: a person on an unlimited plan has a perverse incentive to maximise his value by transferring data gratuitously, whereas a person on a tiered, limited plan has an incentive to pick a reasonable tier and ration usage accordingly.