I’m kinda foxed by some of the discussion going on about “Net Neutrality”.
The internet was designed from the outset not to be content neutral.
Even before there was an IP protocol there were precedence flags in the NCP packet headers.
And the IP (the Internet Protocol) has always had 8 bits that are there for the sole purpose of marking the precedence and type-of-service of each packet.
It has been well known since the 1970’s that certain classes of traffic—particularly voice (and yes, there was voice on the internet even during the 1970’s)—need special handling.
Voice-over-IP (VOIP) requires that networks not be neutral; if tiny VOIP packets have to fight against large HTTP packets for bandwidth and space in router/switch queues, then conversational VOIP quality will be very poor and we may as well concede the voice game to the incumbent telcos.
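To make those precedence bits concrete, here is a minimal sketch (Python on a Linux-ish system; the code point and peer address are illustrative, not a recommendation) of an application marking its own UDP traffic with the DSCP “Expedited Forwarding” code point, the marking conventionally used for VOIP media. Whether any router along the path actually honors the marking is, of course, up to the providers in between.

```python
import socket

# A minimal sketch of setting the IP type-of-service / DSCP byte on a socket.
# EF ("Expedited Forwarding", DSCP 46) is the code point conventionally used
# for VOIP media; the destination address below is purely illustrative.
DSCP_EF = 46
TOS_BYTE = DSCP_EF << 2      # DSCP occupies the top 6 bits of the old TOS byte

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, TOS_BYTE)

# Every datagram sent on this socket now carries DSCP 46 (EF) in its IP header.
sock.sendto(b"rtp-payload-goes-here", ("192.0.2.10", 5004))
```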
Maybe the heat comes from the question of who gets to mark traffic as having precedence—the user or some provider?
But how can we trust users not to mark all their traffic as having overriding high priority? And if we don’t trust the users, and instead impose policing and admission control at the edges where the user’s packets enter the internet, we begin to have the scent of provider-based priority marking.
Provider discrimination has already existed for a long time, often for purposes of self-protection or to induce better sharing of network resources. For example, providers often disfavor and rate limit ICMP echo requests and replies. And router vendors offer things like “fair queuing” (a means to more equally distribute resources among flows) and “Random Early Drop” (RED), a mechanism that actually throws away perfectly good packets in order to penalize over-aggressive flows and coerce them into socially acceptable TCP congestion back-off.
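For those who like to see the mechanism rather than just the name, here is a toy sketch of the RED idea (a deliberate simplification of my own, not any vendor’s implementation, and with purely illustrative thresholds): the router keeps a smoothed average of its queue depth and, once that average crosses a lower threshold, starts discarding arriving packets with a probability that grows as the queue fills.

```python
import random

# Toy sketch of Random Early Drop (RED): track an exponentially weighted
# moving average of queue depth; between two thresholds, drop arriving
# packets with a linearly increasing probability.  Thresholds and weight
# are illustrative, not recommended values, and the real algorithm has
# further refinements omitted here.
MIN_TH, MAX_TH = 5, 15        # queue-depth thresholds (packets)
MAX_P = 0.1                   # drop probability when the average hits MAX_TH
WEIGHT = 0.002                # EWMA weight for the average queue depth

avg_depth = 0.0

def should_drop(current_depth: int) -> bool:
    """Decide whether to discard an arriving packet."""
    global avg_depth
    avg_depth = (1 - WEIGHT) * avg_depth + WEIGHT * current_depth
    if avg_depth < MIN_TH:
        return False                      # queue is healthy; enqueue
    if avg_depth >= MAX_TH:
        return True                       # queue is in trouble; drop
    # Drop probability rises linearly between the two thresholds.
    p = MAX_P * (avg_depth - MIN_TH) / (MAX_TH - MIN_TH)
    return random.random() < p
```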
The RSVP and Integrated Services approach to end-to-end quality-of-service faded away in the face of provider resistance and the kind of inter-provider jealousy that is natural when providers compete with one another. And I haven’t seen much reason to believe that DiffServ packet markings actually survive end-to-end.
(Just before we were acquired by Cisco, I implemented a fully functional, IP-multicast-based RSVP client. Woof! That was one seriously complicated protocol!)
What I’m getting at here is this: The internet was born with an element of discriminatory treatment of traffic, and there are good technical reasons why such discrimination is valuable, particularly for VOIP. So it would be plain wrong to say that the internet must be perfectly fair to all traffic. What we need is a line, a fuzzy line, that tells us when such discrimination moves out of the category of being useful and into the category of predatory.
My own sense is that this fuzzy line needs to be based on the idea that traffic discrimination is OK if it is done with the actual or implicit consent of the user (or users) and servers to improve whatever it is they are using the internet for. But if it is done by providers for reasons divorced from self-protection (ICMP rate limiting being an example of legitimate self-protection), such as to squeeze more dollars out of users or to coerce their choice of providers, then the traffic discrimination is wrong.
Our guide in this should be the end-to-end principle and my own First Law of the Internet.
Featured from the CaveBear blog.
Karl, I agree with much of what you say, but I’m concerned that some of your remarks here create more confusion than they dispel. In particular, the idea that the Internet “was designed from the outset not to be content neutral” is dangerously misleading. It was designed from the outset with the ability to handle packets in different ways, but this has nothing whatsoever to do with the payload—the “content”. A router is behaving in an entirely content-neutral manner if it makes all its routing decisions based on the IP header. It could still be non-neutral in other ways, but it is content-neutral.
You then say that VOIP “requires that networks not be neutral”. This isn’t so. Adequate VOIP requires certain latency and throughput—nothing more. A network that treats all packets as equals can sustain VOIP so long as the current load allows the latency and throughput to be maintained. An overloaded network can’t sustain VOIP whether it is neutral or not, unless it selectively kills off certain streams entirely. Exactly what technical solution makes optimal use of network resources is an open question, but VOIP absolutely does not require non-neutrality of any kind.
Also interesting is the question of who gets to specify how packets are to be treated, particularly when the packet is routed across network boundaries, and thus through different terms of service. That’s a deep enough issue that I won’t delve into it here, however.
Your conclusion, that non-neutrality is “OK if done with the actual or implicit consent” of the involved parties, is not too far removed from my own previous remarks on the subject. I have to say that your “First Law of the Internet” borders on platitude, however. It expresses an admirable sentiment—the kind with which everyone would agree in principle—but there are other “laws” at work here. If a particular network provider believes that the First Law of Business is “maximise profit”, then which law do you suppose is going to take precedence, so far as they are concerned?
Everybody obeys the First Law of the Internet, except when it conflicts with their other agendas.
What you say is reasonable and sensible - we don’t have to quibble over whether the internet was designed to be non-neutral or, rather, was designed with the foresight to accommodate uses that need to differentiate among traffic classes. The point I wanted to make was that there are good reasons to differentiate and there are bad reasons. And I think we both agree on the rough distinction between those classes.
As for VOIP needing a helping hand: You are right, to a degree, that if links are not saturated, VOIP works OK. But that’s in a world of fast links (10 megabit Ethernet and faster) in which serialization delay doesn’t really contribute much to the total end-to-end delay. But when we get to relatively slow access links (typical DSL), or some of the really distant wireless technologies (I’ve had to pump VOIP through a sub-64Kbit satellite channel), then it really hurts when a small VOIP packet has to wait for a long HTTP packet to finish occupying an outbound interface.
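To put rough numbers on that (my own back-of-the-envelope figures, with illustrative link speeds, not measurements): the time a small VOIP packet can spend stuck behind one full-size HTTP packet depends almost entirely on the speed of the outbound link.

```python
# Back-of-the-envelope serialization delay: how long a full-size (1500 byte)
# packet occupies an outbound interface, i.e. how long a small VOIP packet
# queued behind it must wait.  Link speeds are illustrative.
MTU_BITS = 1500 * 8

for name, bits_per_sec in [("64 kbit/s satellite", 64_000),
                           ("256 kbit/s DSL uplink", 256_000),
                           ("10 Mbit/s Ethernet", 10_000_000)]:
    delay_ms = MTU_BITS / bits_per_sec * 1000
    print(f"{name}: {delay_ms:.1f} ms")

# 64 kbit/s satellite:   187.5 ms  (blows the entire voice delay budget)
# 256 kbit/s DSL uplink:  46.9 ms  (a third of the budget gone in one hop)
# 10 Mbit/s Ethernet:      1.2 ms  (negligible, as noted above)
```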
We have to remember that acceptable-quality conversational voice starts to become impossible when the one-way delay from mouth to ear reaches about 150 milliseconds. With typical VOIP protocols chunking out 20-millisecond samples, we already begin by losing nearly 15% of our time budget even before each packet leaves the user’s VOIP handset. Add in typical net delays (much of which is speed-of-light propagation, but a goodly part of which comes from queuing in routers, even on non-overcommitted links) and we can easily exceed the 150-millisecond time budget. And when the receiver has to add pseudo-delay in order to smooth out the variation in transit time, i.e. jitter, we can easily end up with a low quality VOIP call that has users talking over one another.
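The arithmetic behind that budget, using illustrative figures of my own rather than measurements, looks roughly like this:

```python
# Rough mouth-to-ear delay budget for conversational VOIP.  Every figure
# below is an illustrative assumption, not a measurement.
BUDGET_MS = 150

components = {
    "packetization (one 20 ms sample)": 20,   # ~13% of the budget, as noted above
    "codec + handset processing":       10,
    "propagation across the net":       40,
    "router queuing along the path":    30,
    "receiver jitter buffer":           40,
}

total = sum(components.values())
for name, ms in components.items():
    print(f"{name:36s} {ms:3d} ms")
print(f"{'total':36s} {total:3d} ms of a {BUDGET_MS} ms budget")
# With numbers like these the budget is nearly consumed; any extra queuing
# delay (a large HTTP packet on a slow uplink, say) pushes the call into
# talk-over territory.
```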
On top of that, I’m starting to get the feeling that many edge providers, which seem to be the same ones who are doing most of the sword rattling about charging for “premier” service, are oversubscribing their backhauls.
All of that does not bode well for VOIP in an undifferentiated service world.
As for my First Law of the Internet - Oh, I wish it were a platitude. However, there are so many groups that feel that the net is not to be used however the users wish - ICANN, for example, very clearly says that we can’t create internet naming, at least not at the top-level-domain layer, unless it meets ICANN’s criteria. And we see countries, such as China, that want to filter, and companies, such as Google, that are more than willing to accommodate. Peer-to-peer technologies have been condemned even when used in ways that don’t abuse copyrights.
My First Law of the Internet goes a step further in that it recognizes that there are competing interests and it establishes a rule that it is the burden of the challenger to demonstrate that a use should be denied rather than the burden being on the user to prove that the use is innocuous. That statement of the burden of proof is absent from previous formulations of internet openness. In fact, to my mind, the fixing of this burden is actually the most important aspect of my First Law.
There are a growing number of forces that are pushing the internet into the mold of the old telco world - static and immutable. These forces argue that the internet is too important to risk, and that any change must demonstrate its safety before being allowed. There is merit in this argument. But we should answer that argument by requiring those who make it to come up with a compelling demonstration that the accused innovation is dangerous rather than requiring the innovator to prove perfect safety.
I have heard that there is an academic method in which one graphs the rate of innovation over time - I would suspect that in most technologies such a curve would have an initial flat zone where the basic research is being done, then a sharp up-ramping as the technology is deployed and innovation is rapid, and then another flat zone as the technology settles down. In this academic method, the time to impose regulatory systems, which is but another way of saying that the burden of demonstrating safety shifts to the innovator, should not occur until that second flat zone begins.
The internet is still on that steep upturn of innovation - and according to that academic method it would be premature to impose regulation, and thus innovators should be encouraged, or at least not denied, by allowing them to experiment, even if that means that occasionally there will be some damage to the net.
That is why I think my First Law of the Internet is more than a platitude.