The foundational idea behind “net neutrality” is fairness through constraining ISP power over network mechanisms. The theory is this: if there is “non-discriminatory” local traffic management, then you get “fair” global outcomes for both users and application providers. There are thousands of pages of academic books making this assumption, and it is the basis of recent EU telecoms law.
This is a false application of a common carriage metaphor to a stochastic distributed computing system. The regulatory result is technically incompetent and practically unenforceable.
The core problem is one of showing intentionality for a “best effort” service whose performance is undefined. Is an ISP deliberately “throttling” the service to a particular application? Hard to tell when the service can legitimately do anything anyway! Flukes and faults look identical.
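To make that concrete, here’s a toy sketch (in Python, with entirely made-up numbers) of two throughput traces: one degraded by random congestion, one by a deliberate rate cap. From the outside, both just look like a noisy “best effort” service having a bad week.

```python
# Toy illustration with invented numbers: a random congestion "fluke"
# and a deliberate rate cap produce similar-looking external traces.
import random
import statistics

random.seed(42)

def fluke_trace(n=1000, base=25.0):
    """Throughput (Mbps) with intermittent random congestion events."""
    samples = []
    for _ in range(n):
        congested = random.random() < 0.5              # random congestion
        drop = random.uniform(6, 10) if congested else 0.0
        samples.append(max(0.0, random.gauss(base - drop, 3.0)))
    return samples

def throttle_trace(n=1000, base=25.0, cap=21.0):
    """Throughput (Mbps) under a deliberate but imperfect rate cap."""
    samples = []
    for _ in range(n):
        s = random.gauss(base, 3.0)
        if random.random() < 0.8:                      # cap only bites under load
            s = min(s, random.gauss(cap, 2.0))
        samples.append(max(0.0, s))
    return samples

for name, trace in (("fluke", fluke_trace()), ("throttle", throttle_trace())):
    print(f"{name:8s}  mean={statistics.mean(trace):5.1f} Mbps  "
          f"stdev={statistics.stdev(trace):4.1f}")
```

The summary statistics of the two traces overlap heavily; nothing in the external measurements tells you which one was the “wicked” one.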
Let’s just assume for a moment that this “throttling” concept is technically meaningful (it’s not), and unpack what this would imply. Let’s also say that one day the CEO of AT&T is thrashed on the golf course by the CEO of Netflix, and the next day the performance of Netflix on AT&T plummets. Could AT&T be a wicked and intentional “discriminator” engaged in “throttling”?
I’ve covered before how the performance properties of broadband are emergent and not engineered. What this means is that there is no guarantee of any limit on the nature, scale or frequency of any failure. (Engineering is, by definition, the act of taking responsibility for failures to meet constraints, like safety or cost.)
So if the measured performance of Netflix drops, is that evidence of throttling by AT&T? No. It simply isn’t possible, from external measurement alone, to show that there was any intention to make the performance drop. That’s the nature of emergence.
It could just be some new interaction of the Netflix codecs and the internal network protocols. It could be Apple changed a setting in its TCP stack that happens to affect Netflix (which uniquely shows new emergent properties due to its scale). It might be an OS update on a Cisco router that had some unintended side-effect. It could just be the effect of growing load. It could just be purely random as the system moves to a new “attractor” state. The list is endless, the interactions myriad.
So retrieving AT&T’s intention towards Netflix’s performance from network operation is mathematically and philosophically hopeless. You can never trace back through the stochastic “labyrinth of luck” to find out what the intention was.
OK, so what if AT&T was caught changing the traffic management rules that fateful day? Indeed, what if it was explicitly “deprioritising” traffic to Netflix? And this fact is even kept in a timestamped and digitally signed configuration log. Surely that’s evidence of “throttling”!
Nope, sorry. It could be that Netflix’s codec had been showing signs of rapidly oscillating between resolutions in AT&T’s lab, causing flow collapse and buffering. In anticipation of a customer QoE problem, AT&T had changed the resource scheduling to smooth the flow. This kept Netflix from oscillating and failing. They had even tested this “fix” on the live network.
When it was deployed at full scale, the emergent effect was to cripple the overall service to Netflix, as a new resonance phenomenon arose between the protocols and the scheduling.
Now, I’ve chosen a “trivial” example of a scheduling system where “deprioritise” has an obvious meaning. In general, we have scheduling systems which take one input traffic pattern and spit out another, and all these state machines interact in probabilistic ways to create a user experience. Even attaching a fixed meaning to “deprioritise” (and hence “throttle”) is nonsense.
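To see why, here’s a deliberately trivial sketch (Python, invented workloads) of a work-conserving weighted round-robin scheduler. The very same “deprioritise flow A” weight setting gives A a quarter of the capacity in one traffic mix and over 90% of it in another:

```python
# Toy sketch: a work-conserving weighted round-robin scheduler. The same
# "deprioritise A" weights give A wildly different outcomes depending on
# what the competing traffic happens to be doing. Workloads are invented.
from collections import deque

def run(queues, weights, rounds):
    served = {name: 0 for name in queues}
    for _ in range(rounds):
        budget = sum(weights.values())
        # First pass: serve each queue up to its weighted share.
        for name, q in queues.items():
            take = min(weights[name], len(q))
            for _ in range(take):
                q.popleft()
            served[name] += take
            budget -= take
        # Work-conserving pass: spare slots go to whoever is backlogged.
        while budget > 0 and any(queues.values()):
            for name, q in queues.items():
                if budget > 0 and q:
                    q.popleft()
                    served[name] += 1
                    budget -= 1
    return served

weights = {"A": 1, "B": 3}    # flow A is nominally "deprioritised" 1:3

# Mix 1: B heavily backlogged -> A gets ~25% of the service.
print(run({"A": deque(range(400)), "B": deque(range(400))}, weights, 100))

# Mix 2: B nearly idle -> the "deprioritised" A gets over 90% of it.
print(run({"A": deque(range(400)), "B": deque(range(30))}, weights, 100))
```

And real networks stack dozens of such mechanisms on top of one another, so the mapping from knob to outcome is vastly murkier still.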
What if the AT&T CEO had written to the AT&T CTO the next day, complaining of a sore back and a sorer ego, saying “let’s be evil to Netflix!”? Isn’t that evidence of intent to throttle?
You can guess the answer. The performance of the Netflix application is shifting all the time. The traffic management rules and router settings, etc. are all dynamic. The patterns from applications change every day as codecs and protocols are updated.
So that isn’t proof of an intent to nobble Netflix, even if a traffic management rule coincidentally changed the same day! Miscarriages of justice will be common; go read any introductory forensics (or medical) textbook and find the chapter on statistics.
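A quick back-of-the-envelope example of that chapter’s lesson, with entirely made-up numbers:

```python
# Base-rate arithmetic with invented numbers: on a big network, rule
# changes and visible performance drops coincide by chance all the time.
rule_changes_per_year = 200      # assumed config churn across the network
perf_drops_per_year = 50         # assumed visible QoE regressions
days_per_year = 365

p_change_on_any_given_day = rule_changes_per_year / days_per_year
expected_coincidences = perf_drops_per_year * p_change_on_any_given_day
print(f"Chance same-day coincidences per year: {expected_coincidences:.0f}")
# ~27 a year: "the rule changed that very day" is feeble evidence of intent.
```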
You can “turn the knobs” on the traffic management, which biases the emergent outcome towards a different one. You can keep turning the scheduling “knobs” until you get something you (dis)like. But you can’t “prove” that a particular knob setting was intended to give a particular result.
When the case comes to court, there’s no way to recreate the specific emergent conditions of the network at that moment in time, and show that this particular “knob setting” was intended to “knobble Netflix”. It might just be an accidental by-product of making Hulu and YouTube work better! Netflix’s QoE is an emergent and transient phenomenon, the result of a dynamic stochastic system. Got it yet?
There’s a litany of other reasons why ISPs are the target of a miscarriage of justice by net neutrality advocates, but this particular one is a “game over, please go home and shut up”. I’m going to say it loud: it is not possible (in general) to recover ISP intentional semantics from denotational or operational semantics. The foundational assumption of “net neutrality” is provably false.
In other words, the endless works of net neutrality advocates and certain legal academics are a pile of technical tosh and scientific slop. “Net neutrality” should not be fought, it should be laughed at. You can’t build a non-discrimination regime from “fair” treatment of packets, or apply common carriage to a distributed computing service.
If you have not mastered undergraduate electromagnetism, then please don’t do spectrum policy and expect to be taken seriously. If you have not mastered undergraduate computer science, then you are not qualified to “debate” regulating broadband performance. It is time the ISP industry found some courage to technically discredit this costly regulatory attack from illegitimate activists and ignorant academics.
One point though: the usual questions about throttling aren’t based on individual measurements; they’re based on behavior over a relatively long timeframe (weeks to months), combined with comparisons against both similar traffic from different sources and the same traffic with an altered source (e.g. Netflix accessed both directly and via a VPN). For instance, with your Netflix/AT&T example, other streaming services using the same codecs should react similarly to one degree or another, and Netflix traffic routed through a non-Netflix relay should react similarly. When only Netflix traffic is affected, with other services unaffected and with Netflix traffic routed through a non-Netflix IP address also unaffected, it raises suspicions. When complaints about the slowdown go unaddressed for months with no explanation as to why the problem’s occurring, one can only conclude that either AT&T’s network team is totally incompetent (highly unlikely) or AT&T’s management doesn’t consider the slowdown to be either undesirable or a problem that needs fixing.
If they give an explanation involving traffic volumes affecting their network, that makes the conclusion even worse. If it were traffic volume, all streaming services would be affected, because they all involve similar traffic loads. Non-streaming services that created similar traffic loads would also be affected, because they’d be causing the same traffic-volume problem. Yet the behavior doesn’t match the explanation: it follows one particular company’s service rather than actual traffic volume. In my experience (25+ years writing and debugging network software), when the behavior being observed doesn’t match the purported explanation of the cause, it means the explanation of the cause is wrong and the real problem’s something else. I’ve had many non-technical managers argue otherwise over the years, and every time that I’ve gone on to track down and fix the actual cause, it’s turned out I was correct and the manager wasn’t.
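For what it’s worth, the differential comparison described above is easy to sketch. Here is a purely illustrative Python mock-up with synthetic data (all path names and throughput figures are invented), showing the pattern that raises suspicion: only the direct path to one provider is slow.

```python
# Illustrative mock-up with synthetic data: the differential comparison
# described above. All path names and throughput figures are invented.
import random
import statistics

random.seed(7)

def weeks_of_samples(mean_mbps, n=500, jitter=4.0):
    """Synthetic long-run throughput measurements (Mbps)."""
    return [max(0.0, random.gauss(mean_mbps, jitter)) for _ in range(n)]

paths = {
    "netflix_direct":  weeks_of_samples(8.0),    # the suspect path
    "netflix_via_vpn": weeks_of_samples(24.0),   # same payload, different IPs
    "other_streaming": weeks_of_samples(24.0),   # similar codecs and volume
}

baseline = statistics.mean(paths["other_streaming"])
for name, samples in paths.items():
    m = statistics.mean(samples)
    print(f"{name:16s} mean={m:5.1f} Mbps ({m / baseline:4.2f}x baseline)")

# If "traffic volume" were the cause, the VPN path (same volume, same
# codec) and comparable streaming services would be slow too. They aren't.
```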