In “Net Neutrality: Discrimination, Competition, and Innovation in the UK and US”, Alissa Cooper and Ian Brown explore the relationship between two broadband regulatory regimes and their practical outcomes. The paper is of (paradoxical) interest as it (unintentionally) demonstrates how policy is being made without sufficient understanding of packet network performance.
This paper contains many common fallacies about performance. These fallacies are fuelling misdirected conflicts over broadband regulatory policy. The underlying (false) assumption is that ‘neutral’ (aka ‘non-discriminatory’) networks exist.
I am highlighting this paper as an exemplar of an endemic gap in scientific understanding. Bridging this gap will, I believe, transform the regulatory debate for the better.
Networks have performance constraints
The performance constraints of broadband need to be understood and respected, much like spectrum policy needs to fit within the immutable limits of the physics of electromagnetism.
The first error of the paper is to implicitly argue for a world that lies well outside of the mathematical constraints of statistical multiplexing. There is then an inevitable real-world failure to deliver on their utopian vision.
This reveals a second error, which is a mischaracterisation of the relationship between the ISP’s service performance intentions, the described traffic management rules, and the delivered operational experience.
The confluence of these errors leads to an unhelpful blame game between users, application developers and ISPs. As ISPs are stuck in the middle, they are unfairly singled out as the baddies.
The underlying issue is that the universe of discourse of the paper fails to reflect actual networks in operation. Examining the truth of its subsequent policy-related claims therefore becomes moot.
There is a schedulability constraint
The essence of packet networking is to statistically share resources, resulting in contention. Networks have multiple performance constraints, and one of them is the schedulability of that contention. The effectiveness of packet scheduling is in turn limited by two factors: our knowledge of the demand for performance, and the sophistication of the mechanisms for collectively constructing a matching supply.
The authors posit the existence of a world of good and predictable performance from ‘non-discriminatory’ networks. Such networks are presumed to have minimal knowledge of differential performance demand, and minimal mechanisms for differential supply.
For this world to sustainably exist, it requires one of two impossible things to happen. Either contention is always negligible (so the schedulability constraint doesn’t matter), or appropriate scheduling of contention happens by magic (so the constraint appears to be unfeasibly high).
In the former case you have to believe not only in a cornucopia of resources but also that more capacity always solves all performance issues. Neither is true. In the latter case, you have to believe in the unbounded self-optimisation of networks. How else do you explain that the right scheduling choices are made?
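To make the schedulability constraint concrete, here is a minimal sketch, assuming a textbook M/M/1 queue at a single bottleneck (my illustration, not anything drawn from the paper; the packet size and load levels are arbitrary). At any link capacity, the mean delay grows without bound as the offered load approaches that capacity, so raw capacity alone cannot make the scheduling problem vanish.

```python
# Minimal sketch, assuming an M/M/1 queue: mean sojourn time W = 1 / (mu - lambda)
# grows without bound as offered load approaches capacity, however big the link.

def mm1_sojourn_ms(link_mbps: float, load_fraction: float,
                   mean_packet_bits: float = 12_000) -> float:
    """Mean queueing-plus-service time per packet, in milliseconds."""
    mu = link_mbps * 1e6 / mean_packet_bits   # packets per second the link can serve
    lam = load_fraction * mu                  # offered packets per second
    if lam >= mu:
        return float("inf")                   # unstable: the queue grows without bound
    return 1000.0 / (mu - lam)                # M/M/1 mean sojourn time, in ms

for mbps in (100, 1_000, 10_000):
    for rho in (0.5, 0.9, 0.99, 0.999):
        print(f"{mbps:>6} Mbps link at {rho:.1%} load: {mm1_sojourn_ms(mbps, rho):9.3f} ms")
```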
A semantic model of ISP service
The authors repeatedly refer to ‘(non-)discrimination’, as the presumed technical means by which ‘neutrality’ is achieved. What does this mean?
The concept of ‘discrimination’ is only relevant to the extent that someone, somewhere is not getting the performance that they might have desired. It contains a philosophical trap for the unwary.
To see this trap we need to relate three things: the performance the ISP intends to deliver, the service as it is described, and the experience that is operationally delivered.
For example, we might have intended to satisfy all the performance needs of a typical family with a home worker; described the service as offering 1 Gbps with no differential traffic management; and found that in practice the service is unusable for interactive gaming during the evening video-watching peak period.
The truth of the matter is that not all performance demands can be simultaneously satisfied at all times at any feasible cost. Holding this in mind, what are the intentional, denotational and operational behaviours of a ‘non-discriminatory’ ISP service? Is such a thing even meaningful to discuss?
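To see how far the description can drift from the delivered experience, here is a rough simulation sketch. Every figure in it is my own invented assumption (a 50 Mbps shared bottleneck at the evening peak, eight bursty video streams, a game sending a small packet every 20 ms), not anything taken from the paper; the point is only that the headline access rate and a ‘no differential traffic management’ statement reveal nothing about the queueing delay the game traffic actually experiences.

```python
import random

# Hedged sketch: a single FIFO bottleneck shared by bursty video flows and a
# latency-sensitive game flow. All parameters are invented for illustration.
random.seed(1)

LINK_BPS = 50e6                 # assumed shared bottleneck rate during the evening peak
SIM_SECONDS = 30
STEPS_PER_SECOND = 1000         # 1 ms simulation steps
DT = 1.0 / STEPS_PER_SECOND
VIDEO_FLOWS = 8                 # assumed concurrent video streams sharing the bottleneck
VIDEO_BURST_BYTES = 2_000_000   # each stream fetches ~2 MB chunks in bursts
BURST_RATE = 0.25               # bursts per second per video flow (one chunk every ~4 s)

queue_bytes = 0.0               # bytes waiting in the shared FIFO buffer
game_delays = []                # queueing delay seen by each game packet

for step in range(SIM_SECONDS * STEPS_PER_SECOND):
    # Each video flow occasionally dumps a burst into the shared FIFO queue.
    for _ in range(VIDEO_FLOWS):
        if random.random() < BURST_RATE * DT:
            queue_bytes += VIDEO_BURST_BYTES
    # Every 20 ms a small game packet arrives and must wait behind the queued video.
    if step % 20 == 0:
        game_delays.append(queue_bytes * 8 / LINK_BPS)
    # The link drains the FIFO at line rate.
    queue_bytes = max(0.0, queue_bytes - LINK_BPS / 8 * DT)

game_delays.sort()
print(f"average video load: ~{VIDEO_FLOWS * BURST_RATE * VIDEO_BURST_BYTES * 8 / 1e6:.0f} Mbps "
      f"on a {LINK_BPS / 1e6:.0f} Mbps link")
print(f"median game queueing delay:  {game_delays[len(game_delays) // 2] * 1000:.0f} ms")
print(f"99th percentile game delay:  {game_delays[int(len(game_delays) * 0.99)] * 1000:.0f} ms")
```

Even with the shared link only about two-thirds loaded on average, the game packets routinely queue behind hundreds of milliseconds’ worth of video data.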
Uncovering false assumptions about semantics
You can guess the answer. The concept of ‘discrimination’ on offer has no objective, measurable technical meaning.
Broadband is stochastic, and performance is an emergent phenomenon. The reality is that ‘best effort’ networks offer arbitrary performance, and indeed may behave non-deterministically under load. That means any behaviour is a legitimate one! That’s the deal with the Internet architecture devil we regrettably made.
The authors appear unaware of this. For instance, they assume that localised denotational information about differential traffic management rules provides users with meaningful information about the global (emergent) operational behaviours. It does not.
By turning the (ahem) neutral term ‘differential’ traffic management into the judgemental ‘discriminatory’ one, the paper effectively asserts a belief in the intentionality of statistical flukes. The assumed relationship between the intentional and the operational does not exist!
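A small sketch illustrates the fluke (again, every parameter is invented for illustration): two identical flows share a plain FIFO bottleneck, the archetype of a ‘non-discriminatory’ mechanism, yet which flow comes off worse in any given run is decided purely by the random interleaving of their arrivals. Nobody intended either outcome.

```python
import random

def run_once(seed: int) -> tuple[float, float]:
    """Mean sojourn time (seconds) for two identical flows through one FIFO link."""
    random.seed(seed)
    service = 0.001                        # the link takes 1 ms to serve each packet
    next_free = 0.0                        # time at which the link next falls idle
    totals = {"A": 0.0, "B": 0.0}
    counts = {"A": 0, "B": 0}
    arrivals = []
    for flow in ("A", "B"):                # identical Poisson load from both flows
        t = 0.0
        while t < 10.0:
            t += random.expovariate(400)   # ~400 packets/s each; the link serves 1000/s
            arrivals.append((t, flow))
    for t, flow in sorted(arrivals):       # FIFO: served strictly in arrival order
        start = max(t, next_free)
        next_free = start + service
        totals[flow] += next_free - t
        counts[flow] += 1
    return totals["A"] / counts["A"], totals["B"] / counts["B"]

for seed in range(5):
    a, b = run_once(seed)
    worse = "A" if a > b else "B"
    print(f"run {seed}: flow A {a * 1000:.2f} ms, flow B {b * 1000:.2f} ms, worse off: {worse}")
```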
The network casino doesn’t care about you
It’s like going to the casino several times, coming out a winner, and then concluding that the purpose of casinos is to fund your family’s luxury lifestyle. This mistakes prior benevolent operational randomness at the statistical multiplexing casino for intentional, intelligent design.
This incorrect assumption that operational behaviours are intentional is absolutely pervasive, even among telecoms and networking cognoscenti. Humans are hard-wired to evaluate ‘intentional stance’, so we unconsciously imagine there is a ‘homunculus’ in the network doing good on our behalf when we experience goodness.
The real reason why ‘neutrality’ is impossible
Any specific operational behaviour is not intentional, no matter how strongly your intuition might feel it is. (It is theoretically possible to construct packet networks where the operational behaviours are intentional, but that is not how ISPs are currently designed or managed.)
The idea of ‘neutrality’ has focused attention on local scheduling mechanisms and whether they are ‘discriminatory’. But an impenetrable mathematical labyrinth separates the local mechanisms from the global user experience.
Ultimately we only care about the user experience, and not packets, since we want fairness for people. So regulation has become concerned with the ‘wrong’ side of the labyrinth. This is a subtle issue, but cuts to the core of the intractable ‘neutrality’ firestorm.
The blame game: who stole my performance?
In the authors’ worldview, the inability of networks to manufacture universal and eternal delight in performance gets treated as a fault of ISPs. The idea that it is simply a mathematical scheduling constraint is unthinkable.
Without ISPs as the baddies, the fallen angels of broadband might include users and developers, for their greedy demands and negligent engineering. Or maybe even lawyers and economists for having encouraged poor market incentives for use of a scarce resource.
To protect against the anxiety of these dangerous thoughts, we have to instead invent a universal entitlement to good performance. If I don’t get the performance I want, then someone, somewhere, is denying me my due. No, it’s worse than that! They are… DISCRIMINATING NEUTRALITY VIOLATORS!
The only alternative in the current network architecture model is to believe that arbitrary allocation of disappointment is fair and desirable. In an Alice in Wonderland twist, this performance caprice has been relabelled as ‘non-discriminatory’ in the paper.
This is to assert that equality of opportunity for unplanned misery trumps effective service delivery. A moment’s thought tells you that having all applications fail unacceptably, but all fail equally often, is not the basis of good broadband policy. So ‘neutrality’ is not merely impossible, it’s also absolutely undesirable!
The missing framework for reasoning
The underlying issue the authors face is the absence of a framework to even begin to talk about the problem. As a result, the paper’s position is akin to writing about spectrum policy in terms of the luminiferous aether. None of the conclusions can be depended upon, since the system under discussion is fundamentally misdescribed. This is a systemic problem, and not the fault of the authors.
I can now reveal I have pulled a naughty trick on you. This is not a review of the paper listed in the first sentence. I haven’t cited a single line of it. You could substitute practically any paper or book on ‘net neutrality’ into the opening sentence and make exactly the same critique. (That said, yes, I did read their paper, and yes, this is a valid critique of their argument.)
We regrettably have an increasingly large body of self-referential literature on the subject that makes identical technical and reasoning errors. Its authors have collectively disconnected from the reality of network performance by ignoring the mathematical ‘labyrinth’. Instead, they have created an alternate universe of fantasy ‘neutral’ networks.
The time has now come to rethink our approach to broadband policy. The first step is to abandon ‘neutrality’, both as a term and as a concept.
Martin,
Great post. I fear, however, you are still talking tech (and math) over the heads of most of the business leaders, lawyers, economists, policy activists, policymakers, and journalists who want their neutral-network cake without understanding that it never existed. Nor could it.
Some years ago, I managed to persuade the Dynamic Coalition on Internet Rights and Principles to embed in its charter on Internet Rights and Principles (now translated into 20+ languages) the phrase ‘Network Equality’ in place of the misnomer ‘neutrality’. I suggest equality is what people want, and what an open Internet can deliver, statistically, more often than not. Reaction? If you agree with me, your next post could perhaps be more uplifting and call for equal rights… for bits, users, ISPs and all other players in the Internet ecosystem. Yeah, it’s still politics, but I submit it is at least an Internet policy prescription that does not violate the laws of math, or of network operations and architecture.
Lee McKnight Syracuse University iSchool
It is pleasing to see more realistic examinations of the whole ‘Net Neutrality’ mess, as it has become.
That said, I express a minor, minor disagreement about the last sentence. In my opinion, ‘Net Neutrality’ (or, everyone is delighted as much as they want, scarcity be damned) is an ideal, but people are trying to use it as policy. Ideals are not a substitute for policy, and philosophy is not a substitute for process. People are in a sense trying to solve world hunger by saying, “Let’s pass a law that no one shall go hungry, and that will take care of everything!”, completely ignoring the reality of what it would take to implement such a thing: the actual policies and processes required.
And policies are never ideal - they are best effort. Policy always falls short of the castles in the sky people dream of. Policy is ugly. It is much easier to cling to the magical dream of ‘Net Neutrality’ and hope the magic bandwidth fairy banishes the problem of scarcity and logistics: “If we pass a law saying Net Neutrality must exist, then it will!” If such magic was possible, we would already have world hunger beaten, world peace achieved, and poverty and disease eliminated.
That said, we need to keep ideals around - we cannot completely abandon the ideal of neutrality, because ideals do help give direction. Policies are never perfect, but we can still try to choose the policies that move closer to an ideal over those that stray farther from it.
Martin,
Agree:
-net neutrality is a “fiction”
-but both sides are in echo chambers
Disagree:
-it is not about the math
-it’s about the business model
Suggest:
-a common framework (3 dimensions: geographic, operating, and market dispersions)
-which models supply and demand components consistently
-and distinguishes traffic flows across WAN/MAN/LAN/PAN boundaries
-and recognizes that for a publicly scaled inter-networking model across pure suppliers, pure consumers and supplier/consumers (prosumers) (and various shades of all 3) there are essentially 3 major layers (transport/network, controls, apps) and 7 minor layers.
-within each layer and box within this grid we find the same (or nearly similar) framework replicated, enabling supply/demand clearing down to the atomic (bit) level ex ante.
For example, we can model the price elasticity of digital wireless and predict marginal (and average) costs (and hence pricing) based on iterative assumptions (or educated guesses by competitive market actors) about future supply and demand. We did this in 1996 and predicted the outcomes accurately: 700+ minutes of use per month, $70 ARPU and 100%+ penetration. Back then, wireless was at <100 minutes of use, $40 ARPU and <10% penetration.
We can apply the same modeling to broadband access/5G, IoT, and infinite 4K video on demand, which are at the same stage wireless was at in 1996, and get these universally available and used within 10 years whilst investors still make their money back (and then some).
Prescription:
-if we understand and use this framework we find:
-mandated interconnection out to the edge is essential (Carterfone, Wi-Fi, etc…), offset by
-monitored settlements between layers (vertically) and across boundaries (horizontally)
-that clears rapidly depreciating supply (in each box) with continuously expanding (and bifurcating) demand north-south (between apps and infrastructure) and east-west (between the above actors).
For instance, any community, be it an “app” or an “enterprise” or a “socio-economic organization” or a “service provider”, can identify future demand and rapidly provision service at a price (revenues) that generates a return for those components (opex and capex).
Bill and keep (aka net neutrality) results in the complete opposite of the outcome its supporters hope to achieve: it fosters and perpetuates monopoly control, stifles demand, and is ultimately overpriced, not universal, and unsustainable. Precisely what we have today at the edge and core of the networks.
Result:
-the balkanized and uncoordinated edge access providers (what you refer to as ISPs) restructure into horizontally scaled layers of confederated exchanges (which still allows for some local and national govt oversight and input)
-vertically complete solutions can scale and provide universally inexpensive solutions whereby the cost of communications is directly tied to the value of the session/event.
Let’s have a discussion about this framework.
Cheers,
Michael