Why Telecoms Regulators Must Ignore ‘Lawgeneers’

My attention was drawn recently to the article Europe Is About to Adopt Bad Net Neutrality Rules. Here’s How to Fix Them by Barbara van Schewick from Stanford Law School. Much as I would like to spend my morning doing other work, I can see the imminent harm that these (and many similar) proposals would cause to the public. As a responsible professional and a native European, I would like to summarise why it is imperative for EU regulators to ignore these siren calls (if they want to retain their legitimacy).



‘Neutral’ networks do not exist

The idea of ‘neutrality’ is not an objective and measurable phenomenon, as shown by the recent work published by Ofcom. It is an invention of the legal classes attempting to force novel distributed computing services into a familiar carriage metaphor.

‘Neutrality’ has an evil twin, namely ‘discrimination’. Both rest on a fundamental misunderstanding of the relationship between the intentional and operational semantics of broadband. Neither concept is a term of art in performance engineering or computer science.

No packet network has ever been ‘neutral’, and none ever will be. No scheduling algorithm is ‘(non-)discriminatory’. The assumed intentionality of random processes is false. The idea of ‘defending’ neutrality is thus pure intellectual nonsense.
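
To make that concrete, here is a minimal sketch (with invented traffic patterns, not taken from any cited work): two flows share a single FIFO link that applies no policy whatsoever, yet they experience very different delays simply because one sends smoothly and the other in bursts. Even the supposedly ‘neutral’ baseline treats flows unequally in outcome.

```python
# A minimal sketch, not from the article: identical FIFO treatment, unequal outcomes.
import random

random.seed(42)
SERVICE_TIME = 1.0   # time for the link to transmit one packet
N = 5000             # packets generated per flow

def arrivals(mean_gap, burst):
    """Arrival times for a flow sending 'burst' packets per sending event."""
    t, times = 0.0, []
    while len(times) < N:
        t += random.expovariate(1.0 / mean_gap)
        times.extend([t] * burst)
    return times[:N]

# Flow A sends smoothly; flow B sends the same average load in bursts of five.
flows = {"A": arrivals(mean_gap=2.5, burst=1),
         "B": arrivals(mean_gap=12.5, burst=5)}

# Merge both flows and serve them in strict first-come-first-served order.
events = sorted((t, name) for name, times in flows.items() for t in times)

link_free = 0.0
delays = {"A": [], "B": []}
for arrival, name in events:
    start = max(arrival, link_free)           # wait if the link is busy
    link_free = start + SERVICE_TIME
    delays[name].append(link_free - arrival)  # queuing plus transmission delay

for name, d in delays.items():
    d.sort()
    print(f"flow {name}: mean delay {sum(d)/len(d):5.1f}, "
          f"99th percentile {d[int(0.99 * len(d))]:5.1f}")
```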

Regulators who attempt to legislate ‘neutral’ networks into existence will find themselves in collision with mathematical reality.

Disconnected from actual constraints

Networks have resource constraints. One is capacity, and another is ‘schedulability’. The proposals to prevent ‘class-based discrimination’ fatally ignore the scheduling constraints of broadband.

They require a cornucopia of resources (that don’t exist) to resolve all scheduling issues (which can’t happen) via an unbounded self-optimisation of networks (that is beyond magical).
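
As an illustration of the schedulability point, here is a minimal sketch with invented flows and figures: before debating how a scheduler should share a link, one can check whether any scheduler could satisfy the stated demands at all. In this toy example the sustained rates alone exceed the link, so no traffic-management rule can conjure the missing capacity.

```python
# A minimal sketch with hypothetical flows: is this demand schedulable at all?
from dataclasses import dataclass

@dataclass
class Flow:
    name: str
    rate_mbps: float        # sustained throughput the application needs
    burst_kb: float         # largest burst it may send at once, in kilobytes
    delay_budget_ms: float  # queuing delay it can tolerate

LINK_MBPS = 100.0           # hypothetical access link

flows = [
    Flow("video call",   5.0,   30.0,   20.0),
    Flow("cloud gaming", 15.0,  60.0,   10.0),
    Flow("4K streaming", 25.0,  500.0,  200.0),
    Flow("bulk backup",  70.0,  1500.0, 1000.0),
]

# Necessary condition 1: the sustained rates must fit on the link at all.
total_rate = sum(f.rate_mbps for f in flows)
print(f"capacity: {total_rate:.0f} of {LINK_MBPS:.0f} Mb/s wanted "
      f"-> {'ok' if total_rate <= LINK_MBPS else 'INFEASIBLE'}")

# Necessary condition 2: even with the link to itself, each flow's burst must
# drain within its delay budget; if not, no scheduling order can rescue it.
for f in flows:
    drain_ms = (f.burst_kb * 8) / (LINK_MBPS * 1000) * 1000   # kb / (kb/s) -> ms
    verdict = "ok" if drain_ms <= f.delay_budget_ms else "INFEASIBLE"
    print(f"{f.name:>13}: burst drains in {drain_ms:6.1f} ms "
          f"(budget {f.delay_budget_ms:6.0f} ms) -> {verdict}")
```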

Regulators who attempt to direct traffic management will find themselves sabotaging the customer experience and a sustainable cost structure. They will also be held accountable for the global QoE outcomes of their interventions at the level of local mechanisms. This won’t end well.

There is no entitlement to performance

Taking this issue further, discussions around ‘throttling’ or ‘slowing down’ implicitly assume that there is some kind of entitlement to good performance from ‘best effort’ broadband. Yet there is nothing ‘best’ or ‘effort’ about it.

The service’s performance is an emergent effect of stochastic processes. Performance is arbitrary, and potentially nondeterministic under load. Anything can happen, good or bad! That’s the ‘best effort’ deal with the performance devil.
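
A minimal sketch of that point, using an invented single-queue model: the same best-effort queue is offered the same 95% load several times over. The averages look similar, but the tails differ wildly from run to run, which is exactly the arbitrariness at issue.

```python
# A minimal sketch, invented parameters: same load, very different tail outcomes.
import random

def one_run(seed, n_packets=20_000, load=0.95):
    random.seed(seed)
    clock = link_free = 0.0
    delays = []
    for _ in range(n_packets):
        clock += random.expovariate(load)             # arrivals, mean gap 1/load
        start = max(clock, link_free)
        link_free = start + random.expovariate(1.0)   # service time, mean 1
        delays.append(link_free - clock)
    delays.sort()
    return sum(delays) / len(delays), delays[int(0.999 * len(delays))]

for seed in range(5):
    mean, p999 = one_run(seed)
    print(f"run {seed}: mean delay {mean:6.1f}   99.9th percentile {p999:7.1f}")
```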

That means that when disappointment happens (as it must), its effects are unmanaged. So how does unpredictable and arbitrary performance help the development of the market? It doesn’t.

Given this dynamic, it seems perfectly reasonable for ISPs to bias the dice to ‘speed up’ apps whose performance lags, and ‘slow down’ ones that are being over-delivered resources. Think of it as ‘less arbitrary disappointment’, rather than ‘better effort’.
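
Here is a minimal sketch of what ‘biasing the dice’ might look like, with invented flow names and targets: whenever the link is free, serve whichever flow has so far received the smallest fraction of the throughput it needs, rather than whichever packet happened to arrive first. Nobody gets everything, but the disappointment is shared predictably rather than arbitrarily.

```python
# A minimal sketch, hypothetical flows and targets: share shortfall, not luck.
import collections

targets = {"video call": 5.0, "gaming": 15.0, "backup": 70.0}   # Mb/s each wants
served = collections.defaultdict(float)                         # Mb delivered so far

LINK_MBPS = 50.0   # the link cannot carry the 90 Mb/s the flows want in total
PACKET_MB = 0.012  # one ~1500-byte packet, in megabits
elapsed = 1e-9     # simulated seconds (epsilon avoids dividing by zero)

for _ in range(200_000):   # transmit one packet per iteration; all flows backlogged
    # Serve whichever flow has so far received the smallest share of its target.
    flow = min(targets, key=lambda n: served[n] / (elapsed * targets[n]))
    served[flow] += PACKET_MB
    elapsed += PACKET_MB / LINK_MBPS   # time this packet occupies the link

for name, want in targets.items():
    print(f"{name:>10}: wanted {want:5.1f} Mb/s, got {served[name]/elapsed:5.1f} Mb/s")
```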

Regulators who attempt to sustain the illusion of universal and perpetual entitlement to high quality at the price of low quality are in for a rough ride.

‘Specialised’ services are an illusion

Every application has a performance constraint in order to be useful. Any attempt to define (and possibly restrict) the availability of predictable performance will hit barriers:

  • Firstly, there cannot be an objective definition of ‘specialised’. It’s in the eye of the beholder. All my applications are ‘special’. Aren’t your digital children ‘special’, too?
  ‱ Secondly, applications are a form of speech, so defining ‘specialised’ services means regulating classes of privileged speech, which runs into both constitutional and human rights problems.
  ‱ Thirdly, you assume that there are no legitimate ‘editorial’ decisions over the allocation of performance that ISPs can undertake. This is like saying to a newspaper that it cannot choose where to position its classified ads versus its news stories.

Regulators who try to create aristocratic classes of application, or insist all must be equal serfs, are dooming their population to performance misery.

‘Fast lanes’ already exist and are just fine

Application developers already buy CDNs to achieve higher performance at lower cost. This is seen as being a core feature of a workable Internet. Paid peering agreements with performance SLAs also exist. Non-IP telecoms services compete for users and usage with IP-based ones (e.g. ATM, MPLS, TDM).

So-called ‘fast lanes’ also aim for predictable performance, just at lower cost than other telecoms services. (We also need ‘slow lanes’ for predictable low cost, which may compete with the postal service.) The purported disaster is contradicted by decades of experience.

Indeed, the first ISP ‘fast lane’ was built to serve the needs of the deaf for reliable sign language. Banning the ordinary development of broadband technology would leave these people with a simple choice: go without, or buy an expensive non-IP telecoms service to get the timing characteristics they need. Banning ‘fast lanes’ visibly harms users.

Regulators risk ridicule if they strongly regulate pricing of services with assured timing characteristics based on which transport protocol they are using.

The antithesis of packet networking

The idea of ‘congestion’ (whether ‘imminent’ or not) profoundly misses the point and the reality of packet networks.

The raison d’ĂȘtre of packet networks is to statistically share a resource at the expense of allowing (instantaneous) contention. Networks safely run in saturation are a good thing. In other words, we would ideally like to be able to have as much contention as possible, to lower costs, as long as we can schedule it to deliver good enough user experiences.
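
A minimal sketch of that trade-off, with invented numbers: 200 subscribers each burst at 50 Mb/s but are active only 5% of the time. Provisioning for the sum of peaks (10 Gb/s) would be ruinous; a far smaller shared link keeps the probability of instantaneous contention acceptably low, which is the entire economic point of packet networking.

```python
# A minimal sketch, invented subscriber figures: statistical sharing vs. peak sizing.
from math import comb

SUBS, PEAK_MBPS, P_ACTIVE = 200, 50, 0.05

def p_demand_exceeds(link_mbps):
    """Probability that simultaneously active users exceed the link (binomial model)."""
    max_active = int(link_mbps // PEAK_MBPS)
    p_ok = sum(comb(SUBS, k) * P_ACTIVE**k * (1 - P_ACTIVE)**(SUBS - k)
               for k in range(max_active + 1))
    return max(0.0, 1 - p_ok)

for link in (500, 750, 1000, 1500, 10_000):
    share = link / (SUBS * PEAK_MBPS)
    print(f"link {link:>6} Mb/s ({share:4.0%} of peak demand): "
          f"P(contention) = {p_demand_exceeds(link):.4f}")
```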

The discussions offered around ‘congestion’ are beyond irrelevant, they are simply meaningless. Genuinely, they fall into the category of ‘not even wrong’. You don’t need to rebut them, because the offered universe of discourse is so far divorced from reality.

Regulators face a simple choice: either there is a rational market pricing for quality (that developers must participate in), or there is rationing of quality. Which one do you want?

A broken theory

The underlying theory of ‘net neutrality’ advocates is a virtuous cycle of innovation. The more users there are, the more applications get written, which drives more users. The leap is then made to ‘neutrality’. This utopian ideal (single class of service, ‘best effort’, users pay all performance costs) supposedly maximises the flywheel effect. The presumptive basis is to minimise risk and cost to developers, and maximise choice for users.

This theory is flawed in five key ways:

  ‱ It assumes applications get the predictable performance they need. We can be sure that many applications don’t exist today because the performance of ‘best effort’ is unpredictable, so by definition they aren’t written and don’t get traction.
  ‱ It assumes that all users and developers are internalising their costs. They are not. Many applications are effectively pollution of a shared resource, and protocols are aggressively fighting for finite resources.
  ‱ It assumes there is no cost of association. A flat global address space where everything is reachable may sound attractive, but it comes with non-zero security and routing costs.
  ‱ It assumes that developers are entitled to write distributed applications with no engineering costs for performance (e.g. issuing profiles to DPI vendors, marking traffic). This is delusional.
  ‱ It assumes there is a mechanism for users to configure performance directly when needed. Today, that is absent.

Regulators that attempt to sustain today’s mispricing of performance will find their rules incentivise misallocation of resources, open up market arbitrages, and repel capital from the telecoms industry.

Regulators must ignore ‘lawgeneers’

The FCC went ahead and made rules about ‘net neutrality’ without getting its technical house in order first. This was done at the behest of cohorts of well-funded lobbying lawyers masquerading as performance engineers. As a result, it has put the FCC’s credibility at risk, since those rules are in conflict with the technical and economic reality of broadband.

The article cited here is merely an exemplar of a sizeable body of academic literature on ‘net neutrality’. This literature exists in a self-referential citation bubble disconnected from actual broadband network operation. A common failing is to call for ‘faster than math’ packet scheduling.

This does our industry and society a disservice, and harms the credibility of the institutions whose names are attached to these works. Their authors’ misguided attempts to control the definition and direction of ISP services must be resisted.

I strongly urge European regulators to ignore these campaigning ‘lawgeneers’. They have no ‘skin in the game’, so suffer no consequences for their pronouncements based on false technical assumptions. This is a form of ‘moral hazard’. At least ISPs have a stake in the long-term viability of their services.

The case for a scientific approach

There are real issues of power, fairness, justice and market transparency. There are real uncertainties over which market structures maximise social and economic benefits. There are real questions about the practicality of different traffic management and charging mechanisms.

The way forward is for regulators to establish a solid body of scientific knowledge within which the necessary debates can occur. This needs to be done by stochastics experts and computer scientists, not lawyers. The one (and only) thing that should be ‘neutral’ is the resulting framework in which a debate over justice and fairness is held.

In particular, broadband has performance and cost constraints. So what are they? We can then have a policy debate that sits within those constraints, just as spectrum policy respects the laws of physics and electromagnetism.

Ofcom has laudably made such a move to establish a basis of scientific fact from which to make broadband regulations. They have cleanly separated the science and policy issues. This process needs to continue and spread.

If you would like to join a movement for reality-based regulation, please do feel free to get in touch to discuss how this might be brought about.

By Martin Geddes, Founder, Martin Geddes Consulting Ltd

He provides consulting, training and innovation services to telcos, equipment vendors, cloud services providers and industry bodies. For the latest fresh thinking on telecommunications, sign up for the free Geddes newsletter.
