The Internet is a great success and an abject failure. We need a new and better one. Let me explain why.
We are about to enter an era in which online services are embedded into pretty much every activity in life. We will become extremely dependent on the safe and secure functioning of the underlying infrastructure. Whole new industries are waiting to be born as intelligent machines, widespread robotics, and miniaturized sensors become ubiquitous.
There is the usual plethora of buzzwords to describe the enabling mechanisms: IoT, 5G and SDN/NFV. These are the “trees”, and focusing on them in isolation misses the “forest” picture.
The Liverpool to Manchester railway (opened 1830) crossing a canal.
* * *
What we really have today is a Prototype Internet. It has shown us what is possible when we have a cheap and ubiquitous digital infrastructure. Everyone who uses it has had joyous moments when they have spoken to family far away, found a hot new lover, discovered their perfect house, or booked a wonderful holiday somewhere exotic.
For this, we should be grateful and have no regrets. Yet we have not only learned about the possibilities, but also about the problems. The Prototype Internet is not fit for purpose for the safety-critical and socially sensitive types of uses we foresee in the future.
It simply wasn’t designed with healthcare, transport or energy grids in mind, to the extent it was ‘designed’ at all. Every “circle of death” while watching a video, or DDoS attack that takes a major website offline, is a reminder of this. What we have is an endless series of patches with ever-growing unmanaged complexity, and this is not a stable foundation for the future.
The fundamental architecture of the Prototype Internet is broken, and cannot be repaired. It does one thing well: virtualise connectivity. Everything else is an afterthought and (by and large) a total mess: performance, security, maintainability, deployability, privacy, mobility, resilience, fault management, quality measurement, regulatory compliance, and so on…
We have spent three decades throwing bandwidth at all quality and performance problems, and it has failed. There is no security model in the present Internet: it is a pure afterthought patched onto an essentially unfixable global addressing system. When your broadband breaks, it is nearly impossible to understand why, as I have personally found (and I am supposed to be an expert!).
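To see why bandwidth alone cannot buy quality, consider the textbook M/M/1 queue, offered here as a deliberately simplified sketch rather than a model of any real link. With packet arrival rate λ and link service rate μ, the mean time a packet spends in the system is:

```latex
\[
T \;=\; \frac{1}{\mu - \lambda} \;=\; \frac{1}{\mu\,(1 - \rho)},
\qquad \rho = \frac{\lambda}{\mu} \quad \text{(utilization)}
\]
```

As utilization ρ approaches 1, delay diverges without bound. Adding capacity lowers ρ for a while, but demand grows to fill it, and the nonlinearity never goes away: delay and loss are governed by contention and scheduling, not by raw bandwidth.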
It isn’t just the practical protocols that are broken. The theoretical foundations are missing, and the architectural justification is plain wrong. First steps are fateful, and when you misconceive networking as a “computer to computer” thing, when it is really “computation to computation”, there is no way back. The choice to reason about distributed computing in terms of layers rather than scopes [PDF] is an error that cannot be undone.
The problem is not just technical. It is cultural and institutional too. Engineering is about taking responsibility for failure, and the IETF does not do this. As such, it is claiming the legitimacy benefits of the “engineer” title without accepting the consequent costs. This is, I regret to say, unethical. Professional engineering organizations need to call them out on this.
We see many examples of failed, abandoned or unsatisfactory efforts to fix the original design. Perhaps the most egregious is the IPv4 to IPv6 transition, which imposes a high transition cost with minimal benefits and has thus dragged on for nearly 20 years. It compounds the original architecture errors, rather than fixing them. For instance, the security attack surface grows enormously with IPv6, and the size and cost of routing tables are unbounded.
The economic model of the Prototype Internet is absolutely crazy. We now have a system of quality rationing that incentivises edge providers to generate the most aggressive, inefficient and least collaborative application protocols. No other industry seeks to punish its most enthusiastic customers with data caps and “fair usage” policies. This problem is down to a persistent disconnect between pricing and inherent resource costs.
The regulatory system is also caught up in the incompetent insanity of ‘net neutrality’. Such a monumental failure to grasp basic technical facts is an embarrassment to a supposedly advanced scientific civilisation. It delegitimises the role of the regulator in protecting the public.
What we are dealing with is an immature broadband industry grappling with unique problems. We operate at the speed of light. As such, it is difficult to adapt and adopt management and pricing methods developed in other industries that work at the speed of sound or less. However, it is possible to make the jump from skilled numerical craft to hard engineering science, if we accept our present shortcomings.
This is a complex system with many feedback loops and incentives that keep it ‘stuck’ in an unhappy place. Nobody is to blame, but everybody has a contribution to the madness.
The early 1970s ideas of how packet networks should be built have now reached their use-by date. The rotten smell from the back of the architecture cupboard is seeping out everywhere. It’s time to face facts: more of the same beliefs and behaviors just leads to more of the same systemic failures.
The Prototype Internet is a canal system when we need an Industrial Internet railroad. There are no means to transform the former into the latter. The best we can hope for is to use canal transport to build the railroad.
Is this the best we can do in articulating value?

The unsatisfactory nature of the present Prototype Internet is unspeakable, as it generates such intense anxiety, shame and fear. We have bet the development of our modern civilization on a digital infrastructure that is extremely fragile. Its quality is out of control. When you cannot measure and manage quality, you can only differentiate on quantity.
The scaling properties of the Prototype Internet are unknown and unknowable. Assumptions of the form “it scaled this far, so it must scale further” are extremely foolish: they fail to grasp that physics and mathematics impose hard limits on the protocols we have adopted. This is not a hypothesis: there is hard evidence of new (and nasty) scaling problems emerging.
The problem I see is that we keep on pumping resources into a dead-end model. As the erudite blogger Chris Dillow writes in another context: “When confronted with evidence against their prior views, people don’t change their minds but instead double down and become more entrenched in their error.”
We collectively face a difficult dilemma: at what point do we accept that the present Prototype Internet is indeed just a prototype? And how do we begin to envision and architect its Industrial Internet successor? Do we have to wait for a costly disaster to happen before we make a move?
The good news is that the ingredients for an Industrial Internet are now becoming clear. The essential problems of science, mathematics, and protocols are largely solved, at least in theory. The practical reality of a new and better Industrial Internet is within our reach. It can be achieved in a relatively short timeframe.
The Industrial Internet is one for which security is a first-class design objective. Different users and uses can be isolated from one another. Our approach to performance would be the exact opposite of the Prototype Internet’s. Rather than build networks and then reason about the (emergent) performance, we would reason about the performance and then build (engineered) outcomes.
With the Industrial Internet, we would from the very beginning design-in the features and capabilities to make it cheap to deploy, predictable in operation, and automated to support. We would work backwards from the essential business processes to ensure the right enablers were there from inception.
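As a toy illustration of this direction of reasoning, the sketch below allocates an end-to-end latency budget across network segments and checks each segment’s engineered delay bound against its allocation before anything is built. It is a hypothetical sketch only: the segment names, shares and delay figures are invented for illustration, not drawn from any real deployment.

```python
# Hypothetical sketch: "performance first" planning. We start from an
# end-to-end latency budget, allocate it across segments, and verify
# each segment's engineered worst-case delay fits its allocation.
# All names and figures below are invented for illustration.

END_TO_END_BUDGET_MS = 100.0  # e.g. a target for an interactive service

# Each segment: (name, share of the budget, engineered delay bound in ms)
SEGMENTS = [
    ("access",     0.30, 25.0),
    ("metro",      0.20, 15.0),
    ("core",       0.30, 20.0),
    ("datacentre", 0.20, 30.0),
]

def budget_is_feasible(budget_ms, segments):
    """Return True only if every segment's engineered delay bound
    fits within its allocated share of the end-to-end budget."""
    feasible = True
    for name, share, engineered_ms in segments:
        allocation_ms = share * budget_ms
        ok = engineered_ms <= allocation_ms
        print(f"{'OK  ' if ok else 'FAIL'} {name:<11} "
              f"allocated {allocation_ms:5.1f} ms, "
              f"engineered bound {engineered_ms:5.1f} ms")
        feasible = feasible and ok
    return feasible

if __name__ == "__main__":
    if budget_is_feasible(END_TO_END_BUDGET_MS, SEGMENTS):
        print("Budget feasible: build to these engineered bounds.")
    else:
        print("Budget infeasible: re-engineer before building anything.")
```

The point is the direction of the reasoning: the performance bounds are inputs to the design, checked before deployment, rather than properties measured (and lamented) after the fact.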
The management methodologies for ‘lean quality’ and efficient digital supply chains would be incorporated into the Industrial Internet from the get-go. The ideas of quality systems thinkers like Deming, Goldratt and Hammer would inform our choices and vision. The Industrial Internet is about fitness-for-purpose and low waste; the antithesis of the Prototype Internet’s purpose-for-fitness and overprovision everything.
What it now takes is for that vision to be crystallized into a plan of action. The first step is to tell the story of an alternative future where we upgrade from broadband canals to distributed cloud computing railroads. This story then needs to be “made real” with examples of the new model being deployed in the real world to prove the benefits.
Who wants to join me in this mission? Hands up!
* * *
Yes: I am a dreamer. For a dreamer is one who can only find his way by moonlight, and his punishment is that he sees the dawn before the rest of the world.
— Oscar Wilde, The Critic as Artist
* * *
I want to make sure that I understand: who are we talking about here?
The two key regulatory failures are BEREC and the FCC. See http://www.slideshare.net/mgeddes/fcc-open-internet-transparency-a-review-by-martin-geddes and http://www.martingeddes.com/1323-2/. But Ofcom got their house in order, did the science, and found that "neutrality" is not an objectively measurable phenomenon, and hence cannot be regulated. See http://www.slideshare.net/mgeddes/essential-science-for-broadband-regulation. The work of Barbara van Schewick, on which the Open Internet regulatory approach is based, absolutely fails to understand the emergent nature of performance and the lack of intentional semantics to the service.
Isn’t ICANN concerned by a new version of the Internet?
I read this post because a mail on the IETF list (by Stephane Bortzmeyer) stated:
“[Only if you are bored and have nothing useful to do.]

A guy solved all the problems of the Internet, thanks to a new mathematical theory he developed, “∆Q”: http://www.circleid.com/posts/20170214_lets_face_facts_we_need_a_new_industrial_internet/

He also calls us “unethical” but, among all his claims, this is the least crazy :-)

Let’s congratulate Circle ID (which, most of the time, publishes interesting things) for its openness of mind: any random troll can publish here.”
So, since I usually trust Stephane ... But I only found a serious point of view.
I am certainly interested in joining the effort (where do I click?), to see where it could go. Some of the points are certainly true. Others remain to be discussed, as they seem to come from an acknowledged pro who is post-1986 and initially telco-oriented. The Internet was entirely defined in the 20-line “Objectives” section of the 1978 IEN 48.
The last five lines were strangled by the NTIA/military-industrial complex in 1986, which replaced their globally effective operations and CCITT exploration with a group of public contractors’ engineers called the IETF.
That IETF tried to do a good job with the first 8.5 lines and is still lost in the implications of the middle 6.5 lines. But TCP/IP is BUGged: there is no layer-six presentation, so for it to work globally some of its job has to be carried out differently (politically, legally, etc.), with some governance having to Be Unilaterally Global (hence the need for the NTIA, and now ICANN).
IMHO the line and processing bandwidth permit us to address the need, though not as they are used today (in accordance with RFC 1958): not end to end, but fringe to fringe. Yet, as John Maynard Keynes identified in 1930: “The difficulty lies, not in the new ideas, but in escaping from the old ones, which ramify, for those brought up as most of us have been, into every corner of our minds.” Today’s engineers (including youngsters) are too old for us “pre-human” veterans.