
Reaction: Do We Really Need a New Internet?

The other day several of us were gathered in a conference room on the 17th floor of the LinkedIn building in San Francisco, looking out the windows as we discussed various technical matters. All around us were new buildings under construction, each with a tall tower crane anchored to the structure in several places. We wondered how those cranes were built, and compared how precise the construction process seemed to be with the complete mess that building a network seems to be.

And then, this week, I ran across a couple of articles (Feb 14 and Feb 15) arguing that we need a new Internet. For instance, from the Feb 14 post:

What we really have today is a Prototype Internet. It has shown us what is possible when we have a cheap and ubiquitous digital infrastructure. Everyone who uses it has had joyous moments when they have spoken to family far away, found a hot new lover, discovered their perfect house, or booked a wonderful holiday somewhere exotic. For this, we should be grateful and have no regrets. Yet we have not only learned about the possibilities, but also about the problems. The Prototype Internet is not fit for purpose for the safety-critical and socially sensitive types of uses we foresee in the future. It simply wasn’t designed with healthcare, transport or energy grids in mind, to the extent it was ‘designed’ at all. Every “circle of death” watching a video, or DDoS attack that takes a major website offline, is a reminder of this. What we have is an endless series of patches with ever growing unmanaged complexity, and this is not a stable foundation for the future.

So the Internet is broken. Completely. We need a new one.

Really?

First, I’d like to point out that much of what people complain about in terms of the Internet, such as the lack of security or the lack of privacy, is actually a matter of tradeoffs. You could choose a different set of tradeoffs, of course, but then you would get a different “Internet,” one that may not, in fact, support what we support today. Whether the things it would support would be better or worse, I cannot answer. But the entire concept of a “new Internet” that supports everything we want it to support, has none of the flaws of the current one, and introduces no new flaws we have not thought about before is simply impossible.

So let’s leave that idea aside, and think about some of the other complaints.

The Internet is not secure. Well, of course not. But that does not mean it needs to be this way. The reality is that security is a hot potato that application developers, network operators, and end users throw at one another, rather than something anyone tries to fix. Rather than considering each piece of the security puzzle, and thinking about how and where it might best be solved, application developers build applications with no security at all and say “let the network fix it.” At the same time, network engineers say either “sure, I can give you perfect security, let me just install this firewall,” or “I don’t have anything to do with security, fix that in the application.” On the other end, users choose really horrible passwords and blame the network for losing their credit card number, or say “just let me use my thumbprint,” without ever wondering where they will go to get a new one when their thumbprint has been compromised. Is this “fixable”? Sure, for some strong measure of security. But a “new Internet” isn’t going to fare any better than the current one unless people start talking to one another.
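To make the point concrete, here is a minimal sketch of what taking ownership of one piece of the puzzle at the application layer might look like: salted, iterated password hashing using only Python’s standard library, rather than punting credential safety to the network. The function names and parameters here are illustrative, not taken from any of the articles discussed.

```python
# A minimal sketch: application-layer credential handling, rather than
# "let the network fix it." Standard library only.
import hashlib
import hmac
import os

def hash_password(password: str, *, iterations: int = 600_000) -> tuple[bytes, bytes]:
    """Return (salt, digest) for storage; never store the password itself."""
    salt = os.urandom(16)  # unique random salt per credential
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes,
                    *, iterations: int = 600_000) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return hmac.compare_digest(candidate, digest)  # constant-time comparison

salt, digest = hash_password("correct horse battery staple")
assert verify_password("correct horse battery staple", salt, digest)
assert not verify_password("hunter2", salt, digest)
```

Nothing here is exotic; the point is that it lives in the application, where the credential is actually handled, not in a firewall somewhere downstream.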

The Internet cannot scale. Well, that all depends on what you mean by “scale.” It seems pretty large to me, and it seems to be getting larger. The problem is that it is often harder to design in scaling than you might think; you often do not know what problems you are going to encounter until you actually encounter them. To think we can just “apply some math” and make the problem go away shows a complete lack of historical understanding. What you need to do is build in the flexibility that allows you to overcome scaling issues as they arise, rather than assuming you can “fix” the problem up front and never worry about it again. The “foundation” analogy does not really work here; when you are building a structure, you have a good idea of what it will be used for and how large it will be. You do not build a building today and then say, “hey, let’s add a library on the 40th floor with a million books, and then three large swimming pools and a new eatery on those four new floors we decided to stick on the top.” The foundation limits scaling as much as it ensures it; sometimes the foundation needs to be flexible, rather than fixed.

There have been too many protocol mistakes. Witness IPv6. Well, yes, there have been many protocol mistakes. For instance, IPv6. But the problem with IPv6 is not that we didn’t need it, not that there was no problem to solve, nor even that every decision made was bad. Rather, the problem with IPv6 is that the technical community became fixated on Network Address Translators, effectively designing an entire protocol around eliminating a single problem. Narrow fixations always result in bad engineering solutions; it’s just a fact of life. What IPv6 did get right includes eliminating in-network fragmentation, a much larger address space, and a few other things.
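For a sense of the scale of that larger address space, a quick check with Python’s standard ipaddress module:

```python
# Comparing the IPv4 and IPv6 address spaces with the standard library.
import ipaddress

v4 = ipaddress.ip_network("0.0.0.0/0").num_addresses  # 2**32
v6 = ipaddress.ip_network("::/0").num_addresses       # 2**128

print(f"IPv4: {v4:,} addresses")                    # 4,294,967,296
print(f"IPv6: {v6:,} addresses")                    # ~3.4 x 10**38
print(f"IPv6 is 2**{(v6 // v4).bit_length() - 1} times larger")  # 2**96
```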

That IPv6 exists, and is actually being deployed, shows the entire problem with the “the Internet is broken” line of thinking. It shows that the foundations of the Internet are flexible enough to take on a new protocol, and to fix problems in the higher layers. The original design worked, in fact: parts and pieces can be replaced when we get something wrong. This is more valuable than all the ironclad promises of a perfect future Internet anyone can make.

We are missing a layer. This is grounded in the RINA model, which I like, and which I use in teaching networking far more than any other model. In fact, I consider the OSI model a historical curiosity, a byway that was useful for a time but no longer is. But the RINA model implies a fixed number of layers, in even numbers. The argument, boiled down to its essential point, is that since we have seven, we must be wrong.

The problem with this argument is twofold. First, sometimes six layers is right, and at other times eight might be. Second, we do have another layer in the Internet model; it’s just generally buried in the applications themselves. The network does not end with TCP, or even with HTTP; it ends with the application. Applications often have their own flow control and error management embedded, if they need them. Some don’t need them, so exposing all those layers, and forcing every application to use them all, would actually be a waste of resources.
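As an illustration of error management living inside the application, here is a minimal stop-and-wait retry sketch, the kind of loop many applications embed above a lossy transport. This is my own sketch, not from any of the articles discussed; the ACK format is assumed to be application-defined.

```python
# A minimal sketch of application-embedded error management: a
# stop-and-wait retransmit loop above an unreliable datagram socket.
import socket

def send_with_retry(sock: socket.socket, payload: bytes,
                    addr: tuple[str, int], *,
                    retries: int = 5, timeout: float = 0.5) -> bytes:
    """Send one datagram and wait for an application-level ACK,
    retransmitting on timeout: reliability built above UDP."""
    sock.settimeout(timeout)
    for _ in range(retries):
        sock.sendto(payload, addr)
        try:
            ack, _ = sock.recvfrom(2048)  # application-defined ACK message
            return ack
        except socket.timeout:
            continue  # assume the packet (or ACK) was lost; retransmit
    raise TimeoutError(f"no ACK after {retries} attempts")
```

This “layer” exists whether or not the reference model draws a box around it.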

The Internet assumes a flawed model of end-to-end connectivity. Specifically, that the network will never drop packets. Well, TCP assumes this, but TCP isn’t the only transport protocol on the network. There is also something called “UDP,” and there are others out there as well (at least the last time I looked). It’s not that the network doesn’t provide lossier services; it’s that most application developers have defaulted to the one reliable service, whether or not their specific application needs it.
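The choice is sitting right there in the sockets API; a small sketch (the address and payload are placeholders):

```python
# Both services are one call away: TCP (reliable, ordered) and
# UDP (best-effort, lossy). Nothing forces an application onto TCP.
import socket

tcp = socket.socket(socket.AF_INET, socket.SOCK_STREAM)  # reliable byte stream
udp = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)   # best-effort datagrams

# Fire and forget: no connection, no retransmission. Loss is the
# application's problem, which is exactly right for traffic that
# would rather be late-or-never than re-sent (telemetry, voice, etc.).
udp.sendto(b"telemetry sample 42", ("127.0.0.1", 9999))

udp.close()
tcp.close()
```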

The bottom line.

When I left San Francisco to fly home, 2nd Street was closed. Why? Because a piece of concrete had come loose on one of the buildings nearby, and seemed just about ready to fall to the street. On the way to the airport, the driver told me stories of several other buildings in the area that were problematic, some of which might need to be taken down and rebuilt. The image of the industrial building process, almost perfect every time, is an illusion. You can’t just “build a solid foundation” and then “build as high as you like.”

Sure, the Internet is broken. But anything we invent will, ultimately, be broken in some way or another. Sure, the IETF is broken, and so is open source, and so is… whatever we might invent next. We don’t need a new Internet; we need a little less ego, a lot less mudslinging, and a lot more communication. We don’t need the perfect fix; we need people who will seriously think about where the layers and lines are today, why they are there, and why and how we should change them. We don’t need grand designs; we need serious people who are seriously interested in fixing what we have, people who are interested in being engineers rather than console jockeys or system administrators.

By Russ White, Infrastructure Architect at Juniper Networks
