Back in the early 2000s, several notable Internet researchers were predicting the death of the Internet. According to this narrative, the Internet infrastructure had not been designed for the scale being projected at the time, which would supposedly lead to fatal security and scalability issues. Yet somehow the Internet industry has always found a way to dodge the bullet at the very last minute.
While the experts projecting gloom and doom have been silent for the better part of the last 15 years, the discussion on the future of the Internet is now resurfacing. Some industry pundits such as Karl Auerbach have pointed out that essential parts of the Internet infrastructure, such as the Domain Name System (DNS), are fading from users’ view. Others such as Jay Turner are predicting the outright death of the Internet itself.
Looking at the developments over the last five years, there are indeed some powerful megatrends that seem to back up the arguments made by the two gentlemen: the rise of mobile applications, the move of services to the public cloud, and the growth of secure private networks running on top of the Internet.
Once these technology trends have run their course, it is quite likely that the public Internet infrastructure and the services it provides will no longer be used directly by most people. In this sense, I believe both Karl Auerbach and Jay Turner are quite correct in their assessments.
Yet at the same time, both the mobile applications and the secure private networks that move the data around will continue to depend heavily on the underlying public Internet infrastructure. Without the bedrock on which the private networks and the public cloud services are built, it would be impossible to transmit the data. Because of this, I believe the Internet will move away from being the open public network it was originally meant to be.
As an outcome of this process, I further believe that the Internet infrastructure will become a utility very similar to today’s electricity grids. While almost everyone benefits from them on a daily basis, only electrical engineers are interested in their inner workings or have direct access to them. In essence, the Internet will become a ubiquitous transport layer for the data that flows within the information societies of tomorrow.
From the network management perspective, the emergence of secure overlay networks running on top of the Internet will introduce a completely new set of challenges. While network automation can carry out much of the configuration and management work, it will cause networks to disappear from plain sight in much the same way as mobile apps and public network services. This calls for new operational tools and processes to navigate this new world.
When all is said and done, chances are that the Internet infrastructure we use today will still be there in 2030. However, instead of being viewed as an open network that connects the world, it will have evolved into a transport layer primarily used for transmitting encrypted data.
The Internet is Dead—Long Live the Internet.
I hate to say it, but the Internet’s been an out-of-sight infrastructure layer for most of its lifetime. Protocols like NTP and SNMP exist purely behind the scenes; users never needed to deal with them, and even network admins dealt with them only indirectly through the applications that used those protocols to communicate. SMTP and IMAP are good examples of application-level system-to-system protocols: users deal with email client applications and have little to no exposure to the infrastructure that moves email around. The closest most users got to interacting directly with the network was via FTP command-line clients, until Gopher software and later Web browsers became common; and between Google, Web applications, and changes in how browsers display things, we’ve been steadily moving away from the URL bar as a direct representation of where we are on the network for the last decade or so. These days a browser window in a work environment is most likely just another window on your desktop with an application displayed in it, and with the steadily increasing use of client-side Javascript applications, any remaining awareness of the network is rapidly fading.
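To make the email example concrete, here is a minimal sketch using Python’s standard smtplib, assuming a mail relay listening on localhost:25 (a hypothetical setup, as are the addresses): the entire SMTP conversation happens behind two library calls, which is exactly the kind of invisibility described above.

```python
# Minimal sketch: sending mail with Python's standard library.
# Assumes a mail relay listening on localhost:25 (hypothetical).
# The HELO/MAIL/RCPT/DATA protocol exchange happens entirely
# inside these calls, invisible to the person sending the mail.
import smtplib
from email.message import EmailMessage

msg = EmailMessage()
msg["From"] = "alice@example.com"
msg["To"] = "bob@example.com"
msg["Subject"] = "Out-of-sight infrastructure"
msg.set_content("The protocol doing the delivery is invisible to the user.")

with smtplib.SMTP("localhost", 25) as server:
    server.send_message(msg)  # smtplib speaks SMTP so the caller doesn't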
Which is how it should be. To borrow a quote, the Internet is like pavement: once you’ve figured out how to lay it down and paint lines on it you’re pretty much done with it. The interesting developments are all in the stuff that runs on top of the pavement, like cars and trucks.
Thanks for the comment, Todd. I agree with you entirely.
I myself expect that technologies such as SD-WAN and SDN will do the same to networks that DHCP did to IP addresses and network configurations back in the early 00s. The level of mobility we have today wouldn’t be possible without DHCP, so I’m keen to see what kind of services and use cases we will land on once the networks themselves become just as dynamic.
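As a reminder of what that earlier shift looked like on the wire, here is a minimal sketch of the broadcast that kicks off a DHCP lease negotiation, built with the scapy packet library (an assumption, as is the placeholder MAC address). Everything the resulting lease delivers (address, netmask, gateway, DNS servers) once had to be typed in by hand on every machine.

```python
# Sketch: the DHCPDISCOVER broadcast that starts a lease negotiation,
# built with scapy (assumed installed; sending it would require root).
# The chaddr value below is a placeholder MAC address.
from scapy.all import Ether, IP, UDP, BOOTP, DHCP

discover = (
    Ether(dst="ff:ff:ff:ff:ff:ff") /
    IP(src="0.0.0.0", dst="255.255.255.255") /
    UDP(sport=68, dport=67) /
    BOOTP(chaddr=b"\x00\x11\x22\x33\x44\x55") /
    DHCP(options=[("message-type", "discover"), "end"])
)
discover.show()  # inspect the layers without sending anything
```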
Who knows - perhaps in 2030 my personal cloud will be connected to yours via a private intercloud. Running through a tunnel on the Internet, of course! :-)
It already can. :) By 2030 there won’t be a need for any tunnel; we should’ve moved to IPsec using transport-mode ESP by default, running on IPv6, which doesn’t require a tunnel to work around NAT. IPv6 is going to be required for that mobility regardless of anything else, and the only thing really required for ubiquitous transport-mode ESP is full support for DNSSEC and CERT records in DNS (see RFC 4398, or RFC 4025 for storing raw public keys for individual hosts). RFC 4398 would, if implemented in browsers and other software, make certificate authorities like Verisign unnecessary and obsolete. Currently the only reason it’s not supported in Chrome is that Google has heartburn over the 1024-bit key length restriction in DNSSEC (which traces back to the usual PPP MTU setting by way of IP fragmentation of UDP packets, so it’s not relevant to IPv6 and possibly not relevant anymore to IPv4).
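For anyone curious to poke at this themselves, here is a minimal sketch that queries a zone for RFC 4398 CERT records using the dnspython library (an assumption; example.com is a placeholder, and since few zones publish CERT records today, an empty answer is the likely result).

```python
# Sketch: looking up RFC 4398 CERT records with dnspython (assumed
# installed). Few zones publish CERT records today, so a NoAnswer
# for the placeholder name below is the expected outcome.
import dns.resolver

try:
    answer = dns.resolver.resolve("example.com", "CERT")
    for rdata in answer:
        print(rdata)  # certificate type, key tag, algorithm, payload
except (dns.resolver.NoAnswer, dns.resolver.NXDOMAIN):
    print("No CERT records published for this name")
```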