Today’s Internet is a critical piece of infrastructure whose continued development influences industries, cultures, and the private lives of most individuals. However, this world-spanning network must continually be adapted to new requirements to keep pace with innovation. In this post, we explore the foundations of the Internet and envisage the Internet of the future.
Innovation and the data explosion
We have recently seen accelerating growth of Internet usage on the user side. Whether it is the expansion of online shopping, the wider spread of mobile devices, or streaming services, there isn’t much that cannot somehow be connected to the Internet. On the “backend” of the Internet, though, developments such as the switch from copper to fiber optics might seem unspectacular.
However, the Internet’s development can be compared to electricity: different use cases don’t need “different” electricity; they need different amounts of it. The same holds for Internet traffic. New applications don’t produce new types of traffic; they produce more of it. Increasingly, only very low latencies and highly secure transmission will do justice to newer applications.
Fiber optics to remain the standard
In global data centers, we are seeing a “scale-out” approach, meaning that existing infrastructures are expanded horizontally to meet growing requirements. To optimize usable space, facilities are increasing the integration density of transmission technology and accelerating automation through robots that can work in confined spaces. Data centers are even being designed as clean rooms in which communication no longer takes place via fiber-optic cables but via light pulses reflected off a mirrored ceiling.
In 2017, researchers established a data connection encrypted by means of quantum cryptography, exploiting the effect of entanglement at the particle level. A quantum key is generated from entangled photons and sent from a satellite to sender and recipient, allowing both parties to notice, and thus thwart, any interception of the key.
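The detection guarantee comes from the fact that measuring a quantum state disturbs it. As a rough illustration, here is a toy Python simulation of the simpler prepare-and-measure (BB84-style) scheme rather than the entanglement-based satellite protocol described above; all names and numbers are invented for illustration. An eavesdropper who measures the photons in transit pushes the error rate in the sifted key to roughly 25 percent, which sender and recipient can detect by comparing a sample of their key bits.

```python
import random

def random_bits(n):
    return [random.randint(0, 1) for _ in range(n)]

def measure(bit, prep_basis, meas_basis):
    # Matching bases read the bit correctly; a wrong basis yields a random result.
    return bit if prep_basis == meas_basis else random.randint(0, 1)

def bb84_error_rate(n=20_000, eavesdropper=False):
    alice_bits, alice_bases = random_bits(n), random_bits(n)
    bob_bases = random_bits(n)

    if eavesdropper:
        # Eve measures each photon in a random basis and re-sends it in that basis,
        # unavoidably disturbing the states she guesses wrong.
        eve_bases = random_bits(n)
        channel_bits = [measure(b, ab, eb)
                        for b, ab, eb in zip(alice_bits, alice_bases, eve_bases)]
        resend_bases = eve_bases
    else:
        channel_bits, resend_bases = alice_bits, alice_bases

    bob_bits = [measure(b, rb, bb)
                for b, rb, bb in zip(channel_bits, resend_bases, bob_bases)]

    # Sifting: keep only positions where Alice's and Bob's bases happened to agree.
    sifted = [(a, b) for a, b, ab, bb in zip(alice_bits, bob_bits, alice_bases, bob_bases)
              if ab == bb]
    return sum(a != b for a, b in sifted) / len(sifted)

print(f"error rate without eavesdropper: {bb84_error_rate():.1%}")                   # ~0%
print(f"error rate with eavesdropper:    {bb84_error_rate(eavesdropper=True):.1%}")  # ~25%
```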
We’re seeing enormous advances in transmission technology: 400GE ports are already in use at DE-CIX today, and reaching 1000GE is only a question of time. The foundational transmission technology may evolve and scale up further in the future, but it will not be displaced.
Think globally, surf locally?
Since the Web revolutionized communication, the world has been converging faster and faster. Yet, to support the Internet’s future development, we need to think locally. Applications like VR and 8K content require increasingly large data volumes and increasingly low latencies, sometimes in the 20-millisecond range. To put that in perspective, the blink of an eye takes around 150 milliseconds. We know that nothing in the universe can move faster than light, including data. So, to implement such applications broadly, physics forces us to bring the data closer to the user.
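A back-of-the-envelope calculation shows why. Light in optical fiber covers roughly 200 km per millisecond, so a 20-millisecond round-trip budget puts a hard upper bound on how far away the content can sit; the processing overhead assumed below is a made-up illustrative figure.

```python
# Rough estimate of how far away a server may sit if the application's
# round-trip budget is 20 ms. Light in fiber travels at roughly two-thirds
# of c, i.e. about 200 km per millisecond; all figures are approximations.

SPEED_IN_FIBER_KM_PER_MS = 200   # ~0.67 * 299,792 km/s
ROUND_TRIP_BUDGET_MS = 20        # latency target named in the text
PROCESSING_OVERHEAD_MS = 10      # assumed time lost in routers, servers, rendering

propagation_budget_ms = ROUND_TRIP_BUDGET_MS - PROCESSING_OVERHEAD_MS
max_one_way_distance_km = SPEED_IN_FIBER_KM_PER_MS * propagation_budget_ms / 2

print(f"Maximum server distance: ~{max_one_way_distance_km:.0f} km")
# -> ~1000 km: the content has to be hosted regionally, not on another continent.
```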
To achieve this, providers operate their own equipment for caching content inside the networks that serve end customers. For large-scale use of VR, e.g., in autonomous vehicles, this caching infrastructure needs to be expanded in both coverage and density.
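Conceptually, such a cache node simply keeps a local copy of popular content so that most requests never have to travel to the distant origin. The sketch below is a minimal, hypothetical illustration of that idea; the EdgeCache class and its parameters are invented for this post, not any provider’s actual software.

```python
import time

class EdgeCache:
    """Minimal illustration of an edge cache: serve content locally if a
    fresh copy exists, otherwise fetch it from the distant origin once."""

    def __init__(self, fetch_from_origin, ttl_seconds=300):
        self._fetch = fetch_from_origin   # callable reaching the far-away origin
        self._ttl = ttl_seconds           # how long a copy counts as fresh
        self._store = {}                  # url -> (content, stored_at)

    def get(self, url):
        cached = self._store.get(url)
        if cached and time.time() - cached[1] < self._ttl:
            return cached[0]              # local hit: no long-distance round trip
        content = self._fetch(url)        # miss: pay the latency once
        self._store[url] = (content, time.time())
        return content
```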
Fast developments and slow standardization
The Internet of Things (IoT) is at the beginning of its golden age. Forecasts suggest that the number of connected devices could exceed the 20-billion mark by next year and reach 50 billion by 2022.
For all this, we are basically using just one network protocol. The still-dominant IPv4 standard uses 32-bit addresses, allowing only around 2^32, or 4.3 billion, distinct addresses. Luckily, a new 128-bit format has existed in principle since 1998. This IPv6 standard offers an address space of 2^128, or roughly 3.4 × 10^38 addresses, eliminating any scarcity concerns. However, DE-CIX measurements show that currently only around 5 percent of traffic uses the new standard.
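For comparison, the two address spaces can be computed directly; the per-person figure below assumes a world population of 8 billion purely for illustration.

```python
ipv4_addresses = 2 ** 32
ipv6_addresses = 2 ** 128

print(f"IPv4: {ipv4_addresses:,} addresses")     # 4,294,967,296 (~4.3 billion)
print(f"IPv6: {ipv6_addresses:.3e} addresses")   # ~3.403e+38
print(f"IPv6 addresses per person: {ipv6_addresses // 8_000_000_000:.3e}")  # ~4.3e+28
```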
Protocol standardization is key. For the Internet, it takes place within the Internet Engineering Task Force (IETF), where changeovers and rollouts take a long time because many actors need to reach consensus. A technological answer to this problem could be the freely programmable network equipment now appearing on the market. In contrast to the current generation, in which only the standardized protocols can be configured, in the coming generation the processing of data streams and packets can itself be defined in software.
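To make the idea concrete, the sketch below mimics a match-action pipeline in plain Python. Real programmable switches are programmed with dedicated languages such as P4; the packet fields, rules, and port numbers here are invented solely to illustrate how packet handling can be redefined in software without waiting for a new standardized protocol.

```python
def drop(packet):
    return None

def forward(port):
    def action(packet):
        packet["egress_port"] = port
        return packet
    return action

# The "program": an ordered match-action table that operators can redefine
# in software instead of waiting for new behavior to be baked into hardware.
table = [
    (lambda p: p.get("dst_ip", "").startswith("10."), forward(port=1)),
    (lambda p: p.get("protocol") == "telnet",          drop),
    (lambda p: True,                                   forward(port=2)),  # default rule
]

def process(packet):
    # Apply the first matching rule's action to the packet.
    for match, action in table:
        if match(packet):
            return action(packet)

print(process({"dst_ip": "10.0.0.5", "protocol": "https"}))     # forwarded to port 1
print(process({"dst_ip": "203.0.113.7", "protocol": "telnet"}))  # None (dropped)
```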
In summary, we are entering a new era characterized by digitalization and constant interconnection. The goal of the next-generation Internet is, through abstraction and automation, to provide any desired bandwidth on demand between any participants or data centers. This requires further development of existing technologies, and new approaches to the integration of infrastructure, software, and services must be conceived.