Mostafa Ammar, out of Georgia Tech (not my alma mater, but many of my engineering family are alumni there), recently posted an interesting paper titled The Service-Infrastructure Cycle, Ossification, and the Fragmentation of the Internet. I have argued elsewhere that we are seeing the fragmentation of the global Internet into multiple smaller pieces, driven primarily by the centralization of content hosting combined with the rational economic decisions of the large-scale hosting services. The paper at hand takes a slightly different path to reach the same conclusion.
The author begins by noting that networks are designed to provide a set of services. Each design paradigm not only supports the services it was designed for, but also leaves some headroom, which allows users to deploy new, unanticipated services. Over time, as newer services are deployed, the requirements on the network change enough that the network must be redesigned.
This cycle, the service-infrastructure cycle, relies on a well-known process of deploying something that is “good enough,” which allows early feedback on what does and does not work, followed by quick refinement until the protocols and general design can support the services placed on the network. As an example, the author cites the deployment of unicast routing protocols. He marks the beginning of this process at 1962, when Prosser first described routing procedures for communications networks, and its end at 1995, when BGPv4 was deployed. Across this time, routing protocols were invented, deployed, and revised rapidly. Since around 1995, however (a period of more than 20 years at this point), routing has not changed all that much. So there were roughly 33 years of rapid development, followed by what is now more than 20 years of stability in the routing realm.
Ossification, for those not familiar with the term, is a form of hardening. Petrified wood is an ossified form of wood. An interesting property of petrified wood is that it is fragile: if you pound a piece of “natural” wood with a hammer, it dents but does not shatter. Petrified, or ossified, wood shatters like glass.
Multicast routing is held up as a counterexample. Based on experience with unicast routing, the designers of multicast attempted to anticipate the use cases, so early iterations were clumsy and failed to attain the kind of deployment required to get the cycle of infrastructure and services started. Hence multicast routing has largely failed. In other words, multicast ossified too soon; the cycle of experience and experiment was cut short by the designers trying to anticipate use cases, rather than allowing them to grow over time.
One might cite further examples as well. But there are weaknesses in this argument, too. It can be argued that widespread multicast failed because the content just wasn’t there when multicast was first considered; in fact, the kind of content multicast delivers still is not what people really want. The first “killer app” for multicast was replacing broadcast television over the Internet. What has developed instead is video on demand; multicast is just not compelling when everyone is watching something different whenever they want to.
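To make that last point concrete, here is a toy model (my own sketch, not from the paper) that counts the streams a source must emit at a single moment in time: unicast needs one stream per viewer, while multicast needs one stream per distinct title being watched.

```python
# A toy model (my own sketch, not from the paper) of why multicast loses its
# appeal once viewers watch different content on demand.

def streams_needed(viewers_per_title):
    """viewers_per_title maps a content title to its simultaneous viewers.

    Returns (unicast_streams, multicast_streams) for that snapshot.
    """
    unicast = sum(viewers_per_title.values())  # one copy per viewer
    multicast = len(viewers_per_title)         # one copy per distinct title
    return unicast, multicast

# Broadcast-style viewing: 1,000 viewers all tuned to the same live event.
print(streams_needed({"live-event": 1000}))       # -> (1000, 1)

# On-demand viewing: 1,000 viewers, each watching a different title.
on_demand = {"title-%d" % i: 1 for i in range(1000)}
print(streams_needed(on_demand))                  # -> (1000, 1000)
```

When viewing is synchronized, multicast collapses a thousand copies into one; when every viewer picks a different title at a different time, it saves nothing, which is the economic argument against it in a video-on-demand world.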
The solution to this problem is novel: break the Internet up. Or rather, allow it to break up. The creation of a single network from many networks was a major milestone in the world of networking, allowing the open creation of new applications. If the Internet were not ossified through business relationships and the impossibility of making major changes in the protocols and infrastructure, it would be possible to undertake radical changes to support new challenges.
The new challenges offered include IoT, the need for content providers to have greater control over the quality of data transmission, and the unique service demands of new applications, particularly gaming. The result has been the flattening of the Internet, followed by the emergence of bypass networks—ultimately leading to the fragmentation of the Internet into many different networks.
Is the author correct? It seems the Internet is, in fact, becoming a group of networks loosely connected through IXPs and some transit providers. What will the impact be on network engineers? One likely result is deeper specialization in sets of technologies—the “enterprise/provider” divide that had almost disappeared in the last ten years may well show up as a divide between different kinds of providers. For operators who run a network that indirectly supports some other business goal (what we might call “enterprise”), the result will be a wide array of different ways of thinking about networks, and an expansion of technologies.
But one lesson engineers can certainly take away is this: the concept of agile must reach beyond the coding realm and into the networking realm. There must be room “built in” to experiment, deploy, and enhance technologies over time. This means accepting and managing risk rather than avoiding it, and it means developing a deeper understanding of how networks work and why they work that way, rather than the blind focus on configuration and deployment we currently teach.