At eComm, I interviewed Neil Davies, founder of Predictable Network Solutions, on stage. (Disclosure: they are a consulting client; we are working together to commercialise their technology.) The transcript of the interview is up on the eComm blog, titled The Internet is Not a Pipe and Bandwidth is Bad.
Neil’s achievement is a breakthrough in the use of applied mathematics to describe the behaviour of statistically multiplexed networks. The consequences are potentially widespread across the telecoms industry. The problem is that the mental models we use—of pipes, flow, bandwidth—do not match the reality of statistical multiplexing. This mismatch drives us into endless small fixes that deeply sub-optimise the overall use of the capacity available.
Historically we have built the “Network of Promises” (with a hat tip to Bob Frankston for the naming inspiration). Technologies like circuit-switching, ATM and IMS perform capacity reservation, admission control, and session management. Together they provide complete predictability and control—at the price of an “all or nothing” approach. Once the network is full, that’s it—and if someone reserves capacity and doesn’t use it, tough luck. The result is a costly and inflexible network.
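To make the “all or nothing” behaviour concrete, here is a minimal sketch of hard reservation with admission control. It is my own illustration; the class, names and figures are invented, not taken from any particular protocol.

```python
# Illustrative only: a link whose capacity is booked up front. Once booked,
# the capacity is gone for everyone else, whether or not the reserver uses it.

class ReservedLink:
    def __init__(self, capacity_kbps: int):
        self.capacity_kbps = capacity_kbps
        self.reservations = {}                 # session_id -> reserved kbps

    def admit(self, session_id: str, kbps: int) -> bool:
        booked = sum(self.reservations.values())
        if booked + kbps > self.capacity_kbps:
            return False                       # network "full": hard rejection
        self.reservations[session_id] = kbps   # held even if never used
        return True

    def release(self, session_id: str) -> None:
        self.reservations.pop(session_id, None)


link = ReservedLink(capacity_kbps=2048)
print(link.admit("voice-1", 64))     # True  - capacity reserved
print(link.admit("video-1", 2000))   # False - would exceed what is left
# Even if voice-1 sends nothing, its 64 kbps stays unavailable until released.
```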
In contrast, the Internet is a generative “Network of Possibilities”. The application and user discover what is possible. Capacity and quality vary constantly, and we adapt to the discovered “network weather”. Skype may work, it may not; video may be high definition, low definition, or unusable depending on what else is going on. We can tip the scales in favour of some applications using QoS, but that comes at a cost. When we prioritise some packets, we end up shrinking the overall value-carrying capacity of the transmission system. The more time-sensitive the traffic, the more the shrinkage when we prioritise. The downside is that the only real answer to poor network quality is more capacity. This may work for core networks, but becomes unaffordable for access networks.
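To see the trade-off in miniature, here is a toy single-link simulation. It is my own illustration, not Neil’s mathematics, and it does not show the full “shrinkage” argument; it only shows that strict priority does not create capacity, it moves queueing delay off the prioritised class and onto everything else. All parameters are arbitrary.

```python
# Toy discrete-time queue: one link serving one packet per tick, two classes
# offering ~0.95 packets per tick in total.
import random
from statistics import mean

def simulate(prioritise: bool, ticks: int = 200_000, seed: int = 7):
    random.seed(seed)
    queues = {"priority": [], "besteffort": []}
    delays = {"priority": [], "besteffort": []}
    for t in range(ticks):
        for cls, p in (("priority", 0.45), ("besteffort", 0.50)):
            if random.random() < p:
                queues[cls].append(t)              # record arrival tick
        if prioritise:
            order = ("priority", "besteffort")     # strict priority
        else:
            # roughly fair: alternate which class is offered the link first
            order = ("priority", "besteffort") if t % 2 else ("besteffort", "priority")
        for cls in order:                          # serve one packet per tick
            if queues[cls]:
                delays[cls].append(t - queues[cls].pop(0))
                break
    return {cls: round(mean(d), 1) for cls, d in delays.items()}

print("shared link    :", simulate(prioritise=False))
print("strict priority:", simulate(prioritise=True))
# The prioritised class's mean delay falls sharply, the best-effort class's
# rises; the link carries the same number of packets either way.
```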
What Neil has discovered is that we have been modelling our networks at the wrong logical layer, and have fundamentally misunderstood the control theory around how data is managed. Instead of managing packets, we need to manage something two logical layers higher: flows of packets over time. With the right mathematical “lens” to see the time-based effects, a new and much simpler way of building and managing networks emerges.
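As a rough sketch of what that change of lens means in practice (my own illustration, not Neil’s formalism), the question shifts from “was this packet fast?” to “what loss and delay did this flow experience over a short window of time?”:

```python
# Summarise a flow's behaviour over a short window as a loss rate plus a
# delay distribution, rather than judging packets one at a time.
# The sample data and percentile choices below are purely illustrative.

def flow_delay_profile(samples):
    """samples: list of (sent_at, received_at or None) for one flow's packets."""
    delays = sorted(rx - tx for tx, rx in samples if rx is not None)
    loss = 1.0 - len(delays) / len(samples)

    def pct(p):
        return delays[min(len(delays) - 1, int(p * len(delays)))]

    return {"loss": round(loss, 3),
            "delay_p50_s": pct(0.50),
            "delay_p99_s": pct(0.99)}

window = [(0.000, 0.020), (0.010, 0.050), (0.020, None), (0.030, 0.042)]
print(flow_delay_profile(window))
# -> roughly {'loss': 0.25, 'delay_p50_s': 0.02, 'delay_p99_s': 0.04}
```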
This “Network of Probabilities” works with networks that have statistically stable properties over short periods (milliseconds to seconds)—as most do. His technology can reduce the network to two control points, entry and exit in each direction. (More complex topologies, e.g. with CDNs, can also be managed, at the cost of added complexity in the signalling and the maths.) Packets are re-ordered and dropped in a new way, such that they (virtually) never “self-contend” on their onward journey, and all subsequent buffering can be eliminated.
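The post does not describe the actual algorithm, but the general shape of “make the queueing decisions once, at the entry point” can be sketched as follows. The bottleneck rate, delay budgets and earliest-deadline ordering are all my own assumptions for illustration, not PNSol’s method.

```python
# A toy ingress controller: packets are paced to the bottleneck rate, sent in
# order of least remaining slack against each flow's delay budget, and shed at
# the edge once their budget is already blown, so downstream buffers stay
# (virtually) empty.
import heapq

BOTTLENECK_RATE = 125_000                                       # bytes/s (assumed 1 Mb/s)
DELAY_BUDGET_S = {"voice": 0.020, "video": 0.080, "bulk": 1.0}  # illustrative budgets

def schedule_at_ingress(packets):
    """packets: list of (arrival_time_s, flow, size_bytes), sorted by arrival."""
    heap, sent, dropped = [], [], []
    link_free_at = 0.0
    for seq, (arrival, flow, size) in enumerate(packets):
        heapq.heappush(heap, (arrival + DELAY_BUDGET_S[flow], seq, arrival, flow, size))
        # Drain what the bottleneck can carry before the next packet arrives.
        next_arrival = packets[seq + 1][0] if seq + 1 < len(packets) else float("inf")
        while heap and link_free_at < next_arrival:
            deadline, _, arr, f, sz = heapq.heappop(heap)
            start = max(link_free_at, arr)
            finish = start + sz / BOTTLENECK_RATE
            if finish > deadline:
                dropped.append((f, arr))          # drop here, not in a downstream queue
            else:
                sent.append((f, arr, round(finish, 4)))
                link_free_at = finish
    return sent, dropped

demo = [(0.000, "bulk", 1500), (0.000, "voice", 200), (0.001, "video", 1200)]
print(schedule_at_ingress(demo))
# The voice packet overtakes the bulk packet that arrived alongside it, and
# nothing is left to queue further along the path.
```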
Any link or network segment that can saturate can now be managed in a new way. The “pie” of quality attenuation (loss and delay budget) is kept constant, but can be allocated in a fine-grained way to different flows over the network. There is still a longer-term adaptation of applications to the sensed network conditions; there is no magic to overcome the fundamental (and variable) capacity of the network.
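One way to picture the fixed “pie” being carved up (a data-structure sketch of the stated idea, not PNSol’s actual scheme) is a path-level loss and delay budget that flows can only take slices of, never enlarge:

```python
# Illustrative only: the path's total quality attenuation (delay and loss
# budget) is fixed; admitting a flow just decides how the pie is sliced.
from dataclasses import dataclass, field

@dataclass
class QualityPie:
    delay_budget_s: float                        # total delay the path may impose
    loss_budget: float                           # total fraction of packets it may drop
    slices: dict = field(default_factory=dict)   # flow_id -> (delay_s, loss)

    def allocate(self, flow_id: str, delay_s: float, loss: float) -> bool:
        used_delay = sum(d for d, _ in self.slices.values())
        used_loss = sum(p for _, p in self.slices.values())
        if (used_delay + delay_s > self.delay_budget_s
                or used_loss + loss > self.loss_budget):
            return False                         # the pie cannot grow; a slice must be given back
        self.slices[flow_id] = (delay_s, loss)
        return True

    def release(self, flow_id: str) -> None:
        self.slices.pop(flow_id, None)

pie = QualityPie(delay_budget_s=0.030, loss_budget=0.01)
print(pie.allocate("voip-1", delay_s=0.010, loss=0.001))   # True
print(pie.allocate("game-1", delay_s=0.025, loss=0.001))   # False: pie exhausted
pie.release("voip-1")
print(pie.allocate("game-1", delay_s=0.025, loss=0.001))   # True once a slice is returned
```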
The bottom line? We can load up networks to 100% of capacity, mix multiple classes of traffic together, and also add in scavenger traffic (with no cost impact on the rest of the network).
It’s like the Philosopher’s Stone of telecoms, with one difference: it exists.