I recently ran a workshop in Asia, and to guide attendees through the content I put together an overview slide which you might also find of interest and use.
It is a description of the quality attenuation framework, originally developed and defined by Predictable Network Solutions Ltd, and documented and extended by my colleagues and me at Just Right Networks Ltd. You can read more at qualityattenuation.science.
* * *
The telecoms industry is, I believe, overdue for a ‘lean’ revolution. This will change its working model from ‘purpose-for-fitness’ to ‘fitness-for-purpose’. For networks, that means switching from ‘build then reason about performance’ to ‘reason about performance and then build’.
The benefit of this business transformation is a radical lowering of risk and cost, predictable experiences, and the ability to rapidly adapt to changing patterns of demand.
In order to deliver this benefit, there needs to be a system of management that executes on the new intent of ‘going lean’. What to change, what to change to, and how to effect that change? Answering these questions means applying a system of scientific management that helps us focus on what is relevant, and ignore what is not.
These ideas of scientific management are well established in other industries (Six Sigma, the Theory of Constraints, the Vanguard Method, statistical process control), but appear to be novel in telecommunications.
In order for these lean concepts to be applied, we need to overcome a series of technical constraints that we presently face. The technology innovations that will achieve this include high-fidelity measurements, new packet scheduling mechanisms, and new architectures to embed these into.
Turning those technologies into a working system for a particular product, customer or deployment is an act of engineering. True engineers have an ethos of taking responsibility for fitness-for-purpose, and for any shortfall in fulfilling the promises made. This means turning a high-level customer intent into a technical requirement.
To understand whether there is a risk of under-delivery against the requirement, you need to be able to model and quantify the ‘performance hazards’ via ‘breach metrics’. This means reasoning about the performance of supply chains before they are assembled, and decomposing a ‘performance budget’ into a requirement for each element or supplier.
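To make the idea of a ‘breach metric’ concrete, here is a minimal sketch in Python. Everything in it is an assumption of mine for illustration: the 95%-within-100ms requirement, the naive equal split of the budget across three supply-chain elements (a real decomposition would reason over full delay distributions), and the simulated measurements.

```python
import random

# Hypothetical end-to-end requirement: 95% of packets within 100 ms.
BUDGET_MS, QUANTILE = 100.0, 0.95

# Naive decomposition of the budget across three supply-chain elements.
# (A real decomposition would convolve per-element delay distributions;
# an equal split is just the simplest possible illustration.)
elements = ["access", "core", "peering"]
per_element_budget = BUDGET_MS / len(elements)

def breach_metric(delays_ms, budget_ms, quantile):
    """Fraction of packets beyond the budget, judged against the
    tolerated shortfall (1 - quantile)."""
    exceed = sum(1 for d in delays_ms if d > budget_ms) / len(delays_ms)
    return exceed, exceed <= (1 - quantile)

# Simulated observations for one element (a stand-in for measurements).
random.seed(1)
observed = [random.expovariate(1 / 12.0) for _ in range(10_000)]  # mean 12 ms

exceed, ok = breach_metric(observed, per_element_budget, QUANTILE)
print(f"access element: {exceed:.2%} of packets over "
      f"{per_element_budget:.1f} ms budget -> {'OK' if ok else 'BREACH'}")
```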
Turning that specific engineering requirement into an operational system, in turn, draws upon a general science of performance. This considers what resource supply will meet the resource demand. The nature of the resource constraint is timeliness (since if you can be made to wait forever, the tiniest capacity will suffice).
The contract between supply and demand is formed as a ‘timeliness agreement’, which can be enforced by observing how ‘untimeliness’ (packet loss and delay) accrues along the supply chain.
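One way to picture such a contract, purely as a sketch of mine rather than any standard format, is a handful of quantile bounds on delay plus a ceiling on loss. Note how lost packets are treated as infinitely late, sitting at the top of the delay ordering, so loss and delay are checked within one framework:

```python
# Illustrative 'timeliness agreement': quantile bounds on delay plus a
# ceiling on loss. Both the structure and the numbers are assumptions.
AGREEMENT = {
    "delay_bounds": [(0.50, 20.0), (0.95, 50.0), (0.999, 100.0)],
    "max_loss": 0.001,  # at most 0.1% of packets may never arrive
}

def honours(delays_ms, lost, agreement):
    """True if an observed sample stays within the agreement."""
    total = len(delays_ms) + lost
    if lost / total > agreement["max_loss"]:
        return False
    delays = sorted(delays_ms)
    for q, bound in agreement["delay_bounds"]:
        # Lost packets count as infinitely late, so they occupy the top
        # of the ordering; a quantile that falls among them is a breach.
        idx = int(q * total)
        if idx >= len(delays) or delays[idx] > bound:
            return False
    return True

sample = [5.0] * 900 + [40.0] * 95 + [80.0] * 5   # 1000 delivered packets
print(honours(sample, lost=0, agreement=AGREEMENT))  # True
```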
This ‘untimeliness’ is a reframing of the nature of quality: from an attribute of a ‘positive’ thing (quantity), to the absence of a negative thing (quality attenuation). There are three basic laws of networking (that don’t appear in the textbooks) that describe this ‘quality attenuation’ phenomenon: it exists; is conserved; and can (partly) be traded between flows.
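The conservation and trading laws can be seen in a toy model of my own (not part of the framework): ten equal packets arrive together at a link, and any work-conserving serving order leaves the total waiting unchanged; scheduling only decides which flow bears it.

```python
# Toy model: ten unit-time packets arrive together, five from flow A and
# five from flow B. The serving order redistributes waiting between the
# flows, but the total waiting is conserved.

def waits(order):
    """Each packet waits for the work served before it starts."""
    t, per_flow = 0, {"A": 0, "B": 0}
    for flow in order:
        per_flow[flow] += t
        t += 1  # unit service time per packet
    return per_flow

fifo     = waits(list("ABABABABAB"))  # interleaved, 'fair' order
priority = waits(list("AAAAABBBBB"))  # flow A strictly prioritised

for name, w in [("fifo", fifo), ("priority-A", priority)]:
    print(f"{name}: A waits {w['A']}, B waits {w['B']}, total {w['A'] + w['B']}")
```

Both orders print a total of 45 time units: prioritisation moves quality attenuation between the flows, it does not destroy it.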
The amount of quality attenuation an application can tolerate while still delivering an acceptable rate of performance failure defines its ‘predictable region of operation’. This demand requirement is then expressed in a ‘timeliness agreement’ that contracts the required supply.
Underpinning this is a need to quantify the idea of quality attenuation. This involves extending the mathematics of randomness from ‘events’ (like rolling a die) to include ‘non-events’ (the die never lands). This allows packet loss to be included in a single resource model alongside delay.
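A sketch of what such an ‘improper’ random variable might look like in code (the representation is my own, for illustration): a delay distribution whose cumulative probability tends to 1 minus the loss rate, never to 1.

```python
from dataclasses import dataclass

@dataclass
class ImproperDelay:
    """An 'improper' delay distribution: total mass is 1 - loss, with
    the deficit being the probability of the non-event (never arriving)."""
    delays_ms: list   # observed delays of delivered packets
    loss: float       # probability mass that never arrives

    def cdf(self, t_ms):
        """P(packet has arrived by time t); tends to 1 - loss, not 1."""
        arrived = sum(1 for d in self.delays_ms if d <= t_ms)
        return (1 - self.loss) * arrived / len(self.delays_ms)

dq = ImproperDelay(delays_ms=[10, 12, 15, 20, 40], loss=0.02)
print(dq.cdf(15))      # 0.588: three of five delivered packets by 15 ms
print(dq.cdf(10**9))   # 0.98: the CDF plateaus at 1 - loss
```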
This is akin to how imaginary numbers extend real numbers, and how complex analysis underpins the physics of electromagnetism. Without expressions like ‘3i + 4’ you can’t model radio waves; without this new mathematics of ∆Q, you can’t adequately model packet network performance.
The ∆Q metrics can be ‘added’ and ‘subtracted’, and this algebra is the basis of a new calculus that lets you ask ‘what if?’ questions. It can be used to quantify a layered model of reality (a ‘morphism’) that relates the user experience to the network service quality with known error bounds.
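For delay, the ‘+’ of this algebra is a convolution: two elements in series add their delays, and their delivery probabilities multiply. Here is a hedged sketch on a discretised grid, with invented per-hop numbers; ‘subtraction’, the inverse that isolates one element’s contribution from end-to-end measurements, is numerically more delicate, so only composition is shown.

```python
import numpy as np

def compose(pmf_a, loss_a, pmf_b, loss_b):
    """End-to-end ∆Q of two hops in series: delays add (convolution of
    the conditional-on-delivery PMFs), delivery probabilities multiply."""
    pmf = np.convolve(pmf_a, pmf_b)
    loss = 1 - (1 - loss_a) * (1 - loss_b)
    return pmf, loss

# Per-hop delay PMFs on a 1 ms grid (index = delay in ms); values invented.
hop1 = np.array([0.0, 0.6, 0.3, 0.1])        # mostly 1-2 ms
hop2 = np.array([0.0, 0.0, 0.5, 0.4, 0.1])   # mostly 2-3 ms

pmf, loss = compose(hop1, 0.001, hop2, 0.002)
cdf = np.cumsum(pmf) * (1 - loss)   # improper CDF: plateaus at 1 - loss
print(f"P(end-to-end delay <= 5 ms) = {cdf[5]:.3f}, loss = {loss:.4f}")
```

A ‘what if?’ question is then just re-running the composition with a candidate element swapped in and checking the result against the timeliness agreement.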