The telecoms industry has two fundamental issues whose resolution is a multi-decade business and technology transformation effort. This re-engineering programme turns the current “quantity with quality” model into a “quantities of quality” one. Those who prosper will have to overcome a powerfully entrenched incumbent “bandwidth” paradigm, under which incentives initially work strongly against investing in the inevitable and irresistible future.
Recently I had the pleasure of meeting the CEO of a fast-growing vendor of software-defined networking (SDN) technology. The usual ambition for SDN is merely internal automation and cost optimisation of network operation. In contrast, their offering enables telcos to develop new “bandwidth on demand” services. The potential for differentiated products that are more responsive to demand makes the investment case for SDN considerably more compelling.
We were discussing the “on-demand” nature of the technology. By definition this is a more customer-centric outlook than a supply-centric “pipe” mentality, where the product comes in a few fixed and inflexible capacities. What really struck me was how the CEO found it hard to engage with a difficult-to-hear message: “bandwidth” falls short as a way of describing the service being offered, from both a supply and a demand point of view.
At present, telecoms services are typically characterised as a bearer type (e.g. Ethernet, IP, MPLS, LTE) and a capacity (expressed as a typical or peak throughput). Whatever capacity you buy can be delivered over many possible routes, with the scheduling of the resources in the network being opaque to end users. All kinds of boxes in the network can hold up the traffic for inspection or processing. Whatever data turns up will have a certain level of “impairment”, in the form of delay and (depending on the technology) loss.
This means you have variable levels of quality on offer: a “quantity with quality” model. You are contracted to a given quantity, and it turns up with some kind of quality, which may be good or poor. Generally only the larger enterprise or telco-to-telco customers are measuring and managing quality to any level of sophistication. Where there is poor quality, there may be an SLA breach, but the product itself is not defined in terms of the quality on offer.
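To make that concrete, here is a minimal sketch (in Python, with field names invented purely for illustration) of what today’s product definition amounts to: the contract captures a bearer and a capacity, while the quality that actually turns up sits outside it.

```python
from dataclasses import dataclass

# A minimal sketch of today's "quantity with quality" product definition.
# Field names are hypothetical: the service is sold as a bearer type plus
# a capacity, and quality is whatever happens to turn up.
@dataclass
class QuantityWithQualityProduct:
    bearer: str           # e.g. "Ethernet", "IP", "MPLS", "LTE"
    capacity_mbps: float  # typical or peak throughput that is contracted

# What the customer actually experiences is measured (if at all) after
# the fact, and is not part of the product definition above.
@dataclass
class ObservedQuality:
    mean_delay_ms: float
    loss_rate: float

product = QuantityWithQualityProduct(bearer="Ethernet", capacity_mbps=100.0)
month_one = ObservedQuality(mean_delay_ms=12.0, loss_rate=0.001)  # good
month_two = ObservedQuality(mean_delay_ms=95.0, loss_rate=0.02)   # poor
# Both months are "conformant": the contract says nothing about quality.
```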
This “quantity with quality” model has two fundamental issues.
The first is that “bandwidth” does not reflect the true nature of user demand. An application will perform adequately if it receives enough timely information from the other end. This is an issue of quality first: you merely have to deliver enough volume at the required timeliness. As a result, a product characterised in terms of quantity does not sufficiently define whether it is fit-for-purpose.
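As a toy illustration of “enough volume at the required timeliness” (all numbers invented), a fitness-for-purpose test looks at whether each window of time delivers enough data within the application’s deadline, rather than at the long-run average throughput:

```python
# A sketch of "enough volume at the required timeliness", with made-up
# numbers. An application is satisfied if, in every window, enough data
# arrives within its deadline -- not if the long-run average is high.
def fit_for_purpose(arrivals, window_s, need_bytes, deadline_s):
    """arrivals: list of (send_time_s, arrive_time_s, size_bytes)."""
    if not arrivals:
        return False
    horizon = max(got for _, got, _ in arrivals)
    t = 0.0
    while t < horizon:
        timely = sum(size for sent, got, size in arrivals
                     if t <= got < t + window_s and got - sent <= deadline_s)
        if timely < need_bytes:
            return False  # one starved window is enough to break the app
        t += window_s
    return True

# Example with invented numbers: a call needs 20 kB per second, and each
# packet is useful only if it arrives within 150 ms of being sent.
pkts = [(i * 0.05, i * 0.05 + 0.05, 1200) for i in range(200)]
print(fit_for_purpose(pkts, window_s=1.0, need_bytes=20_000, deadline_s=0.15))
```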
In the “quantity with quality” model the application performance risk is left with the customer. The customer has little recourse if the quality varies and is no longer adequate for their needs. Since SLAs are often very weak in terms of ensuring the performance of any specific application, you can’t complain if you don’t get the quality over-delivery that you (as a matter of custom) feel you are entitled to.
The second issue is that “bandwidth” is also a weak characterisation of the supply. We are moving to a world with ever-increasing levels of statistical sharing (packet data and cloud computing) and dynamic resource control (e.g. NFV, SD-WAN). This introduces more variability into the supply, and an average like “bandwidth” misses the service quality and user experience effects of these high-speed changes.
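A toy simulation (with made-up delay distributions) shows why: two supplies can have the same average delay while one has a tail that ruins the user experience.

```python
import random
import statistics

random.seed(42)

# Two toy delay processes with (roughly) the same mean: one steady, one
# heavily shared that occasionally spikes. The numbers are invented.
steady = [10.0 + random.gauss(0, 1) for _ in range(10_000)]
shared = [5.0 if random.random() > 0.05 else 105.0 for _ in range(10_000)]

def p99(xs):
    return sorted(xs)[int(0.99 * len(xs))]

for name, xs in [("steady", steady), ("shared", shared)]:
    print(f"{name}: mean={statistics.mean(xs):5.1f} ms  p99={p99(xs):6.1f} ms")
# Both means are ~10 ms, but the 99th-percentile delays differ by nearly
# an order of magnitude: the tail, not the average, decides whether an
# application works.
```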
The impact on the network provider is that they often over-deliver in terms of network quality (and hence carry excessive cost) in order to achieve adequate application performance. Conversely, they also sometimes under-deliver quality, creating customer dissatisfaction and churn, and may not know it. Optimising the system for cost or revenue is hard when you don’t fully understand how the network control knobs relate to user experience, or what “success” looks like to the customer.
What the CEO of the SDN vendor found especially challenging was dealing with a factual statement about networks: there is an external reality to both the customer experience and network performance, and aligning to that reality is not merely a good idea, it is (in the long run) mandatory! This felt like a confrontational attack on their “bandwidth on demand” technology and business model.
Confronting this “reality gap” is an understandable source of anxiety. The customer experience is formed from the continual passing of instantaneous moments of application operation. The network performance is formed by the delivery of billions and trillions of packets passing through stochastic systems. Yet the metrics we use to characterise the service and manage it reflect neither the instantaneous nature of demand, nor the stochastic properties of supply. The news that you also need to upgrade your mathematics to deal with a hyper-dynamic reality only adds to the resistance.
An industry whose core practices are disconnected from both demand and supply inevitably faces trouble. In terms of demand, users find it hard to express their needs and buy a fit-for-purpose supply. If you are moving to an “on-demand” model, it helps if customers have a way of expressing demand in terms of the value they seek. For managing supply, you need to be able to understand the impact of your “software-defined” choices on the customer, so as to be able to make good ones and optimise cost and QoE.
The only possible resolution is to align with an unchanging external reality, and move to a new paradigm. We need to upgrade our products and supporting infrastructure to a “quantities of quality” model. By making the minimum service quality explicitly defined, we can both reflect the instantaneous nature of user experience demand, and also the stochastic and variable-quality nature of supply.
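As a sketch of where this leads (the fields and thresholds below are illustrative, not a proposal), a “quantities of quality” product makes the quality bounds part of the contract itself, so that conformance can actually be tested:

```python
from dataclasses import dataclass

# A sketch of a "quantities of quality" product: quality bounds are part
# of the contract. Field names and thresholds are invented for illustration.
@dataclass
class QuantityOfQualityProduct:
    volume_gb_per_month: float  # the quantity being bought...
    max_delay_ms_p999: float    # ...at an explicit, bounded quality:
    max_loss_rate: float        #    99.9th-percentile delay and loss caps

    def conforms(self, delays_ms, lost, total):
        """Check delivered quality against the contracted bounds."""
        p999 = sorted(delays_ms)[int(0.999 * len(delays_ms))]
        return (p999 <= self.max_delay_ms_p999
                and lost / total <= self.max_loss_rate)

voice_grade = QuantityOfQualityProduct(
    volume_gb_per_month=50.0, max_delay_ms_p999=30.0, max_loss_rate=0.001)
print(voice_grade.conforms(delays_ms=[12.0, 14.0, 18.0, 25.0], lost=1, total=5000))
```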
This is not a trivial matter to execute, given how every operational system and commercial incentive is presently designed to sell ever more quantity, not to align supply quality with the demand for application performance.
In the short run, the answer is to shine a brighter light on what quality is being delivered in the existing “bandwidth” paradigm. If you are engineering an SD-WAN, for example, and you have lots of sharp “transitions” as you switch resources around, what is the impact of shifting those loads on the end user? Do you have sufficient visibility of the supply chain to understand your contribution to success or failure in the users’ eyes?
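The kind of visibility required might look like the following sketch, which assumes a hypothetical measurement feed of per-path delay samples and reports how the user-visible delay changes at each transition:

```python
# A sketch of the visibility question for SD-WAN transitions, against a
# hypothetical measurement feed of (timestamp_s, path_id, delay_ms) tuples.
def transition_impact(samples):
    """Report the change in observed delay around each path switch."""
    report = []
    for prev, cur in zip(samples, samples[1:]):
        _, prev_path, prev_delay = prev
        t, cur_path, cur_delay = cur
        if cur_path != prev_path:  # a "sharp transition": load moved paths
            report.append((t, prev_path, cur_path, cur_delay - prev_delay))
    return report

feed = [(0, "mpls", 12.0), (1, "mpls", 13.0),
        (2, "broadband", 48.0),  # switch: user-visible delay jumps
        (3, "broadband", 45.0), (4, "mpls", 14.0)]
for t, frm, to, delta in transition_impact(feed):
    print(f"t={t}s {frm}->{to}: delay changed by {delta:+.1f} ms")
```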
In the medium term, the engineering models used by these systems need to make quality a first-class part of the design process. The intended uses need to be understood, the quality required to meet them properly defined, and the operational mechanisms configured to ensure that this is delivered. The science and engineering of performance needs to improve to make this happen, and a lot of operational and business management systems must be upgraded.
In the long run, the fundamental products and processes need to be changed to a more user-centric model for an on-demand world. Rather than only buying a single broadband service to deliver any and all applications, networks will interface with many cloud platforms that direct application performance through APIs. Those APIs will define the quality being demanded, which the network must then supply.
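As a sketch only, with an entirely invented schema, such an API request might carry the demanded quality bounds for a flow; the network’s job is then to accept, price, or decline that demand, and to engineer its resources so any accepted bound is actually met:

```python
import json

# An entirely invented request schema, sketching how a cloud platform
# might express demanded quality to the network through an API.
demand = {
    "application": "interactive-video",
    "flow": {"src": "platform-edge-17", "dst": "user-access-42"},
    "quality": {                  # the demand, stated as quality bounds
        "delay_ms_p999": 25,
        "loss_rate_max": 0.0005,
        "throughput_mbps_min": 4.0,
    },
    "duration_s": 1800,
}
print(json.dumps(demand, indent=2))
```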
Success in a software-defined world will not come from repackaging circuit-era products, but from engineering known outcomes for end users with tightly managed cost and risk. New custom will come from being an attractive service delivery partner to global communications and commerce platforms.
Only an improved “quantities of quality” approach can deliver this desirable industry future.