The success or failure of public cloud services can be measured by whether they deliver high levels of performance, security and reliability that are on par with, or better than, those available within enterprise-owned data centers. Underscoring the rapid growth of the cloud market, IDC forecasts that public cloud IT spending will increase from $40 billion in 2012 to $100 billion in 2016. To provide the performance, security and reliability needed, cloud providers are moving quickly to build a virtualized multi-data center service architecture, or a “data center without walls.”
This approach federates the data centers of both the enterprise customer and the cloud service provider so that all compute, storage, and networking assets are treated as a single, virtual pool, with optimal placement, migration, and interconnection of workloads and associated storage. This “data center without walls” architecture gives IT tremendous operational flexibility and agility to respond to and support business initiatives by transparently using both in-house and cloud-based resources. In fact, internal studies show that IT can achieve resource efficiency gains of 35 percent over isolated provider data center architectures.
However, this architecture is not without its challenges. The migration of workloads between the enterprise and the public cloud creates traffic between the two, as well as between clusters of provider data centers. In addition, transactional loads and demands placed on the backbone network, including self-service customer application operations (application creation, re-sizing, or deletion in the cloud) and specific provider administrative operations, introduce variability and unpredictability into traffic volumes and patterns. To accommodate this variability, providers normally would have to over-provision the backbone to handle the sum of these peaks, an inefficient and costly approach.
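To make the over-provisioning penalty concrete, here is a minimal sketch using entirely made-up hourly demand figures. It contrasts sizing a backbone for the sum of each traffic source's individual peak with sizing it for the peak of the aggregate demand that a shared pool actually sees.

```python
# Hypothetical illustration: sizing a backbone for the sum of per-source
# peaks overshoots the capacity actually needed at any one time.

# Synthetic hourly demand (Gb/s) from three traffic sources on a shared backbone.
demand = {
    "workload_migration": [2, 8, 3, 1],
    "customer_self_service": [5, 2, 6, 2],
    "provider_admin_ops": [1, 3, 2, 7],
}

# Over-provisioning: size for every source's individual peak, added together.
over_provisioned = sum(max(series) for series in demand.values())

# Shared pool: size for the largest *aggregate* demand seen in any hour.
hours = range(len(next(iter(demand.values()))))
shared_pool = max(sum(series[h] for series in demand.values()) for h in hours)

print(f"Over-provisioned capacity: {over_provisioned} Gb/s")  # 8 + 6 + 7 = 21
print(f"Shared-pool capacity:      {shared_pool} Gb/s")       # peak aggregate = 13
```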
Getting to Performance-on-Demand
In the future, rather than over-provisioning, service providers will employ intelligent networks that can be programmed to allocate bandwidth from a shared pool of resources where and when it is needed. This software-defined networking (SDN) framework consists of three layers: a virtualized infrastructure layer (the transport and switching network elements), a network control layer or SDN controller (the software that configures the infrastructure layer to accommodate service demands), and an application layer (the service-creation and delivery software, such as the cloud orchestrator, that drives the required network connectivity).
SDN enables cloud services to benefit from performance-on-demand
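As a rough illustration of how the three layers just described fit together, the sketch below models them as minimal Python classes; all class and method names are hypothetical stand-ins rather than any real controller or orchestrator API.

```python
# Minimal sketch of the three-layer SDN framework described above.
# All class and method names are hypothetical, for illustration only.

class InfrastructureLayer:
    """Transport and switching elements exposed through a programmable interface."""
    def configure_path(self, path, bandwidth_gbps):
        print(f"Provisioning {bandwidth_gbps} Gb/s along {' -> '.join(path)}")

class SDNController:
    """Network control layer: translates service demands into infrastructure config."""
    def __init__(self, infrastructure):
        self.infrastructure = infrastructure

    def request_connection(self, src, dst, bandwidth_gbps):
        path = [src, "core-1", dst]          # placeholder for real path computation
        self.infrastructure.configure_path(path, bandwidth_gbps)
        return path

class CloudOrchestrator:
    """Application layer: drives connectivity needs without touching the network directly."""
    def __init__(self, controller):
        self.controller = controller

    def migrate_workload(self, src_dc, dst_dc):
        return self.controller.request_connection(src_dc, dst_dc, bandwidth_gbps=1)

# Wiring the layers together; requests flow top-down from application to infrastructure.
orchestrator = CloudOrchestrator(SDNController(InfrastructureLayer()))
orchestrator.migrate_workload("dc-east", "dc-west")
```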
The logically centralized control layer software is the linchpin of orchestrated performance-on-demand. It allows the orchestrator to request allocation of network resources without needing to understand the complexity of the underlying network.
For example, the orchestrator may simply request a connection between specified hosts in two different data centers to handle the transfer of 1 TB with a minimum flow rate of 1 Gb/s and packet delivery ratio of 99.9999% to begin between the hours of 1:00 a.m. and 4:00 a.m. The SDN controller first verifies the request against its policy database, performs path computation to find the best resources for the request, and orchestrates the provisioning of those resources. It subsequently notifies the cloud orchestrator so that the orchestrator may initiate the inter-data center transaction.
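The sketch below walks through that example request from receipt to notification. The data structure, function names, and policy check are illustrative assumptions, not an actual controller API.

```python
# Sketch of the request in the example above: a scheduled, policy-checked,
# path-computed bulk transfer between two data centers. Names are hypothetical.

from dataclasses import dataclass

@dataclass
class TransferRequest:
    src_host: str
    dst_host: str
    volume_tb: float          # total data to move
    min_rate_gbps: float      # minimum flow rate
    delivery_ratio: float     # required packet delivery ratio
    window: tuple             # (start, end) as hours of day

request = TransferRequest("dc1-host42", "dc2-host07",
                          volume_tb=1.0, min_rate_gbps=1.0,
                          delivery_ratio=0.999999, window=(1, 4))

def handle_request(req):
    # 1. Verify the request against the policy database (stubbed here).
    if not policy_allows(req):
        return None
    # 2. Compute the best path that satisfies the rate and delivery-ratio constraints.
    path = compute_path(req.src_host, req.dst_host, req.min_rate_gbps)
    # 3. Provision the resources for the requested time window.
    provision(path, req.min_rate_gbps, req.window)
    # 4. Notify the cloud orchestrator that it may start the transfer.
    return {"path": path, "window": req.window}

# Stubbed dependencies so the sketch runs end to end.
def policy_allows(req): return req.min_rate_gbps <= 10
def compute_path(src, dst, rate): return [src, "backbone-a", dst]
def provision(path, rate, window): print(f"Reserved {rate} Gb/s on {path} for {window}")

print(handle_request(request))
```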
The benefits of this approach include cost savings and operational efficiencies. Delivering performance-on-demand in this way can reduce cloud backbone capacity requirements by up to 50 percent compared to over-provisioning, while automation simplifies planning and operational practices and reduces the costs associated with these tasks.
The network control and cloud application layers can also work hand-in-hand to optimize the service ecosystem as a whole. The network control layer has visibility into the entire landscape of existing connections, anticipated connections, and unallocated resources, making it more likely to find a viable path if one exists, even if nodes or links along the shortest route are congested.
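One plausible way to picture this global-view path computation: drop links that lack headroom for the new connection, then run a shortest-path search over what remains. The topology, latencies, and available capacities below are invented for illustration.

```python
# Sketch of how a control layer with a global view can route around congestion.
import heapq

# (node_a, node_b): (latency_ms, available_gbps) after existing reservations.
links = {
    ("dc1", "core1"): (5, 0.2),    # shortest route, but nearly full
    ("dc1", "core2"): (9, 4.0),
    ("core1", "dc2"): (5, 3.0),
    ("core2", "dc2"): (8, 4.0),
}

def find_path(src, dst, required_gbps):
    # Keep only links with enough headroom for the new connection.
    graph = {}
    for (a, b), (latency, avail) in links.items():
        if avail >= required_gbps:
            graph.setdefault(a, []).append((b, latency))
            graph.setdefault(b, []).append((a, latency))
    # Dijkstra over the filtered graph: lowest total latency wins.
    queue, seen = [(0, src, [src])], set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == dst:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nxt, latency in graph.get(node, []):
            heapq.heappush(queue, (cost + latency, nxt, path + [nxt]))
    return None

# The shortest route (dc1 -> core1 -> dc2) lacks headroom for 1 Gb/s,
# so the search falls back to dc1 -> core2 -> dc2.
print(find_path("dc1", "dc2", required_gbps=1.0))
```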
The cloud orchestrator can automatically respond to inter-data center workload requirements. Based on policy and bandwidth schedules, the orchestrator works with the control layer to connect destination data centers and schedule transactions to maximize the performance of the cloud service. Through communication with the network control layer, it can select the best combination of connection profile, time window and cost.
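A simple sketch of that selection logic, assuming the control layer returns a list of candidate offers (the profiles, rates, and prices here are hypothetical): the orchestrator keeps only the offers that finish the transfer within the offered window and before its deadline, then picks the cheapest.

```python
# Sketch of the orchestrator-side choice described above: pick the combination
# of connection profile, time window and cost that still meets the deadline.
# The offers and pricing are invented for illustration.

offers = [
    # (profile, start_hour, end_hour, rate_gbps, cost_per_hour)
    ("premium",  1, 4, 10, 100),
    ("standard", 1, 4,  2,  12),
    ("off-peak", 2, 5,  1,   5),
]

def pick_offer(volume_tb, deadline_hour):
    volume_gb = volume_tb * 8000            # terabytes -> gigabits
    viable = []
    for profile, start, end, rate, cost_per_hour in offers:
        hours_needed = volume_gb / (rate * 3600)
        finish = start + hours_needed
        if finish <= min(end, deadline_hour):
            viable.append((hours_needed * cost_per_hour, finish, profile))
    return min(viable) if viable else None

# Move 1 TB before 05:00: the cheapest offer that still finishes in time wins.
print(pick_offer(volume_tb=1.0, deadline_hour=5))
```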
Summary
Whether built with SDN or other technologies, an intelligent network can transform a facilities-only architecture into a fluid workload-orchestration system, and a scalable, intelligent network can offer performance-on-demand, assigning network quality and bandwidth per application.
This intelligent network is the key ingredient that enables enterprises to interconnect data centers with application-driven programmability and enhanced performance, at optimal cost.
I like that you are specifying application requirements in abstract terms that are not being converted all the way down into the networking primitives needed to do them. That is where I think things will end up as well. There is some focus on those abstractions now after a rather long look at low-level protocols like OpenFlow. I think the industry is correcting some.
One challenge will be the amount of trust required to specify things at a high level and believe that it will correctly translate into behavior. Should be interesting to see how DevOps, application guys, and network folks work through this trust thing.
I am also curious what will happen with reporting and troubleshooting tools. As you start to specify SLAs, it will be a trust but verify model. If you have any insight here, that would be excellent.
-Mike (@mbushong)
Plexxi
Mike, you bring up a very good point that we don't want "Wild West" uncontrolled access to network resources. Ciena is implementing key management tools, such as being able to allocate a portion of the network for dynamic behavior so that activity does not interrupt production workloads. Dynamic job requests must also meet policy, and then go through a scheduler to ensure network resources are available for that task. Tools such as these will help build the trust you mention between all parties involved even before SLA troubleshooting is required.