Machine to machine (M2M) communications may not be new, but with the rapid deployment of embedded wireless technology in vehicles, appliances and electronics, it is becoming a force for service providers to reckon with as droves of businesses and consumers seek to reap its benefits. By 2020, the GSM Association (GSMA) predicts that there will be 24 billion connected devices worldwide, while Forrester predicts that mobile machine interactions will exceed mobile human interactions by more than 30 times. To ensure competitive advantage, service providers must invest in their networks to enable M2M services more quickly, economically, securely and assuredly.
The principle of M2M communications is straightforward. Sensors are installed on consumer or commercial hardware to transfer application-relevant information to other sensors and/or to a centralized storage facility. Using this information, complex algorithms infer decisions relevant to the specific application, which are then executed accordingly. While this is simple in theory, in practice it requires the construction of a complex network, with a clear path between devices and storage; the ability to store, process and analyze large amounts of data; and the ability to take action based on this intelligence.
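To make that data path concrete, the sketch below shows the pattern in miniature, assuming a hypothetical sensor reading type, an in-memory stand-in for the centralized storage facility, and a toy decision rule; none of these names come from any specific M2M platform.

```python
# Minimal sketch of the M2M pattern described above: sensors push readings to a
# central store, and a decision rule runs over the aggregated data.
# SensorReading, central_store and decide_action are illustrative names only.
import random
import time
from dataclasses import dataclass

@dataclass
class SensorReading:
    sensor_id: str
    metric: str          # e.g. "kwh" for a smart meter
    value: float
    timestamp: float

# Stand-in for the centralized storage facility (a cloud database in practice).
central_store: list[SensorReading] = []

def publish(reading: SensorReading) -> None:
    """Transfer a reading to central storage (over a real network in practice)."""
    central_store.append(reading)

def decide_action(readings: list[SensorReading], threshold: float) -> str:
    """Toy decision algorithm: flag high aggregate consumption."""
    total = sum(r.value for r in readings)
    return "shed-load" if total > threshold else "normal"

if __name__ == "__main__":
    for i in range(5):
        publish(SensorReading(f"meter-{i}", "kwh", random.uniform(0.5, 2.0), time.time()))
    print(decide_action(central_store, threshold=6.0))
```

In a real deployment the in-memory list would be replaced by a cloud data store and the decision logic by far richer analytics, but the shape of the flow, collect, aggregate, decide, act, is the same.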
As evidenced by recent reports, it’s clear that the industry believes cloud computing is becoming a viable service option for mission-critical business applications. A 2012 survey conducted by North Bridge Venture Partners and sponsored by 39 cloud companies, including Amazon Web Services, Rackspace, Eucalyptus and Glasshouse, found that a meager 3% of respondents considered adopting cloud services to be too risky, down from 11% the previous year. In addition, only 12% said the cloud platform was too immature, down from 26% the year prior. This evolution of the computing industry toward cloud has enabled the storage of vast amounts of data from devices and has also made the analysis of this data more feasible. In fact, Microsoft recently said that its Azure cloud stores more than four trillion objects, a fourfold increase from a year before. The platform averages 270,000 requests per second, peaking at 880,000 requests per second during some months; requests per second have increased almost threefold in the past year, a Microsoft official wrote in a blog post. As a comparison, Amazon Web Services said that its Simple Storage Service (S3) alone holds 905 billion objects and is growing at a rate of one billion objects per day, while handling an average of 650,000 requests per second. As cloud becomes the de facto model for M2M communications, M2M vendors must understand what it takes to enable the secure and reliable transfer of information via that vehicle.
It is also important to note that M2M communications can be triggered by both planned and unplanned events. For example, in a smart grid application, smart meters can send information about electricity consumption to a centralized database at pre-scheduled times. Sensors can also be designed to react to unplanned events, such as extreme weather conditions, and trigger increased communication in a particular geography. As such, the network that connects these devices to each other, and to the cloud, has to perform in both cases, adapting to forecasted increases in traffic as well as random spikes, with automatic, assured performance.
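As a rough illustration of the two trigger types, the sketch below contrasts a pre-scheduled report with an event-driven one. The 15-minute interval and the wind-speed threshold are illustrative assumptions, not values from any standard or the article itself.

```python
# Hedged sketch of the two reporting modes described above: a planned, scheduled
# report and an unplanned, event-driven burst. All constants are assumptions.
import time

REPORT_INTERVAL_S = 900.0      # e.g. a smart meter reporting every 15 minutes
WIND_ALERT_THRESHOLD = 30.0    # hypothetical extreme-weather trigger (m/s)

def scheduled_report_due(last_report_time: float, now: float) -> bool:
    """Planned event: report when the pre-scheduled interval has elapsed."""
    return now - last_report_time >= REPORT_INTERVAL_S

def weather_triggered_report(wind_speed_ms: float) -> bool:
    """Unplanned event: report immediately when conditions cross a threshold."""
    return wind_speed_ms >= WIND_ALERT_THRESHOLD

if __name__ == "__main__":
    now = time.time()
    print(scheduled_report_due(last_report_time=now - 1000, now=now))  # True: interval elapsed
    print(weather_triggered_report(wind_speed_ms=34.2))                # True: storm conditions
```

The network has to carry both kinds of traffic well: the predictable cadence of scheduled reports and the sudden, correlated bursts that event triggers can produce across an entire region.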
Cloud Infrastructure Requirements for M2M Communications
The network platform that enables M2M communications has multiple segments: the access segment (wireless radio or wireline-based), backhaul to the cloud and the cloud network.
Figure 1: Information from billions of sensors is captured in data centers for processing. Sensor data is transmitted over a wireless access network, mobile backhaul and core network to the data centers.
Sensor data travels to the cloud over wireless/radio or wireline access infrastructures. To be effective, the aggregation network has to provide highly resilient, scalable and cost-effective backhaul from either mobile or wireline access. Without it, M2M communications would be unreliable and many of these new applications could never be fully realized.
In order to enable cloud as a platform for M2M adoption, innovation and communication, the cloud has to serve as a high-performance computing platform, often referred to as an enterprise-grade or carrier-grade cloud. High-performance cloud networks need terabit-level connectivity to withstand the projected volume of M2M traffic. These networks will require provisioning tools so that administrators can allocate resources where and when they are needed, and also ensure that network assets are available to support delivery of bandwidth-rich applications and services. And, finally, data centers and the cloud backbone need to function as a seamless, single network—a data center without walls—to optimize performance and economics.
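The provisioning idea can be sketched as a simple admission check on a shared link: grant bandwidth to a service only while headroom remains, and return it to the pool when released. The class, capacities and service names below are hypothetical, meant only to illustrate elastic allocation rather than any vendor's provisioning tool.

```python
# Illustrative sketch of elastic bandwidth provisioning on a single
# cloud-backbone link. Capacities and service names are assumptions.
class LinkProvisioner:
    """Tracks committed bandwidth on one link and admits requests that fit."""

    def __init__(self, capacity_gbps: float):
        self.capacity_gbps = capacity_gbps
        self.committed: dict[str, float] = {}

    def allocate(self, service: str, gbps: float) -> bool:
        """Grant bandwidth only when it fits within the remaining capacity."""
        if sum(self.committed.values()) + gbps > self.capacity_gbps:
            return False
        self.committed[service] = self.committed.get(service, 0.0) + gbps
        return True

    def release(self, service: str) -> None:
        """Return a service's bandwidth to the shared pool."""
        self.committed.pop(service, None)

if __name__ == "__main__":
    link = LinkProvisioner(capacity_gbps=100.0)   # a 100G link, per the article
    print(link.allocate("m2m-backhaul", 40.0))    # True: fits within capacity
    print(link.allocate("video-cdn", 70.0))       # False: would exceed capacity
    link.release("m2m-backhaul")
    print(link.allocate("video-cdn", 70.0))       # True once capacity is freed
```

Real provisioning systems add time, topology and policy dimensions, but the core trade-off is the same: bandwidth follows demand instead of being statically nailed up.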
Widespread availability of M2M technology has already spurred innovative use cases across different industries, such as: smart grid in energy/utilities; communication between various devices for security and industrial/building control; environmental monitoring; and many applications in the consumer domain ranging from retail to home appliance intelligence.
Keys to Success
To foster adoption of M2M-enabled technology, initiatives such as GSMA’s Connected Life regularly bring together thought leaders within the M2M ecosystem to share insights that help increase the availability of anywhere, anytime connectivity.
The successful adoption of M2M depends on the maturity of multiple elements in the ecosystem, including the wireless technology and business system; the network connectivity that connects the machines and sensors to the cloud; the cloud computing platform; and the software applications that translate the huge amount of data into useful intelligence.
To build an enterprise- or carrier-grade cloud platform that can support the projected volume of M2M traffic, the underlying network that connects enterprise data centers, and data centers to the cloud, has to be reliable, high-performing, connection-oriented and low-latency. It must be responsive and integrated into the cloud ecosystem to satisfy the connectivity requirements of storage and compute cloud subsystems. It must also enable elastic/liquid bandwidth to ensure the performance and economic benefits of the cloud are realized. Carrier-class network infrastructure—with the ability to scale to 100G today and terabit capacities in the future, and with multiple levels of resiliency enabled by an intelligent control plane—will be critical to enabling these cloud networks.