

Defining Broadband

The FCC is seeking public comments to help create a better definition of “broadband”. The effort is in relation to its development of a National Broadband Plan by February 2010 as part of the American Recovery and Reinvestment Act.

Accurately noting that “broadband can be defined in myriad ways” and “tends to center on download and upload throughput,” the FCC seeks a more robust definition. The definition will be part of the governance over those receiving funding for broadband development as part of the Recovery Act.

This could get interesting.

In my experience with broadband, both as an engineer and a consumer, the only common things I’ve seen are that:

  • It’s not dial-up
  • Advertised speeds are not real speeds and may vary
  • The service is shared and oversubscribed.

The rest is really a crapshoot.

It’s good to see the FCC seeking outside input. Their decision against Comcast last year seemed to be overzealous and technologically misguided. It looked more like a power play than an honest attempt to create a level playing field. Their decision seemed to ignore the technical or economic realities that factor into how broadband services are designed, delivered and managed, and how these factors have worked to maintain reasonable pricing.

The FCC is open to considering performance indicators and characteristics such as latency, reliability, mobility, jitter, traffic loading and even—wow!—“diurnal patterns.” (Diurnal is basically the opposite of “nocturnal”. Yes, I’ll admit I had to look it up.) Diurnal behavior in humans is one factor in broadband oversubscription models. Network traffic patterns vary during the course of a day according to human behavioral traits, one of which is our need to sleep once in a while, and these factors can be exploited in traffic engineering.

The FCC is also looking to determine how performance indicators should be measured and on what network segments they are relevant. Are they measured just on the local access link or a longer end-to-end path? And if the latter, what exactly is “end-to-end” when you are talking about the Internet?

So, it’s a good question. What exactly is “broadband”? What factors should be included in the definition? Is there really a single definition that can be consistently applied across the board, given that multiple technologies are already in use and each has its own technical nuances and characteristics? The FCC acknowledges such differences and the need to account for them.

Digression: The biggest irony is the word “broadband” itself, and one wonders how that term came into vogue when historically its use was in RF communication. Perhaps the reasoning was related to broadband originating with DSL or cable technology. Perhaps the term itself should be retired. The term “high-speed” Internet is probably more appropriate, but let’s not split hairs.

The FCC has its work cut out for it, and you can see why they have reached out.

The rest of this article is just brainstorming on some of the relevant points. My apologies in advance for the length of it. If it sometimes sounds like an undisciplined stream of consciousness, it is. Readers beware!

Throughput or Speed or Bandwidth or Download Rate or…

The FCC acknowledges that the definition of broadband tends to center too much on upload or download throughput rates. Of course it does. That was originally and still is the real differentiator between broadband and its dial-up predecessor. While it is wise to broaden the scope, no definition can omit speed. It will continue to be the focal point.

Misleading advertising, whether intentional or not, has always been an issue in broadband. You can blame a lot of the public frustration on it. “Throughput” is a misleading term, often causing consternation among the subscriber community who (mis)interpret it. But it is very difficult to explain to a layperson why the advertised speed is not exact, that it is more of an average and will vary. At the same time, it is only fair that the marketing folks come up with something to characterize the “speed” of their service without shortchanging it.

But “speed” is really not the right word either. It more closely resembles the physical line rate, which, while relevant for dial-up and leased lines, has less meaning in the broadband world. Yet “speed” is at least something the average layperson can relate to, more so than “throughput” or “bandwidth”.

A minimum “speed” will need to be included in any broadband definition. It is the key parameter. Maybe several minimums will be needed depending on the technology in consideration or other variables. It will be tempting to want to jack up the rate—who among us wouldn’t want the service to be as fast as possible? But just remember that bandwidth is not infinite, nor is it free. Someone has to pay for it. It looks like your tax dollars are going to fund some build-outs, but it will be your monthly payment that foots the bill after that.

Oversubscription

Most consumers do not realize that the only reason they are getting broadband service for a mere $40 per month is oversubscription. Heavy oversubscription. So heavy that even if the service is quoted at some N Mbps, service providers are counting on your average speed to be much less, maybe just a hundred Kbps or even less. The statistics usually work out well. The bottom line is that oversubscription is necessary. Without it, you are talking about a leased line service, and the prices would be much higher.

I’ve never understood why people struggle with this concept. Oversubscription governs everything, from lanes on a highway to checkout lines in a supermarket to tables and servers in a restaurant. All systems are designed to handle a certain load but are always bounded by some practical limits. No system can handle all potential users simultaneously. The trick is to engineer it to stay one step ahead of the user community’s demands.

I’ve also never understood why so many have a hard time accepting that the advertised speed is not exact and will vary. Broadband “speed” is more like a speed limit. 55 mph may be the posted speed limit, and you may go at that rate most of the time, but not all the time. During rush hour you rarely reach it, while late in the evening you can exceed it (police notwithstanding).

Oversubscription models account for the fact that subscribers are not actively online all the time. People occasionally work, eat, watch TV, sleep (remember, we are “diurnal”) and pursue other hobbies. Oversubscription also accounts for the fact that when subscribers are online, there is idle time when they are reading something they have downloaded. These breaks in the action allow time for others to use the pipes. Oversubscription also considers the traditional asymmetric nature of Internet traffic, where subscribers tend to “pull” more data (download) than “push”. This asymmetry has been leveling off over time with P2P uploading, but for the most part it still holds. Overall, the traffic statistics show that these basic principles can be exploited such that a large pool of subscribers can share a bandwidth segment that could not possibly handle all of them downloading at the same time.

An aggressive oversubscription model may assume that the average subscriber operates as low as 32Kbps or so. Now, if the marketing folks advertised that, who would sign up? Most consumers would look at that and say, hey, that isn’t even dial-up speed! But it’s just an average diluted over a period of time. Subscribers burst into the network past the advertised and contracted rates (assuming the ISP is not doing any hard rate limiting; most ISPs allow bursting beyond the contracted rate as long as the network is not congested at that time).
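To make the arithmetic concrete, here is a minimal sketch of that kind of dimensioning exercise. The tier, segment size and per-subscriber average are hypothetical numbers chosen for illustration, not figures from any actual provider.

```python
# Illustrative oversubscription arithmetic (all numbers are hypothetical).

ADVERTISED_RATE_KBPS = 6_000      # a "6 Mbps" tier sold to each subscriber
ASSUMED_AVG_KBPS = 32             # average rate the model assumes per subscriber
SHARED_SEGMENT_KBPS = 1_000_000   # a 1 Gbps aggregation link behind the access gear

# How many subscribers the segment supports if each really does average 32 Kbps.
subscribers = SHARED_SEGMENT_KBPS // ASSUMED_AVG_KBPS

# Oversubscription ratio: total bandwidth sold versus capacity actually provisioned.
ratio = (subscribers * ADVERTISED_RATE_KBPS) / SHARED_SEGMENT_KBPS

print(f"Subscribers on the segment: {subscribers}")   # 31250
print(f"Oversubscription ratio:     {ratio:.0f}:1")   # about 188:1
```

The point is not the particular numbers but how quickly the ratio grows when the assumed average is a small fraction of the advertised rate.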

Pretty much all telecommunications services are oversubscribed, at least those that are shared in nature, and certainly all packet-based services. There are no true service guarantees, just (hopefully) well-engineered networks based on statistical gambles that usually work but sometimes fail in the presence of an extreme circumstance (think 9/11) or a disruptive application (think peer-to-peer file sharing). It has never been economically possible to create a profitable service that provides absolute guarantees of service quality while maintaining reasonable prices. I doubt we’ll be adding “broadband speeds” anytime soon to the “death and taxes” punch line of the “only guarantees in life” cliché.

We take dial tone and call completion success in the PSTN for granted, but even they are not guaranteed. (Again, think 9/11.) It’s just that the voice world has had 100 years to stabilize and provide a reasonably consistent statistical basis for designing a voice network. The Erlang models give a pretty good indication of the number of trunks you need to keep the probability of fast busies below an intended design target. But extreme conditions have sometimes challenged that model. An example was when unlimited dial-up Internet access came along in the ’90s. Suddenly the average call time went way up and the peaks shifted to evening instead of mid-morning.
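For the curious, the Erlang B calculation behind that kind of trunk engineering is simple enough to compute directly. Here is a minimal sketch; the offered load and trunk count are made-up values for illustration.

```python
def erlang_b(offered_load: float, trunks: int) -> float:
    """Blocking (fast-busy) probability via the standard Erlang B recursion."""
    blocking = 1.0
    for m in range(1, trunks + 1):
        blocking = (offered_load * blocking) / (m + offered_load * blocking)
    return blocking

# Hypothetical example: 100 Erlangs of offered voice traffic on 110 trunks.
print(f"Probability of a fast busy: {erlang_b(100.0, 110):.4f}")
```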

As unfortunate and misleading as it is, advertised speeds are as much marketing as they are real service characteristics.

If consumers expect that advertised speeds are exact or, worse yet, minimum speeds that the service provider must consistently deliver and guarantee, then oversubscription models go out the window and prices will go up. If you want such a guarantee, the service exists. It’s called a leased line. Just don’t expect to get a leased line for $40 per month!

The bottom line is that oversubscription is a business decision. While some common properties drive service providers to similar models, how far a provider is willing to push it (i.e., how many subscribers they are willing to provision over the same service segment) is up to them. How well they can manage it and ensure service quality is up to them, and they will have to endure the customer backlash if they overdrive it.

It will be interesting to see how the FCC handles this aspect. Let’s hope that they acknowledge this and that there aren’t provisions that turn broadband into a leased line service, or else $7.2B may not be enough to cover the cost.

Wired versus Wireless Services

The FCC suggests that there may be a need to distinguish between wired and wireless services. This is critical because oversubscription models are more aggressive in wireless services, and the performance expectations in wireless networks are typically lower than those of wired services. To date, wireless services have been able to get away with this because of the benefit that mobility provides.

The biggest benefit and differentiator of wireless services is obviously the mobility component. It is not speed or any other feature really, at least not yet. While we’d always like higher speeds, better reliability and more features, it is the mobility aspect that draws us to the service. And we are willing to sacrifice performance for it.

For example, mobile phone service is only now approaching the quality level of traditional PSTN service, and it’s still not quite there. A decade ago, compared to traditional TDM voice service, mobile service was, quite frankly, bad. But the mobility aspect was overwhelming compensation for the downsides.

Satellite-based broadband services are another ballgame. Satellite service is also “wireless” but in a different way. Its main benefit is not really subscriber mobility, but rather service mobility. Satellite services offer the flexibility of bypassing the local telecommunications infrastructure (or lack thereof) to set up and deliver service anywhere. High speeds are possible but they are limited and more expensive when compared to terrestrial services primarily because of the scarcity and expense of the satellite capacity. And then there is latency to deal with, but more on that later.

Because wireless broadband service must be delivered over scarce, expensive spectrum while being subjected to technical issues like interference and latency that are not as pronounced in wired systems, it is likely that the FCC will need separate definitions, or at least exceptions, to account for the differences.

Geographic Differences

How can a definition of broadband account for regional differences in service availability and quality? What satisfies one consumer as high-speed may be what another considers only marginally better than dial-up. It’s all very relative. But perhaps that is the point of the funding: to provide the means necessary to blur the lines between “urban” and “rural” with respect to the availability, speeds and features of broadband service.

The Recovery Act is limited to the United States. That should limit the scope a bit. There shouldn’t be a huge disparity in service level, even for rural regions. But the US has already taken criticism worldwide for being “behind in broadband” and for setting minimum rates too low. Much has been written about how other countries are (allegedly) providing much faster service. Make no mistake: President Obama is quite interested in broadband development, its impact on the US economy and the need for the US to be a leader. It may not be health care reform, but to him it is what the development of the interstate highway system and its effect on the US economy was to Eisenhower. This broadband initiative has implications that cannot be viewed in a US vacuum. Whatever we come up with will need to pass the test of international scrutiny.

Latency

Latency is an even more nebulous term than speed. Application performance generally deteriorates as latency increases. But the effects differ across applications and protocols.

Latency is primarily a function of the physical distance between end points. Congestion, serialization delay, properties of the transport medium, buffering and other factors play into it, but it is mostly the physical distance. This will never be consistent across service providers, nor can it be consistently measured. Providers whose services are contained within a local area will have low latency, while regional or national providers will have higher latency.

It is perfectly legitimate for a service provider to offer local broadband service and offload traffic to an upstream transit provider in the same city, minimizing the latency to, say, 20ms round trip from the subscriber to the provider’s network edge. For DSL, this would be either at the CO where the DSLAM is located or perhaps the aggregation POP one level up. For cable, it would likely be the head end.

But for larger providers, subscriber traffic may stay on-net longer, perhaps across the country.
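To get a rough sense of how much of that is simply physics, here is a minimal sketch estimating round-trip propagation delay over fiber. The distances are illustrative; real paths are longer than straight lines and add serialization, queueing and equipment delay on top.

```python
# Light travels through fiber at roughly two-thirds the speed of light in a
# vacuum, i.e. about 200 km per millisecond.
FIBER_KM_PER_MS = 200.0

def fiber_rtt_ms(one_way_km: float) -> float:
    """Round-trip propagation delay over fiber, ignoring queueing and equipment."""
    return 2 * one_way_km / FIBER_KM_PER_MS

# Hypothetical one-way fiber distances.
for label, km in (("metro backhaul", 50), ("regional network", 800), ("coast to coast", 4500)):
    print(f"{label:>16}: ~{fiber_rtt_ms(km):.1f} ms RTT")
```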

It is also perfectly legitimate for a regional or national service provider to argue that broadband is purely an access service and as such, any latency measurement must terminate at the edge of the access layer rather than further in the network. A provider’s backbone is effectively part of the Internet itself and should be excluded from any latency measurement that is intended to measure the broadband component of the service.

Further complicating things is the likelihood that whoever provides service to rural areas will have to backhaul traffic from subscribers’ homes a considerable distance to the closest aggregation POP.

In satellite services, latency is an even bigger factor given the 22,300-mile-high orbit of geosynchronous satellites. Packets have to travel much further to get up to the satellite and back down to the ground, far further than in the worst-case terrestrial scenarios. The effects on some applications in the face of half a second of round-trip satellite latency can be dramatic.
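That half second is mostly unavoidable geometry. A minimal sketch of the best-case figure, assuming the packet goes straight up and down and ignoring ground-segment and processing delay:

```python
# Best-case round-trip propagation delay for a geosynchronous satellite link.
C_KM_PER_S = 299_792        # speed of light in a vacuum, km/s
GEO_ALTITUDE_KM = 35_786    # ~22,300 miles above the equator

# A request and its response each cross the up-link and down-link once,
# so the signal traverses the altitude four times in total.
round_trip_km = 4 * GEO_ALTITUDE_KM
delay_ms = round_trip_km / C_KM_PER_S * 1000

print(f"Best-case GEO round-trip delay: ~{delay_ms:.0f} ms")   # roughly 477 ms
```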

Satellite-based services typically build in TCP or WAN acceleration to compensate for the effects of latency. But you can never actually eliminate the latency, not as long as Einstein was right about the speed of light. Though TCP and WAN acceleration are great technologies and can mask the effect of latency, they don’t fool the typical ICMP PING-based technique of measuring latency. This would need to be considered in any broadband definition. Otherwise it could cause satellite service providers some difficulty in meeting the performance criteria necessary to qualify for funding.

On that note, regardless of where you draw the boundaries, how would you measure latency?

Most networkers immediately think of the round trip time (RTT) for an ICMP PING between two IP nodes. This is usually a fair measure and is often used for latency measurements in IP-based services and for latency-based SLAs. But it really has limited meaning from the user point of view considering the multitude of applications on the Internet, their differences in behavior, and the different effects that latency has on their performance in the eyes of the end user.
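As an illustration of how crude such a measurement is, here is a minimal sketch that times TCP connection setup to a host and uses it as a rough stand-in for an ICMP PING (raw ICMP requires elevated privileges; the target host below is just a placeholder).

```python
import socket
import time

def tcp_rtt_ms(host: str, port: int = 80, samples: int = 4) -> float:
    """Average TCP handshake time in milliseconds, a rough proxy for ping RTT."""
    total = 0.0
    for _ in range(samples):
        start = time.monotonic()
        # The three-way handshake completes in roughly one round trip.
        with socket.create_connection((host, port), timeout=5):
            pass
        total += time.monotonic() - start
    return total / samples * 1000

# Hypothetical target; substitute whichever end point you actually care about.
print(f"Estimated RTT: ~{tcp_rtt_ms('example.com'):.1f} ms")
```

Even this simple measurement begs the question the FCC has to answer: which end point is the right one to measure against?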

Most Internet traffic is still HTTP-based web traffic. HTTP runs over TCP, and TCP throughput suffers as latency increases: the protocol can only keep a window’s worth of data in flight per round trip, and its congestion control backs off further when delay and packet loss suggest congestion. That is the intended behavior of TCP, but to a layperson it simply means slower downloads. For HTTP traffic it can be more noticeable because the end user typically clicks or types in a link and then waits for it to load, with the clock ticking and the wait seeming longer than it really is.
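The window-size-over-RTT bound is easy to see with a little arithmetic. A minimal sketch, assuming a fixed 64 KB window, no window scaling and no packet loss:

```python
# Upper bound on TCP throughput: at most one window of data in flight per RTT.
WINDOW_BYTES = 64 * 1024   # a typical default receive window without window scaling

def max_tcp_throughput_mbps(rtt_ms: float, window_bytes: int = WINDOW_BYTES) -> float:
    """Best-case single-connection throughput in Mbps, ignoring loss and slow start."""
    return window_bytes * 8 / (rtt_ms / 1000) / 1_000_000

# Representative round-trip times: local access, cross-country, GEO satellite.
for rtt in (20, 80, 500):
    print(f"RTT {rtt:>3} ms -> at most ~{max_tcp_throughput_mbps(rtt):.1f} Mbps")
```

At a 500 ms round trip, even an otherwise fast link tops out around 1 Mbps per connection, which is exactly why satellite providers lean on acceleration.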

The effect on large file transfers like FTPs is worse, but if an end user launches an FTP then leaves it in the background while he multi-tasks elsewhere, it may be less noticeable.

Real-time traffic is affected by latency in different ways. Unidirectional traffic like streaming video is not dramatically affected from a receiver’s perspective. Once the stream is being received, the video may be delayed a bit but the stream is still visible.

For bi-directional real-time traffic such as voice over IP, Internet chat or point-to-point video conferencing, latency can be very noticeable. For voice, once the latency hits about 150ms or so, you notice it, and two speakers can easily start to step on each other’s speech.

Latency is a crucial parameter, but defining how it would be measured, deciding between which “end points” it would be measured, and delivering a specification that makes sense across a variety of applications and transport technologies is a tall order.

Jitter and other Quality of Service Parameters

It is interesting that the FCC is open to considering jitter and perhaps other QoS parameters, particularly since it could go against the rationale for their traffic shaping decision last year against Comcast. In order to minimize jitter on an IP network, you must have some knowledge of the applications riding over it and then coerce such traffic according to a pre-defined QoS policy. But application-level awareness and the subsequent monkey business with packet treatment has some folks up in arms.

As great as the Internet has been, there are at least two major things it lacks. One is an end-to-end QoS model.

(The other is a true multicast capability that would facilitate things like a global IPTV service and video exchange between providers, but that dreamland is a topic for another day.)

QoS can guarantee the performance of a diverse set of applications with different characteristics while they are transported over the same network. But the Internet is not a single network; it is an amalgamation of many, and while that fact has been its strength, it has also made it impractical to come up with a single QoS model that every service provider could and would adhere to. Networks vary in size, bandwidth, congestion, features, redundancy, components, operational support, geographic scope and all sorts of other factors. Even if a basic set of rules could be derived, you could never ensure everyone engineered it appropriately and enforced it uniformly and fairly. Routing policies and peering arrangements between providers have been complex enough; a multi-provider end-to-end QoS model is a stretch. And as with latency, measuring jitter over only a portion of the end-to-end path limits its meaning. The QoS of an end-to-end transaction is only as strong as the weakest link in the chain.
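Measuring jitter itself is the easy part once you have per-packet timing. Here is a minimal sketch of the smoothed interarrival-jitter estimator used by RTP (RFC 3550), fed with made-up per-packet transit times:

```python
def interarrival_jitter_ms(transit_times_ms):
    """Smoothed jitter estimate per RFC 3550: J += (|D| - J) / 16."""
    jitter = 0.0
    previous = None
    for transit in transit_times_ms:
        if previous is not None:
            d = abs(transit - previous)   # change in one-way transit time
            jitter += (d - jitter) / 16.0
        previous = transit
    return jitter

# Hypothetical transit times (ms) for successive packets of a voice stream.
samples = [40.0, 42.0, 41.0, 55.0, 43.0, 40.0, 44.0]
print(f"Jitter estimate: ~{interarrival_jitter_ms(samples):.2f} ms")
```

Defining where along the path those timestamps get taken, and who is accountable for the result, is the hard part.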

Still, it is interesting that jitter is under consideration. For the Internet to take an evolutionary step, improvements must be made so that real-time applications truly have a chance. Right now, they perform pretty well most of the time—sometimes great, sometimes quite poorly. They are still not at the level of their legacy transports, and they never will be unless the underlying network can deliver it.

If we could find a way to define it and make it work, I’m all for this one.

Reliability

This one is really interesting and may up the ante. Right now, broadband is fairly reliable but still not at the level of traditional utilities such as telephone service and power. With broadband becoming more and more critical as consumers move their voice and video entertainment to it, telecommute more often or run home businesses, it will be interesting to see what can be derived here, how it would be measured and enforced, and what SLAs service providers might be required to give to subscribers.

Rebates anyone?

This one is tough to envision.
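If reliability does make it into the definition, it will probably be expressed in “nines” of availability, the way other utility-grade services are. A minimal sketch of what those targets mean in allowed downtime per year:

```python
# Allowed downtime per year for common availability targets.
MINUTES_PER_YEAR = 365 * 24 * 60

for availability in (0.99, 0.999, 0.9999, 0.99999):
    downtime_minutes = (1 - availability) * MINUTES_PER_YEAR
    print(f"{availability:.3%} availability -> ~{downtime_minutes:,.1f} minutes of downtime per year")
```

Five nines, the traditional telco benchmark, allows barely five minutes a year; holding residential broadband to that standard would be a very different, and much more expensive, service.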

Other Features to Consider

What other features and characteristics should be considered? What about IPv6? This would be an opportunity to really give IPv6 a push. Last year’s government mandate was more encouragement and guidance than something with real teeth. Overall, we are dragging our feet too much and may pay the price later.

Perhaps that is the subject of another article.

By Dan Campbell, President, Millennia Systems, Inc.
