In a speech this morning, widely heralded (and criticized) as a call for “network neutrality,” FCC Chairman Julius Genachowski said: “Why has the Internet proved to be such a powerful engine for creativity, innovation, and economic growth? A big part of the answer traces back to one key decision by the Internet’s original architects: to make the Internet an open system.”
Now “open system” doesn’t mean anarchy. The Internet has rules: technical standards codified in the unassuming-sounding “Requests for Comments.” As described by the author of RFC 1, Steve Crocker, in “How the Internet Got Its Rules,” the RFCs were designed to help people coordinate activity and build an interoperable network: “After all, everyone understood there was a practical value in choosing to do the same task in the same way. For example, if we wanted to move a file from one machine to another, and if you were to design the process one way, and I was to design it another, then anyone who wanted to talk to both of us would have to employ two distinct ways of doing the same thing.” By coordinating an open infrastructure, the Net’s architects left room for expansion at the edges.
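Crocker’s file-transfer point is easy to make concrete. Here is a minimal sketch in Python, using an invented length-prefixed framing convention (not any actual RFC protocol), of what “doing the same task in the same way” buys: once sender and receiver agree on one wire format, any two hosts can exchange files.

```python
# Illustrative only: a made-up shared framing convention, not a real RFC protocol.
import socket
import struct

def send_file(sock: socket.socket, name: str, data: bytes) -> None:
    """Frame a file as: 4-byte name length, name, 4-byte payload length, payload."""
    encoded = name.encode("utf-8")
    sock.sendall(struct.pack("!I", len(encoded)) + encoded)
    sock.sendall(struct.pack("!I", len(data)) + data)

def recv_exact(sock: socket.socket, n: int) -> bytes:
    """Read exactly n bytes, since recv() may return short reads."""
    buf = b""
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("peer closed connection mid-frame")
        buf += chunk
    return buf

def recv_file(sock: socket.socket) -> tuple[str, bytes]:
    """Any host that implements the same framing can receive from any sender."""
    name_len = struct.unpack("!I", recv_exact(sock, 4))[0]
    name = recv_exact(sock, name_len).decode("utf-8")
    data_len = struct.unpack("!I", recv_exact(sock, 4))[0]
    return name, recv_exact(sock, data_len)

a, b = socket.socketpair()
send_file(a, "hello.txt", b"Hello, ARPANET!")
print(recv_file(b))  # ('hello.txt', b'Hello, ARPANET!')
```

If you and I framed files differently, every third party would need both implementations; one shared convention means one.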
While critics have been quick to call the statement and the rules it prefigures “government regulation,” Chairman Genachowski says “this is not about government regulation of the Internet. It’s about fair rules of the road” (a phrase picked up by Commissioners Copps and Clyburn in their supporting statements). Like rules of the road, basic non-discrimination and transparency principles promote interoperability: just as every driver and car manufacturer knows what to expect of the highways, every Internet user and application developer should know what he or she can assume as substrate.
Yes, road rules constrain some innovation at the core—you can’t build a public road with braid-like traffic patterns where cars freely weave in and out in both directions, or with yellow stop signs and green “yield” signs. But you can still improve the pavement or the road reflectors. The added predictability of a standard interface enables other, more significant innovation at the edges: the Porsche, Prius, Smart, and Tesla can all drive on the same standard highway.
Most importantly, Chairman Genachowski shows he understands the option value of network openness—leaving room for the unexpected:
The Internet’s creators didn’t want the network architecture—or any single entity—to pick winners and losers. Because it might pick the wrong ones. Instead, the Internet’s open architecture pushes decision-making and intelligence to the edge of the network—to end users, to the cloud, to businesses of every size and in every sector of the economy, to creators and speakers across the country and around the globe. In the words of Tim Berners-Lee, the Internet is a “blank canvas”—allowing anyone to contribute and to innovate without permission.
As the Net’s core has become more fixed since the days of RFC 1, it has enabled the attachment of new devices and formats, some of which would become standards in their own right (HTTP, HTML), others of which would never really take off (VRML 3D modeling). We can’t pick winners, but we can build a field for contests worth winning.
Working through the details of the proposed FCC rules will be critical, and difficult, but the principles Genachowski offers for implementation provide a solid foundation.
Yes, the Internet has largely been “open,” and that is probably the biggest factor contributing to its success. That general philosophy needs to stay the same. But let’s not be too idealistic about what “open” really means in the practical world we live in. Even our most sacred ideals and laws, such as freedom of speech, are not absolute and without exception.
As you discuss, the Internet consists of all kinds of rules and governance, some of which spawned from RFCs while others just evolved naturally from early multi-party cooperation into de facto standards (if they even deserve to be called “standards”). Many, depending on your angle, could justifiably be considered anti-competitive, or unfair, or as inhibiting your ability to innovate. But most of these de facto rules make sense and were usually derived from common sense, technology limitations, or other very practical concerns. Business motivations often came, and continue to come, into play as well, but a lot of it is honest.
There is a plethora of examples, but here are a few:
1. The strict restrictions on who can acquire portable (provider-independent) public IPv4 address space could be considered unfair. The reasoning for such restrictions is sound and well known: a practical constraint that is in the best interest of the Internet at large. But one can easily argue that side effects such as the proliferation of NAT are not good things. One could cite examples of how it stifled, or at least modified, an innovative concept, or slowed down the ability to deploy a service (I have witnessed this firsthand); see the address sketch after this list.
2. BGP peering and transit agreements, and what defines a “Tier 1” ISP. There is really no hard law or rule stating who is a Tier 1 provider and why, other than the obvious factors that loosely define the term, such as size and number of routes. It’s not exactly a law per se, and there are plenty of providers who would love to peer directly with the Tier 1 ISPs. One could easily argue that it’s unfair that some big ISPs exchange routes and traffic under no-cost bilateral peering agreements while many others must pay (typically the very same ISPs) for transit service. This is how it evolved, but many could claim it is unfair; the peering sketch after this list illustrates the economics.
There are plenty of others.
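To make the first example concrete, here is a short standard-library Python snippet showing which addresses fall in the RFC 1918 private ranges, the space that NAT gateways translate to shared public addresses. The sample addresses are arbitrary.

```python
# RFC 1918 reserves 10.0.0.0/8, 172.16.0.0/12, and 192.168.0.0/16 for
# private use; hosts numbered from them sit behind NAT to reach the Internet.
import ipaddress

for addr in ["10.1.2.3", "172.16.0.9", "192.168.1.10", "8.8.8.8"]:
    ip = ipaddress.ip_address(addr)
    if ip.is_private:
        print(f"{addr}: private (not globally routable; NAT required)")
    else:
        print(f"{addr}: public (globally routable)")
```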
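And for the second example, a toy model, purely my own simplification and nothing like a real BGP implementation, of the economics described above: settlement-free peering among the biggest networks, metered transit for everyone else. The ISP names and the per-megabit price are invented.

```python
# Hypothetical inter-provider relationships; names and prices are made up.
RELATIONSHIPS = {
    ("ISP-A", "ISP-B"): "peer",     # two big ISPs swap routes at no cost
    ("ISP-C", "ISP-A"): "transit",  # smaller ISP-C pays ISP-A for full routes
}

def monthly_cost(customer: str, provider: str, mbps_95th: float,
                 usd_per_mbps: float = 2.0) -> float:
    """Toy 95th-percentile transit bill; settlement-free peers owe nothing."""
    rel = RELATIONSHIPS.get((customer, provider))
    if rel == "peer":
        return 0.0
    if rel == "transit":
        return mbps_95th * usd_per_mbps
    raise ValueError("no agreement between these networks")

print(monthly_cost("ISP-A", "ISP-B", 10_000))  # 0.0: settlement-free peering
print(monthly_cost("ISP-C", "ISP-A", 10_000))  # 20000.0: pays for transit
```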
To go back to your road analogy: indeed, basic standards that set expectations of what a highway and other roads should look like create a predictability that allows for innovation, and allows many types of car manufacturers to create similar but competing products. But that, too, is not absolute and without exception. Some lanes on a highway are HOV lanes, restricting access to those who fit certain criteria based not on the type of car or on pre-payment, but on how the car is occupied (sort of like how an application may be used). Similarly for toll roads, though there the differentiator is a person’s willingness to pay for a service or special treatment (i.e., access to whatever is on the other side of the toll road or bridge, or the high-speed pay lanes being discussed for the Washington Beltway). And road speeds vary from the autobahn to 55 mph or 65 mph standards to lower-speed roads, which changes the general rules. Some vehicles, like mopeds, are not allowed in the fast lane of a highway (if they are permitted on the highway at all!), while large 18-wheelers may not be allowed on smaller residential roads. Same basic highway and general rules, but sometimes different “applications” have different and specific rules and restrictions.
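In network terms, the HOV lane is a priority queue. Here is a minimal sketch, with invented class names and packets, of a strict-priority scheduler that always serves the “HOV” class first:

```python
# Two queues, one scheduler: the "HOV" (priority) queue always drains first.
from collections import deque

queues = {"priority": deque(), "best-effort": deque()}

def enqueue(packet: str, cls: str = "best-effort") -> None:
    queues[cls].append(packet)

def dequeue() -> str | None:
    """Strict priority: best-effort traffic waits while priority has packets."""
    for cls in ("priority", "best-effort"):
        if queues[cls]:
            return queues[cls].popleft()
    return None

enqueue("bulk-download-1")
enqueue("voip-frame-1", "priority")
print(dequeue())  # voip-frame-1: jumps ahead, like a car in the HOV lane
print(dequeue())  # bulk-download-1
```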
For example, without knowing for sure, I’ll assume there are some guidelines, if not federal rules, governing the width of a highway lane during its construction and, consequently, perhaps rules governing the width of a car or truck within that framework. But let’s suppose there weren’t such rules, which in the relatively immature Internet world is not a stretch. If someone came out with a new model of car, let’s call it the Chevy “BitTorrent,” and that car were one mile long and six highway lanes wide, would it be fair to allow such a car on our roads without any restriction, given its overall impact on everyone else? Should those who build the roads be forced to create brand-new and much larger roads that tolerate such a dramatic shift in car dimensions (application behavior)? Who pays for it all, and where does it end?
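One common mechanism for keeping a six-lane-wide “car” off the road is a token bucket, which caps both sustained rate and burst size. A small illustrative sketch follows; the rate and burst parameters are arbitrary, not drawn from any rule or product.

```python
# Token bucket: tokens refill at a fixed rate; a packet is admitted only if
# enough tokens remain, so both average rate and burst size are bounded.
import time

class TokenBucket:
    def __init__(self, rate_bps: float, burst_bytes: float) -> None:
        self.rate = rate_bps / 8           # refill rate in bytes per second
        self.capacity = burst_bytes        # maximum burst, in bytes
        self.tokens = burst_bytes
        self.last = time.monotonic()

    def allow(self, packet_bytes: int) -> bool:
        """Admit the packet if enough tokens remain; otherwise drop or delay it."""
        now = time.monotonic()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if packet_bytes <= self.tokens:
            self.tokens -= packet_bytes
            return True
        return False

bucket = TokenBucket(rate_bps=1_000_000, burst_bytes=15_000)  # ~1 Mbit/s cap
print(bucket.allow(1_500))   # True: a normal-sized packet fits the budget
print(bucket.allow(60_000))  # False: the six-lane-wide "car" exceeds the burst
```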
Every rule has its exception, and those exceptions aren’t always bad things.