I keep seeing so many articles about the Internet and related policy issues that it’s hard to know how to respond. The term “IP Transition” may be a good starting point, since it attempts to treat the shift to the Internet as a smooth transition rather than accepting the idea that we are in the midst of a disruptive change. The FCC’s approach seems to be simply to substitute IP for old protocols and to preserve policies tied to the accidental properties of a copper infrastructure. This shows a failure to come to terms with the new reality.
Perhaps we can learn from the history of the ICC (Interstate Commerce Commission). The Wikipedia entry tells how the FRC (Federal Radio Commission) was spun out of the ICC. What is more interesting is that the ICC itself was succeeded by the Department of Transportation whose focus is on the means of transportation rather than the business of carrying freight. I’d be interested in learning more about how we went from the ICC to the DoT from those versed in regulatory history.
If we are to come to terms with a disruptive change we need to take a zero-based approach and think in terms of a “Department of Connectivity” (DoC) that can focus on the future rather than on preserving the past. This is a thought experiment which is appropriate because the Internet is really about an idea—a way to use available resources.
The DoC could take a fresh look at the infrastructure and ask how to finance the wires and radios, given that we can’t charge for services like phone calls because such services are now apps. While we do need to be wary of analogies, there is a similarity to the shift from charging for rides on railroads to paying for roads. The DoC can also think beyond the telephone framing of today’s 9-1-1 toward a resilient infrastructure. For example, a Nest (or similar) fire detector should be able to send a rich message directly to a fire department using standard messaging.
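To make “rich message” concrete, here is a minimal sketch of what such an alert might look like as ordinary structured messaging (JSON over HTTPS). The endpoint URL, field names, and sensor values are all hypothetical; the point is only that a detector can carry far more context than a voice call ever could.

```python
# Hedged sketch: a smoke/CO detector posting a structured alert directly to a
# dispatch endpoint using ordinary web messaging (JSON over HTTPS).
# The endpoint URL, payload shape, and sensor values are hypothetical.
import json
import urllib.request
from datetime import datetime, timezone

def send_fire_alert(endpoint_url: str) -> int:
    """Send a rich alert message; returns the HTTP status code."""
    alert = {
        "type": "fire",                      # what was detected
        "detected_at": datetime.now(timezone.utc).isoformat(),
        "location": {                        # far richer than a voice call's location data
            "street": "123 Example St",
            "unit": "Apt 4B",
            "lat": 40.7128,
            "lon": -74.0060,
        },
        "sensor": {"smoke_ppm": 412, "temperature_c": 58.0, "co_ppm": 35},
        "occupancy_hint": "2 adults, 1 child registered at this address",
    }
    req = urllib.request.Request(
        endpoint_url,
        data=json.dumps(alert).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status

# Usage (hypothetical dispatch endpoint):
# send_fire_alert("https://dispatch.example-fire-dept.gov/alerts")
```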
The problem with calling for a DoC is that, unlike roads, which coexisted with the railroads, today’s connectivity has to be carved out of the existing telecommunications infrastructure, as when IP was used to create wormholes between LANs. I can’t make the case for a DoC in a few sentences here given that it took a carefully constructed column (in IEEE/CE Magazine) for me to explain the concepts. What I can do is ask those who take the “IP Transition” effort seriously to read that column and to revisit the conventional understandings rather than trying to force the Internet into the confines of regulatory policies circa 1934.
That column (Refactoring Consumer Electronics) is a condensed survey of the outstanding technologies of the last two centuries, sorted by their usability and by the business models they implied. It is a compelling review for anyone interested in the role that the Internet and computing may play in the history of mankind.
As a publishing medium, the Internet can also be compared to Gutenberg’s printing technology. While that analogy may help frame the extent of copyright concepts, the Internet’s ability to connect any devices and services at will is overwhelming in comparison. The event in the history of writing that made a similarly huge impact is probably the invention of writing itself. No surprise, then, if the Internet subverts so many long-established business models.
Treating users as stakeholders looks like a neat idea to me. At least as far as Internet governance models are concerned, it would be an advance in democracy. I'm not sure a Department of Connectivity is necessary, though. It may suffice to boost small businesses, being confident that they will deploy connectivity anyway. Computer programming is an area that has experienced the power of free cooperation since the early days of the Internet. The lack of a suitable business model required foundations like Apache and Mozilla, and lots of common sense. The movie industry might undergo a similar subversion; at present it is trying to limit connectivity through silo protocols such as DRM and sanctioned blocks.
It’s fairly simple. Digital is overtaking analog constructs and pricing. Likewise horizontal scale wins out over vertical integration/bundling.
Marginal cost at every layer and boundary point drives efficient pricing, which clears supply and demand north-south between layers and east-west across boundary points. The three digital, competitive waves of the 1980s-90s in IXC/WAN voice, data (FR, ATM, IP), and wireless are the models going forward. What we are witnessing with this IP transition is the last throes of the monopolies trying to remain vertically integrated. But they simply cannot scale while rapidly obsoleting capex/opex at every layer, across demand siloed by arbitrary geographic, market-segment, or application boundaries.
We need a quantitative model which maps supply/demand across service layers, geo-densities, and application bundles. Watch the 15-minute video: http://bit.ly/156Blxd
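As a reader’s aid, here is a toy sketch of the kind of quantitative model being called for: marginal cost per layer summed into a clearing price, then mapped across geo-densities and application bundles. The layer names, cost figures, and demand numbers are invented for illustration and do not come from the comment above; they only show the shape of the exercise.

```python
# Toy sketch of a supply/demand mapping across layers and geo-densities.
# All layer names, cost figures, and demand curves are hypothetical.

MARGINAL_COST = {          # $ per GB delivered, by layer (illustrative)
    "layer1_fiber": 0.002,
    "layer2_transport": 0.004,
    "layer3_routing": 0.001,
}

DEMAND_GB_PER_USER = {     # monthly demand by application bundle (illustrative)
    "video": 150,
    "web_and_messaging": 20,
    "telemetry": 1,
}

GEO_DENSITY_USERS_PER_KM2 = {"urban": 4000, "suburban": 800, "rural": 15}

def clearing_price_per_gb() -> float:
    """Sum marginal cost across layers; the 'efficient' price in this toy model."""
    return sum(MARGINAL_COST.values())

def monthly_revenue_per_km2(density: str) -> float:
    """Map the clearing price onto demand for a given geo-density."""
    users = GEO_DENSITY_USERS_PER_KM2[density]
    gb_per_user = sum(DEMAND_GB_PER_USER.values())
    return users * gb_per_user * clearing_price_per_gb()

if __name__ == "__main__":
    for density in GEO_DENSITY_USERS_PER_KM2:
        print(f"{density:9s}: ${monthly_revenue_per_km2(density):,.2f} per km^2 per month")
```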
The problem, as I wrote in my column, is that layers are the old paradigm and we need to think outside that paradigm.
Bob, you're trying to dispel the Old Testament and embrace only the New one. By saying "The new opportunity is interconnecting devices and information. But, as we will see, this puts the telecommunications business model at odds with the consumer electronics need for connectivity as a basic resource." you fail to understand that telco has been (or was supposed to be) doing this all along. IP was just a horizontal, digital, packetized information "arbitrage" of the inefficiently costed, vertically integrated, analog and TDM telco stack. And the forces were unleashed, unknowingly, with the vertical separation of MaBell in 1983. Not much has changed from old to new; except perhaps that we can embrace multi-dimensional network thinking learned from the digitization of voice, data and wireless. (I guess the New Testament is like that.) Now we just need to apply it to the last mile. Watch: http://bit.ly/156Blxd
I did look at your video. The point I make in http://rmf.vc/IEEERefactoringCE is that digitization is second generation (or new testament if you will). But we’ve moved far beyond that, and there is no longer meaning or value inside networks.
Bob, there is tremendous value inside Twitter or FB or Google's "ad exchange" which pays for the bulk of public IP sessions and infrastructure (along with private enterprise spend). Network effects happen at all layers and all boundary points. The infostack points to the cause and effect and the fact that value anywhere in the framework is a function of flow or blockage elsewhere. My approach provides incentives to infrastructure providers to invest and upgrade continuously in "smart" pipes and pathways. Your ambient connectivity provides no better upside than what we currently have; possibly even worse. What you are missing is the answer to the questions, "where is the demand" and "who pays"? In your medical monitoring example, it is precisely the insurer and health provider that finds value in subsidizing or paying for the low-bit streams of data generated by the edge devices. The carrier with wide-area coverage and high QoS can provide an important connectivity role. But this cannot happen if that same carrier, as you point out, is constrained by artificial geographic, market segment, or application boundaries. Hence almost ALL service provider models (at the infrastructure layers) are flawed at present.
I write about funding in http://rmf.vc/CISustainable. The funding model we use for sidewalks avoids having to map value into particular pieces of concrete. The reason I suggested a DoC is that it’s very hard to unwind all the assumptions implicit in the telecommunications framing such as the idea of carriers, the concept of QoS and even the idea that there is a thing called “The Internet”. This is going to be hard to resolve in this discussion—perhaps I need to schedule a talk in NYC if you have an appropriate venue.
Demand is infinite and varied. Demand is also the product, not the network. This is the key difference between the telecom "utility" model and all the other utilities and publicly shared goods, including municipal infrastructure. As long as pricing reflects marginal cost at every layer and boundary point, we can charge/rate by the bit/session/minute and clear marginal demand. Importantly, the cost is typically so low that it can be made "free" because it is absorbed or amortized by the commercial transaction or application which invariably occurs.

In fact I am working on a thesis (with video streaming over wireless) where we swing back from a flat-rate, all-you-can-eat (settlement-free) "bits" model to a rated "minute" model. The return of minutes! Only capacity is priced to induce consumption; as in an hour of video consumption over wireless for $1. Then we can actually monetize real-time and live linear TV at low cost to 6 billion people. How else do we incent investment in gigabit connections to support 4K video streams and HD group collaboration for telemedicine, telework, and tele-education, all of which know no natural boundaries? The only boundaries are those contrived by the government and operators to sustain information monopolies.

Naturally there are RoW and frequency allocation issues, but if every service provider at layer 1 is held to an open access standard, for the most part we have enough in the way of demand forces to take care of the supply/access problem naturally and generatively (emergent). We could try to organize a Google hangout as Randy Resnick does over at VUC.me. Might be better than trying to organize a physical venue.
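For readers who want to sanity-check the "$1 per hour of video over wireless" idea, here is a back-of-the-envelope calculation of the per-GB price it implies. The streaming bitrates are assumptions for illustration, not figures from the comment above.

```python
# Back-of-the-envelope check on the "$1 per hour of video over wireless" figure.
# Bitrates and the $1 price are illustrative assumptions only.

PRICE_PER_HOUR_USD = 1.00

BITRATES_MBPS = {"SD": 1.5, "HD": 5.0, "4K": 15.0}   # rough streaming bitrates

for quality, mbps in BITRATES_MBPS.items():
    gb_per_hour = mbps * 3600 / 8 / 1000              # Mbit/s -> GB per hour
    implied_price_per_gb = PRICE_PER_HOUR_USD / gb_per_hour
    print(f"{quality}: ~{gb_per_hour:.2f} GB/hour, implied ~${implied_price_per_gb:.2f}/GB")
```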
I’d be happy to have a hangout but first we need to establish a common language. Concepts like layers don’t make sense when we start by thinking in terms of relationships. If there are no boundaries, then where would you place a meter or define “capacity”? You may also want to read http://rmf.vc/PurposeVsDiscovery (also on CircleID).
Bob, one person's edge is another's core. And the boundaries are fluid, especially in a hyper-mobile world where mobile devices may interact directly with fixed devices, or via the cloud, or a combination of both. Asynchronous techniques using out-of-band/medium signaling and transmission may be the best way to ensure security and privacy given what we know today. It's also a reason why I believe that, in addition to the insurance/redundancy/backup mandated by high-volume users, second and third layer-1 wired nets will be pervasive (except in the most uneconomic of circumstances). Furthermore these layer-1 nets will support "n" number of wireless nets. Let's start with my framework: you can critique that, I'll critique some of the points in your articles, and maybe we can develop a better compact than Wheeler's. I am open to any framework as long as it is consistent, commercially feasible, and clears supply and demand from a past, current and future perspective.
Bob, I should have said layers AND boundaries are fluid. They are reference points and serve to show relationships, cause and effect, costs, price signals, and value. Sometimes the physical layer 1 is more valuable than the layer 3 switch. Sometimes the opposite (like the commodity/specialty supply/demand pricing relationships and cycles in the chemical industry).
They serve to get everyone on the same page visually and semantically with respect to functionality, and they distill some very complex processes to make them understandable. For instance, a mobile phone in your pocket is not much different from a PBX at a MAN/LAN boundary point circa 1990, only it is much, much better and more powerful, and it is mobile across numerous boundary points horizontally and layers vertically (yes, you can sync at layer 1 via a physical connection if you need to).
Within each layer and boundary point, there are further sets of the same “infostacks” with functional separations involving hardware/software tradeoffs. A mirrors-within-mirrors universe of infinite possibilities.
As you note, the new reality is increasingly at odds with the traditional telecom framing. Time to take a fresh look at the copper, radios, and glass and how to use them as resources.
The principles should be consistent. Therefore, I make the case that we took the wrong fork in the telco road back in 1913 when we listened to MaBell's Bulls##t about universal service, and further compounded it in the 1930s by believing an all-knowing regulator could achieve it and regulate all information monopolies. Utter nonsense, and it set us back anywhere from 50-75 years technologically, socially and politically. In particular it may have served to extend the restructuring process from absolutist states to more democratic societies by many decades and been a primary contributor to WWII and the Cold War. History has proven that competitive markets can ensure universal service in a far more equitable and generative fashion than government to satisfy the infinitely diverse array of demand.

That's the past. As for the present, networks do run on and over public goods (i.e. the RoWs and frequencies) and some standardization is better than pure chaos. Therefore policy should be fairly straightforward:

1) Open access at layers 1-3, with oversight to ensure that no long-term monopolies from network effects can occur at any particular boundary point.
2) Support and foster balanced settlements that lead to value creation and transference.
3) Analyze, analyze, analyze, and monitor what is going on at each layer and boundary point.
4) Do not set prices, but assist efforts to drive costs down!

If we feel compelled to offer a safety net to the bottom 1-5% of users (or the 1-5% most expensive users), then work with industry and those users to find economic solutions and minimize any non-market taxes and subsidies. It's amazing what wifi, IP and 800 have done along those lines, as well as forced interconnect (A/B cellular extended to PCS) and equal access (directories). The latter (addressing directories) is part of open access in layers 1-3.

And in the future, the current web 3.0 neo-monopolies (Google, FB, Twitter, Amazon, Apple), which are attacking the old Wintel monopolies, will give way to new networks in web 4.0. This could be why Google is so resistant to open access and to serving/fulfilling commercial and wireless demand in KC. The latter would seem to be almost a no-brainer in terms of amortizing the layer 1-3 investments and generating even more revenue and broadband uplift throughout the entire community. But it would loosen their direct hold on the traffic-driven ad-exchange (big data) model.
I’ve held off responding because we’re approaching the problem from different framings and I don’t think I can bridge that in a short response—that’s why I write my longer essays. We agree on a lot so it’s worth trying to come to a common understanding but that will take a while.