The idea of tracking data outages grew out of an early discussion on the outages forum, along with feedback from an outages survey, about having a status page for (un)planned outages as a central resource. The purpose of such an effort is to provide a single, wider view of outages, rather than having to check dozens of provider status pages. Many ideas were put forth, but nothing really panned out and the whole thing ended up on the back burner.
A decade-old guessing game finally came to an end during the summer months of 2012. America was supposed to be hopelessly behind, while Europe had little to show after a decade of lavishly spending EU money on IPv6-related projects. China and Japan were thought to be light years ahead of everybody else. But in the end, it was the might of the American Content Industry that tipped the scales.
Over the past few months I have made regular references to OpenFlow. This is an exciting new development that fits in very well with several of the next-generation technologies we have discussed in some detail over the past few years, such as smart cities, smart societies, and the Internet of Things. Such networks need to operate more on a horizontal level, rather than through the usual vertical connection between a computing device and its users.
Ten years ago, everyone evaluating DNS solutions was concerned above all with performance. Broadband networks were getting faster, providers were serving more users, and web pages and applications increasingly stressed the DNS. Viruses were a factor too, since their traffic could quickly become the straw that broke the camel's back for a large ISP's DNS servers. The last thing a provider needed was a bottleneck, so DNS resolution speed became more and more visible, and performance was everything.
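For readers who want to see what resolution speed looks like from the client side, here is a small illustrative sketch (the hostnames are placeholders, not from the article) that times lookups through the operating system's configured resolver using only the Python standard library.

```python
import socket
import time

def resolve_time_ms(hostname: str) -> float:
    """Time a name lookup through the OS resolver (may be answered from a local cache)."""
    start = time.perf_counter()
    socket.getaddrinfo(hostname, None)
    return (time.perf_counter() - start) * 1000.0

for host in ["example.com", "example.org"]:
    try:
        print(f"{host}: {resolve_time_ms(host):.1f} ms")
    except socket.gaierror as err:
        print(f"{host}: lookup failed ({err})")
```

Because repeated lookups are often served from a cache, a single run like this understates the load an ISP's resolvers see from millions of uncached queries.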
The Google Fiber project is receiving international attention. This in itself is a good thing, since it brings the benefits of high-speed FttH infrastructure to the attention of large numbers of people in business and government who do not otherwise deal with such developments on a regular basis... At the same time, we have to look at Google Fiber from the point of view of operating in the American regulatory environment. Yes, we can all learn from its disruptive model, particularly once the results of the more innovative elements of the services begin to kick in; but for other reasons there is no way that this model can be replicated elsewhere.
Google's announcement of its 'Fiberhoods' throughout Kansas City is yet another example of the thought leadership and innovation being brought forward by the popular advertising company. But what does this move say about the state of Internet access in America?
When preparing a network for IPv6, I often hear network administrators say that their switches are protocol-agnostic and that there is no need to worry about them. Not so fast. Yes, LAN switches function mainly at layer 2, forwarding Ethernet frames regardless of whether the packet inside is IPv4 or IPv6 (or even something else!). However, there are some functions on a switch that operate at layer 3 or higher.
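To make the distinction concrete, here is a minimal illustrative sketch (mine, not the article's; the frame-field offsets are standard, but the function and demo frame are hypothetical) of why plain layer-2 forwarding only needs the Ethernet header, while an IPv6-aware feature such as MLD snooping has to look past it into the packet itself.

```python
import struct

ETHERTYPE_IPV4 = 0x0800
ETHERTYPE_IPV6 = 0x86DD
ICMPV6 = 58  # IPv6 Next Header value for ICMPv6, which carries MLD messages

def classify_frame(frame: bytes) -> str:
    """Classify a raw Ethernet frame the way a switch feature would see it."""
    if len(frame) < 14:
        return "runt frame"
    # Ethernet header: dst MAC (6 bytes), src MAC (6 bytes), EtherType (2 bytes).
    # Plain layer-2 forwarding stops here -- the payload protocol is irrelevant.
    ethertype = struct.unpack("!H", frame[12:14])[0]
    if ethertype == ETHERTYPE_IPV4:
        return "IPv4 payload (layer-2 forwarding alone is enough)"
    if ethertype == ETHERTYPE_IPV6:
        # Simplified check: ICMPv6 directly after the fixed 40-byte IPv6 header
        # (Next Header is byte 6).  Real MLD packets usually insert a Hop-by-Hop
        # Options header first, which a real snooping feature also has to walk.
        if len(frame) >= 14 + 40 and frame[14 + 6] == ICMPV6:
            return "IPv6 + ICMPv6 (a feature like MLD snooping has to parse this)"
        return "IPv6 payload (layer-2 forwarding alone is enough)"
    return f"other EtherType 0x{ethertype:04x}"

# Made-up demo frame: zeroed MACs, IPv6 EtherType, ICMPv6 as the Next Header.
demo = (bytes(12) + struct.pack("!H", ETHERTYPE_IPV6)
        + bytes([0x60]) + bytes(5) + bytes([ICMPV6]) + bytes(33))
print(classify_frame(demo))
```

The same point applies to other IPv6-aware switch features such as RA guard or DHCPv6 snooping: each has to understand the IPv6 payload, which is exactly where a "layer-2 only" switch can fall short.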
Almost every conversation I have with folks just learning about IPv6 goes about the same way: once I'm finally able to convince them that IPv6 is not going away and is needed in their network, the questions start. One of the most practical and essential early questions that needs to be asked (but often isn't) is "how do I lay out my IPv6 subnets?" The reason this is such an important question is that it's very easy to get IPv6 subnetting wrong by doing it the way you do it in IPv4.
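As a rough illustration of laying out subnets on nibble boundaries instead of trying to "conserve" space the way IPv4 habits suggest, here is a small sketch using Python's standard ipaddress module; the /48 documentation prefix, the per-site /52 split, and the site names are hypothetical examples, not a plan from the article.

```python
import ipaddress

# Hypothetical allocation: a /48 from the IPv6 documentation range.
allocation = ipaddress.ip_network("2001:db8:abcd::/48")

# Nibble-aligned plan: one /52 per site (16 sites fit in a /48),
# then a full /64 per LAN/VLAN inside each site.
sites = list(allocation.subnets(new_prefix=52))

def first_lan_subnets(site_block: ipaddress.IPv6Network, count: int):
    """Return the first `count` /64 LAN subnets carved from a site's /52."""
    gen = site_block.subnets(new_prefix=64)
    return [next(gen) for _ in range(count)]

for name, block in zip(["HQ", "Branch-1"], sites):
    vlans = first_lan_subnets(block, 3)
    print(name, block, "->", [str(v) for v in vlans])
```

The point of the sketch is that every LAN gets a full /64 no matter how few hosts it holds; the planning effort goes into how the /64s are grouped (here, by site on a /52 boundary), not into right-sizing them as you would with IPv4.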
During the last few months, the US's main DSL providers, AT&T and Verizon, have begun retreating from the DSL and landline markets in many rural and less commercially viable areas while concentrating on their wireless LTE ambitions. DSL and voice telephony provide relatively low returns, which can be whittled away by network maintenance costs, while LTE promises proportionately higher profits, based on exorbitant charges for data volume.
I have long been perplexed at how Google plans to make a profit from its Kansas City Fiber project. Originally the project was touted as an altruistic move by Google to really understand the underlying costs of deploying fiber in a large municipality. But as anyone who has been in the trenches can tell you, it is not the technology that determines the cost of a fiber deployment, but the tyranny of the take-up.