40%, not 92%-120%. "Data consumption right now is growing 40% a year," John Stankey of AT&T told investors, and his CEO, Randall Stephenson, confirmed the figure on the investor call. That's far less than the 92% predicted by Cisco's VNI model, or the FCC's figures of 120% growth to 2012 and 90% to 2013 in its "spectrum crunch" analysis. AT&T carries easily a third of the U.S. mobile Internet and is growing its market share; there's no reason to think the result will be very different once we have data from the other carriers.
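Compounded over even a few years, the gap between those forecasts is enormous. A quick back-of-the-envelope check, using only the growth rates quoted above:

```python
# Compound growth: traffic after n years = (1 + annual_rate) ** n
for rate in (0.40, 0.92, 1.20):
    print(f"{rate:.0%}/year -> {(1 + rate) ** 3:.1f}x traffic after 3 years")

# 40%/year  -> 2.7x traffic after 3 years
# 92%/year  -> 7.1x traffic after 3 years
# 120%/year -> 10.6x traffic after 3 years
```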
NORDUnet, the R&E network connecting the Nordic countries, has recently undertaken a brilliant Internet peering strategy that will have globally significant ramifications for supporting research and education around the world. NORDUnet is now emerging as one of the world's first "GRENs" -- Global Research and Education Networks. It is extending its network infrastructure to multiple points of presence throughout the USA and Europe to interconnect with major Internet Exchange Points (IXPs).
From the "will they ever learn" department: we are once again seeing attempts by incumbent carriers to skirt network neutrality rules. They tried and failed with usage-based billing (UBB). Now they are at it again with "speed boost" technologies. The two technologies in question are Verizon's "Turbo" service and Rogers' "SpeedBoost".
In June 2009 we mused in these columns about Long Term Evolution standing for Short Term Evolution as wireless networks started to drown in a data deluge. It is January 2012, and we are keeping our heads above the mobile data deluge, if only barely, thanks to a gathering avalanche of LTE networks. Even the wildest prognoses proved conservative, as the GSMA was betting on a more 'managed' progression...
James Urquhart claims "Cloud is complex - deal with it," adding that "If you are looking to cloud computing to simplify your IT environment, I'm afraid I have bad news for you" and citing his earlier CNET post drawing analogies to a recent flash crash. Cloud computing systems are complex in the same way that nuclear power stations are complex: they also have catastrophic failure modes...
The Department of Energy (DoE) recently came out with an excellent report, the Magellan report, on the advantages and disadvantages of using commercial clouds versus in-house High Performance Computers (HPC) for leading-edge scientific research. The DoE probably supports the largest concentration of HPC facilities in the world. I agree with the report that traditional applications such as computational chemistry, astrophysics, etc. will still need large HPC facilities.
It has been about six months since I got together with four of my friends from the DNS world and we co-authored a white paper explaining the technical problems with mandated DNS filtering. The legislation we were responding to was S. 968, also called the PROTECT IP Act, which was introduced this year in the U.S. Senate. By all accounts we can expect a similar U.S. House of Representatives bill soon, so we've written a letter to both the House and the Senate, renewing and updating our concerns.
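A central technical problem the white paper describes is that, to a security-aware client, a mandated DNS redirect is indistinguishable from an attack: DNSSEC exists precisely to reject answers the zone owner didn't sign. Here is a toy model of that point, using an HMAC as a stand-in for a real RRSIG record (the zone key, names, and addresses are invented for illustration):

```python
import hashlib
import hmac

# Toy DNSSEC: an HMAC stands in for the RRSIG record, and the shared
# key stands in for the zone's signing key. Illustrative only.
ZONE_KEY = b"example-zone-signing-key"

def sign(name: str, rdata: str) -> str:
    return hmac.new(ZONE_KEY, f"{name}:{rdata}".encode(), hashlib.sha256).hexdigest()

def validate(name: str, rdata: str, sig: str) -> bool:
    return hmac.compare_digest(sig, sign(name, rdata))

# The zone operator publishes a signed answer.
answer = ("example.com", "192.0.2.10", sign("example.com", "192.0.2.10"))

# A filtering resolver rewrites the answer to point at a seizure page,
# but it cannot produce a valid signature for the forged data.
filtered = ("example.com", "203.0.113.99", answer[2])

print(validate(*answer))    # True  -- the genuine answer validates
print(validate(*filtered))  # False -- the redirect looks exactly like an attack
```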
There's often a lot of discussion about whether a piece of malware is advanced or not. To a large extent these discussions can be categorized as academic nitpicking because, at the end of the day, malware only needs to be as sophisticated as its task requires -- no more, no less. Perhaps the "advanced" malware label would be more precisely rendered as "feature rich" instead.
Cloud computing, from a business and management perspective, has a great deal in common with mainframe computing. Mainframes are powerful, expensive and centralized pieces of computing equipment. This is in line with their role as infrastructure for mission-critical applications. For these types of applications, mainframes can be fairly efficient, even though they tend to need large teams of support specialists... Cloud computing is a new style of computing...
Qtel, the largest carrier in Qatar (and nearly its only Internet provider), appears to connect all of its users (~600K) to the Internet through just one or a very few public IPv4 addresses. 82.148.97.69 was its single public address in 2006-2007. How can network address translation (NAT) put all those users behind just one IP address?
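The short answer is port address translation (NAPT): the NAT rewrites each outbound flow's source address to the shared public IP plus a port it allocates, then demultiplexes the return traffic by port. A minimal sketch of the bookkeeping involved, deliberately simplified (real NATs key mappings on the full 5-tuple and reclaim idle entries):

```python
import itertools

PUBLIC_IP = "82.148.97.69"      # Qtel's shared public address (2006-2007)
_ports = itertools.count(1024)  # next free public port (simplified: never reused)
nat_table = {}                  # public port -> (private ip, private port)

def translate_outbound(priv_ip: str, priv_port: int) -> tuple[str, int]:
    """Rewrite a private source to the shared public IP plus a fresh port."""
    pub_port = next(_ports)
    nat_table[pub_port] = (priv_ip, priv_port)
    return PUBLIC_IP, pub_port

def translate_inbound(pub_port: int) -> tuple[str, int]:
    """Route a reply back to the private host that owns this port."""
    return nat_table[pub_port]

src = translate_outbound("10.0.37.5", 51234)
print(src)                        # ('82.148.97.69', 1024)
print(translate_inbound(src[1]))  # ('10.0.37.5', 51234)
```

A single IPv4 address offers only ~64K ports per transport protocol, which sounds far too small for 600K users; but mappings only need to cover concurrent flows, and carrier-grade NATs reuse the same public port for different remote endpoints, which is how a subscriber base can vastly outnumber the port space.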