There is currently a discussion going on between Milton Mueller and Patrik Fältström over the deployment of DNSSEC on the root servers. I think the discussion exemplifies the difficult relationship between those who develop standards and those who use them. On the one hand, Milton points out that the way the signing of the root zone is done will have a great influence on the subjective trust that people and nation states place in the system. On the other hand, Patrik states that "DNSSEC is just digital signatures on records in this database". Both are right, of course, but they do not speak the same language...
OK, you know things are getting bad when Ameritrade leaks its customer information yet again, and I don't even bother to report it because it's not news anymore. Well, recent updates to the story have prompted me to correct that omission. Yes, it happened again. Roughly a month ago, correspondents began to receive pump-n-dump spam to tagged email addresses which they had given only to Ameritrade... This now marks the third major confirmed leak of customer information from Ameritrade. In addition, the Inquirer reported the loss of 200,000 Ameritrade client files in February 2005. One correspondent informs me that this has happened to him on four or five previous occasions.
Sometimes in our worries about the Duopoly, we fail to recognize that some extraordinary wealth of opportunity sits right underneath our noses. National Lambda Rail (NLR) is one such case. About six months ago I wrote in some detail about NLR and what made this entity different from previous attempts at research networks in the US... NLR runs on a philosophy of a user-owned and user-administered research network. Internet2 (I2), during the ten years of its existence, has run first on a Qwest-donated backbone known as Abilene and, since November 2006, on a seven-year managed services contract with Level 3 Communications.
To date, most of the discussion on net neutrality has dealt with the behaviour of conventional wireline ISPs. RCR Wireless News is carrying an opinion piece called "Paying for the bandwidth we consume" by Mark Desautels, VP -- Wireless Internet Development for CTIA -- the trade association for the US wireless industry. His article follows up on reports of Comcast cable moving to discontinue internet access service to so-called "bandwidth hogs"...
DNS root servers function as part of the Internet backbone, as explained in Wikipedia, and have come under attack a number of times in the past -- although none of the attacks have ever been serious enough to severely hamper the performance of the Internet. In response to some of the common misconceptions about the physical location and total number of DNS root servers in the world, Patrik Fältström has put together a visual map on Google, pinpointing the approximate location of each server around the world.
Damien Allen of VTalk Radio recently interviewed Professor Eric Goldman of the Santa Clara University School of Law on the topic of "Domaining". The interview covers the nature of domaining as a business and how it differs from cybersquatting. From the interview: "Often times the domainers are not particularly interested in profitable resale and, in fact, in my experience many times when domainers get complaints about domains, they'll just hand the domain name back, no questions asked and no money charged. They're not looking to make money from the resale of the domain names..."
Microsoft filed three cybersquatting cases at the beginning of September 2007, as reported in an Inside Indiana Business article. I took the liberty of accessing the cases via the PACER system, and posted the major documents... It looks like they're stepping up efforts to defend their trademarks, and seeking big damages in court rather than going the way of the UDRP. These cases demonstrate that new TLDs should not be a priority with ICANN until the problems in existing TLDs are addressed.
When a network is subject to a rapid increase in traffic, perhaps combined with a rapid decrease in capacity (for example due to a fire or a natural disaster), there is a risk of congestion collapse. In a congestion collapse, the remaining capacity is so overloaded with access attempts that virtually no traffic gets through. In the case of telephony, everyone attempts to call their family and friends in a disaster area. The long-standing telephony approach is to restrict new call attempts upstream of the congested area... This limits the amount of new traffic to that which the network can handle. Thus, if only 30% capacity is available, at least the network handles 30% of the calls, not 3% or zero...
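The dynamic described above can be illustrated with a toy simulation. This is only a sketch under assumed parameters (the tick-based model, the `setup_rate` processor budget, and the 1/overload timeout penalty are all simplifications I've chosen for illustration, not a calibrated telephony model), but it shows why upstream call gapping preserves roughly 30% completion when 30% of capacity survives, while uncontrolled retries drive completion toward a few percent:

```python
def simulate(demand, circuits, setup_rate, gapping, ticks=100):
    """Toy model of call completion under overload.

    demand     -- new call attempts arriving each tick
    circuits   -- calls the congested area can actually carry per tick
    setup_rate -- call setups the signalling processors handle per tick
    gapping    -- if True, upstream switches admit only `circuits`
                  attempts per tick; the rest get a reorder tone and
                  give up immediately

    Without gapping, blocked callers retry next tick, so offered load
    snowballs. An overloaded processor wastes work on setups that time
    out: here, the success rate of processed attempts falls as
    1/overload -- a crude stand-in for timeout losses.
    Returns the fraction of offered calls that completed.
    """
    backlog = 0
    completed = 0
    for _ in range(ticks):
        attempts = demand + backlog
        if gapping:
            # Restrict new attempts upstream to what the network can handle.
            attempts = min(attempts, circuits)
        processed = min(attempts, setup_rate)
        overload = attempts / setup_rate
        ok = processed if overload <= 1 else processed / overload
        done = min(circuits, int(ok))
        completed += done
        # Gapped callers abandon; ungapped failures retry next tick.
        backlog = 0 if gapping else (attempts - done)
    return completed / (ticks * demand)


if __name__ == "__main__":
    # Disaster scenario: demand is ~3.3x the surviving capacity of 100
    # calls per tick, i.e. only ~30% of offered calls can possibly fit.
    print(f"with gapping:    {simulate(333, 100, 200, gapping=True):.0%}")
    print(f"without gapping: {simulate(333, 100, 200, gapping=False):.0%}")
```

With gapping, completion settles at the ~30% the surviving circuits can carry; without it, the retry snowball starves the setup processors and completion collapses to a few percent, matching the 30%-versus-3% contrast in the post.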
Zango, a company that used to be called 180solutions, has a long history of making and distributing spyware. (See the Wikipedia article for their sordid history.) Not surprisingly, anti-spyware vendors routinely list Zango's software as what's tactfully called "potentially unwanted". Zango has tried to sue its way out of the doghouse by filing suit against anti-spyware vendors. In a widely reported decision last week, Seattle judge John Coughenour crisply rejected Zango's case, finding that federal law gives Kaspersky complete immunity against Zango's complaint...
On August 23 (while I was in China) a list member, Lee S. Dryburgh, wrote in jest: I happened to bump into Peter Cochrane stating, "The good news is -- bandwidth is free -- and we have an infinite supply." Next by sheer accident I bumped into this in relation to Gilder, "Telecosm argues that the world is beginning to realise that bandwidth is not a scarce resource (as was once thought) but is in fact infinite." Can anyone explain this infinite bandwidth as I think I am getting ripped off by my ISP if this is true? Craig Partridge then offered what I think is a very good commentary on a difficult question where the answer depends very much on context...