One of the longstanding goals of network security design is to be able to prove that a system -- any system -- is secure. Designers would like to be able to show that a system, properly implemented and operated, meets its objectives for confidentiality, integrity, availability and other attributes against the variety of threats the system may encounter. A half century into the computing revolution, this goal remains elusive.
A very interesting meeting, the Internet Governance Forum (IGF), with the ambitious theme of connecting the world's next billion people to the Internet, took place in early November 2015 in the beautiful resort city of João Pessoa, Brazil, under the auspices of the United Nations. Few citizens of the world paid attention to it, yet the repercussions of the policy issues discussed affect us all.
According to data from the FTTH Council, the number of homes passed with fibre in the US increased 13% year-on-year in 2015, to 26 million. Combined with Canada and Mexico, the number of passed homes has reached 34 million. The take-up rate is excellent by international standards, at more than 50%. Operators commonly look for about 20% to 30% take-up before work can begin on new fibre infrastructure for a community.
The Domain Name System (DNS) offers ways to significantly strengthen the security of Internet applications via a new protocol called the DNS-based Authentication of Named Entities (DANE). One problem it helps to solve is how to easily find keys for end users and systems in a secure and scalable manner. It can also help to address well-known vulnerabilities in the public Certification Authority (CA) model. Applications today need to trust a large number of global CAs.
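To give a flavour of the kind of lookup DANE enables, here is a minimal Python sketch, using the dnspython library, that fetches the TLSA record a domain could publish for its HTTPS service. The domain name is a placeholder, and the sketch stops at retrieving the record; in practice DANE only adds security when the answer is validated with DNSSEC and then matched against the server's certificate, both of which are omitted here.

```python
# Minimal sketch: fetching a DANE TLSA record with dnspython.
# Assumes dnspython (>= 2.0) is installed; "example.com" is a placeholder domain.
import dns.resolver


def fetch_tlsa(domain: str, port: int = 443, proto: str = "tcp"):
    """Query the TLSA record set for a given service on a domain."""
    qname = f"_{port}._{proto}.{domain}"
    try:
        answers = dns.resolver.resolve(qname, "TLSA")
    except dns.resolver.NXDOMAIN:
        return []  # No TLSA record published for this service.
    # Each record carries usage, selector, matching type, and certificate data.
    return [(r.usage, r.selector, r.mtype, r.cert.hex()) for r in answers]


if __name__ == "__main__":
    for usage, selector, mtype, data in fetch_tlsa("example.com"):
        print(f"usage={usage} selector={selector} mtype={mtype} data={data[:32]}...")
```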
The longer I have been in the tech industry, the more I have come to appreciate the hidden complexity and subtlety of its past. A book that caught my attention is 'Open Standards and the Digital Age' by Prof Andrew Russell of Stevens Institute of Technology in New Jersey. This important work shines a fresh light on the process that resulted in today's Internet. For me, it places the standard 'triumphant' narrative of the rise of TCP/IP into a more nuanced context.
More than a decade ago we predicted that the telecoms industry would be transformed, driven by its own innovations and technological developments. As a result we indicated that in many situations the telecommunications infrastructure would be offered as a service by hardware providers. We also predicted that this would open the way for a better sharing of the infrastructure.
This year, the IGF Multistakeholder Advisory Group, which assists in the preparations for global IGF meetings, called for intersessional work (activities pursued in the months between annual IGFs with the aim of helping the IGF produce more tangible outputs that can become robust resources). Previously, the IGF has used Best Practice Forums and Dynamic Coalitions to bring out key issues affecting the world as they relate to the Internet. This year's intersessional activity is centred on "Policy Options for Connecting the Next Billion".
In a previous article, I discussed how telecoms is facing a growing complexity crisis. To resolve this crisis, a new approach is required. Here I explore how that complexity can be tamed... 'Invariants' are things that are meant to be 'true' and stay true over time. Some invariants are imposed upon us by the universe... Others are imposed by people. As engineers, we aim to establish these abstract 'truths' about the system.
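To make the idea concrete, here is a small, hypothetical Python sketch (not from the article) of an engineered invariant: a bounded buffer whose occupancy must never exceed its capacity, with that "truth" checked explicitly every time the state changes.

```python
# Hypothetical illustration of an engineered invariant (not from the article):
# a bounded buffer whose occupancy must never exceed its capacity.

class BoundedBuffer:
    def __init__(self, capacity: int):
        self.capacity = capacity
        self.items = []

    def _check_invariant(self):
        # The abstract 'truth' we want to hold over the system's lifetime.
        assert 0 <= len(self.items) <= self.capacity, "occupancy invariant violated"

    def push(self, item):
        if len(self.items) >= self.capacity:
            raise OverflowError("buffer full")
        self.items.append(item)
        self._check_invariant()

    def pop(self):
        if not self.items:
            raise IndexError("buffer empty")
        item = self.items.pop(0)
        self._check_invariant()
        return item
```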
Reading about the EU net neutrality vote, I'm reminded of the challenge traditional telecommunications regulators face in understanding the very concept of the Internet. To put it bluntly, zero-rating is a policy framed in terms of Minitel, where the price is set according to which phone number is dialled, and not at all in terms of the Internet, where value is determined by relationships entirely outside the network.
I recently read an interesting post on LinkedIn Engineering's blog entitled "TCP over IP Anycast -- Pipe dream or Reality?" The authors describe a project to optimize the performance of www.linkedin.com. The web site is served from multiple web server instances located in LinkedIn's POPs all over the world. Previously LinkedIn used DNS geomapping exclusively to route its users to the best web server instance, but the post describes how they tried using BGP routing instead.
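For readers unfamiliar with the two approaches, the sketch below (hypothetical, not LinkedIn's code) shows the basic idea of DNS geomapping: the authoritative server looks at where a query comes from and answers with the address of the nearest POP. With anycast, by contrast, every POP announces the same address and BGP routing, not the DNS server, decides which POP a user reaches.

```python
# Hypothetical sketch of DNS geomapping (not LinkedIn's implementation):
# the authoritative DNS server maps the querying resolver's region to the
# address of the closest POP and returns that as the A record.

# Placeholder POP addresses, one web server instance per region.
POP_ADDRESSES = {
    "us-west": "192.0.2.10",
    "eu-central": "198.51.100.20",
    "ap-south": "203.0.113.30",
}


def geomap_answer(resolver_region: str) -> str:
    """Return the A record to hand back for a query from the given region."""
    # Fall back to a default POP if the region is unknown.
    return POP_ADDRESSES.get(resolver_region, POP_ADDRESSES["us-west"])


# With anycast, every POP would instead announce the *same* address via BGP,
# and the routing system would pick the "nearest" POP for each user's packets.

if __name__ == "__main__":
    print(geomap_answer("eu-central"))  # 198.51.100.20
    print(geomap_answer("somewhere"))   # falls back to 192.0.2.10
```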