At the Internet Engineering Task Force (IETF), it is time we accepted the wide range of drivers behind (and implications of) standards, and time for stakeholders to start listening to each other. A protocol recently published by the IETF, DNS over HTTPS (DoH), is at the centre of an increasingly polarised debate. This is because DoH uses encryption in the name of security and privacy and relocates DNS resolution to the application layer of the Internet.
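To make "DNS resolution at the application layer" concrete, here is a minimal Python sketch of an RFC 8484 DoH query. The choice of Cloudflare's public resolver endpoint and of the dnspython and requests libraries is my own illustration, not anything prescribed by the article.

    import dns.message          # dnspython: build/parse DNS wire-format messages
    import dns.rdatatype
    import requests

    # Public DoH resolver endpoint chosen purely for illustration.
    DOH_URL = "https://cloudflare-dns.com/dns-query"

    # Build an ordinary DNS query for the A record of example.com ...
    query = dns.message.make_query("example.com", dns.rdatatype.A)

    # ... but instead of sending it over UDP/TCP port 53, POST the
    # wire-format message inside an HTTPS request, per RFC 8484.
    resp = requests.post(
        DOH_URL,
        data=query.to_wire(),
        headers={
            "Content-Type": "application/dns-message",
            "Accept": "application/dns-message",
        },
        timeout=5,
    )
    resp.raise_for_status()

    # The DNS answer comes back in the HTTPS response body.
    answer = dns.message.from_wire(resp.content)
    for rrset in answer.answer:
        print(rrset)

The application (here, a script; in the debate, the browser) talks directly to a resolver of its choosing over HTTPS, which is exactly the architectural shift the debate is about.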
With the upcoming celebration of 50 years of the Internet, I'm trying to figure out how the traditional story misses the powerful idea that has made the Internet what it is -- the ability to focus on solutions without having to think about the network or providers. It's not the web -- though that is one way to use the opportunity. The danger in a web-centric view is that it leads one to make the Internet better for the web while closing the frontier of innovation.
A dialogue between Michael Warner (Historian, United States Cyber Command) and Tony Rutkowski (Cybersecurity engineer, lawyer and historian). Michael is chairing a cyber history panel at the biennial Symposium on Cryptologic History, hosted in October by the National Security Agency; his panel will include discussion of the little-known but key role of cryptologist Ruth Nelson, who led a team in the 1980s in a major initiative to secure public internet infrastructure.
We have got used to it: when we open a website, it is stop-and-go, as on a busy highway or in a city traffic jam. At some point we reach the destination. The constant stalling is due to a traffic rule for the Internet called TCP (Transmission Control Protocol). The TCP/IP protocol family comes from the American defense sector: it was introduced by DARPA (Defense Advanced Research Projects Agency) in the 1970s. At that time, nobody had an Internet for the masses on their radar.
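That stop-and-go feel comes from TCP's congestion control, which probes for bandwidth, backs off when packets are lost, and probes again. The toy Python loop below sketches the classic additive-increase/multiplicative-decrease sawtooth; the numbers (a 20-RTT run, a link that "drops" once the window exceeds 8 segments) are invented purely for illustration and are not real TCP behaviour.

    # Toy model of TCP's additive-increase/multiplicative-decrease (AIMD)
    # sawtooth -- not a real TCP implementation, just the shape of it.

    LINK_CAPACITY = 8.0   # invented: window size at which loss occurs
    ROUNDS = 20           # invented: number of round-trip times to simulate

    cwnd = 1.0            # congestion window, in segments
    for rtt in range(ROUNDS):
        if cwnd > LINK_CAPACITY:
            # Loss detected: multiplicative decrease (halve the window)
            # -- the "stop" in stop-and-go.
            cwnd /= 2
            event = "loss -> back off"
        else:
            # No loss: additive increase, one extra segment per RTT ("go").
            cwnd += 1
            event = "increase"
        print(f"RTT {rtt:2d}: cwnd = {cwnd:4.1f} segments ({event})")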
In June, I participated in a workshop, organized by the Internet Architecture Board, on the topic of protocol design and effect, looking at the differences between initial design expectations and deployment realities. These are my impressions of the discussions that took place at this workshop. ... In this first part of my report, I'll look at the case studies of two protocol efforts, comparing their design expectations with their deployment experience.
The first RFC describing the Border Gateway Protocol (BGP), RFC 1105, was published in June 1989, thirty years ago. By any metric, that makes BGP a venerable protocol in the Internet context, and considering that it holds the Internet together, it is still a central piece of the Internet's infrastructure. How has this critically important routing protocol fared over these thirty years, and what are its prospects? Is BGP approaching its dotage, or will it be a feature of the Internet for decades to come?
By any metric, the queries and responses that take place in the DNS are highly informative about the Internet and its use. But perhaps the level of interdependency in this space is richer than we might think. When the IETF considered a proposal to explicitly withhold certain top-level domains from delegation in the DNS, the ensuing discussion highlighted the distinction between the domain name system as a structured space of names and the domain name system as a resolution space...
Do you know of someone who has made the Internet better in some way who deserves more recognition? Maybe someone who has helped extend Internet access to a large region? Or wrote widely-used programs that make the Internet more secure? Or maybe someone who has been actively working for open standards and open processes for the Internet?
The IETF is in the midst of a vigorous debate about DNS over HTTPS, abbreviated as DoH. How did we get here, and where do we go from here? (This is somewhat simplified, but I think the essential chronology is right.) Javascript code running in a web browser can't do DNS lookups, other than with the browser.dns.resolve() extension API to fetch an A record, or implicitly by fetching a URL, which looks up a DNS A or AAAA record for the domain in the URL.
Quick UDP Internet Connection (QUIC) is a network protocol initially developed and deployed by Google, and now being standardized in the Internet Engineering Task Force. In this article we'll take a quick tour of QUIC, looking at what goals influenced its design and what implications QUIC might have for the overall architecture of the Internet Protocol.