This is the final post in a six-part series on cryptography and the Domain Name System (DNS).
In previous posts in this series, I’ve discussed a number of applications of cryptography to the DNS, many of them related to the Domain Name System Security Extensions (DNSSEC).
In this final blog post, I'll turn to an application that may at first appear the most natural, though it may not always be the most necessary: DNS encryption. (I've also written about DNS encryption and minimization in a separate post on DNS information protection.)
In 2014, the Internet Engineering Task Force (IETF) chartered the DNS PRIVate Exchange (dprive) working group to start work on encrypting DNS queries and responses exchanged between clients and resolvers.
That work resulted in RFC 7858, published in 2016, which describes how to run the DNS protocol over the Transport Layer Security (TLS) protocol, also known as DNS over TLS, or DoT.
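To make the mechanics concrete, here's a minimal sketch of a DoT exchange using only the Python standard library. The resolver address (1.1.1.1), the TLS server name, and the hand-built query are illustrative assumptions on my part; what comes from RFC 7858 is the use of TCP port 853 and the two-byte length framing.

```python
# Minimal DNS-over-TLS (DoT) sketch per RFC 7858, standard library only.
# The resolver (1.1.1.1) and TLS server name are illustrative; any
# DoT-capable resolver listening on TCP port 853 works the same way.
import socket
import ssl
import struct

def build_query(qname: str, qtype: int = 1) -> bytes:
    """Build a minimal DNS query message (QTYPE 1 = A record, class IN)."""
    header = struct.pack("!HHHHHH", 0x1234, 0x0100, 1, 0, 0, 0)  # ID, RD flag, QDCOUNT=1
    labels = b"".join(bytes([len(label)]) + label.encode("ascii")
                      for label in qname.rstrip(".").split("."))
    return header + labels + b"\x00" + struct.pack("!HH", qtype, 1)

def recv_exact(sock, n: int) -> bytes:
    """Read exactly n bytes from the socket."""
    buf = b""
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("connection closed early")
        buf += chunk
    return buf

query = build_query("example.com")
ctx = ssl.create_default_context()
with socket.create_connection(("1.1.1.1", 853)) as tcp:          # DoT uses TCP port 853
    with ctx.wrap_socket(tcp, server_hostname="cloudflare-dns.com") as tls:
        tls.sendall(struct.pack("!H", len(query)) + query)       # 2-byte length prefix
        length = struct.unpack("!H", recv_exact(tls, 2))[0]
        response = recv_exact(tls, length)
print(f"received {len(response)}-byte DNS response over an encrypted channel")
```

The DNS message itself is unchanged; all that differs from ordinary DNS over TCP is the TLS session around it.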
DNS encryption between clients and resolvers has since gained further momentum, with multiple browsers and resolvers supporting DNS over Hypertext Transfer Protocol Secure (HTTPS), or DoH, with the formation of the Encrypted DNS Deployment Initiative, and with further enhancements such as oblivious DoH.
The dprive working group turned its attention to the resolver-to-authoritative exchange during its rechartering in 2018. And in October of last year, ICANN’s Office of the CTO published its strategy recommendations for the ICANN-managed Root Server (IMRS, i.e., the L-Root Server), an effort motivated in part by concern about potential “confidentiality attacks” on the resolver-to-root connection.
From a cryptographer's perspective, the prospect of adding encryption to the DNS protocol is naturally quite interesting. But this perspective isn't the only one that matters, as I've observed numerous times in previous posts.
A common theme in this series on cryptography and the DNS has been the question of whether the benefits of a technology are sufficient to justify its cost and complexity.
This question came up not only in my review of two newer cryptographic advances, but also in my remarks on the motivation for two established tools for providing evidence that a domain name doesn’t exist.
Recall that these two tools, the Next Secure (NSEC) and Next Secure 3 (NSEC3) records, were developed because a simpler approach didn't have an acceptable risk/benefit tradeoff. In the simpler approach, to assure a relying party that a domain name doesn't exist, a name server would return a response, signed with its private key, stating "<name> doesn't exist."
From a cryptographic perspective, the simpler approach would meet its goal: a relying party could then validate the response with the corresponding public key. However, the approach would introduce new operational risks, because the name server would now have to perform online cryptographic operations.
The name server would not only have to protect its private key from compromise, but would also have to protect the cryptographic operations from overuse by attackers. That could open another avenue for denial-of-service attacks that could prevent the name server from responding to legitimate requests.
The designers of DNSSEC mitigated these operational risks by developing NSEC and NSEC3, which gave the option of moving the private key and the cryptographic operations offline, into the name server's provisioning system. This alternative struck a better balance between cryptography and operations; a minimal sketch of the offline pattern follows.
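The sketch below illustrates the idea with a toy three-name zone, using Ed25519 from the widely available Python "cryptography" package as a stand-in for actual DNSSEC record signing. The names and record format are hypothetical; real NSEC records follow DNSSEC canonical ordering and wire format.

```python
# Sketch: non-existence proofs precomputed and signed offline (the NSEC idea).
# Ed25519 from the "cryptography" package stands in for DNSSEC signing;
# names and record formats are illustrative only.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

zone_names = sorted(["alpha.example.", "delta.example.", "zulu.example."])
signing_key = Ed25519PrivateKey.generate()  # lives offline, in provisioning

# Precompute one signed "gap" record per adjacent pair of names; each record
# asserts that nothing exists between its two endpoints.
nsec_chain = []
for owner, nxt in zip(zone_names, zone_names[1:] + zone_names[:1]):
    record = f"{owner} NSEC {nxt}"
    nsec_chain.append((owner, nxt, record, signing_key.sign(record.encode())))

def nonexistence_proof(qname: str):
    """Serve a precomputed proof: no private key, no online signing."""
    for owner, nxt, record, sig in nsec_chain:
        if owner < qname < nxt:  # the wrap-around gap needs a special case, omitted here
            return record, sig
    return None

record, sig = nonexistence_proof("beta.example.")
print(record)  # alpha.example. NSEC delta.example.
```

The point of the sketch is the query path: answering requires only a lookup, so neither the private key nor the signing operations ever face query-time load.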
That theme of balancing cryptography against operations now returns to view in the recent efforts around DNS encryption. Like the simpler initial approach to authentication, DNS encryption may meet its goal from a cryptographic perspective. But the operational perspective is important as well. As designers again consider where and how to deploy private keys and cryptographic operations across the DNS ecosystem, alternatives with a better balance are a desirable goal.
In addition to encryption, researchers have explored other, possibly lower-risk alternatives that can be used in place of, or alongside, encryption at various levels of the DNS. Collectively, these are known as minimization techniques.
In “textbook” DNS resolution, a resolver sends the same full domain name to a root server, a top-level domain (TLD) server, a second-level domain (SLD) server, and any other server in the chain of referrals, until it ultimately receives an authoritative answer to a DNS query.
This is the way that DNS resolution has been practiced for decades, and it’s also one of the reasons for the recent interest in protecting information on the resolver-to-authoritative exchange: The full domain name is more information than all but the last name server needs to know.
One such minimization technique, known as qname minimization, was identified by Verisign researchers in 2011 and documented in RFC 7816 in 2016. (In 2015, Verisign announced a royalty-free license to its qname minimization patents.)
With qname minimization, instead of sending the full domain name to each name server, the resolver sends only as much as the name server needs either to answer the query or to refer the resolver to a name server at the next level. This follows the principle of minimum disclosure: the resolver sends only as much information as the name server needs to "do its job." As Matt Thomas described in his recent blog post on the topic, nearly half of all queries received by Verisign's .com and .net TLD servers were in a minimized form as of August 2020.
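As a rough illustration, the sketch below shows what a minimizing resolver would send at each step for www.example.com. Zone cuts are simplified to one label per level; a real resolver discovers cuts dynamically from referrals rather than assuming them.

```python
# Sketch of qname minimization (RFC 7816): send one more label per level
# instead of the full name everywhere. Zone cuts are simplified to one
# label per level; real resolvers learn cuts from referrals.
def minimized_qnames(full_qname: str):
    """Yield the query name a minimizing resolver sends at each level."""
    labels = full_qname.rstrip(".").split(".")
    for depth in range(1, len(labels) + 1):
        yield ".".join(labels[-depth:]) + "."

for qname in minimized_qnames("www.example.com"):
    print(qname)
# com.             -> sent to a root server
# example.com.     -> sent to a .com TLD server
# www.example.com. -> sent to the example.com name server
```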
Other techniques that are part of this new chapter in DNS protocol evolution include NXDOMAIN cut processing [RFC 8020] and aggressive DNSSEC caching [RFC 8198]. Both leverage information present in the DNS to reduce the amount and sensitivity of DNS information exchanged with authoritative name servers. In aggressive DNSSEC caching, for example, the resolver analyzes NSEC and NSEC3 range proofs obtained in response to previous queries to determine on its own whether a domain name doesn’t exist. This means that the resolver doesn’t always have to ask the authoritative server system about a domain name it hasn’t seen before.
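Here's a simplified sketch of that resolver-side logic, with hypothetical names and plain lexicographic comparison standing in for DNSSEC's canonical ordering:

```python
# Sketch of aggressive DNSSEC caching (RFC 8198): a validated NSEC record
# cached from an earlier response proves that no name exists between its
# owner and "next" names, so the resolver can answer NXDOMAIN locally.
# Plain string comparison stands in for DNSSEC canonical ordering.
def covered_by_cached_nsec(qname: str, nsec_owner: str, nsec_next: str) -> bool:
    """True if the cached range already proves qname does not exist."""
    return nsec_owner < qname < nsec_next

# Suppose an earlier response carried "apple.example. NSEC banana.example."
# A later query for avocado.example. then never leaves the resolver:
print(covered_by_cached_nsec("avocado.example.",
                             "apple.example.", "banana.example."))  # True
```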
All of these techniques, as well as additional minimization alternatives I haven’t mentioned, have one important common characteristic: they only change how the resolver operates during the resolver-authoritative exchange. They have no impact on the authoritative name server or on other parties during the exchange itself. They thereby mitigate disclosure risk while also minimizing operational risk.
The resolver's exchanges with authoritative name servers, prior to minimization, were already relatively less sensitive because they represented the aggregate interests of the resolver's many clients. Minimization techniques lower the sensitivity even further at the root and TLD levels: the resolver sends only its aggregate interests in TLDs to root servers, and only its interests in SLDs to TLD servers. The resolver still sends the aggregate interests in full domain names at the SLD level and below, and may also include certain client-related information at these levels, such as the EDNS Client Subnet extension. The lower levels therefore may have different protection objectives than the upper levels.
Minimization techniques and encryption together give DNS designers additional tools for protecting DNS information—tools that when deployed carefully can balance between cryptographic and operational perspectives.
These tools complement those I've described in previous posts in this series. Some have already been deployed at scale, such as DNSSEC with its NSEC and NSEC3 non-existence proofs. Others are at various earlier stages, like NSEC5 and tokenized queries, and still others contemplate "post-quantum" scenarios and how to address them. (And there are yet other tools that I haven't covered in this series, such as authenticated resolution and adaptive resolution.)
Modern cryptography is just about as old as the DNS. Both have matured since their introduction in the late 1970s and early 1980s respectively. Both bring fundamental capabilities to our connected world. Both continue to evolve to support new applications and to meet new security objectives. While they’ve often moved forward separately, as this blog series has shown, there are also opportunities for them to advance together. I look forward to sharing more insights from Verisign’s research in future blog posts.