
The Design of the Domain Name System, Part V - Large Data

In the previous four installments, we’ve been looking at aspects of the design of the DNS (see parts I, II, III, and IV). Today we look at how much data one can ask the DNS to store and to serve to clients.

Most DNS queries are made via UDP: a single packet for the query and a single packet for the response, with the packet size traditionally limited to 512 bytes. This limits the payload of the returned records in a response to about 400 bytes, after allowing for the overhead of a DNS response. Many clients and caches (about half, in my experience) support the EDNS0 extension, which lets the client advertise a larger maximum packet size, usually 4096 bytes. The length fields in DNS records are 16 bits, so the absolute limit on a packet or a record is 64K bytes. The DNS spec says that if a response is too big to fit in a UDP packet, the server sends a partial response with the truncation bit set, which tells the client to retry the query over TCP.
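
To make that dance concrete, here is a minimal sketch of a client that advertises a 4096-byte payload via EDNS0, tries UDP first, and falls back to TCP when the truncation bit is set. It assumes the third-party dnspython library; the resolver address and query name are placeholders:

    # A minimal sketch using the third-party dnspython library
    # (pip install dnspython). Resolver address and name are placeholders.
    import dns.flags
    import dns.message
    import dns.query

    SERVER = "8.8.8.8"  # any recursive resolver

    # Build a query advertising a 4096-byte UDP payload via EDNS0.
    query = dns.message.make_query("example.com", "TXT",
                                   use_edns=0, payload=4096)

    # Try UDP first, as most resolvers do.
    response = dns.query.udp(query, SERVER, timeout=5)

    # If the answer didn't fit in the advertised size, the server sets
    # the TC (truncation) bit, and the client retries over TCP.
    if response.flags & dns.flags.TC:
        response = dns.query.tcp(query, SERVER, timeout=5)

    print(f"{len(response.to_wire())} bytes in response")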

Until a year or two ago, it was rare to see a DNS response that didn’t fit in a 512-byte packet, other than for zone transfers (AXFR and IXFR). Now, as DNSSEC becomes more widespread, larger packets are becoming more common, since DNSSEC adds a great deal of signature material to every response and requires EDNS0.
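
One can see the inflation directly by comparing the size of a response with and without the DNSSEC OK (DO) bit set. A sketch, again assuming dnspython and a signed zone:

    # Compare response sizes with and without DNSSEC (dnspython).
    # The zone name is a placeholder for any DNSSEC-signed zone.
    import dns.message
    import dns.query

    SERVER = "8.8.8.8"
    NAME = "example.com"  # a signed zone

    plain = dns.message.make_query(NAME, "A", use_edns=0, payload=4096)
    signed = dns.message.make_query(NAME, "A", use_edns=0, payload=4096,
                                    want_dnssec=True)  # sets the DO bit

    for label, q in (("without DNSSEC", plain), ("with DNSSEC", signed)):
        r = dns.query.udp(q, SERVER, timeout=5)
        print(f"{label}: {len(r.to_wire())} bytes")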

Although it is possible in principle to put large chunks of data, up to 64K, into the DNS, the bigger a response is, the less likely it is to be returned reliably. Some DNS servers still don’t support TCP, far too many firewalls don’t allow DNS traffic over TCP, a few broken firewalls won’t pass DNS packets bigger than 512 bytes, and DNS caches are not tuned to cache large data well. I’ve seen the occasional proposal to store chunks of XML in the DNS, but they generally seem to come from people who want to see XML everywhere.

Since retrieving data over TCP requires a UDP query and a truncated UDP response, followed by a full TCP session, a more sensible way to handle large data is to put a pointer to the data, such as a URI, in the DNS, where it can be returned via UDP, and then have the application use another protocol such as HTTP to retrieve the data itself. That’s about the same amount of network traffic (a UDP round trip followed by a query and response over TCP), but HTTP servers and caches are designed to handle large data in a way that DNS servers and caches are not.
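
As an illustration of the lookup-then-fetch pattern, the sketch below reads a URI record out of the DNS and retrieves the actual payload over HTTP. The record name and its target URL are hypothetical, and the URI record type is just one plausible carrier for the pointer; dnspython and the Python standard library do the work:

    # Indirection sketch: a small URI record travels over UDP, and the
    # bulk data comes back over HTTP. The record name "_bigdata.example.com"
    # and its contents are hypothetical.
    import urllib.request
    import dns.resolver

    answer = dns.resolver.resolve("_bigdata.example.com", "URI")
    url = answer[0].target.decode()  # e.g. "https://example.com/blob"

    with urllib.request.urlopen(url) as resp:
        data = resp.read()           # the large payload, fetched over HTTP
    print(f"retrieved {len(data)} bytes")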

In our next installment, we’ll look at the ever-vexing issue of overloaded record types, or why everything shouldn’t be a TXT record.

By John Levine, Author, Consultant & Speaker
