|
Google, OpenDNS, content delivery networks, and other operators have announced a joint effort called “The Global Internet Speedup,” intended to “make the Internet faster.” According to the group, the collaboration will be carried out via an open IETF proposed standard called “edns-client-subnet,” which helps direct content to users more accurately, thereby decreasing latency and congestion, increasing transfer speeds, and helping the Internet scale faster and further.
Who wouldn’t want to “make the Internet faster”? I suppose we all want that, but we do not all agree on the right approach. For example, if you run a DNS-based Content Delivery Network (CDN), you must give different DNS answers to different DNS clients based on what you think each client’s TCP/IP connectivity is, that is, which of many possible servers will give that client the fastest response. To do so you certainly do need to know the “edns-client-subnet” for each client. But you also need every intermediate recursive name server (whether run by an enterprise, an ISP, OpenDNS, or Google DNS) to keep track of the different answers you’ve provided, and to help you by using the client subnet as a lookup key when it re-uses the data the CDN has previously sent it.
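To make the mechanism concrete, here is a rough sketch of what the option actually carries on the wire, following the edns-client-subnet layout (eventually standardized as RFC 7871: option code 8, then address family, source prefix length, scope prefix length, and the client address truncated to the stated prefix). The helper name is mine, and this sketch handles IPv4 only:

```python
import socket
import struct

def encode_ecs(ip: str, prefix_len: int, scope_len: int = 0) -> bytes:
    """Encode an EDNS Client Subnet option (RFC 7871 layout, IPv4 only).

    A recursive server adds this to its upstream query so the CDN's
    authoritative server can see roughly where the end user is.
    """
    addr = socket.inet_aton(ip)
    n = (prefix_len + 7) // 8  # only the significant address bytes are sent
    # FAMILY=1 (IPv4), SOURCE PREFIX-LENGTH, SCOPE PREFIX-LENGTH, ADDRESS
    payload = struct.pack("!HBB", 1, prefix_len, scope_len) + addr[:n]
    # OPTION-CODE=8 (edns-client-subnet), OPTION-LENGTH, then the payload
    return struct.pack("!HH", 8, len(payload)) + payload
```

Note that the address is deliberately truncated to a prefix rather than sent whole: the design leaks the client’s subnet, not the client’s exact address.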
This shows a collision between the desire to outsource recursive DNS service (as OpenDNS and Google DNS both do) and the desire to multisource web content (as CDNs all do). When recursive DNS was a local operation, and where it remains so, the Internet is as fast as it ever was, because DNS queries share TCP/IP connectivity fate with the end user’s web requests: a CDN can predict the “best web server” for an end user based on where that user’s DNS requests come from. Not so if the end user relies on OpenDNS, Google DNS, or any other outsourced recursive DNS service that makes use of “anycast”. For those users, some web sites come up more slowly.
So while I am all for a “Global Internet Speedup”, I have an alternative proposal.
Let’s remember what David Isenberg said, which is that the Internet’s success came from making endpoints smart and making the network stupid. No network can ever be as smart as an endpoint assuming that both competitors have access to the same information. There is a compelling IETF project called ALTO which allows applications to decide for themselves which instance of multisourced content (web or otherwise) is likely to provide the best service, based on many factors, including TCP/IP connectivity and instantaneous topology.
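In the ALTO model the selection logic moves to the endpoint: the application fetches a network map and cost map from an ALTO server and picks the cheapest replica itself. The JSON below is a simplified, hypothetical cost map (the real protocol, later published as RFC 7285, wraps this in more metadata), with invented PIDs and mirror URLs:

```python
import json

# Simplified, hypothetical ALTO-style cost map: routing costs from the
# client's PID ("PID-client") to the PIDs hosting each content replica.
cost_map = json.loads("""
{ "cost-map": { "PID-client": { "PID-us": 10, "PID-eu": 4, "PID-asia": 25 } } }
""")

replicas = {
    "PID-us":   "http://us.mirror.example/file",
    "PID-eu":   "http://eu.mirror.example/file",
    "PID-asia": "http://asia.mirror.example/file",
}

costs = cost_map["cost-map"]["PID-client"]
best_pid = min(replicas, key=lambda pid: costs[pid])  # endpoint decides
best_url = replicas[best_pid]
```

The decision is made at the edge, by the party that knows what it is downloading and can measure its own connectivity, rather than inferred in the middle of the network.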
Smart edge, dumb core. It’s what makes the Internet great.