DNS
The ICANN Implementation Recommendation Team (IRT) working group has published its final report, which I decided to analyze a bit further. I already made a few comments last month, both in the At-Large Advisory Council framework and on my own. The report's recommendations raise several issues; the Uniform Rapid Suspension system (URS) is one of them. more
ICANN's response to the European Union's Network and Information Security Directive (NIS2) is a litmus test of whether its policy processes can address the needs of all stakeholders rather than only those of the domain industry. Early indications from the ICANN Hamburg meeting point to another disappointment for law enforcement, cybersecurity professionals, and the many businesses seeking to reinstate WHOIS as required by NIS2. more
UCLA and Washington University in St. Louis recently announced the launch of the Named Data Networking (NDN) Consortium, a new forum for collaboration among university and industry researchers, including Verisign, on one candidate next-generation information-centric architecture for the Internet. Verisign Labs has been collaborating with UCLA Professor Lixia Zhang, one of the consortium's co-leaders, on this future-directed design as part of our university research program for some time. more
The 1st Latin American & Caribbean DNS Forum was held on 15 November 2013, before the start of the ICANN Buenos Aires meeting. Coordinated by many of the region's leading technological development and capacity-building organizations, the day-long event explored the opportunities and challenges for Latin America brought on by changes in the Internet landscape, including the introduction of new gTLDs such as .LAT, .NGO and others. more
Most everyone who visits CircleID is familiar with Moore's Law, which, stated simply, holds that computing power doubles every 18 months. This has been going on since the 1960s and shows no sign of slowing. Moore's Law drives faster and faster computing, which produces more and more data and network complexity. This inexorable trend is putting immense pressure on corporate networks, and the strain is too much for many of them to handle on their own. more
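The compounding the teaser describes can be made concrete with a quick back-of-the-envelope calculation in Python (a sketch; only the 18-month doubling period comes from the excerpt above, the function names are mine):

```python
# Back-of-the-envelope growth implied by an 18-month doubling period.
def doublings(years: float, period_months: float = 18.0) -> float:
    """Number of doublings over a span of years."""
    return years * 12.0 / period_months

def growth_factor(years: float, period_months: float = 18.0) -> float:
    """Total multiplicative growth in computing power over the span."""
    return 2.0 ** doublings(years, period_months)

# Ten years at one doubling per 18 months is 10 * 12 / 18 ≈ 6.67 doublings,
# i.e. roughly a hundredfold increase.
print(round(doublings(10), 2))   # 6.67
print(round(growth_factor(10)))  # 102
```

Roughly 100x more computing power per decade is the scale of the pressure on corporate networks the excerpt is pointing at.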
As a registrar at the front end of the DNSSEC deployment effort, our technical team has made a sustained investment in DNSSEC deployment so that our customers don't get overwhelmed by this wave of changes to the core infrastructure of the Domain Name System. Along the way, we've learnt a lot about how to implement DNSSEC, which might hold useful lessons for other organizations that plan to deploy DNSSEC in their networks. more
To prepare DNS security for a post-quantum future, Verisign and partners are testing new cryptographic strategies that balance security, performance, and feasibility, especially through the novel Merkle Tree Ladder mode for managing large signatures. more
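The Merkle Tree Ladder mode mentioned above builds on the classic Merkle hash tree, in which one root hash authenticates many messages and each message carries only a short membership proof. As a rough illustration of that underlying idea (a generic Merkle tree sketch in Python, not the actual MTL mode specification):

```python
import hashlib

def _h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves: list[bytes]) -> bytes:
    """Fold a list of leaves up to a single root hash."""
    level = [_h(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:            # duplicate the last node on odd levels
            level.append(level[-1])
        level = [_h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def merkle_proof(leaves: list[bytes], index: int) -> list[bytes]:
    """Sibling hashes needed to verify one leaf against the root."""
    level = [_h(leaf) for leaf in leaves]
    proof = []
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        proof.append(level[index ^ 1])  # sibling at this level
        level = [_h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        index //= 2
    return proof

def verify(leaf: bytes, index: int, proof: list[bytes], root: bytes) -> bool:
    """Recompute the path from leaf to root using the sibling hashes."""
    node = _h(leaf)
    for sib in proof:
        node = _h(node + sib) if index % 2 == 0 else _h(sib + node)
        index //= 2
    return node == root
```

The appeal for post-quantum DNSSEC is that one large signature over the root can cover many records, while each response carries only a logarithmic-size proof.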
The recent attacks on the DNS infrastructure operated by Dyn in October 2016 have generated a lot of comment in recent days. Indeed, it's not often that the DNS itself has been prominent in the mainstream of news commentary, and in some ways, this DNS DDoS prominence is for all the wrong reasons! I'd like to speculate a bit on what this attack means for the DNS and what we could do to mitigate the recurrence of such attacks. more
The recent attack on the Comodo Certification Authority has not only shown how vulnerable the current public key infrastructure is, but also that the protocols (e.g., OCSP) meant to mitigate these vulnerabilities once exploited are either not in use, not implemented correctly, or not implemented at all. Is this the beginning of the death of the PKI dragons, and what alternatives do we have? more
By any metric, the queries and responses that take place in the DNS are highly informative of the Internet and its use. But perhaps the level of interdependencies in this space is richer than we might think. When the IETF considered a proposal to explicitly withhold certain top-level domains from delegation in the DNS, the ensuing discussion highlighted the distinction between the domain name system as a structured space of names and the domain name system as a resolution space... more
The transition to IPv6 is top of mind for most service providers. Even in places where there are still IPv4 addresses to be had, surveys we've run suggest v6 is solidly on the priority list. That's not to say everyone has the same strategy. Depending on where you are in the world, transition options differ -- in places such as APAC, where exhaustion is at hand, one of the many NAT alternatives will likely be deployed, since getting a significant allocation of addresses is not going to happen and other alternatives for obtaining addresses will prove expensive. more
The COVID-19 pandemic has led to the rapid migration of the world's workforce and consumer services to virtual spaces, and has amplified Internet governance and policy issues including infrastructure, access, exponential growth in fraud and abuse, global cooperation, and data privacy, to name but a few. The need for practical, scalable and efficient solutions has risen dramatically. more
The Domain Name System has always been intended to be extensible. The original spec in the 1980s had about a dozen resource record types (RRTYPEs), and since then people have invented many more, so now there are about 65 different RRTYPEs. But if you look at most DNS zones, you'll only see a handful of types: NS, A, AAAA, MX, TXT, and maybe SRV. Why? A lot of the other types are arcane or obsolete, but there are plenty that are useful. more
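Part of why new RRTYPEs are cheap to add is that a query names its type with a single 16-bit code in the question section. A minimal sketch of the RFC 1035 question encoding in Python stdlib (the type codes shown come from the IANA registry; actually sending the packet over UDP port 53 is left out):

```python
import struct

# A few RRTYPE codes from the IANA DNS parameters registry.
RRTYPES = {"A": 1, "NS": 2, "MX": 15, "TXT": 16, "AAAA": 28, "SRV": 33}

def encode_qname(name: str) -> bytes:
    """RFC 1035 name encoding: length-prefixed labels, zero terminator."""
    out = b""
    for label in name.rstrip(".").split("."):
        out += bytes([len(label)]) + label.encode("ascii")
    return out + b"\x00"

def build_query(name: str, rrtype: str, txid: int = 0x1234) -> bytes:
    """A minimal DNS query: 12-byte header plus one question, RD bit set."""
    header = struct.pack(">HHHHHH", txid, 0x0100, 1, 0, 0, 0)
    question = encode_qname(name) + struct.pack(">HH", RRTYPES[rrtype], 1)
    return header + question
```

Asking for a TXT record instead of an A record changes only those two QTYPE bytes, which is why resolvers and authoritative servers can carry types they have never heard of.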
I recently came across a copy of a ruling in the bizarre case of MySpace vs. theglobe.com. Theglobe.com was the ultimate dot.com bubble company. It started up here in Ithaca, and went public at the peak of dot.com hysteria with one of the greatest one-day price runups ever. Since then they bought and sold a variety of businesses, none of which ever made any money, including the Voiceglo VoIP service which appears to be what the spam was promoting. more
By design, the Internet core is stupid, and the edge is smart. This design decision has enabled the Internet's wildcat growth, since without complexity the core can grow at the speed of demand. On the downside, the decision to put all smartness at the edge means we're at the mercy of scale when it comes to the quality of the Internet's aggregate traffic load. Not all device and software builders have the skills, or the quality assurance budgets, that something the size of the Internet deserves. more