Once again it is time for CircleID's annual roundup of the ten most popular posts featured during the past year (based on overall readership). Congratulations to all the 2015 participants, and best wishes in the new year.
Since the launch of the New gTLD Program in 2012, it has become evident that new gTLD registries overestimated the demand for new Top-Level Domain name extensions. Furthermore, new gTLD registries did not anticipate the hurdles in raising awareness, let alone driving adoption of the new domains. Even the most pessimistic New gTLD Program critic did not expect such uninspiring results. It was a wake-up call for many in the domain industry. The New gTLD Program currently lacks credibility: no new gTLD has yet gone mainstream or captured the world's imagination.
The Domain Name System is now over 25 years old. Since the publication of RFCs 1034 and 1035 in 1987, there have been over 100 RFC documents published that extend and clarify the original DNS specs. Although the basic design of the DNS hasn't changed, its definition is now extremely complex, enough so that it's a challenging task to tell whether a DNS package correctly implements the specs.
What's the difference between .local and .here? Or between .onion and .apple? All four of these labels are capable of being represented in the Internet's Domain Name System as generic Top-Level Domains (gTLDs), but only two of them are in fact delegated names. The other two, .local and .onion, not only don't exist in the delegated name space but, by virtue of a registration in the IANA's Special-Use Domain Names registry, cannot exist in the conventional delegated domain name space.
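To make that distinction concrete, here is a minimal sketch (not from the original post) of how an application might check a name against a few special-use entries before ever sending a query to the global DNS. The hard-coded set of suffixes is an illustrative assumption; the authoritative list is the IANA Special-Use Domain Names registry.

```python
# Illustrative only: a stub check against a few entries from the
# IANA Special-Use Domain Names registry (RFC 6761, RFC 6762, RFC 7686).
# The real registry is larger; this small hard-coded set is an
# assumption made for the example.
SPECIAL_USE_SUFFIXES = {
    "local",      # RFC 6762: resolved via Multicast DNS, not the global DNS
    "onion",      # RFC 7686: resolved inside the Tor network, never delegated
    "localhost",  # RFC 6761: always refers to the local host
    "invalid",    # RFC 6761: must never resolve
}

def is_special_use(name: str) -> bool:
    """Return True if the name falls under a special-use domain,
    i.e. it should not be handed to the conventional delegated DNS."""
    labels = name.rstrip(".").lower().split(".")
    return labels[-1] in SPECIAL_USE_SUFFIXES

if __name__ == "__main__":
    for candidate in ("printer.local", "example.onion", "www.apple", "www.here"):
        status = "special-use" if is_special_use(candidate) else "ordinary DNS name"
        print(f"{candidate}: {status}")
```

Under this check, names beneath .local and .onion are diverted to their own resolution mechanisms, while names beneath delegated gTLDs such as .apple and .here proceed through normal DNS resolution.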
On Nov. 30 and Dec. 1, 2015, some of the Internet's Domain Name System (DNS) root name servers received large amounts of anomalous traffic. Last week the root server operators published a report on the incident. In the interest of further transparency, I'd like to take this opportunity to share Verisign's perspective, including how we identify, handle and react, as necessary, to events such as this.
Do you have an idea for a new way to use DNSSEC or DANE to make the Internet more secure? Have you recently installed DNSSEC and have a great case study you can share of lessons learned? Do you have a new tool or service that makes DNSSEC or DANE easier to use or deploy? Do you have suggestions for how to improve DNSSEC? Or new ways to automate or simplify the user experience? If you do, and if you will be attending ICANN 55 in Marrakech, Morocco (or can get there), we are now seeking proposals for the ICANN 55 DNSSEC Workshop that will take place on Wednesday, 9 March 2016.
The RIPE 71 meeting took place in Bucharest, Romania in November. Here are my impressions from a number of the sessions I attended that I found of interest. It was a relatively packed meeting held over five days, so this is by no means all that was presented through the week... As is usual for RIPE meetings, it was a well organised, informative and fun meeting to attend in every respect! If you are near Copenhagen in late May next year, I'd certainly say that it would be a week well spent.
The Domain Name System (DNS) offers ways to significantly strengthen the security of Internet applications via a new protocol called the DNS-based Authentication of Named Entities (DANE). One problem it helps to solve is how to easily find keys for end users and systems in a secure and scalable manner. It can also help to address well-known vulnerabilities in the public Certification Authority (CA) model. Applications today need to trust a large number of global CAs.
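As a concrete illustration of the kind of lookup DANE relies on (this example is not taken from the post itself), the sketch below queries a TLSA record with the dnspython library. The host name and port are placeholders, and a real client would also require DNSSEC validation of the response before trusting it.

```python
# Minimal sketch of a DANE-style TLSA lookup using dnspython
# (pip install dnspython). "example.com" and port 443 are placeholders;
# a production client must also insist on DNSSEC validation of the
# answer before using it to authenticate a TLS peer.
import dns.resolver

def fetch_tlsa(host: str, port: int = 443, proto: str = "tcp"):
    """Query the TLSA record set that DANE associates with a TLS service."""
    qname = f"_{port}._{proto}.{host}"
    try:
        answer = dns.resolver.resolve(qname, "TLSA")
    except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
        return []  # no DANE statement published for this service
    # Each record carries: certificate usage, selector, matching type,
    # and the certificate association data (often a SHA-256 digest).
    return [(r.usage, r.selector, r.mtype, r.cert.hex()) for r in answer]

if __name__ == "__main__":
    for usage, selector, mtype, digest in fetch_tlsa("example.com"):
        print(f"usage={usage} selector={selector} mtype={mtype} data={digest}")
```

The point of the record is that the domain holder, rather than any of the hundreds of trusted CAs, states which certificate or key a client should expect for the service.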
On November 5, 2015 the Office of the U.S. Trade Representative (USTR) released the official text of the Trans-Pacific Partnership (TPP). That text consists of 30 separate Chapters totaling more than 2,000 pages, and is accompanied by four additional Annexes and dozens of Related Instruments. Only those who negotiated it are likely to have a detailed understanding of all its provisions, and even that probably overstates reality.
If a scholar were to look back upon the history of the Internet in 50 years' time, they'd likely be able to construct an evolutionary timeline based upon threats and countermeasures relatively easily. Having transitioned through the ages of malware, phishing, and APTs, and the countermeasures of firewalls, anti-spam, and intrusion detection, I'm guessing those future historians would refer to the current evolutionary period as that of "mega breaches" (from a threat perspective) and "data feeds".