If you are at all interested in how the Internet's Domain Name System (DNS) works, then one of the most rewarding meetings dedicated to this topic is the DNS OARC workshop series. I attended the spring workshop in Amsterdam in early May, and the following are my impressions from the presentations and discussion. What makes these meetings unique in the context of DNS is the way they combine operations and research, bringing together researchers, builders and maintainers of DNS software systems, and operators of DNS infrastructure services into a single room and a broad and insightful conversation.
I am excited to announce the recent release of the industry's first Best Common Practices document for Cloud and Hosting providers on addressing abuse issues, created by M3AAWG and the i2Coalition. M3AAWG has been collaborating with the i2Coalition's Best Practices Working Group over the past two years to discuss ways to address malicious activity within hosting and cloud ecosystems.
Denial-of-service attacks have been around since the Internet was commercialized, and some of the largest attacks ever launched have relied on DNS, making headlines. But every day a barrage of smaller DNS-based attacks takes down targets and severely stresses the DNS ecosystem. Although DNS servers are not usually the target of these attacks, they are often disrupted, so attention from operations teams is required. There is no indication the problem is going away, and attackers continue to innovate.
The following is a selected summary of the recent NANOG 63 meeting, held in early February, with some personal views and opinions thrown in! ...One view of the IETF's positioning is that, as a technology standardisation venue, the immediate circle of engagement in IETF activities is the producers of equipment and applications, and the common objective is interoperability.
As I read through multiple postings covering the proposed changes to the Computer Fraud and Abuse Act, such as the ever-insightful writing of Rob Graham in his Obama's War on Hackers or the EFF's analysis, and the deluge of Facebook discussion threads where dozens of my security-minded friends shriek at the damage passing such an act would bring to our industry, I can't help but think that surely this is an early April Fools' joke.
Here we are with CircleID's annual roundup of the top ten most popular posts featured during 2014 (based on overall readership). Congratulations to all the participants whose posts reached top readership, and best wishes for 2015.
It has been another busy quarter for the team that works on our DDoS Protection Services here at Verisign. As detailed in the recent release of our Q2 2014 DDoS Trends Report, from April to June of this year, we not only saw a jump in the frequency and size of attacks against our customers, but also witnessed the largest DDoS attack we've ever observed and mitigated -- an attack of more than 300 Gbps against one of our Media and Entertainment customers.
As recently as seven years ago, cyber-threat intelligence was the purview of a small handful of practitioners, limited mostly to the best-resourced organizations - primarily financial institutions that faced large financial losses due to cyber crime, and defense and intelligence agencies involved in computer network operations. Fast forward to today, and just about every business, large and small, depends on the Internet in some way for day-to-day operations, making cyber intelligence a critical component of a successful business plan.
In the seminal 1968 paper "The Tragedy of the Commons", Garrett Hardin introduced the world to an idea that eventually grew into a household phrase. In this blog article I will explore whether Hardin's tragedy applies to anti-spoofing and Distributed Denial of Service (DDoS) attacks in the Internet... Hardin was a biologist and ecologist by trade, so he explains "The Tragedy of the Commons" using a field, cattle and herdsmen.
In Tony Li's article on path MTU discovery we see this text: "The next attempt to solve the MTU problem has been Packetization Layer Path MTU Discovery (PLPMTUD). Rather than depending on ICMP messaging, in this approach, the transport layer depends on packet loss to determine that the packet was too big for the network. Heuristics are used to differentiate between MTU problems and congestion. Obviously, this technique is only practical for protocols where the source can determine that there has been packet loss. Unidirectional, unacknowledged transfers, typically using UDP, would not be able to use this mechanism. To date, PLPMTUD hasn't demonstrated a significant improvement in the situation." Tony's article is (as usual) quite readable and useful, but my specific concern here is DNS...
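To make the mechanism in the quoted text a little more concrete, here is a minimal sketch of the loss-based search a packetization layer can run. It is an illustration only, not Tony Li's or any real transport's implementation: send_probe() and probe_acked() are hypothetical stand-ins for a transport that can emit a padded, non-fragmentable probe of a given size and report whether it was acknowledged.

```python
# Sketch of a PLPMTUD-style search: find the largest packet size the path
# delivers, using only probe loss as the signal (no ICMP required).

BASE_PLPMTU = 1200   # size assumed to work on the path
MAX_PLPMTU = 9000    # upper bound we are willing to try
PROBE_TRIES = 3      # repeat a probe before concluding the loss is an
                     # MTU problem rather than ordinary congestion

def send_probe(size: int) -> None:
    """Hypothetical: send a fully padded probe of `size` bytes with
    fragmentation disallowed (DF set / IPv6 semantics)."""
    raise NotImplementedError

def probe_acked(timeout: float = 1.0) -> bool:
    """Hypothetical: return True if the last probe was acknowledged
    within `timeout` seconds."""
    raise NotImplementedError

def search_plpmtu() -> int:
    """Binary-search between a known-good size and a failing size."""
    good, bad = BASE_PLPMTU, MAX_PLPMTU + 1
    while bad - good > 1:
        probe = (good + bad) // 2
        delivered = False
        # Retrying the probe is the simple heuristic hinted at above:
        # a single loss may be congestion; repeated loss of only the
        # large probe is treated as "too big for the path".
        for _ in range(PROBE_TRIES):
            send_probe(probe)
            if probe_acked():
                delivered = True
                break
        if delivered:
            good = probe   # probe fits; try something larger
        else:
            bad = probe    # probe likely exceeds the path MTU
    return good            # largest size confirmed to be delivered
```

The sketch also shows why the quoted caveat matters for DNS: the loop only works when the sender learns whether each probe arrived, which a unidirectional, unacknowledged UDP exchange does not provide.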