In January 1995, the RFC Editor published RFC 1752: “The Recommendation for the IP Next Generation Protocol,” which included this interesting perspective on Internet security:
We feel that an improvement in the basic level of security in the Internet is vital to its continued success. Users must be able to assume that their exchanges are safe from tampering, diversion and exposure. Organizations that wish to use the Internet to conduct business must be able to have a high level of confidence in the identity of their correspondents and in the security of their communications. The goal is to provide strong protection as a matter of course throughout the Internet.
The Internet is a security officer’s nightmare: so much openness, packet traffic so easy to capture (and/or spoof!), and every opportunity to send all manner of unwanted traffic. It was built as a research network, hosted by institutions that were (1) professionally responsible and (2) interested in working together collegially.
So, in the 19 years since the publication of that statement, have we really failed to address the stated goal?
The gut-reaction answer is “yes!”, but I think we should be careful about how we review history. The reality is that there has been slow, steady improvement in the basic level of security in the Internet, in response to known, credible threats. Very few people use cleartext passwords or the unencrypted POP protocol to get their email. We are, collectively, much better at dealing with phishing attempts, at least in business contexts. Networks are operated at business-level reliability, which includes regular monitoring for odd activity. People don’t generally experience diversion of or tampering with their traffic, at least not in ways that keep them from getting done what they set out to do. So, yes, we don’t have strong protection as a matter of course throughout the Internet, but we do have a functioning and functional Internet.
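To make the POP example concrete, here is a minimal sketch using Python’s standard poplib; the host and credentials are hypothetical, and the commented-out lines show the cleartext pattern that has largely disappeared:

```python
import poplib

# Hypothetical host and credentials, for illustration only.
HOST, USER, PASSWORD = "mail.example.net", "alice", "s3cret"

# The old pattern: plain POP3 on port 110 sends the password
# in cleartext, visible to anyone capturing packets on the path.
#   conn = poplib.POP3(HOST)

# What most mail clients do today: POP3 wrapped in TLS (port 995),
# so the login and message contents are encrypted in transit.
conn = poplib.POP3_SSL(HOST)
conn.user(USER)
conn.pass_(PASSWORD)
print(len(conn.list()[1]), "messages waiting")
conn.quit()
```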
In my opinion, the big difference the past year’s revelations about government surveillance make is a step-function change in our understanding of credible threats. Because those threats were not known (or believed), we never ramped up to address them; our security mechanisms have not kept the pace we needed.
As we look to close that gap, we should seek a path that does not impede our ability to keep that functioning and functional Internet. Andrei Robachevsky recently published “The Danger Of The New Internet Choke Points” on the Internet Technology Matters blog, which explains the Internet Society’s perspective on how to achieve security through collaborative stewardship. From the post:
In our paper submitted to the W3C/IAB workshop on “Strengthening the Internet Against Pervasive Monitoring” (STRINT), we looked at the problem of pervasive monitoring from an architectural point of view. We identified some components of Internet infrastructure that provide attractive opportunities for wholesale monitoring and/or interception, and, therefore, represent architectural vulnerabilities.
Can their impact be mitigated? And how? Can the Internet evolve to reduce such vulnerabilities, and what would be the driving forces? What are the forces that could prevent this from happening? We pondered these questions, too, and encourage you to read our paper, provide feedback in the comments below, and engage in the dialog that will be coming up at IETF 89 in London.
We welcome your feedback, too!