

IETF and Crypto Zealots

I’ve been prompted to write this brief opinion piece in response to a recent article posted on CircleID by Tony Rutkowski, in which he characterises the IETF as a collection of “crypto zealots.” He offers the view that the IETF is behaving irresponsibly in attempting to place as much of the Internet’s protocols as it possibly can behind session-level encryption. He argues that ETSI’s work on middlebox security protocols is a more responsible approach, and that the enthusiastic application of TLS in IETF protocol standards will only provide impetus for regulators to coerce network operators into actively blocking TLS sessions in their networks.

Has the IETF got it wrong? Is there a core of crypto zealots in the IETF that are pushing an extreme agenda about encryption?

It appears, in retrospect, that we were all somewhat naive some decades ago when we designed and used protocols that passed their information in the clear. But perhaps that’s a somewhat unfair characterisation. For many years the Internet was not seen as the new global communications protocol. It was a far less auspicious experiment in packet-switched network design. Its escape from the laboratory into the environment at large was perhaps due more to the lack of credible alternatives that enjoyed the support of the computer industry than to the simplicity and inherent scalability of its design. Nevertheless, encryption of either the payload or even the protocols was not a big thing at the time.

Yes, we knew that it was possible in the days of Ethernet common bus networks to turn on promiscuous mode and listen to all traffic on the wire, but we all thought that only network administrators knew how to do that, and if you couldn’t trust a net admin, then who could you trust? The shift to WiFi heralded another rude awakening. Now my data, including all my passwords, was being openly broadcast for anyone savvy enough to listen to, and it all began to feel a little more uncomfortable. But there was the reassurance that the motives of the folk listening in on my traffic were both noble and pure. They twiddled with my TCP control settings on the fly so that I could not be too greedy in using the resources of their precious network. They intercepted my web traffic and served it from a local cache only to make my browsing experience faster. They listened in on my DNS queries and selectively altered the responses only to protect me. Yes, folk were listening in on me, but evidently, that was because they wanted to make my life better, faster, and more efficient. As Hal Varian, the Chief Economist of Google, once said, spam is only the result of incomplete information about the user. If the originator of the annoying message really knew all about you, it would not be spam, but a friendly, timely and very helpful suggestion. Or at least that’s what we were told. All this was making the Internet faster, more helpful and, presumably by a very twisted logic, more secure.

However, all this naive trust in the network was to change forever with just two words.

Those words were, of course, “Edward Snowden.”

The material released by Edward Snowden painted a picture of a world based on comprehensive digital surveillance by agencies of the United States Government. It’s one thing to try and eavesdrop on the bad people, but it’s quite another to take this approach to dizzying new heights and turn eavesdropping into a huge covert exercise that gathers literally everyone into its net. As in George Orwell’s 1984, the vision espoused within these agencies seemed to be heading towards capturing not only every person and every deed, but even every thought.

It was unsurprising to see the IETF voice a more widespread concern about the erosion of each individual’s personal privacy as a consequence of these disclosures. From RFC 7258:

“Pervasive Monitoring (PM) is widespread (and often covert) surveillance through intrusive gathering of protocol artefacts, including application content, or protocol metadata such as headers. Active or passive wiretaps and traffic analysis, (e.g., correlation, timing or measuring packet sizes), or subverting the cryptographic keys used to secure protocols can also be used as part of pervasive monitoring. PM is distinguished by being indiscriminate and very large scale, rather than by introducing new types of technical compromise. The IETF community’s technical assessment is that PM is an attack on the privacy of Internet users and organisations.”

The IETF took the stance that it “will strive to produce specifications that mitigate pervasive monitoring attacks.”

Strong stuff indeed. It certainly seems as if the Internet is sealing up its once very loose seams. The network that carries our packets is no longer a trusted associate that enables communications. It is instead viewed as a toxic, hostile environment that simply cannot be trusted. And if it cannot be trusted, then no information should be exposed to it, and all transactions should be verified by the user.

It also seems that this message is finding receptive ears. We’ve seen programs such as Let’s Encrypt bring the price of domain name public key certificates down to free. As a consequence, secure web services are no longer an esoteric luxury but an affordable commodity. And we are now seeing one of the most popular browsers in today’s Internet voicing an intention to emblazon open web pages as “insecure”. The same browser will also prefer to use an encrypted transport wherever and whenever possible (QUIC), concealing not only the payload but also the entirety of the transport protocol from the network. It seems that about the only way a packet can hope to pass across the Internet is within the secure payload of a TLS session, and this has not escaped anyone’s attention. A good starting position is to use port 443 (HTTPS). A better position is to use QUIC. Not only is the payload encrypted, but the entire transport flow control is covered by the veil of encryption.
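To make that picture a little more concrete, here is a minimal sketch (my illustration, not from the article) of a client opening a TLS session on port 443 using Python’s standard ssl module; the host name example.com is just a placeholder. Once the handshake completes, an on-path observer sees the endpoints, timing and packet sizes, but not the request or response carried inside the session.

```python
# Minimal sketch: a client speaking TLS on port 443 using Python's
# standard-library ssl module. "example.com" is a placeholder host.
# Everything after the handshake crosses the network as ciphertext.
import socket
import ssl

context = ssl.create_default_context()  # validates the server's certificate chain

with socket.create_connection(("example.com", 443)) as raw_sock:
    with context.wrap_socket(raw_sock, server_hostname="example.com") as tls_sock:
        # The HTTP request below is never visible on the wire in the clear.
        tls_sock.sendall(b"GET / HTTP/1.1\r\nHost: example.com\r\nConnection: close\r\n\r\n")
        print(tls_sock.version(), "session established,", len(tls_sock.recv(4096)), "bytes read")
```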


But the chorus is not one of universal acclaim for these measures. Some folk have not only become accustomed to a network that spews out information, but have come to rely on it. As Tony Rutkowski’s article points out, there is an entire world of middleware in our networks that relies on visibility into user traffic that extends right into individual sessions. Even so-called secure sessions are vulnerable. Various network-level DDOS mitigation methods rely on the ability to identify malicious or otherwise hostile traffic patterns within the network. It seems that many network operators see it as some kind of right to be able to inspect network traffic.

Side note: This is not a recent development. When Australia was first connected to Europe in 1872 via the overland telegraph, telegrams to and from the United Kingdom were outrageously expensive. A thirty-word telegram cost the same as three weeks’ average wages. Little wonder that the press, a major user of the service, took to using codes both to improve the compression rate and to attempt to hide their messages from their press rivals. The reaction was perhaps entirely predictable: all codes and ciphertext were banned from the Australian telegraph service.

Will the widespread use of robust encryption destroy any form of content caching in the network? This seems unlikely. For example, while it’s true that third-party content caching is frustrated by session encryption, that does not mean that content is no longer cached. What has happened is the rise of the content distribution network, where the content caches are operated by the original content publisher or their accredited agents. The result for the user is local content delivery coupled with carriage encryption and the ability to validate that the material being provided is genuine.

Perhaps the more critical question is whether the uptake of encryption implies some dire predicament for government security agencies. It seems unlikely. There is little doubt in my mind that those who have a need to worry about eavesdropping use encryption as a matter of course. It seems that the concern from these agencies is not about having a clear window on the online activities of obvious targets, but the desire to see across the entire online environment and harvest patterns and inferences from this larger pool of data.

And therein lies the tension. Individually, we still value some semblance of personal privacy. We’d like to protect our digital credentials if only to secure ourselves against theft and other forms of personal damage. At the same time, we’d like to ensure that agencies who have a protective role in our society are able to operate effectively and gather intelligence from online activities. But where do we draw the line? Should we be forced to eschew online encryption and revert to open protocols simply to feed the unquenchable thirst of these agencies for more and more data about each and every online transaction? Or should we be in a position to trust that our communications are not openly available to anyone with the means and motivation to peer into the network?

There is no doubt that the current technology stance, as espoused in the IETF, is weighted heavily on the side of privacy. We can expect more use of TLS, more use of obscured transport protocols such as QUIC, and far more paranoid behaviour from applications which no longer trust the network. Trust, once eroded, is fiendishly difficult to restore, and in this case, the network has lost the trust of the applications that operate across it and the trust of the users that drive these applications. I suspect that the case for winding back the level of encryption at the network layer is long gone, and it’s not coming back anytime soon!

However, I also suspect that the intelligence agencies are already focussing elsewhere. If the network is no longer the rich vein of data that it used to be, then the data collected by content servers is a more than ample replacement. If the large content factories have collected such a rich profile of my activities, then it seems entirely logical that they will be placed under considerable pressure to selectively share that profile with others. So I’m not optimistic that I have any greater level of personal privacy than I had before. Probably less.

Meet the new boss. Same as the old boss.

Side note: The Who’s classic song “Won’t Get Fooled Again”, written by Pete Townshend, was first recorded as part of the aborted LifeHouse project in early 1971. It was re-recorded with a synthesizer track in April 1971 and released as a single and on the Who’s Next album in August 1971. The song formed the climax of their stage set, and it is about as old as the Internet!

By Geoff Huston, Author & Chief Scientist at APNIC

(The above views do not necessarily represent the views of the Asia Pacific Network Information Centre.)


Comments

Thoughtful reflections Anthony Rutkowski  –  Mar 8, 2018 6:53 PM

Thanks. Unfortunately, we have many layers of communications-related challenges today. They are not so much new as arising faster, on a larger scale, and more complex. The national (especially in Washington) and world news every day are also a reminder that we have some serious contemporary meta-challenges with no obvious communications technology solutions, which makes one yearn for the simplicity of habitation at the Ganden Sumtseling Monastery (in the photo).

I am fully in favor of true end-to-end - and TLS is not that Karl Auerbach  –  Mar 14, 2018 12:37 AM

My perspective on this may be colored by the fact that I was working on true end-to-end protection of network traffic even before there was an Internet. The approach we always used back in the 1970s was to have a layer between IP and TCP that encrypted packets, not connections. IPSEC resembles that approach, and, as would be expected, the really hard part is not the basic protection but, rather, the key management and replay protection. I still prefer the IPSEC approach to the SSL/TLS approach.

What I do not like about the SSL/TLS (including TLS 1.3) approach is that unless an endpoint is really careful about inspecting certificates and chains of delegation it may end up chatting via an unwanted proxy that will have access to the clear data.
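As an illustration of that point (my sketch, not part of the comment), the fragment below shows an endpoint that insists on certificate-chain validation and hostname matching before sending any data; an interposed proxy presenting its own certificate would fail the handshake rather than silently gaining access to the clear data. The host name example.com is again just a placeholder.

```python
# Illustrative sketch only: an endpoint that refuses the session unless the
# peer's certificate chain validates against its trust anchors and the
# certificate names match the peer it intended to reach.
import socket
import ssl

context = ssl.create_default_context()
context.check_hostname = True            # certificate names must match the intended peer
context.verify_mode = ssl.CERT_REQUIRED  # an unverifiable chain of delegation aborts the handshake

try:
    with socket.create_connection(("example.com", 443)) as raw_sock:
        with context.wrap_socket(raw_sock, server_hostname="example.com") as tls_sock:
            # Inspect who we actually ended up talking to before trusting the channel.
            print("peer certificate subject:", tls_sock.getpeercert()["subject"])
except ssl.SSLCertVerificationError as err:
    # A proxy substituting its own certificate lands here instead of seeing clear data.
    print("refusing session:", err)
```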

It is that ability to slip into the middle that is one of the things network operators claim as valuable. I tend to disagree. If a proxy/web cache is so valuable to me as a user, then why should an operator hide this thing from the user? My, decidedly suspicious, thought is that perhaps the operator is rather more interested in mining user data from the connection and that any performance gains are merely a nice side effect.

I am a strong advocate of the stupid network concept - which is a shorthand for saying that the internet’s data plane should be as simpleminded as is reasonable.  I moderate that by accepting that control planes can be rather complex.

Clearly internet control planes could benefit from knowing more about the traffic being carried.  But linkages from the data plane to the control planes always invite security leakage as well as reliability concerns.

When I was at Cisco I engaged in some work to see if I could do better control plane decisions - or rather, give endpoint clients and servers better information about which pairing of client to server would best suit the proposed network communication. I came up with a thing I called the Fast Path Characterization Protocol - a highly incomplete design in which a client could provide a description of the proposed communication (I used an IntServ TSpec) and ask the net for information about the potential paths and peers that could be used. The incomplete work is still up on the net at https://www.cavebear.com/archive/fpcp/fpcp-sept-19-2000.html

I do not accept the proposition that network providers will make the best decisions about how to satisfy user needs. Rather, I believe that providers ought to make information available to users, or to users’ agents (human or, more likely, software), and let the users make the choices of how to use network services.

I’ve spent decades building tools (and using tools) to diagnose and repair network problems.  Yes, security protections get in the way of network repair.  But that difficulty does not lead me to the conclusion that we should open the network to deep packet inspection by providers any more than the fact a person may occasionally need to be examined by a doctor should lead to a requirement that people should walk around everywhere and at all times naked.

The idea that a network operator must depend on user traffic - which is typically bursty and non-reproducible - to diagnose problems strikes me, the grandson of a radio repairman and son of a TV repairman, as a sign that those who depend on that traffic are not sufficiently skilled to have learned the value of test generators and reproducible traffic. Sure, user traffic serves as an initial indicator of trouble - although one would hope that a provider might have systems to learn of problems before users do. However, when getting down to the hard job of problem isolation and repair, depending on user traffic - and thus arguing that user traffic should be open to view at all times - tells me that the provider is an amateur.

>I do not accept the proposition that Charles Christopher  –  Mar 14, 2018 1:39 AM

>I do not accept the proposition that network providers will make the best decisions about how to satisfy user needs. Rather, I believe, that providers ought to make information available to users, or to user's agents (human or, more likely, software) and let the users make the choices of how to use network services.

Agreed. Trust is how the issue is being discussed, but choice is the primary concern. This is about choice, specifically not taking away the choice of good actors. Bad actors likely never trusted the network to begin with; only the good actors did. As the good actors lost trust, they made choices that made them ambiguous (network privacy techniques) relative to the bad actors, and that is not the good actors' problem (some fail to realize this). The lack of network trust and choice for the bad actors is a "cost of doing business", and thus bad actors don't get to complain; it's their career choice. They made their bed, now they can lie in it. Don't like it? Choose an honest career, and network choice is returned to you.

The "anarchists" I am aware of are seeking the positive dynamic of increased choice. They also tend to apply the golden rule in their lives: they expect to be able to do what everyone else does. That means there is accountability; they can defend themselves when needed, and others can too. Anarchy is in fact not always chaos, although in the limit it can be. Most rational people do in fact seek "anarchy", but not in the extreme; they simply seek to increase choice as government tries to decrease choice. Such people call themselves libertarians.

If you want an example of chaos, walk into the well-defined, rules-based motor vehicle office, fill out the wrong form for your purpose and fill it out incorrectly (for your safety, be sure NOT to sign it), then hand it to the lady behind the window with a serious look on your face. The network chokes on that packet. That's what happens in the limit of taking all choice away. Total lack of choice DOES result in chaos, always.

If you offer me more choice, it tends to increase my trust. I will feel in control; the more choices, the more control and the more trust. There is generally peace. If you take away my choice(s), you will destroy my trust. I will feel vulnerable, I will do something to regain control, and you will not like what I do:

"Learn the rules so you know how to break them properly." - Dalai Lama

"Among the many misdeeds of the British rule in India, history will look upon the act depriving a whole nation of arms as the blackest." - Mahatma Gandhi
