The Worm and the Wiretap

According to recent news reports, the administration wants new laws requiring that all communications systems contain “back doors” in their cryptosystems: ways for law enforcement and intelligence agencies to read messages even though they’re encrypted. By coincidence, there have also been articles on the Stuxnet computer worm, a very sophisticated piece of malware that many people attribute to an arm of some government. The latter story shows why cryptographic back doors, known generically as “key escrow”, are a bad idea.

This isn’t the first time the government has worried about the proliferation of cryptography in the civilian sector. In the 1970s, there were attempts to discourage or even suppress such research. Fifteen years ago, the administration pushed the so-called “Clipper chip”, a government-designed cipher that was nevertheless readable by anyone who knew certain secrets. Opponents of the Clipper chip argued that the scheme was inherently risky, that it was all but impossible to build such a thing without creating new vulnerabilities. Time has shown that we were correct to be afraid. Time has also shown that the government has almost always managed to get around encryption, as it did recently in the case of the ring of Russian spies living in American suburbs.

Cryptography, it turns out, is far more complex than one would think. It isn’t enough to have an unbreakable cipher; one has to use that cipher exactly right, in a stylized exchange of messages known as a cryptographic protocol. If the protocol is designed incorrectly, an attacker can often read encrypted messages even though the underlying cipher remains secure. The oldest cryptographic protocol in the unclassified literature was published in 1978; a previously unsuspected flaw was found in 1996, and this protocol, in modern notation, is only three messages long. More recently, a serious flaw was found in crucial cryptographic components of the World-Wide Web. All of these flaws were blindingly obvious in retrospect, yet they had gone unnoticed for years. And all of these were for the simplest case: two parties trying to communicate securely.
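
The 1978 protocol is generally identified as the Needham-Schroeder public-key protocol, and the 1996 flaw as Gavin Lowe’s man-in-the-middle attack; a sketch, under that reading. Na and Nb are fresh random nonces, and {X}Kb means X encrypted under B’s public key:

    1. A -> B: {Na, A}Kb
    2. B -> A: {Na, Nb}Ka
    3. A -> B: {Nb}Kb

Lowe’s attack: if A ever runs the protocol honestly with a malicious party E, E can decrypt A’s messages, re-encrypt them under B’s key, and relay them, leaving B convinced it is talking to A while E learns both nonces. The repair is a single extra field, changing message 2 to {Na, Nb, B}Ka so that A can spot the substituted identity. Roughly eighteen years passed before anyone noticed.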

The administration’s proposal would add a third party to every communication: the government. That demands a much more complicated protocol. Will these new variants be correct? History suggests not. (There is a technical paper on that result.) Many previous attempts to add such features have produced new, easily exploited security flaws rather than better law-enforcement access; instead of helping the government, the schemes created fresh opportunities for serious misuse. In that respect, the new proposal is much worse than the Clipper chip. The Clipper chip used a single protocol, designed by the NSA, and, as noted, even they didn’t get it right. The new proposal would require every vendor to devise its own key escrow mechanism. The odds of everyone getting this right are exceedingly low; others have created security flaws when trying.
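
To make the extra complexity concrete, here is a deliberately minimal sketch, not taken from any specific proposal, of what escrow does to even the simplest encrypted-message flow. K is a fresh message key, M the message, and Kescrow the government’s escrow key:

    Without escrow:  A -> B: {K}Kb, {M}K
    With escrow:     A -> B: {K}Kb, {K}Kescrow, {M}K

That one extra field drags in a crowd of new questions: who issues and certifies Kescrow, how it is rotated or revoked, whether B must verify that the escrow blob really contains K, and what stops a sender from filling it with garbage. Every answer is more protocol machinery, and every piece of machinery is another place for the kind of subtle flaw described above.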

Complexity in the protocols isn’t the only problem; protocols have to be implemented in computer programs, and more complex code generally means more exploitable bugs. In the most notorious incident of this type, a cell phone switch in Greece was hacked by a party still unknown. The so-called “lawful intercept” mechanisms in the switch, the features designed to let the police wiretap calls easily, were abused by the attacker to monitor at least a hundred cell phones, up to and including the prime minister’s. This attack would not have been possible if the vendor hadn’t written the lawful intercept code.
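
A hypothetical sketch in Python shows the shape of the risk; the names and structure here are invented for illustration, not taken from the actual switch software. An intercept hook is a silent copy of traffic gated by a target list, so whoever can write to that list owns the wiretap:

    # Hypothetical lawful-intercept hook, reduced to its essence.
    intercept_targets = set()   # supposed to be written only under a warrant

    def mirror_to_monitoring_port(call):
        # The silent copy: the subscriber never sees any sign of this.
        print(f"copy of {call!r} sent to monitoring port")

    def deliver(call):
        print(f"{call!r} delivered normally")

    def route_call(call, subscriber):
        # Nothing in this path asks *who* added the subscriber to the list.
        if subscriber in intercept_targets:
            mirror_to_monitoring_port(call)
        deliver(call)

    # The attack, in miniature: code that writes to the list directly,
    # bypassing whatever warrant-checking and audit layer was supposed
    # to sit in front of it.
    intercept_targets.add("+30-69-0000000")   # hypothetical number
    route_call("call 17", "+30-69-0000000")

Once the hook exists, every path that can modify the target list is part of the attack surface, which is reportedly what happened in Greece: the rogue code activated intercepts while sidestepping the switch’s audit subsystem.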

Quite apart from purely technical vulnerabilities, any form of key escrow creates a new class of trusted parties: those in the government who have access to the database of keys. One need not assume that the authorized agencies will themselves abuse their authority (though that has certainly happened in the past), but there will always be spies. What happens if the next Robert Hanssen works in the key escrow center?

Which governments should have the right to monitor traffic? This proposal has come from the U.S. government, but India has stated a similar requirement. The U.K.? China? Russia? Iran? The government of the service provider plus the government of wherever the device is currently located? This last seems to be the most logical, but it’s also the most complex, because it requires that computers and mobile phones learn and relearn the identity of the authorized monitoring parties. This, of course, requires more cryptographic protocols and more programs. The potential for mischief here is clear: can a hacker or a spy deceive the device and hence trick it into letting someone else decrypt the traffic?
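
Spelled out, the “device learns its monitor” model needs something like the following steps; this is a minimal sketch of what any location-dependent scheme would require, not a description of an actual proposal:

    device boots, or crosses a border
      -> determines its current jurisdiction     (GPS? cell tower? a configuration push?)
      -> fetches the escrow key for that jurisdiction   (issued by whom? signed how?)
      -> wraps every session key to that escrow key as well

Subvert any of those steps, by spoofing the location signal or the key-distribution channel, and the “authorized monitoring party” becomes whoever mounted the attack.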

Given all of this fragility, who can actually cause trouble? The existence of the Stuxnet worm, by far the most sophisticated piece of malware ever found, shows that there are high-end attackers out there, probably supported by a government, who seem able to crack most computer systems. Assuredly, they will be able to exploit even minute flaws in a system designed to provide back doors around the cryptography. The U.S. government reportedly promulgated the Data Encryption Standard, the first cryptosystem intended to protect unclassified data, after it learned that the Soviets had monitored American grain dealers’ phone calls. Today, billions of dollars are at risk from foreign interception of purely commercial communications: high-tech intellectual property, mergers and acquisitions, and more. This proposal would expose all of that.

It may very well be that more serious harm would be averted if the government had more ability to monitor encrypted traffic. But the question isn’t just the tradeoff between privacy and security; there is also a tradeoff between different kinds of security. Given what is at risk from key escrow (every sensitive transaction on the Internet, from e-commerce on up, could be compromised), we cannot afford the gamble. Only stupid criminals will be caught this way; serious enemies, foreign spies, al Qaeda, and the like, will use their own cryptography, perhaps wrapped in officially approved mechanisms to avoid suspicion. The history of key escrow tells us there will be more problems; historically, law enforcement has managed to work around cryptography anyway.

By Steven Bellovin, Professor of Computer Science at Columbia University

Bellovin is the co-author of Firewalls and Internet Security: Repelling the Wily Hacker, and holds several patents on cryptographic and network protocols. He has served on many National Research Council study committees, including those on information systems trustworthiness, the privacy implications of authentication technologies, and cybersecurity research needs.
