Those who care about security and usability—that is, those who care about security in the real world—have long known that PGP isn’t usable by most people. It’s not just a lack of user-friendliness; it’s downright user-hostile. Nor is modern professional crypto any better. What should be done? How should crypto in general, and PGP in particular, appear to the user? I don’t claim to know, but let me pose a few questions. These are conceptual questions, but until they’re answered, those who really understand user interfaces can’t begin to build a suitable solution.
There are a few assumptions I want to start with. First, for the foreseeable future, there will be a mix of secure—encrypted and/or digitally signed—and insecure email. It can’t be otherwise; the net is too large to flash-cut anything.
Second, even individuals who sometimes use crypto won’t necessarily have it available all the time. They may be using a machine without their keys, or without the necessary software, or they may be temporarily using a web mailer.
Third, our end systems are not as secure as we’d like.
Fourth, certain concepts (certificates, key fingerprints, web of trust, etc.) are far too geeky and must be hidden. By “geeky” I mean that the concepts are quite unfamiliar to most people; unless and until we can find an analogy that fits people’s mental models, they have to be hidden.
Should users request security?
In an ideal world, all email would be secure. We’re not in such a world, and per my first assumption we’re not going to be for a very long time. Furthermore, per my second assumption, even PGP users can’t always receive encrypted email. Should senders be forced to request encryption explicitly? Should they be able to turn it off if it’s on by default? What about email to a group of recipients, some of whom can receive PGP-protected email and some of whom cannot? How should that be indicated to the sender? I could make a very good case that this situation shouldn’t be allowed—but I could make an equally good case that it should.
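To make the mixed-recipient problem concrete, here is a minimal Python sketch of the decision a mailer faces. Everything in it is hypothetical (the key store, the function names, the policy exception); it illustrates the shape of the problem, not any real mailer’s API.

```python
# Hypothetical sketch of the mixed-recipient decision. The key_store
# mapping and all names here are illustrative, not a real mailer's API.

class MixedRecipientsError(Exception):
    """Raised when only some recipients can receive encrypted mail."""
    def __init__(self, can_encrypt, cannot_encrypt):
        super().__init__("some recipients cannot receive encrypted mail")
        self.can_encrypt = can_encrypt
        self.cannot_encrypt = cannot_encrypt

def plan_delivery(recipients, key_store):
    """Split recipients by whether we hold a public key for them."""
    can_encrypt = [r for r in recipients if r in key_store]
    cannot_encrypt = [r for r in recipients if r not in key_store]
    if not cannot_encrypt:
        return {"encrypted": can_encrypt, "plaintext": []}
    if not can_encrypt:
        return {"encrypted": [], "plaintext": cannot_encrypt}
    # The open policy question from the text: refuse, silently downgrade
    # everyone to plaintext, or send two differently protected copies?
    # Each choice either leaks content or surprises the sender.
    raise MixedRecipientsError(can_encrypt, cannot_encrypt)
```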
How should encrypted email be indicated to the recipient?
If I have reason to think that someone hostile is watching my email or my correspondents’, I would not expect certain things to be said in unprotected email—and if they are said, I might be suspicious about what’s really going on.
How should signed email be indicated to the recipient?
Digitally signed email gives a higher degree of assurance of who sent it. Not certainty by any means, but higher, and perhaps considerably higher. How should this distinction be shown? This is more or less the same problem as indicating an encrypted web site or distinguishing a phishing site from the real one: you’re adding a new indicator that people aren’t accustomed to looking for. In fact, it may be worse. The real sites for many banks are always encrypted, but per my assumptions a lot of email will be in the clear even if it’s from folks who sometimes use PGP. (Having some sort of non-forgeable “seal” could create its own problems: attackers will spoof it, and users may have more trust than they should. Users want email that’s really from the bank, not email that’s signed by someone random, possibly with the bank’s logo.)
How do we protect private decryption keys?
In an ideal world, my decryption key could sit in my mailer, which would quietly decrypt anything sent to me. Mailers, though, are huge, complex, ungainly things; should we trust them with keys when they’re not needed? Also, having to supply a key is a sure sign that I’ve just received some encrypted email—but most people won’t remember that not supplying the key means the email wasn’t secure. This is especially true if the mailer caches the decrypted private key during an email conversation. Should we use external hardware? Apart from the fact that some interesting platforms (e.g., Apple’s smartphones and tablets) don’t have useful external ports, the insecure host hypothesis means that malware could be feeding encrypted emails into this outboard hardware and silently sending the cleartext back to an adversary.
Oh yes—will most users choose a key-protecting passphrase of “123456”? Experience suggests yes. Get rid of passphrases? Sure—but what do we replace them with? Two-factor authentication? Many tokens have their own usability challenges, even if users don’t have to supply passphrases, fingerprints, DNA samples, or worse. Using a fingerprint reader, as found on recent iPhones (and on some laptops going back a fair number of years), to unlock keys assumes that the device is secure, and per my third assumption, hosts aren’t; besides, it’s awfully hard to convert a biometric into a key-encrypting key. (Yes, there have been some papers on the subject. It’s still hard, and I’m not convinced it’s been done securely enough.)
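For concreteness, here is one common way a mailer could wrap a private key under a passphrase, sketched with the pyca/cryptography package (the library calls are real; the function names and parameter choices are mine). Note what the sketch also demonstrates: a memory-hard KDF slows down offline guessing, but no parameter choice rescues a passphrase of “123456”.

```python
# A sketch of passphrase-based key wrapping using the pyca/cryptography
# package: derive a key-encrypting key (KEK) with scrypt, then seal the
# private key with AES-GCM. Function names are illustrative.
import os

from cryptography.hazmat.primitives.ciphers.aead import AESGCM
from cryptography.hazmat.primitives.kdf.scrypt import Scrypt

def wrap_private_key(private_key_bytes: bytes, passphrase: str) -> dict:
    salt = os.urandom(16)
    # Memory-hard KDF: makes each offline guess expensive, but a
    # passphrase like "123456" still falls to a tiny dictionary.
    kek = Scrypt(salt=salt, length=32, n=2**15, r=8, p=1).derive(
        passphrase.encode())
    nonce = os.urandom(12)
    sealed = AESGCM(kek).encrypt(nonce, private_key_bytes, None)
    return {"salt": salt, "nonce": nonce, "sealed": sealed}

def unwrap_private_key(blob: dict, passphrase: str) -> bytes:
    kek = Scrypt(salt=blob["salt"], length=32, n=2**15, r=8, p=1).derive(
        passphrase.encode())
    # Raises InvalidTag on a wrong passphrase or tampered ciphertext.
    return AESGCM(kek).decrypt(blob["nonce"], blob["sealed"], None)
```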
How do we protect private signing keys?
Similar considerations apply to signing keys. In fact, the problem is worse: I only need my decryption key when I receive encrypted email (which by hypothesis will be rather unusual), but I may want to sign everything I send as a way to help bootstrap crypto.
The difficulty of protecting my private keys is why I personally don’t sign all of my outbound email.
What should key exchange look like?
In order to send secure email to someone, your mailer has to have access to their public key; for them to verify your signed email, their mailer has to have access to your public key. These bindings have to be (adequately) secure. How should this be done? The “official” way, with certificates, fingerprints, and the web of trust, is unacceptably complex. Is there a good analogy that is also acceptably secure?
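To see why fingerprints in particular are a hard sell, here is roughly what we ask users to compare out of band. This is a simplification of my own: it hashes raw key bytes with SHA-256, whereas real OpenPGP fingerprints are computed over a specific packet encoding. The user-facing burden is the same either way.

```python
# Illustrative only: what a user-visible key fingerprint boils down to.
# (OpenPGP computes real fingerprints over a specific packet encoding;
# hashing raw key bytes with SHA-256 is a simplification.)
import hashlib

def display_fingerprint(public_key_bytes: bytes) -> str:
    digest = hashlib.sha256(public_key_bytes).hexdigest().upper()
    return " ".join(digest[i:i + 4] for i in range(0, len(digest), 4))

# Two users are expected to read this to each other over the phone and
# compare it group by group: 16 four-character hex groups for SHA-256.
print(display_fingerprint(b"example key material"))
```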
How should exceptions be handled?
How do we handle unusual conditions, such as key change? Key revocation?
What is our threat model?
Who is the enemy? A sibling? A suspicious spouse? An employer? Criminal hackers? Hackers with government backing? A law enforcement agency with a stack of subpoenas and court orders? A major intelligence agency?
This matters a lot. There are relatively simple solutions to some of the key-handling problems for the lower threat models: provider-stored, self-signed certificates, key continuity, key-caching based on all previous emails, and more. These strategies are not very useful against, say, the NSA or the PLA’s equivalent, but the loudest calls for ubiquitous encryption are from people who are worried about just such threats.
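As one example of those simpler strategies, key continuity (trust on first use, as SSH popularized) fits in a few lines; the cache path and function names below are hypothetical.

```python
# A sketch of key continuity ("trust on first use", as SSH popularized).
# The cache path and function names are hypothetical.
import hashlib
import json
import pathlib

CACHE = pathlib.Path("~/.mail-key-cache.json").expanduser()

def check_key_continuity(sender: str, public_key_bytes: bytes) -> str:
    cache = json.loads(CACHE.read_text()) if CACHE.exists() else {}
    fp = hashlib.sha256(public_key_bytes).hexdigest()
    known = cache.get(sender)
    if known is None:
        cache[sender] = fp            # first contact: record silently
        CACHE.write_text(json.dumps(cache))
        return "first-use"
    if known == fp:
        return "match"                # same key as all previous email
    return "changed"                  # a reissued key -- or an attack
```

This stops a casual snoop who shows up after the first exchange, but an adversary who can intercept that first message, or compel the provider, defeats it entirely; that is exactly the gap between the lower threat models and the ones driving the loudest calls for encryption.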
The thing that distinguishes encryption from just about any other user interface question is that by definition, here we have an enemy. (Phishing? If we had usable crypto, phishing wouldn’t be a problem…)
I have some tentative answers to some of these questions, but mostly for lower threat models. Is that good enough? Is it worth the effort?