For one reason or another – increasing surveillance powers, increasing awareness of the extent to which online activities are tracked by myriad data hogs, Edward Snowden – crypto parties have come somewhat back into vogue over the last few years after a 20-plus-year hiatus. The idea behind crypto parties is that you get a bunch of people together and they all sign each other’s keys. Fun! For some value of fun.
This is all part of the web of trust that is supposed to accrue when you use public key cryptography software like PGP or GPG: each new signature on a person’s public key strengthens the trust you can have that the key truly belongs to that person. In practice, the web of trust, PGP’s decentralized alternative to hierarchical public key infrastructure, does not scale well, and the early 1990s excitement about at least the PGP version of the idea died relatively quickly.
A few weeks ago, ORG Norwich held such a meeting and I went along to help run a workshop on when and how you want to use crypto. Like any security mechanism, encrypting email has its limits. Accordingly, before installing PGP and declaring, “Secure now!”, a little threat modeling is a fine thing. As bad as it can be to operate insecurely, it is much, much worse to operate under the false belief that you are more secure than you are because the measures you’ve taken don’t fit the risks you face.
For one thing, PGP does nothing to obscure metadata – that is, the record of who sent email to whom. Newer versions offer the option to encrypt the subject line, but then the question arises: how do you get busy people to read the message?
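The metadata point can be seen in miniature with Python’s standard email library. This is a hypothetical sketch, not real PGP output: the addresses are invented and the body is a placeholder standing in for a real ciphertext, but it shows that the envelope headers travel in the clear even when the content is encrypted:

```python
from email.message import EmailMessage

# Placeholder standing in for a real PGP-encrypted payload (elided).
ciphertext = (
    "-----BEGIN PGP MESSAGE-----\n"
    "...base64 ciphertext elided...\n"
    "-----END PGP MESSAGE-----\n"
)

msg = EmailMessage()
msg["From"] = "alice@example.org"   # visible to every mail server en route
msg["To"] = "bob@example.net"       # visible
msg["Subject"] = "Meeting notes"    # traditionally also visible
msg.set_content(ciphertext)

wire_format = msg.as_string()
# The encrypted body hides the content, but the headers do not:
assert "alice@example.org" in wire_format
assert "Meeting notes" in wire_format
```

Anyone who can observe the message in transit learns who wrote to whom, and when, regardless of how strong the encryption of the body is.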
For another thing, even if you meticulously encrypt your email, check that the recipient’s public key is correctly signed, and make no other mistakes, you are still dependent on your correspondent to take appropriate care of their archive of messages and not copy your message into a new email and send it out in plain text. The same is true of any other encrypted messaging program such as Signal; you depend on your correspondents to keep their database encrypted and either password-protect their phone and other devices or keep them inaccessible. And then, too, even the most meticulous correspondent can be persuaded to disclose their password.
For that reason, in some situations it may in fact be safer not to use encryption and remain conscious that anything you send may be copied and read. I’ve never believed that teenagers are innately better at using technology than their elders, but in this particular case they may provide role models: research has found that they are quite adept at using codes only they understand. To their grown-ups, it just looks like idle Facebook chatter.
Those who want to improve their own and others’ protection against privacy invasion therefore need to think through what exactly they’re trying to achieve.
– Who might want to attack you?
– What do they want?
– Are you a random target, the specific target, or a stepping stone to mount attacks on others?
– What do you want to protect?
– From whom do you want to protect it?
– What opportunities do they have?
– When are you most vulnerable?
– What are their resources?
– What are *your* resources?
– Who else’s security do you have to depend on whose decisions are out of your control?
At first glance, the simple answer to the first of those is “anyone and everyone”. This helpful threat pyramid shows the tradeoff between the complexity of the attack and the number of people who can execute it. If you are the target of a well-funded nation-state that wants to get you, just you, and nobody else but you, you’re probably hosed. Unless you’re a crack Andromedan hacker unit (Bellovin’s favorite arch-attacker), the imbalance of available resources will probably be insurmountable. If that’s your situation, you want expert help – for example, from Citizen Lab.
Most of us are not in that situation. Most of us are random targets; beyond a raw bigger-is-better principle, few criminals care whose bank account they raid or which database they copy credit card details from. Today’s highly interconnected world means that even a small random target may bring down other, much larger entities when an attacker leverages a foothold on our insignificant network to access the much larger ones that trust us. Recognizing who else you put at risk is an important part of thinking this through.
Conversely, the point about risks that are out of your control is important. Forcing everyone to use strong, well-designed passwords will not matter if the site they’re used for stores them with inadequate protections.
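What “adequate protections” means on the server side is, at a minimum, salted and stretched password hashing rather than plaintext storage. Here is a minimal sketch using Python’s standard library; the function names are mine and the iteration count is illustrative only (a production system would use a dedicated library such as argon2 and follow current guidance on work factors):

```python
import hashlib
import hmac
import os

ITERATIONS = 100_000  # illustrative; real deployments should follow current guidance

def hash_password(password: str) -> tuple[bytes, bytes]:
    """Return (salt, digest); the site stores both, never the password itself."""
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return hmac.compare_digest(candidate, digest)  # constant-time comparison

salt, digest = hash_password("correct horse battery staple")
ok = verify_password("correct horse battery staple", salt, digest)
bad = verify_password("wrong guess", salt, digest)
```

The point is the division of responsibility: no password policy you impose on users can compensate for a site that skips this step, and you usually have no way of knowing whether it has.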
The key point that most people forget: think about the individuals involved. Security is about practice, not just technology; as Bruce Schneier likes to say, it’s a process, not a product. If the policy you implement makes life hard for other people, they will eventually adopt workarounds that make their lives more manageable. They won’t tell you what they’ve done, and no one will be there to warn you where the risk is lurking.
Illustrations: Aladdin pantomime at Nottingham Playhouse, 2008 (via Wikimedia).
Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard – or follow on Twitter.