Information security concepts for activists, journalists & troublemakers

Jonathan Eyler-Werve

January 16, 2012

One Bush Street, San Francisco. (image: Thomas Hawk, CC BY-NC)

Part 1 of a Series: INFORMATION SECURITY FOR ACTIVISTS

During a recent Transparency/Accountability Initiative workshop, I led several discussions of information security for activists. In those discussions, I was told by incredibly smart leaders in the nonprofit technology space — people I really respect — that they had never given the matter much thought. This is a problem.

A bit about me: I’m not a security expert. I can’t talk shop on SSH flaws or random number generators. I am, instead, a practitioner of social ventures who cares about the people in our field. I can translate from the tech experts into usable, field-tested recommendations, but to do this I depend on peer review from experts, which I gratefully invite on this and all posts.

Prologue: Keep Calm and Carry On

There are a lot of reasons to be paranoid [1]. Our friends and allies are searched without warrant or probable cause beyond being politically active [2]. In the US, anti-corporate websites are accidentally banned without cause or recourse (websites of the powerful fare better) [3]. Journalists are detained at Internet cafes for attempting to file stories [4]. Email is routinely intercepted and stored by governments, without cause — in the United States, when this practice was challenged in court, Congress blocked the lawsuit and explicitly legalized the mass warrantless interception of email (Senator Obama voted yes) [5].

Much attention has been given to the potential of technology to connect, empower and accelerate movements. Less attention has been given to the fact that technology also empowers the adversaries of these movements. See Evgeny Morozov’s book The Net Delusion for a discussion of these issues, reviewed here and here.

This creates conflict between privacy advocates and controlling institutions; the conflict has been building slowly but is coming to a boil, particularly after Wikileaks and the Arab Spring. To quote a developer of the Tor privacy tools (used by the US military, among other ironies), who is regularly detained at the US border:

“The [US Customs and Border Patrol] agents in Seattle were nicer than ones in Newark. None of them implied I would be raped in prison for the rest of my life this time.” [6]

Technology is not neutral. It is not “just a tool.” All technology creates patterns of power distribution that actively tip the balance towards or away from democratic decision making, often without our intention or consent. Our urgent mission, as activists and technologists participating in the early Internet, is to proactively create, refine and distribute systems that empower the values we care about. I care about democratic participation, diversity of opinion and human rights.

So, that brings us to security for activists.

How to think about security

The good people at the Electronic Frontier Foundation frequently take the lead on these issues. I turn to them now, using an intellectual framework they laid out in the guide Surveillance Self-Defense. Please take 30 minutes and read this text now. It’s ok, I’ll wait.

My takeaway from SSD is to break the unhelpful binary of “secure vs. insecure” into a more actionable set of information:

  • You have assets to protect (your browser history, the contents of an email, a list of sources).
  • You have threats against those assets (they could be intercepted, published, or lost).
  • You have adversaries (specific and finite: government agencies, corporate security, vigilante networks, private surveillance firms).
  • You have a risk assessment: based on your understanding of the assets, threats and adversaries, how likely is a bad thing to happen? How do your actions increase or decrease that risk? (A rough sketch of this follows the list.)
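
To make that concrete, here is a minimal, purely illustrative sketch in Python of what a threat-model worksheet might look like. The field names and the example entries are my own assumptions, not something taken from the EFF guide; the point is only that each asset gets its own threats, adversaries, and a rough likelihood-and-impact judgment that you revisit as circumstances change.

# A minimal, assumed threat-model worksheet (illustrative only, not from the EFF guide).
threat_model = [
    {
        "asset": "list of confidential sources",
        "threats": ["email interception in transit", "laptop seized at a border crossing"],
        "adversaries": ["border agents", "private surveillance firms"],
        "likelihood": "medium",   # rough judgment, revisited as circumstances change
        "impact": "high",         # how damaging it would be if the threat succeeds
        "mitigations": ["full-disk encryption", "keep the source list off the travel laptop"],
    },
]

# Review the entries with the highest likelihood and impact first.
for entry in threat_model:
    print(f'{entry["asset"]}: likelihood {entry["likelihood"]}, impact {entry["impact"]}')

Walking through the same four questions for each asset is the useful part, even on paper; the code is just one way to keep the answers organized and easy to revisit.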

To this, I would add the historical observation that privacy by “policy” isn’t working [7]. Instead we have to focus on privacy inherent to the technology we use. It is no longer sufficient to trust institutions to protect our privacy, because they have shown — repeatedly — that they do not deserve trust [8]. They roll over to governments, they sell our info for profit, they expose our data through carelessness. Human nature won’t change. The answer is to build technical systems that allow us to conduct business, relationships and politics online without transferring power over our privacy, communications, and political participation to invisible controlling bodies.

You are not going to be “secure” or “insecure”. Instead, you have a framework for understanding undesirable outcomes and adjusting behavior and technology choices to mitigate the most destructive or most likely threats. Security is a series of tradeoffs — most often between usability and privacy of communications.

Fortunately, there are good tools that give immediate benefits at little cost to users. This is the first post in a series; the next posts will cover specific countermeasures and behavior changes.

Your feedback and critique are welcome in the comments or at @eylerwerve on Twitter.

— Jonathan Eyler-Werve

 

[1] Wired – 9 Reasons Wired Readers Should Wear Tinfoil Hats

[2] Wired – Appeals Court Strengthens Warrantless Searches at Border 

[3] Ars Technica – ICE admits year-long seizure of music blog was a mistake 

[4] The Investigative Fund – Fiji Water: Spin the Bottle

[5] Open Congress – H.R.6304 – FISA Amendments Act of 2008

[6] BoingBoing – Wikileaks volunteer detained and searched (again) by US agents 

[7] Not my idea; credit to security activists who prefer not to be named.

[8] LA Times – Bank of America data leak destroys trust