
Runa Sandvik on security for high-risk people

September 28, 2021

An interview with Runa Sandvik on digital security and privacy for high-risk people — and how tech companies can keep them safe.

Runa Sandvik

Runa Sandvik is a security researcher who focuses on security and defense for high-risk people. She serves as a cybersecurity consultant to the Ford Foundation’s Building Institutions and Networks (BUILD) program and to the Norwegian Armed Forces Cyber Defence. Previously, Sandvik was Senior Director of Information Security at The New York Times and a developer at the Tor Project. She is a featured speaker at the upcoming Objective by the Sea 4.0 Apple security conference, where she will be presenting research on macOS security and the cyber-espionage activities of U.S. intelligence agencies.

Telling the truth is a dangerous job.

UNESCO estimates that on average, one journalist is killed every five days just “for bringing information to the public”. The situation for activists and political dissidents is worse still. 

Now, reporters, activists, and human rights advocates are facing a new source of risk: technology. Repressive governments around the world have easy access to powerful mobile surveillance tools like Pegasus spyware. And even “harmless” technologies carry risks: social media sites, for example, may be used to pinpoint a person’s location or enumerate their personal contacts.
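
To make the location risk concrete, consider photo metadata. A photo shared as a raw file — passed along directly, or posted somewhere that doesn’t strip metadata — often carries GPS coordinates in its EXIF data. The following Python sketch is purely illustrative, not something from the interview; it assumes the Pillow imaging library is installed and uses a hypothetical file name:

```python
# Illustrative sketch (not from the interview): reading GPS coordinates
# out of a photo's EXIF metadata with the Pillow library.
from PIL import Image
from PIL.ExifTags import GPSTAGS

def gps_coordinates(path):
    """Return (latitude, longitude) from a photo's EXIF data, or None."""
    exif = Image.open(path).getexif()
    gps_ifd = exif.get_ifd(0x8825)  # 0x8825 is the EXIF GPSInfo tag
    gps = {GPSTAGS.get(tag, tag): value for tag, value in gps_ifd.items()}
    if "GPSLatitude" not in gps:
        return None  # no location embedded in this file

    def to_degrees(values, ref):
        # EXIF stores degrees, minutes, and seconds as three rationals.
        d, m, s = (float(v) for v in values)
        decimal = d + m / 60 + s / 3600
        return -decimal if ref in ("S", "W") else decimal

    return (
        to_degrees(gps["GPSLatitude"], gps["GPSLatitudeRef"]),
        to_degrees(gps["GPSLongitude"], gps["GPSLongitudeRef"]),
    )

# A result like (48.8584, 2.2945) places the photographer at a
# specific street corner; "photo.jpg" is a hypothetical file name.
print(gps_coordinates("photo.jpg"))
```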

For people like these, cybersecurity can literally be a matter of life and death. Security expert Runa Sandvik is trying to keep them safe. For the past 10 years, she has worked with journalists and other high-risk individuals to help them improve their digital security and do their jobs more safely.

I saw how Tor could really enable and empower people to do the work that they wanted to do.

Sandvik’s initial foray into computer security came during her university days, when she took part in a summer programming project. Back then, her main interest was simply in the technology itself. But she soon became fascinated by the real-world applications of the code she was working on:

RS: In 2009, I was studying for a bachelor’s degree in computer science. And I ended up working for the Tor Project as part of Google’s Summer of Code. When I first heard about Tor, I just thought it was really cool that you could be anonymous online! I found it exciting that there was technology and lines of code that enabled that. At the time, I didn’t consider who might be using this tool, or the impact that it was having. 

But through my work with the Tor Project, I got to meet different groups of people who were using Tor to do their jobs. And I saw the benefit of that type of tool … of how it could really enable and empower people to do the work that they wanted to do.
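
The “lines of code” she mentions are surprisingly approachable. A running Tor client exposes a SOCKS5 proxy, by default on port 9050, and any application can route its traffic through it. The sketch below is a simplified illustration rather than Tor Project code; it assumes Tor is running locally and that Python’s requests library was installed with SOCKS support (pip install requests[socks]):

```python
# Simplified illustration: sending an HTTP request through a locally
# running Tor client. Assumes Tor is listening on its default SOCKS5
# port (9050) and that requests was installed with SOCKS support.
import requests

proxies = {
    # The "socks5h" scheme resolves DNS through the proxy too,
    # so hostname lookups don't leak outside of Tor.
    "http": "socks5h://127.0.0.1:9050",
    "https": "socks5h://127.0.0.1:9050",
}

# The Tor Project's check service reports whether a request
# actually arrived over the Tor network.
resp = requests.get("https://check.torproject.org/api/ip",
                    proxies=proxies, timeout=30)
print(resp.json())  # e.g. {"IsTor": true, "IP": "<exit relay address>"}
```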

Over the past decade, Sandvik has watched the threat landscape evolve — and she has noticed a growing awareness of security and privacy issues among the general public:

RS: We know more about the risks, threats, and what has happened behind the scenes than we did 10 years ago. A lot of information has come out, sometimes through whistleblowers like Edward Snowden, or through work done by organizations like Amnesty International or Citizen Lab, or through threat intelligence firms. So we know more today about how reporters, activists, lawyers, and organizations are targeted and compromised.

Some technology vendors are making very strong marketing claims to non-technical audiences. In some cases, these are claims that they can’t deliver on.

This newfound awareness of cybersecurity issues is clearly a net positive. However, Sandvik thinks that the public’s concern for security and privacy may have had an unexpected and potentially harmful side effect. Tech companies, she says, have started to use it as a marketing angle:

RS: The general public is hearing so much more about privacy, surveillance, digital identity, and compromises. We want more privacy. We want more security. But as a result, some technology vendors — vendors that are often doing really good work otherwise — are making very strong marketing claims to non-technical audiences about what their platforms can provide. In some cases, these are claims that they can’t deliver on. And unfortunately, the general public is not necessarily in a position to evaluate the claims that these companies are making. 

For example, ProtonMail recently got a legal order for information about a user; they were legally required to hand it over. That isn’t surprising: I think it was just a matter of time. But what is problematic in this case is that ProtonMail had been, for a very long time, making these sorts of strong marketing claims about what they could provide. 

The ProtonMail user turned out to be a French climate activist. And while they weren’t a high-risk person in the same sense as a journalist in an authoritarian country, neither did they fit the profile of an “everyday” email user. Most people who sign up for encrypted email services have fairly modest expectations: they’re looking for better online privacy, or an end to targeted advertising. But they aren’t worried about becoming the subject of an international law enforcement investigation. The activist in the ProtonMail incident, however, most likely did have higher expectations of their email service provider, and different privacy needs as well.

To Sandvik, this diversity of users is precisely why tech companies need to engage with high-risk people during the development process. That way, she says, they can avoid bad assumptions about who their end users are — and about what they need to stay safe:

Tech companies can greatly benefit from working with high-risk people to make their platforms safer.

RS: There are a lot of assumptions made about how people use technology. There’s a blog post that goes around on social media every now and then entitled “Your Threat Model is Not My Threat Model”. It speaks to this very important idea that the things that work for you may not necessarily work for me. I may have different assumptions about how I’m using technology. I may have different desires for my privacy settings on Facebook than you do. 

It really comes down to this: There are differences in how people live their lives. There are differences in the boundaries that they set, and in how they choose to express themselves. And technology solutions developed by engineers in California can’t possibly meet all of those needs if they aren’t designed with input from a wide range of people. Tech companies can greatly benefit from working with high-risk people to make their platforms safer. 

There are numerous examples of the problems that arise when tech companies fail to do this. Apple ran into this issue recently with its Expanded Protections for Children initiative. One of the new features alerts parents when their children send or receive sexually explicit material in Messages. But in an open letter to Apple, signed by more than 80 civil society organizations, critics pointed out that Apple is making exactly the kind of assumptions that Sandvik has warned about. The letter reads, in part:

“The system Apple has developed assumes that the ‘parent’ and ‘child’ accounts involved actually belong to an adult who is the parent of a child, and that those individuals have a healthy relationship. This may not always be the case; an abusive adult may be the organiser of the account, and the consequences of parental notification could threaten the child’s safety and wellbeing. LGBTQ+ youths on family accounts with unsympathetic parents are particularly at risk”.

Historically, we’ve seen a lot of examples of engineers attempting to develop technical solutions for non-technical problems.

To its credit, Apple has delayed the rollout of the proposed features in response to the criticism. The company says it has listened to “feedback from customers, advocacy groups, researchers, and others” and will take additional time “to collect input and make improvements” before it releases the new features — a decision that Sandvik calls “a good move, a very smart move … and something Apple should have done at the beginning”.

Tech companies still have a long way to go when it comes to meeting the needs of high-risk people. But Sandvik does say that she’s starting to see a change in the way they approach these issues:

RS: Historically, we’ve seen a lot of examples of engineers attempting to develop technical solutions for non-technical problems. But what we are now seeing, at least in some cases, is that tech companies are including a more diverse group of people in designing and rolling out security features, and in presenting options that fit a bit better with the target users. 

For example, Nathaniel Gleicher, the Head of Security Policy at Facebook, recently tweeted a thread that explained how people in Afghanistan could lock down their Facebook and Instagram accounts. And what was so good about this was that they took the approach of “what would be most helpful for these people right now”. For example, they launched a one-click tool that would let people quickly lock down their accounts. When a profile is locked, people who are not their friends cannot download or share their profile photo, or see posts on their timeline. So that’s an example of something that is really helpful. A discussion about how to send encrypted email, in contrast, would not have been nearly as helpful for people in Afghanistan at that point in time. 

So it’s good to see tech companies actually paying attention to what’s happening on the ground, and providing solutions that meet those needs, instead of attempting to provide overly technical solutions to what, in many cases, are not technical problems.

Nobody sets out to become a high-risk person. It’s often something that just sort of happens in the course of their work.

Most people are no doubt glad to see tech companies doing a better job of protecting high-risk users. But this may be more relevant to them than they realize, since “high-risk” is an extremely fluid category. As Sandvik points out, “Nobody sets out to become a high-risk person”. She goes on to explain: 

RS: It’s often something that just sort of “happens” in the course of their work. There are reporters that have taken on stories that might have seemed, internally, “normal” and not very high-risk to begin with. But then halfway into the reporting, they look in a closet, or they lift up a stone, and five skeletons fall out. And suddenly, the whole thing is very sensitive and high risk, and they may become a target. 

Student protesters can also end up in high-risk situations without warning. For example, over the past few years there has been a wave of pro-democracy movements in Asia. In Hong Kong, Thailand, and Myanmar, thousands of students have taken to the streets to demand freedom of speech, institutional reform, and human rights. In most cases, these are just ordinary young people, more concerned with term papers and final exams than mobile spyware and encrypted comms. But after attending a public protest, many have found themselves targeted for surveillance (or worse) by their governments.

It’s tempting to suppose that these students, by virtue of being “digital natives”, have a strong understanding of cybersecurity, and will know how to keep themselves safe. But according to Sandvik, this is not a valid assumption at all: 

RS: Tech savvy does not necessarily mean knowledgeable about security. You can be a very good driver, but not know a whole lot about exactly how your car functions, or what to do if it breaks down. 

Fortunately, there is help for people who find that they have suddenly become “high-risk”. Sandvik recommends the Surveillance Self-Defense guide, produced by the Electronic Frontier Foundation (EFF) and available in a number of different languages, as “a very good starting point”.

For organizations like EFF, and for individual researchers like Sandvik, cybersecurity is a high-stakes endeavor. Their adversaries are among the most repressive regimes on the planet. And the people they are defending are frequently in dangerous or even life-threatening situations. It is not a job for the faint of heart. But despite the challenges, Sandvik says that she finds her work to be immensely rewarding:

RS: I care deeply about doing work that has impact, and that makes a difference for people. I personally get a lot of value from that. Being able to combine technology and knowledge about society is a really interesting puzzle, and something that gives me a lot to work on. 

SecureMac would like to thank Runa Sandvik for taking the time to talk with us. To keep up with her work, follow her on Twitter or visit her website. At Objective by the Sea 4.0, Sandvik will be discussing the tools and tactics used by U.S. intelligence agencies to compromise Mac security. Her talk, entitled “Made In America: Analyzing US Spy Agencies’ macOS Implants”, will be co-presented with OBTS founder Patrick Wardle. For more information on how to watch the talk via live stream, please see the conference website.
