
Eva Galperin on fighting tech-enabled abuse

September 23, 2021

An interview with EFF’s Eva Galperin about intimate partner surveillance, stalkerware, and Apple’s CSAM scanning plans.

Eva Galperin

Eva Galperin is Director of Cybersecurity at the Electronic Frontier Foundation (EFF), the leading nonprofit organization defending civil liberties in the digital realm. Her work focuses on providing privacy and security for vulnerable groups around the world. In recent years, Galperin has been the leading voice within the cybersecurity community in raising awareness about the dangers of stalkerware. We sat down with Galperin to discuss the issue of tech-enabled abuse — and what can be done to stop it.

If you ask most people about mobile spyware, their minds will turn to the things that make headlines: APT groups, zero-click exploits, and commercial spyware firms like NSO Group.

But one of the most pernicious threats to our security and privacy can come from the people closest to us, in the form of intimate partner surveillance. This underappreciated insider threat is often facilitated by stalkerware — a catch-all term that describes programs and apps used to spy on people through their devices. 

Eva Galperin has been leading the fight against stalkerware for several years. And awareness of the issue, both inside and outside of the cybersecurity community, is growing. Galperin’s seminal 2019 TED talk, “What you need to know about stalkerware”, has now been viewed millions of times. And just recently, the Federal Trade Commission obtained its first-ever ban of a stalkerware app. But while more people than ever know about the phenomenon of stalkerware, there are still some misconceptions about it, and about intimate partner surveillance generally:

EG: When people talk about intimate partner surveillance, they’ll often say that it “leads to” abuse, that it “leads to” stalking, or to physical violence.

If you’ve reached the point where you’re spying on your partner’s phone, then you are an abuser.

But the most important thing to understand is that the surveillance itself is abusive. If you’ve reached the point where you’re spying on your partner’s phone, then you are an abuser. And if you’re doing it because you think that they’re cheating, or because you think that they’ve done something wrong, or because you want to maintain a sense of control, or even because they have abused you in some way, then all that’s happening is that this abusive person has turned you into an abuser as well. And letting your abuser turn you into an abuser means that they have won.

Galperin helped found the Coalition Against Stalkerware, an international working group dedicated to fighting tech-enabled abuse. Since 2019, the Coalition has grown and now counts more than 40 partner organizations among its members: cybersecurity vendors, academic research centers, domestic violence support groups, and digital rights organizations like EFF.

Galperin says that they’ve made progress, especially with respect to the antivirus software industry:

EG: When I started doing this work, one of the biggest stumbling blocks was that the AV industry hesitated to define all stalkerware as potentially unwanted programs, or as malicious. But I think that a lot of that fight has been won, which is a relief.

Nevertheless, there’s still a lot of work left to do. While some law enforcement agencies have shown support for the Coalition Against Stalkerware (most notably INTERPOL), Galperin says that victims of tech-enabled abuse often struggle when they seek help from the police:

EG: The next step is to really center the experience of the user: to teach people how to find this stuff, and how to report it. It’s also going to be necessary to teach law enforcement and people who work with survivors of domestic abuse to recognize the prevalence of stalkerware —  and to take it seriously.

The experience with law enforcement is still essentially Russian roulette.

One of the biggest problems that I see is that when victims of this kind of abuse go to the police and say, “I’ve got stalkerware on my phone. Can you please help me do something about it?” they just get gaslit. They’re told, “I have no idea what you’re talking about. I have no idea who put this on your phone. I can’t do anything about it. You’re imagining things, go away”.

Sometimes you’ll get a sympathetic and tech-savvy person who has experience dealing with abuse, and who understands a trauma-based approach to your problem. But most of the time, you won’t. The experience with law enforcement is still essentially Russian roulette — and it’s impossible to predict what you’re going to get.

The Coalition Against Stalkerware is largely made up of cybersecurity professionals and experts who specialize in helping victims of domestic violence. But Galperin says there are some important steps that ordinary, non-technical people can take to support survivors of tech-enabled abuse:

EG: When a survivor of this kind of abuse comes to you and tells you about it, believe them. That’s actually the most important part: believe them, take it seriously. 

It’s also important to take a survivor’s boundaries seriously. If they tell you something like, “I need you to not post about me on social media in public, because my ex is stalking me”, then that is an extremely important boundary to respect.

Once an abuser “loses control” of their victim and is no longer able to use stalkerware to keep track of them, their next move is often to track them through information provided by their friends and family — often under the guise of being “concerned” about the survivor’s mental health. That’s something very important for friends and family to watch out for.

Galperin and others have fought hard to raise awareness about the problem of stalkerware. But another form of tech-enabled abuse has been in the public eye for many years: the online spread of child sexual abuse material (CSAM).

Recently, Apple announced a controversial plan to build a CSAM scanning system into iOS and iPadOS. The company claims that the new system will protect children while at the same time respecting user privacy. But there has been an outcry against the move — from both the security and privacy community and the general public. EFF has created a petition asking Apple to stop its plan to implement on-device scanning, and an open letter to Apple hosted at appleprivacyletter.com has already garnered thousands of signatures.

In response to this pushback, Apple’s leadership has issued public statements and given media interviews. But they haven’t truly addressed the concerns that people — especially people in the cybersecurity and privacy advocacy communities — are raising. 

Apple’s general message seems to be that everything is just fine, and that the only real problem is that people don’t understand what they’re doing. Apple exec Craig Federighi called the system “widely misunderstood”, saying “we wish that this had come out a little more clearly for everyone”. Others have been even more dismissive. A leaked message from the National Center for Missing & Exploited Children (NCMEC) to Apple’s engineering teams referred to critics as “the screeching voices of the minority”.

Galperin counts herself firmly among that group (as she quipped on Twitter, “I will never stop screeching about the importance of privacy, security, or civil liberties”). She explains why she sees Apple’s move as so misguided:

EG: There’s a lot of confusion and misinformation around what Apple is doing, and how they’re doing it. But the most important takeaway about Apple’s plans to scan Americans’ phones for CSAM is that the scanning happens directly on your phone. It is not happening in the cloud. It is happening on your device: They’re scanning photos that are destined to be uploaded to iCloud.

Having the system at all — under any circumstances — is very dangerous.

I’m not somehow unaware that people can just turn off iCloud. I’ve thought about that! The reason why this is particularly worrisome is because they have now built this capability to do on-device scanning. And this opens Apple up to demands from governments to use it for other things; for things other than photos intended to be uploaded to iCloud. Having the system at all — under any circumstances — is very dangerous.

Defenders of Apple’s CSAM scanning technology tend to argue that the harms posed by CSAM are clear, immediate, and real, while concerns about harms from government abuse are merely hypothetical. But Galperin says that this is “simply not true”, adding:

EG: I’ve devoted the last 15 years of my life to studying the ways in which governments are tracking dissidents and activists, and silencing them — sometimes by arresting or threatening or killing them. Those harms are very real. 

Apple claims that it would refuse demands from governments to scan for other types of content, saying “this technology is limited to detecting CSAM stored in iCloud and we will not accede to any government’s request to expand it”. 

But Galperin worries that despite Apple’s best intentions, the company may have painted itself into a corner — and made promises that it won’t be able to keep:

EG: I think that Apple is being incredibly naive. Many countries already have laws on the books requiring the kind of scanning that Apple’s new system allows Apple to do. Apple has resisted (up until now) by telling governments that it’s simply not capable of doing what they ask. But that’s no longer true. And while there are some governments that Apple could potentially stand up to, or countries that Apple could potentially pull out of, one notable exception is China.

Apple’s relationship with China has been fraught (to say the least).

I certainly wouldn’t want to see Apple become repression’s little helper in this case.

I don’t think that they’re in a particularly strong position to pull out of China, where they have now left themselves extremely vulnerable to being required to scan Chinese people’s phones for, say, banned Uyghur or Free Tibet content, or pictures of Winnie the Pooh — or anything else that might be viewed as political dissent. And that is extremely worrisome. The harm to Tibetans and Uyghurs and pro-democracy protesters in Hong Kong is extremely real, and extremely concrete. I wouldn’t want to see it worsen. And I certainly wouldn’t want to see Apple become “repression’s little helper” in this case. 

It is indeed difficult to imagine Apple exiting China at this point. In its latest financial statement, the company reported over $14.7 billion in sales in China alone (out of a total of $81.4 billion worldwide). One could be excused for wondering if Apple is really making its promises in good faith. But to Galperin’s way of thinking, this is almost beside the point:

EG: Even if you trust Tim Cook, even if you think that Tim Cook is a stand-up guy who would be willing to pull all of Apple’s operations out of China rather than spy on Chinese dissidents’ phones by scanning their content directly — what happens when he steps down, and somebody else takes his place? I think that the last couple of presidential elections have done a lot to illustrate how much promises depend on who is in charge.

The critics make a compelling case that Apple’s well-intentioned child safety feature may have unintended consequences. But the question still remains: What can, or should, be done about the proliferation of CSAM?

People working to stop online child predators have pointed out that a large amount of CSAM is distributed via social media. Because of this, there have been calls for large social platforms like Facebook and Twitter to play a more active role in the fight against CSAM.

But Galperin cautions that this approach is not simple or straightforward either:

EG: Platform censorship is complicated and extremely hard to do correctly at scale. It’s unsurprising that CSAM makes it through the system. But that doesn’t mean that the platforms are simply letting it slide. 

Closer work with the platforms is necessary if you want to approach it from that angle. Because again, it’s difficult, it’s complicated, it’s hard. One of my biggest concerns is not that we’ll fail to catch all of the CSAM, but that Facebook or Twitter or the other platforms will massively over-censor. Because if you take a look at the history of sex censorship on the Internet, that’s how things have played out. 

Platform censorship also has another serious problem: the risk of false positives. And unfortunately, Galperin says, we could expect a large number of those because of the tech companies’ preferred strategy for handling illegal content:

Image recognition systems reflect the biases of the people who build them.

EG: Companies often just want to throw machine learning at the problem. It’s like, “We have robots! AI is doing this! Surely nothing could go wrong…” But there is a mountain of research on recognizing images through machine learning, especially sexually explicit images, that shows that image recognition systems reflect the biases of the people who build them.

For example, you’ll see an image of two men kissing labeled as “sexually explicit” while an image of a man and a woman kissing is not. Or an interracial couple kissing is labeled as sexually explicit, but two white people kissing is just nothing. So a machine learning approach is something that concerns me very much.

In the case of the NCMEC CSAM database, there is an interesting unintended consequence because the image matching is so fuzzy. The matching is not an exact, bit-for-bit comparison: the images don’t have to be exactly the same, they only have to be vaguely the same. And so if a child is abused, and their image ends up in the NCMEC database, then when that child grows up, images of that person will still trip the CSAM database. This is actually something that we hear from people whose images are in the CSAM database (and not infrequently).

This is a very serious concern. A lot of the discussion of Apple’s decision just assumes that NCMEC is a fair actor that provides zero false positives, and that there are no problems with their database — and that’s simply not true.
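
To make the idea of “fuzzy” matching a little more concrete, here is a deliberately simplified perceptual-hashing sketch in Python. It is not the algorithm NCMEC or Apple actually uses (systems like PhotoDNA and NeuralHash are far more sophisticated, and their details are not public); it only illustrates why a lightly altered copy of a known image can still register as a match, whereas an exact cryptographic hash would not.

    # Toy illustration of "fuzzy" (perceptual) image matching. This is NOT
    # the algorithm NCMEC or Apple actually uses (PhotoDNA and NeuralHash are
    # far more sophisticated); it is only a minimal average-hash sketch.

    def average_hash(pixels):
        """Hash a tiny grayscale image (rows of 0-255 values) into a bit string."""
        flat = [p for row in pixels for p in row]
        avg = sum(flat) / len(flat)
        # Each bit records whether a pixel is brighter than the image average.
        return "".join("1" if p > avg else "0" for p in flat)

    def hamming_distance(a, b):
        """Count how many bits differ between two equal-length hashes."""
        return sum(x != y for x, y in zip(a, b))

    # A "database" image and a slightly edited copy (one pixel brightened).
    original = [[10, 20, 200, 210],
                [15, 25, 205, 215],
                [12, 22, 202, 212],
                [11, 21, 201, 211]]
    edited   = [[10, 40, 200, 210],   # small change: 20 -> 40
                [15, 25, 205, 215],
                [12, 22, 202, 212],
                [11, 21, 201, 211]]

    print(hamming_distance(average_hash(original), average_hash(edited)))  # 0

A cryptographic hash such as SHA-256 would change completely after even a one-pixel edit; a perceptual hash changes by only a few bits (here, none at all), so a small match threshold still treats the two images as the same picture. That tolerance is what makes the matching “fuzzy”.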

It can be deeply frustrating to face problems as serious as intimate partner surveillance or CSAM — and to realize that there are no quick fixes or easy answers. For this reason, Galperin advocates for a holistic, principled approach to solving problems of tech-enabled abuse:

EG: It’s very important to take a broader and more systemic approach to problems. Because trying to solve these problems one person at a time is impossible. As I touched on in my TED talk, the “hero model” doesn’t work.

We need to change the systems that allow this kind of abuse to go on. That starts with just being able to recognize stalkerware when it’s on phones. But it also stretches to addressing the kind of behavior that allows people to become abusive in the first place: people thinking that it’s “OK” to spy on other people’s phones, because it doesn’t occur to them that this is immoral, or unethical, or illegal — or because they feel that it is justified.

A certain sort of abusive person is always going to be abusive. Some people are just toxic. But a lot of people sort of end up stumbling down this path to abuse, because they feel that the ends justify the means. If there’s one thing I really hope that people could come to understand, it’s that the ends do not justify the means.

SecureMac would like to thank Eva Galperin for taking the time to talk with us. To learn more about her work, follow her on Twitter or check out her author page at Electronic Frontier Foundation. If you or someone you know is facing intimate partner surveillance, you can find help at the Coalition Against Stalkerware’s resource page.
