Checklist 195: Attacks on the 2020 Election with Nick Leon
The 2020 elections are just two months away, and security experts say they’re under attack by nation-state intelligence groups. This week, we’ll talk about:
The players and their goals
The Black Hat USA security conference was held several weeks ago, and one of the keynote talks was about how nation states further their geopolitical aims through “information operations” on social media and the Internet.
Renée DiResta gave the keynote. DiResta is Research Manager at the Stanford Internet Observatory, an interdisciplinary institute at Stanford University tasked with understanding the misuse of technology and, in particular, social media. She studies the ways in which narratives spread on social platforms, and is a Mozilla Fellow in Media, Misinformation and Trust as well as a Presidential Leadership Scholar.
Her remarks focused on two main threat actors: China and Russia (in particular, information operations groups linked to the militaries and intelligence agencies of those countries).
According to DiResta, information operations tend to have four broad goals:
Distraction is when an intelligence operation attempts to shift the public’s attention away from something they’re focused on, and get them to start thinking or talking about something else.
Persuasion refers to an attempt to get the target audience to believe something, or to think and feel a certain way about an issue.
Entrenchment means intentionally fostering groups that have strong social or political identities, or encouraging people to become even more strongly identified with their existing group.
Division is the way in which nation-state actors attempt to exploit entrenched group identities in order to create confusion, mistrust, and fragmentation in the adversary’s society.
While China and Russia are both engaged in information operations, they tend to differ in terms of the kinds of ops they’re running.
China, on the one hand, seems more interested in persuasion: They want to convince people around the world that China’s growing power and influence is a good thing; to show China in the most positive light possible; and to counter, downplay, or deflect any criticism of China.
Russia, by contrast, is focused on dividing their adversaries’ societies. In the case of their actions against the United States, this means producing media and social media content tailored to different segments of American society, with the ultimate goal of creating opposing groups that have deeply entrenched identities, and then leveraging this to create further division. While this may seem like a strange goal at first, there is a grim logic to it: If you see another country as a competitor, it makes perfect sense that you’d prefer not to compete with a strong, unified adversary, and would much rather square off against a society that’s tearing itself apart.
A full-spectrum threat
When we hear about “foreign influence”, especially in the context of social media and elections, many of us immediately think of social media or Internet comment “bots”, fake accounts that act on behalf of a nation-state to shape narratives online.
But DiResta explained that this is only one part of the way in which information operations work, because China and Russia both take a “full-spectrum” approach to exerting influence online.
On social media platforms, this can mean analyzing feed curation algorithms and figuring out how they can be gamed to artificially boost content created by or favorable to these nation states.
In addition, both countries have a wide range of content production organs. These range from easily attributable state-run newspapers like China’s People’s Daily to organizations with far less obvious ties to the government, such as Russia’s Internet Research Agency, an independent contractor on paper whose links to the Russian government have nevertheless been documented by the U.S. intelligence community. Both countries also use content from fake personas or organizations: “news outlets” that don’t exist anywhere but on Twitter, or “journalists” who aren’t even real people. In some cases, this can even take the form of outright misdirection or disinformation: a site called BlackMattersUS, which presented itself as run by American Black Lives Matter activists, was later found to be the work of the Internet Research Agency.
With this wide array of content at their disposal, China and Russia are able to use their other tools to amplify stories and narratives that suit their aims. They do this in part by using fake social media accounts, but they also use their knowledge of how to manipulate the feed algorithms in order to reach a wider audience. In addition, they will sometimes use legitimate content boosting tools provided by the social media platform itself (for example, Facebook ads). The goal of information operatives is to eventually get actual people to begin sharing their content organically, leading to its viral spread. The best case scenario, from their perspective, is that major news organizations like CNN or Fox News pick up the stories after they notice that a critical mass of people are talking about them.
Yet while both Russia and China use many of the same tools, their end goals, as mentioned previously, are different. And this means that their tactics are different as well.
China, with its focus on persuasion, puts a lot of effort into creating language-localized news content designed for foreign consumption. News outlets with ties to Beijing will often have large international followings, sometimes on social media platforms that are even banned in China! These outlets will often run stories that are sympathetic to the Chinese government’s position on some issue, or that are designed to deflect international criticism of China.
In this vein, China has also started to use its social media bots in an attempt to shape public discussion of its actions. During last year’s Hong Kong protests, many journalists on Twitter found themselves besieged by fake accounts that seemed intent on pushing Beijing’s narrative about events in Hong Kong.
Russia, on the other hand, differs both in the kind of content it produces and in how it engages with its audiences. While it certainly has government-funded media organizations such as RT, it also creates a great deal of content designed specifically for social media: memes, short videos, and the like. Interestingly, this content is made to appeal to groups on both the political left and the political right of American politics, but the common denominator is that it all appears carefully crafted to highlight divisions within American society, and to get people to identify even more strongly with the social or ideological group they’re a part of.
In addition, there is evidence that the admins of social media pages run by Russian operatives are interested in building relationships with their audiences. Investigators found that some page admins have reached out to ordinary group members offering them money and support to help spread content, and even to engage in real-world protests.
Another point of difference is Russia’s use of literal hacking, as opposed to opinion hacking, in service of its information operations. Russian APT groups with ties to the military engage in hacking campaigns against U.S. public officials (a prominent example of this being the hack of Hillary Clinton campaign chairman John Podesta’s emails during the 2016 presidential election). Any material that they manage to obtain is then leaked to the media through fake personas, used to create social media content, or reported on by government-connected news outlets like RT. This “hack and leak” strategy can be a powerful way to shift the narrative on an issue, or hijack the news cycle in the middle of an election season.
The final point of difference between Russia and China is their relative degree of success, at least in terms of getting their content to go viral on social media. Researchers who studied the data from Twitter’s takedown of about 23,000 fake accounts found that China was not having much luck at getting people to engage with their bots: around 90% of their fake accounts had fewer than 10 followers, and the average engagement per tweet was < 1. Russia, on the other hand, had engagement metrics that were an order of magnitude better than China’s.
There may be several reasons for this. In the case of China’s fake Twitter accounts, many of them seemed to be…well…not very convincing fake accounts. They were all created around the same time (many of them with similar-sounding usernames), and would often use stock profile photos and contain very simple bios or no bio at all. In other words, very few people would be fooled into thinking that these accounts belonged to real Twitter users. Russia, on the other hand, has consistently shown its talent for running carefully thought-out campaigns, and demonstrates a deep understanding of the different sub-groups in whatever society it’s targeting.
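The tells described above (accounts created in a batch, near-empty bios, almost no followers) can be sketched as a toy scoring heuristic. This is purely illustrative: the `Account` fields, thresholds, and scoring below are our own assumptions for the sketch, not anything the platforms or researchers actually use.

```python
# Toy heuristic: count how many low-effort-bot signals an account exhibits.
# All fields and thresholds are hypothetical, chosen to mirror the signals
# mentioned in the episode (batch creation, empty bios, <10 followers).
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Account:
    username: str
    created: datetime
    bio: str
    followers: int

def suspicion_score(acct: Account, cohort: list) -> int:
    """Return 0-3: one point per bot-like signal this account shows."""
    score = 0
    if not acct.bio.strip():   # empty or missing bio
        score += 1
    if acct.followers < 10:    # almost nobody follows it
        score += 1
    # Created on the same day as several other accounts in the batch
    same_day = sum(1 for a in cohort
                   if a is not acct and a.created.date() == acct.created.date())
    if same_day >= 3:
        score += 1
    return score

# A batch of accounts registered the same day, with empty bios:
batch = [Account(f"user{i:04d}", datetime(2020, 3, 1), "", 2) for i in range(5)]
# A plausible real account:
real = Account("jane_doe", datetime(2015, 6, 12), "Journalist covering tech.", 820)

print(suspicion_score(batch[0], batch))  # 3 (all three signals)
print(suspicion_score(real, batch))      # 0 (no signals)
```

Real detection work is of course far more involved (network analysis, posting cadence, content similarity), but the idea of scoring simple account-level signals is the same.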
Part of the reason for this may be historical. China’s rise to global prominence is relatively recent. For most of the 20th century, their propaganda activities were inward-looking: focused on domestic audiences and issues. Now that they’re turning those capabilities outward and attempting to shape world opinion, they’re playing a new game — and so there’s bound to be a bit of a learning curve. Russia, by contrast, has military units and intelligence agencies with roots that go back to the old Soviet system, and thus is building on decades and decades of Cold War experience propagandizing the West.
How to fight back
On the Checklist, we don’t like to share scary security news and just leave it at that! We always want to offer some suggestions on how you can take action to make your world a little safer.
This week’s topic is a little more daunting than usual: The stakes are extremely high, the issues are sensitive, and the players are, well, not just your garden variety teenage hackers in hoodies, but the intelligence agencies of major world powers.
So in the face of all that, is there anything you can do? Yes — and if you’ve read this far, you’ve already started.
Before anything else, you need to understand why and how this kind of nation-state public opinion hacking is happening. The “why”, again, has to do with adversaries preferring to deal with a weakened, divided US as they pursue their geopolitical aims. In the immediate context of the upcoming 2020 elections, that means taking the existing divisions in US society, amplifying them, and using them to further divide their adversary, with the ultimate goal of making Americans lose confidence in the legitimacy of their elections. The “how” is everything we’ve just discussed: understanding both the mechanics of these operations and the fact that they’re going on right now.
Once you’ve taken that first step, you can then begin to think about more concrete actions.
First, you can take personal responsibility for the content you share on social media. This means fact-checking and sourcing anything that you share (especially if it’s about a sensitive or controversial social or political topic), and making sure you’re not sharing anything that’s inaccurate or impossible to source. It also means thinking critically about content before you share it: Does it seem intentionally provocative? Is it crafted to highlight divisions in your society, or designed to create an “us vs. them” mentality? Be aware that, as crazy as it might sound, the content you’re thinking of sharing could actually be part of a foreign adversary’s information operation, designed to entrench you in your political identity and make you feel more divided from your neighbors. So before sharing something on Facebook, or retweeting something on Twitter, ask yourself: Is this helpful, or divisive? And do I really know where this comes from?
In addition, make use of the reporting tools offered by the major social media platforms (Facebook offers a “report post” button; Twitter has a “report tweet” option). These tools can be used by community members to sound the alarm on content that looks like disinformation, or to alert moderators to fake accounts. Obviously, this shouldn’t be used for things you just happen to disagree with! But if you do see something that truly looks like disinformation, or if you spot a bot account, say something: Help the social platforms keep tabs on it. It’s easy to become cynical about these big tech companies (and it’s certainly reasonable to criticize how they deal with the issue). But remember that they have taken action in the past, as evidenced by Twitter’s takedown of all of those thousands of fake accounts, and the community can help them to do a better job by flagging suspicious activity.
Finally, share this information with the people in your life. It’s important to do this in a way that will actually get them to listen, of course, as opposed to putting them on the defensive and making them shut down. Accusatory Facebook comments are definitely not the way to go! Instead, try having offline conversations about what you’ve learned: the mechanics of how information operations work; the ways in which state actors encourage and exploit divisions in U.S. society; and the fact that they do all of this by creating hard-to-attribute content for both the Left and the Right. If you’re not sure exactly what to say, or if you just want to reach more people, another option is to share this podcast. You can be low-key about it: Sometimes it’s enough just to say, “Hey, here’s this thing I was listening to; I found it interesting; maybe you will too, give it a listen!”
That brings us to the end of another Checklist, but if you’d like to keep learning about security, privacy, and how you can keep yourself safe, check out our archives for past episodes and show notes. And of course, if you have a comment, question, or an idea for a future topic, drop us a line!