Chris Hadnagy on social engineering

August 3, 2020

Chris Hadnagy is one of the world’s foremost authorities on social engineering. He has written four books on the topic, including Social Engineering: The Science of Human Hacking and Unmasking the Social Engineer: The Human Element of Security. Hadnagy has been a prominent figure in the security industry for many years, founding the popular Social Engineering Village (SEVillage) at DEF CON, and establishing the Innocent Lives Foundation, a non-profit organization that coordinates the resources of the infosec community to fight online child predators. Hadnagy’s company, Social-Engineer, LLC, helps organizations protect themselves from social engineering attacks through a variety of testing and training services.

When we think about hacking, our minds naturally turn to technology: vulnerabilities and exploits, networks and firewalls, and all the different kinds of malware. But many times, malicious actors will attack the one part of the system that’s hardest to secure: human beings.

In cybersecurity, social engineering (SE) refers to the act of influencing a user to take action or divulge information — especially when it’s not in their best interests to do so. Phishing emails are probably the most well-known example of this kind of attack, but SE encompasses many other threats as well, including phone scams, malicious text messages, scareware, impersonation, attempts to gain physical access to restricted areas, and more.

And although the threat posed by social engineering is well known, SE expert Chris Hadnagy believes that there are two enduring misconceptions about the subject within the security community:

CH: There are two things that the industry often gets wrong about social engineering.

The right pretext, at the right time, to the right person — and anyone can fall for social engineering.

First and foremost, we still perpetuate this idea that if you fall for a social engineering attack, then you must be some kind of dummy. But the right pretext, at the right time, to the right person — and anyone can fall for social engineering. I’ve been social engineered! And I’ve written four books, coming up on my fifth, about the topic. I’ve given a TED talk and speeches around the globe. I’ve run the DEF CON Social Engineering Village and helped to create the industry. And I’ve fallen for SE. 

Now, I don’t feel that I’m a stupid human. So I think that the notion that “if you fall for this, then you must be dumb” is something that we really need to get away from. Because it doesn’t help with education or training.

Another thing that people get wrong about social engineering, especially in the security industry, is what’s needed to do this work. I think a lot of times what happens is that someone’s really good at talking to people, or they’re not afraid of taking risks, so they say to themselves, “I could do this, I could be a social engineer”. But there’s just so much more to doing this as a career, to helping people as a professional (that is, not being a bad guy, a malicious social engineer, which almost anyone on earth can do). To be a professional in this space takes so much more than just being a “people person”. 

Social engineering has been the focus of Hadnagy’s work for many years, and in that time, he has watched SE tactics evolve as threat actors adapt to changes in the world at large. Over the past several months, COVID-19 has caused a seismic shift in the way that people live and work. As Hadnagy sees it, this has created a situation that is ripe for exploitation by social engineers: 

CH: The new landscape definitely makes it easier for social engineering to succeed. Before, most people were working in a large office. If you and I worked together, and you got an email that looked like it came from me, you could pop your head over the cube and say “Hey Chris, is this email from you?” And I could say, “I didn’t send you an email”. And so now you know it’s a phish. Or if you had a question for IT, you would just walk down the hallway, pop your head in the office, and say “Hey, I’ve got a problem with this, can you help me out”. Now, that isn’t happening anymore. 

Instead, people are making decisions sitting in their living rooms or in their makeshift home offices. They’re not at work, so they don’t have the normal protections: IPS, IDS, packet inspection, all that other stuff that we have in our corporate offices. They’re working a lot more hours. And they’re under extreme stress — because mom and dad are both working from home, the kids are at home doing school work — and that stress level makes people much more vulnerable to social engineering. We know that we make worse decisions when our bodies are drenched in cortisol; when we’re under stress, we don’t have all the proper thinking capabilities that we normally do.

In addition, this world pandemic has opened up a whole new arena of possible attack vectors. When the 2011 tsunami hit Japan, within eight hours there were fake charities all over the place, but they were targeted. They targeted people who either had family in Japan, or who were from Japan, or who were of Japanese descent, or who loved Japan. Those were targeted attacks, but COVID-19 has affected literally the whole globe. So when you get a text message that says the CDC is issuing a new app to tell you about everyone in your neighborhood who has COVID, or you get an email from what looks like your HR department saying that there are new policies in place for working from home because of COVID, we quickly click on it, we don’t think, because this is such a hot topic and we’re all worried about it. 

So this is the perfect storm: stress, fear, being tired, working at home, and a world pandemic. That’s making social engineering not only easier, but also a lot more prevalent.

With everyone facing an increased threat of social engineering attacks, improving organizational preparedness is paramount. However, many smaller businesses, government agencies, and school districts lack the resources to conduct penetration tests or specialized training. Nevertheless, Hadnagy believes that there are some basic steps that such organizations can take in order to mitigate their risks, and he points out that much of this can be done at low or no cost:

CH: Education is the best path forward for organizations like that. 

If schools and colleges, for example, can spend whatever little budget they have on just educating their people about the attack vectors, that can make a real difference. When I give speeches at places like that, I find that so many folks don’t even know about common attack vectors. You’ll ask, “How many of you have ever seen a smish”? And they’re like, “What’s that”? “How many of you have received a vishing call”? “What’s that”? Everyone knows what phishing is, but they don’t even know about other attack vectors. And how can you protect against something that you don’t even know exists? 

You may not have the money for a penetration test, you may not have the money for custom vendors like us to come in and do high-end trainings. But can you at least spend some of your budget on getting your people educated, so that they know what’s happening? Can you send out an email that says, “Hey, here’s what the COVID phish of the day looks like”? Just something to keep them aware of what’s happening. And that, at least, could save some folks from falling for these attacks. 

The balance with that is that you don’t want to be sending out 100 emails a day warning them of all the attacks, because people will just shut down. But you can educate them, say, twice a month, or once a week if that’s OK. Just to a level where people will read it. Don’t make it overly wordy. Do make it personal: Tell them why this is going to affect their families, their kids, their husband, their wife, their boyfriend, their girlfriend, their mom, their dad. You’ll get better compliance with security best practices by doing those kinds of things. 

Again, you don’t need a huge budget to do that, but you do need someone who is dedicated to getting hold of those resources. They can go to websites like ours, for example: On our corporate Twitter account we literally have about six stories a day about attacks that are happening around the globe, just to keep people informed. We’re trying to find the latest breaches and attacks and we’re tweeting them out to let people know, “Hey, be aware, this is happening right now”. We have major corporations and tiny companies that use our feed to just keep their people up with what’s happening. And of course we’re not the only ones who do this; you have Krebs and others that constantly have news going out about the current state of security. So you can have a person who doesn’t need to go do all the research themselves, but who can aggregate all of that data, compile it, and put it out in a list for people. That way, you can make it easy for your folks to stay in tune with what’s happening.
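
For teams that want to act on that advice, the aggregation step itself is easy to automate. What follows is a minimal sketch, not anything Hadnagy prescribes: it pulls recent headlines from a couple of public security-news feeds and compiles them into a plain-text digest that could be pasted into an awareness email. It assumes the third-party feedparser package and that the example feed URLs are still live.

```python
# digest.py: a minimal security-news digest sketch.
# Assumes: pip install feedparser, and that these example feeds are live.
import feedparser

FEEDS = [
    "https://krebsonsecurity.com/feed/",            # Krebs on Security
    "https://feeds.feedburner.com/TheHackersNews",  # The Hacker News
]

MAX_ITEMS_PER_FEED = 5  # keep it short so people actually read it

def build_digest() -> str:
    lines = ["Security digest: recent attacks and breaches", ""]
    for url in FEEDS:
        feed = feedparser.parse(url)
        source = feed.feed.get("title", url)
        lines.append(f"== {source} ==")
        for entry in feed.entries[:MAX_ITEMS_PER_FEED]:
            lines.append(f"- {entry.title}")
            lines.append(f"  {entry.link}")
        lines.append("")  # blank line between sources
    return "\n".join(lines)

if __name__ == "__main__":
    print(build_digest())
```

Run on a schedule matching the cadence Hadnagy suggests, the output gives a non-specialist everything they need to send out.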

Social engineering has been making headlines lately, most recently due to the massive security breach at Twitter, which the tech giant has attributed to a successful SE attack on its own employees. The details of the incident are still being investigated, but initial reports suggested that the hackers somehow managed to gain access to internal company administrative tools — and that they may have had the help of one or more Twitter employees. 

This raises difficult questions about how companies are supposed to protect themselves and their users from internal threats. Hadnagy cautions that it’s still too early to make definitive statements about the Twitter breach, but he says that the possibility of a malicious individual “on the inside” is something every company must face. And although it may not be possible to completely eliminate such a threat, there are measures that an organization can take to prepare itself for a worst-case scenario:

CH: This particular threat is called insider threat, because you have an insider, a person who has already been granted privileges, access, and trust within your network, and they now pose a threat — either by partnering with bad actors or by themselves committing some kind of offense against your company. 

The big problem with insider threat in the world, not just in Corporate America, is that most people don’t start thinking about it until after they have experienced it. If you have what they call a flat network, which means everyone has the same permissions across the board, then you allow for a very low-level employee, some person who can just come in and get a job at your company, to have the same permissions and access as someone who’s been there for 15 years. And that poses a threat.

Without creating a hostile work environment that’s like some dystopian, Orwellian, Big Brother future, you can’t protect against insider threat 100%.

That being said, without creating a hostile work environment that’s like some dystopian, Orwellian, Big Brother future, you can’t protect against insider threat 100%. You can’t. And I’ll give you an example: My COO started with the company, has moved up in the ranks, and he now has access to everything — literally everything — in my company. If he turns bad, how am I supposed to protect myself against that? He can’t do his job if I say, “Listen, because you may turn bad someday, I’ve got to lock down these 10 things and you can’t have access to them when you need them, you have to ask me for permission”. I don’t have time for that, and he doesn’t have time for that. If I want him to do his job, I’ve got to have trust. And that is where the threat comes in. So part of insider threat protection is having a levelized network that doesn’t allow everyone the same access, but at the same time, you also need to have procedures and policies in place so that if or when the insider threat comes to light, there’s a quick reaction time to fix the problem. 

You levelize it so that trust has to be earned, which doesn’t mean that you mitigate all risk, but you know, maybe on your first day on the job you don’t have access to everything, maybe access is granted as you earn trust. And yes, someone can still be disgruntled, and turn, but having proper mitigation policy and procedures will help fix the problem quickly if it comes up. But unfortunately, these are the two things that often don’t happen until a company experiences insider threat. 
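
The difference between the flat network Hadnagy warns about and a “levelized” one is easy to make concrete. The sketch below is a toy illustration, with invented resource names and trust tiers rather than any real access-control product: permissions are granted by earned trust level instead of uniformly across the board.

```python
# A toy sketch of "levelized" access control, as opposed to a flat network
# where every account has the same permissions. Resource names, tier
# numbers, and the promotion policy are invented for illustration.
from dataclasses import dataclass

# Each resource requires a minimum trust level; higher means more earned trust.
REQUIRED_LEVEL = {
    "wiki": 1,           # available on day one
    "customer_data": 2,  # granted after probation
    "prod_servers": 3,   # senior, vetted staff
    "hr_records": 4,     # a handful of trusted roles
}

@dataclass
class Employee:
    name: str
    trust_level: int  # starts low; raised deliberately as trust is earned

def can_access(emp: Employee, resource: str) -> bool:
    """Deny by default: resources without an explicit policy are off-limits."""
    required = REQUIRED_LEVEL.get(resource)
    return required is not None and emp.trust_level >= required

new_hire = Employee("day-one hire", trust_level=1)
veteran = Employee("15-year veteran", trust_level=3)

assert can_access(new_hire, "wiki")
assert not can_access(new_hire, "prod_servers")  # trust not yet earned
assert can_access(veteran, "prod_servers")
```

The other half of his advice, the policies and procedures for reacting quickly when an insider does turn, lives outside of code; but a tiered model like this at least bounds what any one newly hired (or newly disgruntled) account can reach.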

From unprecedented security breaches to pandemic-driven economic disruption, 2020 has been a tumultuous year all around the world. But in the United States, there has also been dramatic social unrest. Following the killing of George Floyd by Minneapolis police officers, massive public protests have erupted all around the country, spurring a national conversation about race and institutional bias.

The work of professional social engineers requires them to understand how people’s thoughts, attitudes, and prejudices impact security. Hadnagy shared some of his reflections on the subject of bias, both from the perspective of an infosec leader who wants to see his industry grow and improve, and also as a social engineer who knows all too well how bad actors can weaponize our biases against us:

CH: Biases exist. And oftentimes, when something happens like what’s happening in the United States right now, the first reaction is to do a bunch of training: “Let’s train people, and they’ll be more sensitive to it”. But has that worked over the years? No.

Now, I’m not saying, “Don’t do training”. What I’m saying is that what works more effectively is the company making the change. If you see a problem in your industry, for example, there are not enough women in infosec, well, then, hire women! Foster them in a positive environment, and help them become leaders in the industry. Or, you can just make everyone sit through a computer-based training about how to treat women better. What’s more likely to create real change? Leading by example? Or just making people sit through a CBT? 

But there are biases, and they do create security challenges. I live in Florida, and here, routinely, repair people, or cleaning people, are Latinos. Now, that’s a bias, but it is a fact, it is the truth, and it means that if I’m going to go break into a bank here (because I’ve been hired by them to do that job), and I’m going to gain access by posing as a repairperson or a cleaning person, I probably want to be a younger Latina — because that’s what people expect. And if I do what people expect, they don’t think. And if they don’t think, that means I can come in under the radar. I’m a big, 6’3” white guy. If I come in pushing the cleaning cart, that’s going to look a little odd to most people around here, as compared to maybe my young Latina employee. So I’ll send her in to do the job, and she’ll be able to blend right in. People’s biases are used by con men, social engineers, and scammers all the time. Why do you think when you get that call from “Microsoft”, it’s always an Indian woman? It’s because our brain says, “Oh, that’s what we expect from tech support, Indian women, that makes sense, I’m OK with that”. The scammers use that bias, because that is what we expect. 

Biases create security vulnerabilities; biases can be used against us. But the way to make the difference within an organization is to lead by example.

So biases create security vulnerabilities; biases can be used against us. But the way to make the difference within an organization is to lead by example. And again, it’s not, “don’t do CBTs”, it’s not “don’t do training”, because those things are good, but they’re not effective if you’re not also making changes internally that people can see. 

This prescription for organizational improvement is very much in the spirit of “be the change you wish to see in the world”, which, for Hadnagy, is much more than just a noble sentiment or an abstract ideal. Earlier in his career, he worked on several cases that involved tracking down and capturing criminals involved in the sexual exploitation and trafficking of children. Hadnagy was deeply affected by the experience, and decided to use his position in the security community to make a difference. He founded the Innocent Lives Foundation, a non-profit organization that pools the resources and expertise of the infosec world to help stop online child predators. 

The online sexual exploitation of children is, sadly, a huge and growing problem. But not everyone agrees on the best way to stop it. Recently, U.S. politicians introduced the EARN IT Act, which was presented as an attempt to crack down on child sexual exploitation. The law would force tech companies to follow a list of government-mandated best practices — or face legal liability for any criminal use of their platforms. But while EARN IT is ostensibly aimed at preventing the misuse of digital services by online child predators, critics believe that the proposed law is little more than a backhanded attack on strong encryption, which has long been a point of contention between the U.S. government and privacy-minded tech companies like Apple. 

Hadnagy is obviously passionate about protecting children, but as an infosec professional, he also understands the security and privacy ramifications of weakening our current encryption standards. And while he acknowledges the difficulty of finding an ideal solution to the problem, he says that attacking encryption is not only wrong-headed, but is also unlikely to stop criminals:

CH: The idea that we should get rid of encryption is utterly ridiculous. I don’t even know how any rational person could argue that this is the answer. That would be like saying, “We need to put government cameras in every car because some cars are used for hit and runs; some cars are used in drug deals; some cars are used in human trafficking. Therefore we should put monitoring devices in every car so that the government can see what’s happening, because some of them are used by these really bad organizations”. Of course that would be ridiculous — just as ridiculous as saying “let’s get rid of or let’s monitor all encryption just because some bad people use it”. 

The fact is, some bad people do use encryption. But some bad people do use cars for crime. Some bad people use guns for crime, or kitchen knives. But we don’t need to monitor all of those things just because bad people use them. And besides, encryption is also used for very good things: It’s used for privacy; it’s used to transmit information back and forth securely; it’s used to keep your data safe.

Now, should platforms be doing more to protect children? Yes.

Not to pick on any one company, but let’s take Facebook and Twitter as examples. They both have policies that state “no pornography” on their networks. You can’t put nude pictures on Twitter, they state that in their user policies. But there are thousands of accounts of full-on hardcore pornography on Twitter, videos being posted, all sorts of things — and yet this violates their user policy! If you have a user policy that you can’t enforce, then why have it? And if you are going to put a policy out there that says “no nude pictures on this network”, then enforce it. 

You do see networks that do that, like Instagram. Everyone gets upset because they ban a picture of a female breast or something like that, but there’s a network that’s saying, “Hey, we have a policy that says none of this content; when that content gets posted, your account gets banned”. That’s enforcing the policy. 

Now, could they do better? Yes, and I think that when an account breaks that policy and the content could possibly be someone underage, then the account should be automatically reported to law enforcement. That should just be part of their user policies, because the way that a lot of this stuff is getting transmitted (when we’re not talking about darknet) is through social networks.
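
As a rough illustration of the enforce-then-escalate workflow Hadnagy is arguing for, a platform’s moderation logic might be shaped like the sketch below. The two classifiers and the reporting hook are placeholders only: real platforms pair automated detection with human trust-and-safety review, and U.S. providers report confirmed material to NCMEC’s CyberTipline rather than printing a log line.

```python
# A rough sketch of "enforce the policy, escalate when a minor may be
# involved". violates_policy, possibly_involves_minor, and the reporting
# hook are stand-ins for real detection, human review, and legal process.
from collections import defaultdict

strike_counts: defaultdict[str, int] = defaultdict(int)
BAN_THRESHOLD = 1  # "when that content gets posted, your account gets banned"

def violates_policy(post: str) -> bool:
    return "policy-violating" in post  # stand-in for real detection

def possibly_involves_minor(post: str) -> bool:
    return "suspected-minor" in post  # stand-in for a far harder judgment

def escalate_for_investigation(account: str, post: str) -> None:
    print(f"[ESCALATED] {account} queued for review and reporting")

def handle_post(account: str, post: str) -> str:
    if not violates_policy(post):
        return "allowed"
    strike_counts[account] += 1
    if possibly_involves_minor(post):
        escalate_for_investigation(account, post)  # automatic, not optional
        return "banned_and_reported"
    if strike_counts[account] >= BAN_THRESHOLD:
        return "banned"
    return "removed"

print(handle_post("acct1", "an ordinary post"))          # allowed
print(handle_post("acct2", "policy-violating content"))  # banned
```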

In the United States — and this is a really, really sad, terrible fact — “child erotica” is not illegal. Child pornography is. But child erotica is not illegal in America. So you can have erotic pictures of children (which are not considered pornographic), and you can post them. Getting rid of encryption is not going to stop that. So how could a social media network or a platform do better? Well, when people post child erotica, then that account should get flagged and watched, to see if it’s also linking to dark web sites. The ILF has approached so many social media outlets saying, “Look, if you want, we can partner with you. If you have accounts that are posting child erotica and you want us to just check them out (like, you don’t want to turn them over to law enforcement right away, you just want us to go investigate), we’ll find out if they’re posting other content elsewhere. We’ll do that!” But nobody wants to take that step. Nobody wants to go that far to protect children. And that presents a big problem.

But attacking encryption is not going to fix that. In 2018, there were something like 22 million child sex abuse images reported. In 2019 there were 48 million. In one year, the number of reported images jumped over a hundred percent. The supply and demand of 2020 is probably going to be staggering, when we see the numbers, because we’re all stuck at home, and that includes predators, that includes people interested in children. They’re stuck at home too. So the numbers are probably going to go through the roof. And much of that — I wouldn’t say all, but much of that — is being transmitted through clearnet.

If you ban encryption, do you really think the bad guys aren’t going to find a way around that?

It’s a hard one, because it’s easy to look at someone’s solution and go, “That’s not the right answer”. But I don’t have a replacement, and I hate that. I just know that banning encryption doesn’t make sense, because if you ban encryption, do you really think the bad guys aren’t going to find a way around that? Let’s say you ban encryption in America. What about all of these companies that are in Amsterdam or Seychelles or some small island in the middle of the ocean, wherever they’re running their VPNs from, are they going to comply with U.S. law? What’s America going to do? Are we going to become Russia or China and have a firewall in America that stops all ISPs from allowing remote VPN traffic? It’s just too far-fetched to think that we can actually make that happen. What you’re going to do is hurt all of the law-abiding citizens…but you’re not going to stop the criminals. Laws are great, and they need to be there, we need to have them, but the bad guys aren’t sitting at home going, “Oh, darn it, this is illegal, I shouldn’t do it”.

Hadnagy’s acknowledgement of the intractability of certain problems, as well as an awareness of his own limitations, is part of his overall attitude of humility. But in his view, humility isn’t merely an ethical imperative: It’s also essential to changing people’s behaviors, and to keeping them safe. As Hadnagy sees it, this kind of humility is something the security community needs more of — and is also something which can be a tremendous source of strength:

CH: I came to the conclusion maybe about seven years ago that for me to make a change in any of my clients or in people who will listen to me speak, I had to be willing to talk about my failures. 

And it’s not an easy thing to do, especially when you’re supposed to be an industry leader, when you’re supposed to be the guy that everyone’s looking to for answers, to stand up and say: “Let me tell you how I failed this month”. It’s a little scary. 

But what I found is something I didn’t anticipate: This has made me more powerful in the community. I’m willing to say, “Let me tell you about the time I got phished” or “Let me tell you about the time that I fell for this scam”. And people react to this. They say, “Well, if this guy can fall for it, I guess I’m not as dumb as people told me I was”. And then I can talk about these failures, and say, “But now here’s what I learned from it” or “Here’s the process I had in place before I fell for it, that let me quickly mitigate the issue”. 

I fell for a phish. So, yeah, granted, I did it, I fell for a phish. But I didn’t fall for the whole attack, because I had been educated, and because I had the proper processes in place, so that as soon as I realized I was phished, I was able to mitigate the risk and the potential danger quickly. And to someone who didn’t have those things in place, I wouldn’t say, “That’s why you’re a stupid human”. No, I would just say, “Well, that’s the lesson in it”. Any of us can fall for social engineering. So don’t hide it. And don’t feel like you’re a moron because you fell for it. 

We in the security industry can improve things by being more open about our failures.

We in the security industry can improve things by being more open about our failures. By talking about them a little more, and using them as examples: “OK, now this is what happened. I clicked that link, I went to a site that looked just like an Amazon login, and I typed in my username and password. And as soon as I realized it was actually a Russian domain, here’s what I did to fix it. Instead of running around, burning my computer, and crying and wailing about it, or just ignoring it and brushing it under the rug, here are the 10 things I did very quickly to fix the problem, so that I didn’t have to worry about the aftereffects of being breached”. 

That’s a powerful lesson, because a lot of times people fall for an attack and think: “Wait, was I not supposed to do that? Eh, I’ll just go on with my day and we’ll see what happens” or “It’s just my credit card, they’ll make me whole again”. I can’t tell you how many times I’ve heard that! But they’re not thinking about the fact that the bad guys now have their name, their address, their social security number. And that all of those things make them vulnerable to 50 other attacks. 

Hadnagy is passionate about effective communication, and believes that the security community needs to do a better job of educating the general public about the threat landscape — an approach which may ultimately be the best way to protect people from social engineering attacks:

CH: We’re not doing enough as a security community to educate people. 

When Target was breached, and all of those credit cards were hacked, there was a news story that said “Yeah, but the credit card companies have to make you whole, so it’s not as bad as everyone thinks”. OK, great, but what about all the other data they had? As soon as that story came out, there was a phishing email that was supposedly from Target, saying “Hey, we’re giving you this free credit report software, install it now”. And it was malware. And because the news was just saying “Don’t worry about it, the credit card companies will make you whole”, everyone just went on with their life ignoring the problem. So we can do a better job of educating people. We could do a better job of saying, “Look, here are all of the things that could happen with these breaches”. 

Right now, there’s a huge rise in something called sexploitation. Bad actors are exploiting young girls. When there’s a major data breach, for example the Nintendo breach, or a breach of a social media application, they get a bunch of passwords. Well, there was a survey done recently in which 68% of the people surveyed admitted to using the same password on all of their websites, or on many websites. So what happens is that the attacker takes a gamble. They take the Nintendo password, and they email their target. So let’s say I’m the target: they’ll say, “Hey Chris, we have your password” — and they put the password in the email — “and we’ve hacked your laptop, and we have nude pictures of you, and if you don’t do X, Y, and Z, then we’re going to put these pictures on Facebook”. And the victim sees their password and thinks, “Oh my God! That’s my password! They definitely hacked me”. They believe it, and they get onto Snapchat with these attackers, and they do something compromising, and now the attacker really does have nude pictures of the victim, when they didn’t before. And they’ll say, “OK, now that we have these, you’re going to do this whenever we say or we’re going to embarrass you”. 

And the reason that this is happening is because we’re not doing a good enough job of educating that age group, of saying, “Hey kids, here’s what’s happening. If you get this email, go to your parents right away. Don’t fall for it. Don’t do what they say, because it’s most likely not true, and even if it is, we can take care of it, or we can help you take care of it, so that you don’t give them more humiliating material”. But because we’re not doing that, people are, sadly, falling for it more and more.
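
One concrete defense against the password-reuse gamble these scams depend on is checking whether a password already circulates in breach dumps. The sketch below uses Have I Been Pwned’s public Pwned Passwords range API, a k-anonymity design in which only the first five characters of the password’s SHA-1 hash ever leave your machine; it assumes the API remains free to call, as it is at the time of writing.

```python
# A minimal sketch: check a password against known breach dumps via the
# Have I Been Pwned "Pwned Passwords" range API. Only the first five hex
# characters of the SHA-1 hash are sent; the password itself never is.
import hashlib
import urllib.request

def times_pwned(password: str) -> int:
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    prefix, suffix = digest[:5], digest[5:]
    url = f"https://api.pwnedpasswords.com/range/{prefix}"
    with urllib.request.urlopen(url) as resp:
        body = resp.read().decode("utf-8")
    # Each response line is "HASH_SUFFIX:COUNT" for every leaked hash
    # sharing our five-character prefix; look for ours among them.
    for line in body.splitlines():
        candidate, _, count = line.partition(":")
        if candidate.strip() == suffix:
            return int(count)
    return 0

if __name__ == "__main__":
    n = times_pwned("password123")  # a deliberately terrible example
    print(f"Seen {n} times in breach data. Never reuse a password like this.")
```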

SecureMac would like to thank Chris Hadnagy for taking the time to talk with us. If you’d like to learn more about Chris and his work, you can follow him on Twitter, stop by his company’s website, or listen to his podcast. To find out more about the Innocent Lives Foundation, or to contribute to their efforts, please visit the ILF website.
