SecureMac, Inc.

Cybersecurity in the Age of AI 

June 7, 2023

AI cybersecurity tips and more. How is AI a cyber threat? What can you do to protect yourself in an age of ChatGPT, voice cloning, and deep fakes?

Artificial Intelligence (AI) is on the rise. It’s transforming industry, education, medicine, and media. But like any new technology, AI may lead to unexpected cybersecurity outcomes. We are still in the early days of artificial intelligence, but there are already signs that AI may have a serious negative impact on digital security and privacy. Here’s what you should know about cybersecurity dangers in the coming AI age—along with some tips on how to stay safe.

How can AI be a cybersecurity threat?

Artificial intelligence is neither good nor bad—it’s just new tech. But what we’ve seen time and again over the past 30 years is that threat actors jump at the chance to abuse new technological developments. And that’s what’s happening now with AI. Here are some major developments in AI and how they’re being weaponized by the bad guys:

  1. AI voice cloning

    AI voice cloning refers to the use of AI technology to create audio that mimics a specific person’s voice. Using sample audio files from a real person, AI can create an audio profile that will output speech that sounds very much like that individual. Once the AI model is trained, it can be given text input and will read it out in the person’s voice. This type of AI is already being used in telephone voice scams and vishing attacks, and may be able to bypass voice recognition authentication systems used by banks and other large organizations.

  2. Generative AI text

    Generative AI creates text and other media in response to written prompts. The best-known example is ChatGPT: give it a prompt, and it will output written text in perfectly idiomatic English. It works like a chatbot, meaning that it can simulate a conversation with a real human being. As we will see, bad actors are already finding ways to use generative AI for malicious purposes.
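
    To give a sense of how little effort this takes, here is a minimal Python sketch of asking a GPT model for text. It assumes OpenAI’s openai client library (the 0.27-era interface that was current when this article was written) and a valid API key; both are assumptions for illustration only.

        import openai  # pip install openai

        openai.api_key = "YOUR_API_KEY"  # placeholder -- never hard-code a real key

        # Ask the model to draft a short, professional-sounding message.
        response = openai.ChatCompletion.create(
            model="gpt-3.5-turbo",
            messages=[
                {"role": "user",
                 "content": "Write a short, polite email rescheduling a meeting to Friday."},
            ],
        )

        print(response.choices[0].message.content)  # fluent, idiomatic English

    A handful of lines like these can produce fluent, idiomatic text on demand, which is exactly why the old “look for bad grammar” advice is losing its value (see the phishing tip later in this article).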

  3. Deep fakes

    Deep fakes are AI-generated images or videos that can be created completely from scratch. Early versions of deep fakes were as easy to spot as badly Photoshopped images. But the technology is becoming so sophisticated that it’s now difficult to distinguish between AI-generated media and the genuine article. This opens up all sorts of possibilities for threat actors looking to engage in online disinformation campaigns, social engineering, and more.

  4. AI-assisted development

    AI is a powerful tool for assisting people with technology-related tasks. It’s already being used to help software engineers do their work more efficiently. But malware developers are also software engineers. The danger is that AI will let skilled malware developers work faster, and will bridge gaps in knowledge or technique for moderately skilled actors, letting them build and deploy malware that would otherwise be out of their reach. Overall, the risk is that AI development tools may increase the volume and pace of new malware threats.

The long-term effects of AI on cybersecurity are still unknown. And in the coming years, there will likely be even more abuses of artificial intelligence by bad actors. But there are things you can do right now to help protect yourself.

Cybersecurity tips for an AI world

The recommendations below are based on a premise that may at first seem counterintuitive: Despite its novelty, AI will probably not create brand-new cyber threats in the near term. Rather, AI is going to increase risk in the existing threat landscape by making current bad-actor tools, tactics, and techniques far more effective and, thus, more dangerous. The silver lining is that the steps needed to stay safe are, in large part, things we should all already be doing. The only difference is that now they’re more essential than ever before.

  1. Keep it updated

    We’re entering an age of AI-assisted vulnerability scanning and malware iteration. It’s imperative to ensure that you aren’t using vulnerable software. Set up automatic updates for all operating systems and apps—and enable Rapid Security Responses to receive urgent Apple security patches. 

  2. Get a password manager 

    In an era of increased security risks, all accounts need to be protected by strong, unique passwords—and monitored so you know if they’ve appeared in a data breach. To be blunt: There’s no good way to do this manually. Get a password manager today. If you’re entirely on Apple platforms, the built-in iCloud Keychain is all you need. If you need a cross-platform password manager, the free and open-source Bitwarden or the subscription-based 1Password are both good options.
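
    For a sense of what “strong and unique” means in practice, here is a rough Python sketch of the kind of random generation a password manager does for you behind the scenes. The character set and length are illustrative choices, not a recommendation from any particular product.

        import secrets
        import string

        # Letters, digits, and a few symbols, drawn from a cryptographically
        # secure random source (Python's secrets module).
        ALPHABET = string.ascii_letters + string.digits + "!@#$%^&*-_"

        def generate_password(length: int = 20) -> str:
            return "".join(secrets.choice(ALPHABET) for _ in range(length))

        print(generate_password())  # different on every run

    Nobody can memorize dozens of strings like that, which is exactly why you want a password manager to generate, store, and fill them for you.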

  3. Use 2FA for everything

    AI means the risk of successful password phishing will increase, so you should protect your accounts with multi-factor authentication whenever possible. SMS-based two-factor authentication is good; app-based or key-based 2FA is even better.
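
    To see why app-based codes are preferable, here is a short Python sketch of how a time-based one-time password (TOTP) works, using the third-party pyotp library as an illustrative example. The secret is shared once when you enroll, and every code after that is computed locally on your device rather than sent over the phone network.

        import pyotp  # pip install pyotp

        # Shared once between the service and your authenticator app at enrollment.
        secret = pyotp.random_base32()

        totp = pyotp.TOTP(secret)
        code = totp.now()           # 6-digit code derived from the secret + current time
        print(code)

        # The service runs the same computation and checks that the codes match.
        print(totp.verify(code))    # True within the current ~30-second window

    Because nothing is sent by text message, there is nothing for a SIM-swapping attacker to intercept.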

  4. Add account PINs 

    AI will enable voice-cloning scams aimed at the call center workers who handle customer accounts. Add an extra layer of protection to your accounts by using customer service PINs when available. Many businesses will allow you to protect your account with a special PIN. If you call them for account service, they’ll ask for this PIN before they make changes. It’s a good way to prevent someone from calling in and impersonating you. They may have your voice and personal information, but they won’t have your PIN!

  5. Opt out of voice verification

    If a bank or organization is offering voice-based authentication for your account, see if you can opt out in favor of strong, unique passwords supplemented by two-factor authentication. Researchers have demonstrated how AI voice cloning technology can bypass voice-based biometrics. Don’t rely on voice verification to keep your account safe.

  6. Rethink the old phishing advice

    For years, people have been advised to look for grammatical errors and spelling mistakes to spot a phishing email. But with generative AI tools like ChatGPT, attackers can produce a well-written, professional email in seconds—even if they write their prompts in another language. Phishing emails will get better, and the “tells” that were common in the past will no longer be as prevalent. In short, from now on, dial your skepticism way up when evaluating emails!

  7. Confirm information independently

    AI voice simulation will make vishing attacks more effective. Due to the prevalence of phone scams, we already recommend that folks independently confirm unsolicited calls concerning account issues, delivery problems, unpaid invoices, and so on. Our advice is always to thank the caller, hang up, and follow up on your own: check for alerts or messages in your online account area, or call a customer service number that you look up yourself. We reiterate that advice here: It’s more important than ever to confirm (alleged) issues independently before taking any action, even if you think you recognize the voice of the caller.

  8. Screen your incoming calls

    To help yourself follow the advice above—and avoid headaches—make it tough for phone scammers to talk to you directly. You can use iOS settings such as Silence Unknown Callers to limit incoming calls to your contacts only. Screen incoming calls with your cybersecurity smarts as well: Remember that attackers use techniques like caller ID spoofing, so don’t trust a number simply because it looks familiar.

  9. Limit your exposure

    AI will make targeted scams and phishing attacks easier for attackers to perpetrate. In general, one of the best defenses against targeted attacks is to avoid the bad guys’ attention in the first place! Limit your online presence as much as you can by using privacy checkup tools on social networks. Make sure that strangers can’t easily find your account, see your posts, or locate you using a search engine. Limit your posts to an audience of friends and family only. If possible, limit samples of your voice online as well: Remember that AI voice cloning tools need audio samples to train on, so make yours hard to get!

  10. Be alert to social disinformation

    Realistic AI-generated video and images will help purveyors of online disinformation go viral on social media. This class of bad actors isn’t limited to trolls and pranksters: They are often sponsored by nation-states or government intelligence agencies seeking to sow confusion and discord in the societies of geopolitical rivals. Before reacting to a provocative image or video you see online, remember to consider the possibility that AI is being used to spread disinformation. Question, source, and fact-check media that you find online before taking it at face value—or sharing it with others.

  11. Be aware of social scams

    It has always been easy to pretend to be someone you’re not online. Now it’s even easier, and the impersonation can extend to video and voice chat. Beware of online scams and fraud, especially on social sites like Facebook and LinkedIn. Don’t assume that anyone you’ve never met in person is who they say they are—or that they are a person at all! Educate yourself and your loved ones about common scams like LinkedIn job offer scams, Facebook romance scams, and so on.

We believe that no matter how much technology may change, knowledge will always be one of your best defenses. Keep up to speed on emerging scams and new cybersecurity developments in AI by following our podcast: The Checklist. If you’d like to learn more about AI in general, here is some recommended reading to get you started:

What Is ChatGPT Doing…and Why Does It Work?
The Supply of Disinformation Will Soon Be Infinite
From Scams to Music, AI Voice Cloning Is on the Rise
