SecureMac, Inc.

Checklist 49: Artificial Intelligence and Security

August 10, 2017

The rise of artificial intelligence in the consumer space seems to have happened overnight. It feels like we’ve gone from Apple’s introduction of Siri to virtual assistants everywhere in the blink of an eye. On today’s episode, we’ll be covering some security concerns that accompany the rise of AI.

Artificial intelligence is on the rise, and it powers many of the devices we use on a day-to-day basis. On today’s episode, we’ll be covering some current problems posed by this new technology, as well as some issues that might arise in the future when it comes to AI.

The rise of artificial intelligence in the consumer arena has seemingly happened overnight, and we’ve gone from Apple’s revolutionary introduction of Siri to ubiquitous virtual assistants in our daily lives in the blink of an eye. While movies have long predicted AI with evil intent, the reality is much more mundane: instead of Skynet, we have Siri. Instead of HAL, we have Google Home. While the current generation of AI hasn’t yet become self-aware and decided that the world is better off without us humans, there are still some issues when it comes to the security of our virtual assistants.

  • Virtual assistants are always listening
  • Insecure servers, data, and devices
  • Sometimes Siri can be *too* helpful
  • Chatbots gone bad
  • The dilemma faced by self-driving cars

Virtual assistants are always listening

George Orwell came really close to the mark in Nineteen Eighty-Four. He might have been off by a few decades, but the prevalence and sheer multitude of always-on IoT devices could be something straight out of his novel. There are even so-called Smart TVs that have microphones and cameras built right in!

According to Orwell, “Big Brother is watching you,” but in the case of AI it might be more appropriate to say “Virtual assistants are always listening.” Anyone who has used Siri, Alexa, Google Home, or similar virtual assistants knows they have to address the AI by name before it will be able to do anything. These “wake words” vary from device to device, but all serve the same purpose: letting the device know that the user is trying to interact with the virtual assistant and to start processing the question or command that follows the wake word. But…how does the device know to start listening for the wake word in the first place? It doesn’t. The way these devices work is by always listening to everything, all the time.

This obviously has led to some privacy concerns, as it’s essentially the same as having a “hot mic” in your house at all times, but it’s a necessary evil for these virtual assistants to be able to function correctly in the first place. The processing power required to analyze and respond to complex queries is way beyond the capabilities of the hardware packed into these small devices, which is why they need an internet connection to massive farms of cloud servers to do the actual hard work. Have you ever asked Siri to set a timer, waited a while with nothing happening, and then been told she’s having trouble? That usually happens when you have a weak Wi-Fi or cellular signal and she can’t reach Apple’s audio processing servers.

When things are working correctly, the device won’t actually send audio to the cloud servers for processing and analysis until after it hears the wake word; once it does, it also includes a bit of extra audio from just before the wake word along with whatever it heard afterward. Sometimes a device might hear something it thinks is “close enough” to the wake word to trigger audio analysis and accidentally send audio to the cloud servers that it wasn’t supposed to. This behavior has resulted in some unexpected legal tussles for the makers of these virtual assistants, most recently when Arkansas police demanded that Amazon hand over audio data gathered by a murder suspect’s Echo device. Amazon resisted, citing the First Amendment’s free speech protections, but the fact remains that all of the data generated by these devices ends up being stored in the cloud. We might have mentioned it once or twice before, but just because something is stored in the cloud doesn’t mean it’s secure from prying eyes (or ears, as the case may be).
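
To make that behavior a little more concrete, here’s a rough, purely illustrative sketch in Python of how a device might listen to everything while only uploading audio from around the wake word. The detect_wake_word() and send_to_cloud() functions are made-up stand-ins, not any vendor’s actual code:

    import collections
    import itertools

    PRE_ROLL_CHUNKS = 16   # a short buffer of audio kept "just in case"
    POST_ROLL_CHUNKS = 80  # how much speech to capture after the wake word

    def detect_wake_word(chunk):
        # Hypothetical on-device detector: a cheap local check that never
        # needs to contact the cloud.
        return b"alexa" in chunk.lower()

    def send_to_cloud(audio_chunks):
        # Hypothetical upload to the vendor's speech-processing servers.
        print(f"uploading {len(audio_chunks)} chunks for analysis")

    def listen_forever(microphone):
        # The device hears everything, but only keeps a small rolling buffer;
        # older audio silently falls off the end and is never uploaded.
        mic = iter(microphone)
        pre_roll = collections.deque(maxlen=PRE_ROLL_CHUNKS)
        for chunk in mic:
            if detect_wake_word(chunk):
                # Ship the pre-roll plus whatever follows the wake word:
                # this is the "extra audio from just before" described above.
                captured = list(pre_roll) + [chunk]
                captured += list(itertools.islice(mic, POST_ROLL_CHUNKS))
                send_to_cloud(captured)
                pre_roll.clear()
            else:
                pre_roll.append(chunk)

If the device mishears something as the wake word, the same path runs anyway, which is exactly how unintended audio can end up on someone else’s servers.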

Insecure servers, data, and devices

We mentioned earlier that the hardware in these devices isn’t up to the task of running a fully functioning virtual assistant on its own – the devices rely on powerful cloud servers to store, analyze, and respond to the questions and commands heard by the end-user hardware. It would be great, from a privacy perspective, if these devices could do all of the audio processing and analysis without the need for an internet connection, but the technology behind these virtual assistants is still relatively young and we’re just not at that stage yet. Some basic audio processing can be done locally without an internet connection, such as the offline mode for Apple’s Dictation feature. The only reason that feature can work offline is that it isn’t actually doing anything with the audio it hears – it’s simply spitting it back out in text form. That isn’t even in the same league as the complex audio analysis required to interpret a question or command and respond appropriately, hence the need for an internet connection to the cloud for virtual assistants to work correctly.

When it comes to securely storing that data in the cloud, your mileage may vary depending on the company in question. While Apple, Amazon, and Google are large companies with top-notch security teams and take the security of their cloud servers very seriously, other companies are a little…ok, maybe a lot more lax in terms of cloud security. In 2015, Mattel released the Hello Barbie doll, which could answer questions and participate in conversations with children all over the world. Security researchers took note, and quickly identified numerous flaws in the Hello Barbie companion app as well as on the cloud servers themselves, which could have allowed hackers to eavesdrop on the built-in microphone, access stored audio files, or grab personally identifiable information such as a person’s home or business address.

While Hello Barbie’s capabilities as a virtual assistant are eclipsed by the likes of Siri, Alexa, and Cortana, it still goes to show that all of this data being generated, whether on the device or stored in the cloud, is a ripe target for hackers. Security concerns aren’t limited to the virtual world, either: beyond the audio data itself, the way these devices are marketed as the digital hub for a connected smart home could pose risks to physical security in the real world as well.

Many of these virtual assistant devices can integrate with IoT “smart locks.” While it’s nice to be able to make sure your front door is locked from halfway across the country, it might not be the smartest idea to put a Wi-Fi-enabled lock on your door. Some of the smart lock manufacturers that integrate with virtual assistants have taken extra steps to ensure security; August, for example, requires the user to provide a four- to twelve-digit PIN in addition to telling the AI to unlock the door. Now, let’s be clear here: locks basically exist to keep honest people honest. It’s not hard to pick the locks found on most front doors, and it doesn’t even take a skilled lockpicker much time to do so. If a would-be thief didn’t care about making some noise, they could always just break a window to get into your house anyway. That being said, if you decide to purchase a smart lock that integrates with your AI virtual assistant, be sure to go for one that adds some additional security. You don’t want some random person walking up to your kitchen window, shouting “Alexa, unlock my front door!”, and gaining access to your house that easily.
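
To illustrate that design choice, here’s a hypothetical sketch of a voice-unlock handler that refuses to touch the lock until a second factor checks out. None of this is August’s actual API; the command phrasing, PIN value, and function names are invented for the example:

    import hashlib
    import hmac
    from typing import Optional

    # Hypothetical second factor for a voice-controlled smart lock: the spoken
    # unlock command alone is never enough. A PIN must also match before the
    # lock is ever told to move. The PIN hash would be set during enrollment.
    STORED_PIN_HASH = hashlib.sha256(b"482913").hexdigest()

    def pin_matches(spoken_pin: str) -> bool:
        candidate = hashlib.sha256(spoken_pin.encode()).hexdigest()
        # Constant-time comparison to avoid leaking information via timing.
        return hmac.compare_digest(candidate, STORED_PIN_HASH)

    def handle_voice_command(command: str, spoken_pin: Optional[str]) -> str:
        if command != "unlock the front door":
            return "Sorry, I can't help with that."
        if spoken_pin is None or not pin_matches(spoken_pin):
            # Someone shouting through the kitchen window stops here.
            return "A valid PIN is required to unlock the door."
        return "Unlocking the front door."  # stand-in for the real lock call

    print(handle_voice_command("unlock the front door", None))      # refused
    print(handle_voice_command("unlock the front door", "482913"))  # allowed

The point isn’t the specific code, it’s the design: the voice interface should never be the only thing standing between a stranger and your deadbolt.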

Sometimes Siri can be *too* helpful

Helpfulness is a trait we expect from our virtual assistants, right? Even if it takes Siri four tries to play the right song, she’s trying her best! Unfortunately, at times, Siri can be a bit too helpful.

Over the years there have been a number of incidents where Siri either divulged sensitive information from a locked iPhone or bypassed the passcode entirely, allowing access to the device itself. In April 2016, a flaw was discovered where an attacker could ask Siri to “Search Twitter” from the lock screen of an iPhone 6s. When Siri asked what to search for, the attacker would say “at-sign yahoo dot com” or a similar domain name, which would bring up a list of tweets containing valid e-mail addresses. The attacker then just needed to tap on a tweet containing an e-mail address and 3D Touch the address to bring up a contextual menu that allowed them to add a new contact to the address book. From there, adding a photo to the new contact gave them access to all the photos on the target iPhone.

Another similar attack surfaced back in November 2016, when a security flaw was found where attackers could bypass the passcode on an iPhone, gaining access to photos and messages stored on the device. All an attacker needed was the phone number and physical access to the iPhone itself for a couple minutes. In the event that the attacker didn’t know the phone number for the device, they could simply ask Siri, who would helpfully provide that information. By following a specific set of steps, including asking Siri to enable and disable the Voice Over feature at the right time, the attacker could create a new contact on the target iPhone, and from there browse through photos stored on the device or view conversations with any of the other contacts in the address book.

In February 2017, a user on Twitter found out that if she asked the right questions, Siri was all too happy to provide a bunch of personal information from a locked iPhone she had found.

By asking “What’s my name?” she was able to learn the iPhone owner’s name.

“Who do I call the most?” pulled up the recent calls list.

By looking at the phone’s notifications and asking Siri the right questions, she was able to determine the iPhone owner’s first and last name, where the owner lived, and where her car was parked. Yikes.

In June 2017, a security researcher discovered a flaw that could allow thieves to hide a stolen iPhone from Find My iPhone. By simply activating Siri from the lock screen and saying “Mobile Data,” a thief could bring up a toggle switch and use it to take the iPhone offline. This would effectively make the stolen iPhone invisible if the rightful owner tried to use Find My iPhone, and the owner wouldn’t be able to initiate a remote wipe of the device either. The thief could then try to bypass the passcode screen at their leisure, without fear of being tracked down by the police.

Apple has been quick to patch many of these problems, but people who are concerned about their privacy might want to take the extra step of disabling Siri on the lock screen entirely. This can be done by opening the Settings app in iOS, tapping Siri, and then sliding the “Access When Locked” toggle to the off position.

Chatbots gone bad

Artificial intelligence isn’t limited to the virtual assistants on our smartphones and in-home devices, and it isn’t as new a technology as people think it is – in fact, people have been having conversations with natural language processing software for more than 50 years! Some of our listeners might be familiar with ELIZA, an early ancestor of modern AI. Created in the mid-1960s at MIT, ELIZA was a chatbot that could simulate conversation through pattern matching and string substitution.
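
ELIZA’s trick was surprisingly simple. As a rough, modernized illustration (this is not Weizenbaum’s original DOCTOR script; the rules and responses below are invented), a handful of regular-expression patterns and pronoun swaps are enough to produce an ELIZA-flavored exchange:

    import re

    # A tiny, ELIZA-flavored rule set: match a pattern, then echo part of
    # the user's input back inside a canned response.
    RULES = [
        (r"i need (.*)", "Why do you need {0}?"),
        (r"i am (.*)", "How long have you been {0}?"),
        (r"my (.*)", "Tell me more about your {0}."),
        (r"(.*)\?", "What do you think?"),
    ]

    # Simple pronoun swaps so echoed phrases read naturally.
    REFLECTIONS = {"my": "your", "i": "you", "me": "you", "am": "are"}

    def reflect(phrase):
        return " ".join(REFLECTIONS.get(word, word) for word in phrase.split())

    def respond(user_input):
        text = user_input.lower().strip()
        for pattern, template in RULES:
            match = re.match(pattern, text)
            if match:
                return template.format(*(reflect(g) for g in match.groups()))
        return "Please, go on."  # default when no pattern matches

    print(respond("I am worried about my smart speaker"))
    # -> How long have you been worried about your smart speaker?

Match a pattern, swap a few pronouns, and echo the user’s own words back, and you get a surprisingly convincing illusion of a listener.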

Chatbots evolved through the years, becoming more and more adept at mimicking human conversational patterns. SmarterChild was an extremely popular chatbot originally released for AOL’s Instant Messenger platform in 2001. SmarterChild used the input from conversations with millions of people to help learn and shape its responses, providing a fairly realistic illusion of conversation with an actual human being. It had many of the same capabilities we currently see in modern AI like Siri – SmarterChild could provide sports scores, weather forecasts, movie showtimes, and more.

While they kind of dropped out of fashion for a while, chatbots have been making a comeback in a huge way in recent years. The rollout of these next-generation chatbots has been a bit bumpy at times, however. One particularly cringe-worthy example of machine learning gone bad came in March 2016, when Microsoft released Tay, a Twitter bot it described as an experiment in “conversational understanding.” Like SmarterChild, Tay learned from the conversations it had with users and used what it learned as the basis for future conversations. Unfortunately, trolls quickly took advantage of this fact, with disastrous results. Within 24 hours, Tay had turned into a vulgar, racist, and misogynistic chatbot – a far cry from Microsoft’s original intent. After a vain attempt at deleting some of Tay’s more offensive tweets, Microsoft eventually gave up and pulled the plug on Tay.

While Tay wasn’t originally created with malicious intent in mind, other chatbots are. In 2016, a number of people fell victim to a chatbot on the Tinder dating app that pretended to be a female user. After exchanging a few back-and-forth messages with a potential victim to lure them in, the chatbot would convince the victim to click a link to become “verified by Tinder.” Clicking the link resulted in a form asking for the victim’s username, password, and e-mail address. To complete the “verification process,” the user needed to “confirm their age” by providing a credit card number. You can probably guess where this is going…people who fell victim to this chatbot-powered form of phishing ended up with expensive memberships to porn sites, with nary a date in sight.

Sometimes it’s not the chatbots themselves that are the problem, but bad guys impersonating the chatbots. Modern-day chatbots are increasingly being used to provide basic customer service – somewhat like the phone tree menus you hear when you call a big company (press 1 for billing, press 2 for sales, and so on). Impersonation of an official chatbot recently resulted in a massive theft of the Ethereum cryptocurrency. In early July 2017, cybercriminals infiltrated Ethereum communities on the Slack chat messaging system. By impersonating the official Slack chatbot, they sent fake messages to every member of the chat channel that looked like they came from an administrator. These fake messages claimed that one of the wallet services used to store the Ethereum cryptocurrency had been hacked, and urged users to click a link and log in to check their balance. Once again, you can probably guess where this is going…the link went to a phishing site, and the bad guys used the stolen login credentials to drain the victims’ wallets. In less than a week, the cybercriminals had stolen more than half a million dollars’ worth of the cryptocurrency.

The dilemma faced by self-driving cars

We’ve spent most of this episode talking about AI in terms of virtual assistants or chatbots that base their decisions on input you provide. You ask Siri a question, and she supplies an answer. You tell Alexa to turn on the lights, and she turns on the lights. There’s a whole other class of AI, however, that makes complex decisions based on external input and events. One example of this type of autonomous AI can be found in self-driving cars. By using a variety of sensors, self-driving cars can tell where they are on the road, identify other cars, stop signs, and stop lights, stick to the speed limit, and generally try to avoid getting into accidents. This technology is far from perfect, however, and the stakes are much higher when you’re talking about a two-ton chunk of metal going 70 miles per hour. If Siri goofs up and plays the wrong song…no big deal. If your self-driving car goofs up, however…

There have been some recent examples of self-driving cars failing at their prime directive. In December 2016, Uber deployed some self-driving cars in San Francisco in open defiance of California’s state regulators. It turns out the regulators had good reason for concern, as Uber’s self-driving cars blew through at least six red lights. Thankfully, and luckily for Uber, nobody got hurt, but the company took its self-driving cars off San Francisco’s streets shortly thereafter.

One man using Tesla’s Autopilot feature wasn’t so lucky, however. In May 2016, Joshua Brown enabled his Tesla’s self-driving mode, and about a half hour later the car drove straight under the trailer of a semi truck that was turning across the highway. An NHTSA report later stated that during the 37 minutes of Autopilot use leading up to the fatal crash, Mr. Brown had his hands on the steering wheel for only 25 seconds total. Tesla has since updated the Autopilot feature, requiring more frequent hand contact with the steering wheel and disabling self-driving mode altogether if the driver goes too long with their hands off the wheel.

Not all of the problems faced by self-driving cars are immediate, however. The technology is still in its infancy and obviously not ready for wide-scale deployment at this time. In the future, however, self-driving cars will face some tough ethical decisions when it comes to the safety of their passengers as well as others on the road. Should a self-driving car try to protect its occupants at all costs, even if it means injuring or killing someone else? Should a self-driving car deviate from its lane and crash into another car in order to avoid hitting a child who runs into the street? What if there’s an entire group of people in the road and hitting at least some of them is unavoidable – who should the car hit, and who should it try to miss?

There aren’t easy answers to any of these questions, and while we’re not quite at the point of needing to come up with them yet, they are going to play an important role in the future as autonomous cars become more prevalent.

That’s it for today’s episode of The Checklist!

Do you have a topic you’d like to see us cover in a future episode, or a security question in need of an answer? If you have anything to ask us, send us an email at checklist@securemac.com!
