SecureMac, Inc.

Checklist 166: Warnings from the FUN Bureau

December 6, 2019

On this week’s Checklist, we’ll look at some dumb ways to buy a smartwatch. We’ll share a PSA from the FBI. And we’ll talk politics and app security by way of FaceApp.

Here’s the Checklist for this week:

  • When is a smartwatch a dumb idea? 
  • Is a smart TV really that smart? 
  • And facing the dangers of FaceApp… 

Getting smart about smartwatches

The first story this week involves security issues with smartwatches designed for kids. 

When it comes to smart things, it’s always good to ask: “Do I really need this?” But despite our usual skepticism about “smart” things, we have to admit that there are plenty of good reasons that parents might want their children to have a smartwatch: The ability to create safe contacts lists and block problem contacts, family voice chat, GPS tracking and an SOS button for emergencies, and even a “virtual fence” system to prevent little kids from wandering off.

So the question isn’t really “Is it dumb to want a smartwatch for my kid?”, but rather, “What are some not-too-smart ways to pick said watch?”.

Generally speaking, if you are going to get a smartwatch (or smart anything, really) for your child, you’ll want to look out for the following red flags:

  1. Deals that are “too good to be true”

    If a smartwatch is selling for $50, the company in question probably didn’t invest much in testing and secure development.

  2. Unknown companies

    If you’ve never heard of the manufacturer before, and if there’s little background information on them online, then it’s going to be hard to check out their reputation.

  3. Companies that don’t know about security

    Electronics and computer companies like Samsung, Apple, and Alphabet have large, experienced security teams familiar with the challenges of IoT development. But a company which specialized in plush teddy bears until just last week? Probably not so much.

  4. Bargain hunting for safety

    We love a deal as much as anyone. But when it comes to cybersecurity, it’s generally best to buy the best protection you can afford. Bargain basement “solutions” can end up being more trouble than they’re worth — as this week’s story demonstrates.

The watch that caused all the problems in this week’s story was the M2 smartwatch, manufactured by the Chinese company SMA. Security researchers at AV-TEST uncovered a number of serious security flaws with the device. 

The first major problem was that SMA didn’t lock down the device’s backend as it should have. Anyone with the know-how and tools would be able to use the publicly available API (Application Programming Interface) to access user data over the web. This is the sort of thing that would be well within the abilities of many malicious actors. In the case of this smartwatch, the backend data would include the location and personal data of children — obviously not something you want accessible to strangers.

The second issue with the M2 watch was that it appeared to have a broken or nonexistent verification protocol for its authentication tokens. APIs will often require an API key or token from anyone trying to access app data, sort of like a password, to ensure that the requesting party is actually authorized to do so. These keys are typically long, random strings generated by the host company. Unfortunately, it seems that something went seriously wrong with SMA’s verification process: The company’s servers never even attempted to check the validity of supplied tokens. In other words, even if the proper token was supposed to be “82170ef6e88cf1049c5a0af4a4f24976b641c0cb”, a hacker could input the word “tomato” as the API key and still receive access to the data they wanted. Again, laughably bad security — but not so funny when you realize that we’re talking about exposing the personal data of little kids.
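
To make the flaw concrete, here is a minimal sketch, in Python, of the check that was apparently missing. The Flask-style endpoint, the X-Api-Token header, the route path, and the token store below are hypothetical illustrations rather than SMA’s actual API, but they show the basic idea: before returning any data, the server compares the supplied token against the tokens it actually issued.

```python
# Minimal sketch (hypothetical endpoint and names): how a backend should
# verify API tokens before returning user data. According to AV-TEST's
# findings, the M2 backend skipped a check like this entirely, so any
# string was accepted as a valid token.
import hmac

from flask import Flask, abort, jsonify, request

app = Flask(__name__)

# Hypothetical store of issued tokens; a real backend would keep these in a
# database and issue one long, random token per account.
VALID_TOKENS = {
    "82170ef6e88cf1049c5a0af4a4f24976b641c0cb": "user-123",
}


def token_is_valid(supplied: str) -> bool:
    """Compare the supplied token against every issued token in constant time."""
    return any(hmac.compare_digest(supplied, issued) for issued in VALID_TOKENS)


@app.route("/api/v1/children/<child_id>/location")
def child_location(child_id: str):
    supplied = request.headers.get("X-Api-Token", "")
    if not token_is_valid(supplied):
        abort(401)  # "tomato" and every other guessed token is rejected here
    # Only callers holding a genuine token reach this point.
    return jsonify({"child_id": child_id, "lat": None, "lon": None})
```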

These vulnerabilities mean that a malicious actor could potentially use the web API to connect to SMA’s servers and collect reams of data on children and parents using the M2 smartwatch. This data would include things like the children’s location, the types of devices they were using, as well as SIM card and IMEI data. Armed with this kind of information, an attacker could track children, or even download the M2 companion app onto their own smartphone and use the app’s vulnerabilities to pair their own device to a child’s, lock the parent out of the app, and communicate directly with the child.

The good news — at least for our US readers — is that this watch doesn’t appear to be sold in the States. But it’s clearly a disturbing story, and an excellent opportunity to talk about best practices for picking smartwatches for children. Here are three things to consider:

  1. Beware of bargains

    As we said before, when you’re shopping for something designed to keep your child safe, remember that low-end products are not the way to go, for the simple reason that the companies in question probably didn’t invest much in secure development. The sad case of the M2 smartwatch shows what happens when manufacturers cut those corners!

  2. Find your why

    Ask yourself why you’re actually getting the smartwatch. If it’s for any one of the good reasons mentioned above, fair enough. But if you’re really just looking for an easy way to monitor your kid’s location or online activities, remember that this functionality could be abused by bad actors as well. This is precisely why we don’t recommend parental tracking and spying apps in general.

  3. Buy what you need (and what they want)

    If you do want to get a smartwatch for your child, maybe start by asking them what they want. In addition to avoiding birthday disappointments, you can also work together to figure out which features your child would be most likely to use — and which would probably just gather dust, so to speak. If it turns out, for example, that you really just need a device to encourage physical fitness, then there are watches on the market which use the wearer’s own movements to power the device — no batteries or APIs required! And if you are going to invest in a more serious smart device, then go with a name you trust: Apple, Samsung, Fitbit, or some other major company which has the experience and personnel needed to engineer a truly secure wearable.

Trust No 1

The good people at the Federal Bureau of Investigation have a PSA for you about smart TVs — and you might want to hear what they have to say.

But first, let’s define what we mean by “smart TV”. Basically, for the purposes of this discussion, a smart TV is any television which can connect to the Internet to use streaming services like Netflix and Hulu or apps like YouTube and Vimeo. These devices often have other advanced capabilities, like a microphone which lets you change the channel via voice command, or even cameras capable of facial recognition that can offer up personalized viewing suggestions to different members of the household.

All of which sounds … awesome. So why is the FBI so concerned about smart TVs? Well, because all of these cool features are also potential vulnerabilities. 

First of all, there’s an obvious privacy concern when that giant machine in your living room, equipped with a camera and microphone, can listen to you, watch you, and transmit that data back to app developers and electronics manufacturers. But there are also network security issues as well. Think about it from an attacker’s perspective. An updated MacBook running robust security software is actually fairly hard to hack. But smart TVs represent a much softer target, as they are not nearly as well protected. If a bad actor manages to compromise your smart TV, then they could do all sorts of things with its hardware and software capabilities, from the annoying (changing the channel unexpectedly to mess with you), to the disturbing (showing inappropriate content to children or spying on you), to the legitimately dangerous (keeping track of when you leave the house in order to burglarize your home).

This certainly doesn’t mean that you should toss your smart TV — and that’s not what the FBI is saying or implying. But there are definitely some steps you can take to make yourself safer if you have a smart TV:

  1. Know your TV

    Read the manual to understand your device and know what its security capabilities are — and how to manage them. Search online for the TV’s model number together with the words “camera”, “microphone”, and “security”, and review what comes up.

  2. Change the defaults

    Take a serious look at your TV’s security settings, and decide what level of protection you’re comfortable with. Don’t just use the default settings. Play around with the device until you know how to disable the camera and microphone (and if that’s not possible, seriously reconsider if you want this thing in your living room). If possible, change the password on the smart TV.

  3. Tape it up

    If you can’t turn off the camera, and you’re more concerned with privacy than receiving customized recommendations, do a Mark Zuckerberg and put a nice piece of tape over the camera lens to protect yourself from prying eyes. Black electrical tape works well. Don’t forget that your microphone might still be listening.

  4. Check for updates

    We say it almost every week with respect to iOS and macOS security, but the same rule applies to smart devices: Check for updates. When reputable smart device manufacturers find a vulnerability, they will patch it — but that only helps you if you’re regularly updating your device. So check to see if your smart TV offers updates, how often they are released, and if there’s any way to enable automatic updates.

  5. Read their policy

    Smart TV manufacturers will have a privacy policy outlining what they intend to do with your data. So will the apps and streaming services. If you’re genuinely concerned about your privacy, take a moment to read these so you can make an informed decision about the technology you’re using (and the permissions you’re giving it).

  6. Dumb it down

    It’s probably safe to say that in a few years, the only TVs you’ll be able to buy will be smart TVs. But just because your device has “smart” capabilities doesn’t mean you have to connect it to the Internet! If you really, truly just want to watch the game, your favorite crime drama, or the latest episode of Peppa Pig, consider disabling the network connection and just using your smart TV as a … TV.

From Russia with love

Our last story this week has to do with FaceApp, an entertainment app which can change a user’s photo to alter their age or gender. The app caught on last summer when social media personalities began uploading their own altered photos to show to followers, and it became immensely popular in a very short time.

All well and good, but how is this a security story?

Because FaceApp was created by a development team based in Russia — and this has led to questions about who can access all those user photos.

Senator Chuck Schumer voiced his concerns over the app in a letter to the FBI and FTC, specifically mentioning FaceApp’s ties to Russia. Considering the app’s data policy, which allows the developers to retain users’ uploaded photos indefinitely, as well as the potential to abuse such photos to mine biometric data, there is certainly reason to ask questions about the app itself. And considering all the issues with foreign interference in the 2016 election, as well as the ongoing political battle over the Trump administration’s links with Moscow, it’s not wrong to worry about Russia’s intentions. But was Schumer here just taking a cheap shot at Russia (and, by implication, the current administration) at the expense of blameless Russian software developers?

The FBI doesn’t seem to think so. In their reply to Sen. Schumer, they said that they consider any app developed in Russia — including FaceApp — to be “a significant counterintelligence threat”. 

To be clear, it would be unfair to paint all Russian developers as likely Kremlin agents, since there are doubtless tons of talented, hardworking, and honest software engineers operating in Russia and doing everything the right way.

But having said that, there are some special considerations to bear in mind when talking about the environment in which Russian developers work. In particular, Russia’s new Internet sovereignty law raises serious concerns, as it mandates that all Russian ISPs install monitoring hardware which allows the government to listen in on, block, or pinpoint sources of web traffic. In other words, a Russian developer might have the best of intentions — and might not want anything to do with their government spying on citizens of other countries — but would be powerless to prevent the authorities from intercepting and making use of their app’s data.

In the case of FaceApp, the developers say that even though their development team is based in Russia, their app never transferred user data back to that country, which is somewhat reassuring. 

But it still raises larger issues — in particular, whether or not we put too much faith in the App Store.

Let’s face it: Most of us just trust something if we find it in the App Store. But that may be a little naive. To start with, Apple’s review process for new apps, while improving, is still far from perfect. But more to the point, app review is mainly aimed at spotting problems with the apps themselves: code signing issues, design or usability flaws, malicious code embedded in apps, and so forth. But if the problem isn’t the app itself, but rather the country in which the app was developed … what then?

Apple is not likely to get into the business of banning apps based on the country of origin, which would probably be a bad idea from a business perspective (not to mention a fairness perspective). But an intermediate step might be to inform App Store users of the country of origin for every app — and alert them to the potential dangers if the app in question is from a country that has exhibited problematic behavior in the past.

Anything beyond that would likely take us into the territory of allowing government regulators to decide for users which apps and countries are “allowed” in the App Store, which somewhat defeats the purpose of living in a free society … and is also a slippery slope to other forms of censorship.
