SecureMac, Inc.

The Checklist Podcast

SecureMac presents The Checklist. Each week, Nicholas Raba, Nicholas Ptacek, and Ken Ray hit security topics for your Mac and iOS devices. From getting an old iPhone, iPad, iPod, Mac, or other Apple gear ready to sell to the first steps to take to secure new hardware, each show contains a set of easy-to-follow steps meant to keep you safe from identity thieves, hackers, malware, and other digital downfalls. Check in each Thursday for a new Checklist!

Checklist 102: The Head Shaking Edition

Posted on August 16, 2018

This week, we’ve dubbed our episode the “head shaking edition” because each of our stories left us shaking our heads for one reason or another. From Google quietly looking over your shoulder to Comcast fumbling the ball with yet another big leak of user data, there’s plenty to discuss — so today we’re diving right into our discussion. Items we’ll be ticking off the list as we go:

  • Google’s game of hide and seek
  • A look at behavioral biometrics
  • Comcast tells 26.5 million people, “Relax!”

Do you try to stay vigilant about making sure that apps, browsers, and big companies aren’t tracking you all the time? You might be surprised to learn that your Google settings may not be what you think they are.

Google’s game of hide and seek

If you use Google apps on your iPhone, a report by the Associated Press says that chances are good Google is tracking and collecting your location data even if you think you’ve already disabled the appropriate setting. If you try to stay on top of these things, you might have gone into your phone’s settings to look for ways to disable tracking. Did you see a setting called “Location History”? Many people, possibly yourself included, toggle this setting off under the assumption that Google will quit following your GPS location. In reality, though, if you’ve got those apps installed on your device, Google is probably still gathering that data.

Attention first focused on this issue back in May of this year, when a researcher at UC Berkeley wrote about an odd experience she’d had on her Android device. Like many, she’d turned off the location history setting. Yet after taking a shopping trip to a Kohl’s department store, she received a request from one of her Google apps to rate her experience at the store. The only way Google could’ve known to ask her that, of course, was if it knew her location. As the Associated Press dug deeper, they found that it was possible to use app location data to track someone with ease.

Turning off “Location History,” as it turns out, only gets you so far; it doesn’t mean that Google has entirely lost the ability to see where you’re going. That is because there is a whole different set of location settings hidden in another category, which Google calls “Web & App Activity.” This is data that Google uses to provide a “personalized” experience, and it includes collecting location data by default. Google says that it’s been perfectly clear about when and where it tracks you — but that’s obviously not the case. Burying the reality in an obscure settings menu hardly counts as informing users of their options when most people won’t realize they’ve missed additional settings.

Whenever possible, we like to share stories that have solutions, or “things you can do” to take action. If you aren’t feeling too keen on the idea of letting Google see all your travels, there are things you can do beyond toggling off “Location History.” Here’s how you can make sure that you’ve absolutely obliterated any chance of sending your location history to Google by disabling the correct settings.

Enabling the “only while using” location setting for Google Maps is a good first step. If you use Safari as your web browser, you can consider switching its default search engine away from Google to something more privacy-minded like DuckDuckGo. If you want to really go for the “nuclear option,” you can disable location services on your phone altogether. This piece by The Next Web has full instructions you can follow to regain control over your location data quickly.

With all that said, it’s worth using this as an opportunity to consider the question of location privacy when using apps in general. Sometimes, letting an app access your location for a temporary service (like locating a nearby restaurant) can be very helpful, and a normal part of the app’s operation. What we don’t want, though, is for that same app to have carte blanche to log your location wherever you go. Is there some way for you to stay on top of whether telling something “don’t track me” actually works? What about staying aware of everything about you that’s being tracked?

The reality is that there is so much tracking out there that it would be an impossible task to know about it all; even Google alone collects so much different data that it would be a Herculean effort to attempt to lock it all down. The best thing you can do is to investigate settings as thoroughly as you can, set them according to your preference for privacy, and keep an eye on the news. When stories like this one come out, you can make the right choices.

However, you can also flip some settings in iOS to give yourself a broader, more general sense of security. As mentioned, you probably don’t want apps to have access to your location at all times. That’s why enabling the “only while using” setting for app location services is a good idea. Uber, for example, has had its fair share of scandals with tracking user locations; we’ve even covered some of those stories here on The Checklist. By enabling this setting, users can give themselves more control over who gets to see where they are.

A look at behavioral biometrics

Location data gets most of the attention these days when it comes to ways in which apps track users, but it’s far from the only type out there. In a story from the New York Times, the paper reported on how banks are using something called “behavioral biometrics” as a line of defense in their security systems. In other words, the banks are “tracking how you type, swipe, and tap” whenever you use one of their apps to make a deposit, transfer some money, or even find out the location of the nearest available ATM. On the whole, this type of tracking isn’t new, but how banks use the information does break ground.

Many websites track user activity, such as clicks, scrolling, highlights, and more. These can provide data for generating heatmaps useful for understanding how a user interacts with a website. In turn, this can provide inspiration for improving the design of an app or site by removing or changing features that users struggle with; the principle here is the same, but the outcome is different. Instead of using an aggregate of data, banks are collecting individualized interaction data. The reason? The banks say that the way you use your app is unique — with enough data points, software could have the ability to detect when it’s really you and when it isn’t. These behavioral biometrics are then used to figure out if a user is really the authorized account holder.

The NY Times reports on one incident in which these efforts worked to stop a potential thief who had gained unauthorized access to an account with the Royal Bank of Scotland. The software in place at the bank did what we described above: it captured user movements, built a profile from the data, and used that information as a baseline for a comparison with each subsequent login. Over time, the software could allegedly detect an intruder with accuracy close to 99%. In the case in question, the system logged a mouse scroll wheel event after the user logged in — an event unique to the user’s profile, as they had never done this before. Likewise, the user typed numbers in a different manner than usual.
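To get a feel for how this kind of profiling works, here’s a minimal sketch in Python. It’s a toy stand-in for the far more sophisticated systems the banks use — the timing data is hypothetical, and a simple standard-deviation check stands in for any real vendor’s algorithm — but the idea is the same: record how the legitimate user behaves, then flag sessions that don’t fit the pattern.

```python
import statistics

def build_profile(interval_samples):
    """Build a simple typing-rhythm profile: the mean and standard
    deviation of the delays (in seconds) between keystrokes, pooled
    across several sessions recorded from the account holder."""
    flat = [i for session in interval_samples for i in session]
    return statistics.mean(flat), statistics.stdev(flat)

def looks_like_owner(profile, new_intervals, threshold=3.0):
    """Return True if the new session's average keystroke delay falls
    within `threshold` standard deviations of the profile mean."""
    mean, stdev = profile
    session_mean = statistics.mean(new_intervals)
    return abs(session_mean - mean) <= threshold * stdev

# Hypothetical sessions recorded from the legitimate account holder
owner_sessions = [[0.21, 0.19, 0.23, 0.20], [0.22, 0.18, 0.21, 0.24]]
profile = build_profile(owner_sessions)

print(looks_like_owner(profile, [0.20, 0.22, 0.19]))  # rhythm matches the profile
print(looks_like_owner(profile, [0.55, 0.60, 0.58]))  # a much slower typist — flagged
```

A real system would track dozens of signals at once — swipe pressure, mouse paths, scroll-wheel use, per-key timing — and use far more robust statistics, but the principle is the same: build a baseline, then treat sessions that fall outside it as suspect.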

That was enough to throw up red flags and freeze the account immediately; when the bank contacted the customer, it confirmed that the account had indeed suffered an unauthorized intrusion. The customer was spared from a huge financial loss thanks to the system RBS had in place. So, what’s the problem, if the system works?

While behavioral biometrics might seem like a net positive on the surface, the reality is there is a great deal of uncertainty surrounding this technology — starting with who uses it. The Royal Bank of Scotland is one of the only financial institutions that chose to disclose its use of the technology to NY Times reporters, yet it’s hard to believe something similar isn’t on the way, if not already in use, at the big international banks. That lack of transparency about data collection is one problem.

Concerned individuals in the tech industry, especially those for whom privacy is a serious concern, have raised red flags of their own about the collection of behavioral biometrics. A lawyer for the privacy-focused Electronic Frontier Foundation, for example, noted that it would not take a big leap to move from using this data for security to using it to determine sensitive or private information about a user.

The NY Times article proposes one such hypothetical scenario, where a bank or an insurer uses biometrics to detect the presence of the early stages of a neurological disorder. Based solely on this data, the conjecture goes, you could find yourself denied coverage. As the EFF’s lawyer pointed out, companies that collect lots of data inevitably look for other ways to use that data — so is it really a stretch to think security might not be the only reason companies want to use this info?

As much as we like to offer solutions, there is unfortunately not much we can do in the short term about this. First, we don’t know much about who uses this technology, where, or why. Second, it’s hard to put the genie back in the bottle; this tech is now out there, and we’ll need to think about what we do in response. With that in mind, we think it’s clearly time for legislation that governs broader uses of user information such as this. We already have laws governing how we handle information like an individual’s health data — why not implement the same type of rules in this arena?

Comcast tells 26.5 million people, “Relax!”

Hopefully, you aren’t too tired of shaking your head yet — but if you are, you can throw in a facepalm to go along with our final story for this week’s discussion. According to a story by Buzzfeed News, some very simple flaws in Comcast’s customer website could have exposed home addresses and partial Social Security numbers for more than 26 million customers. While Comcast says there’s nothing to worry about, the details of the story certainly make one feel otherwise. Neither security flaw required an enormous amount of technical knowledge to abuse; even a novice hacker could have figured out the information given the opportunity.

The first of these flaws involves an option Comcast gives its customers that allows them to pay their bill without fully logging in to their account; this is a convenient, time-saving measure, but the way it was implemented left something to be desired. To verify themselves, customers could use address information: the site suggested four addresses, each partially obscured by asterisks, one of which would be the customer’s. This would only happen if Comcast’s system detected the connection coming from the customer’s home network — but if a bad actor already knew your IP address, they could spoof it and appear to Comcast’s site to be you. Then it’s simply a matter of geolocating the IP address and picking the address in the list that corresponds to that area.

Such a vulnerability wouldn’t only expose a partial address but could serve as a weak point allowing an attacker to gain an initial foothold on an individual’s online presence. After being informed of the vulnerability, Comcast turned off this feature, requiring customers to input data on their own for verification. However, that wasn’t the end of things. There was another basic vulnerability in a separate Comcast webpage, this time a customer sign-up page used by Comcast Authorized Dealers.

To verify themselves, a customer would need to input their billing address and the last four digits of their Social Security number. If a malicious individual already had your billing address, something trivial to acquire, they could use this form to figure out a portion of your SSN. Because there were no limits on the number of wrong attempts the form would accept, hackers could simply brute-force their way through all 10,000 possible four-digit combinations.

Eventually, such a script would find the right four numbers and log the hacker into the system. This is a big problem, as someone could use the page to verify SSN information or to work towards a bigger attack. Consider all the places that ask for the last four digits of your SSN as a means of verification; armed with that knowledge, the bad guys could potentially socially engineer their way into all sorts of other accounts. Comcast rate-limited the page after learning about the oversight, closing this loophole.
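To see why the missing rate limit mattered so much, here’s a hedged sketch in Python — a toy stand-in for the sign-up form, not Comcast’s actual code, with a made-up `SignupForm` class and example digits — showing how quickly an unthrottled four-digit check falls to brute force, and how even a crude attempt limit stops the same script cold.

```python
class SignupForm:
    """Hypothetical stand-in for the verification form: it accepts a
    guess and reports whether it matches the last four SSN digits on
    file. With max_attempts=None there is no rate limiting at all."""
    def __init__(self, last_four, max_attempts=None):
        self._last_four = last_four
        self._max_attempts = max_attempts
        self._attempts = 0

    def try_last_four(self, guess):
        if self._max_attempts is not None and self._attempts >= self._max_attempts:
            raise RuntimeError("locked out: too many attempts")
        self._attempts += 1
        return guess == self._last_four

def brute_force(form):
    # Only 10,000 possibilities exist: "0000" through "9999".
    for guess in (f"{n:04d}" for n in range(10000)):
        if form.try_last_four(guess):
            return guess
    return None

# Against an unthrottled form, the attack always succeeds:
print(brute_force(SignupForm("4821")))

# With a lockout after five wrong tries, the same attack fails fast:
try:
    brute_force(SignupForm("4821", max_attempts=5))
except RuntimeError as err:
    print(err)
```

The fix Comcast applied is essentially the second case: once the form refuses to keep answering guesses, enumerating the whole space becomes impractical.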

Comcast says they don’t believe the loopholes were ever actually exploited maliciously, and they don’t believe the information was misused. Can we really know that for sure, though? As with so many other data leaks, we can only take Comcast at their word; it’s impossible for us to know whether someone else discovered and exploited these flaws. Is there anything you can do about it? While you can’t protect yourself from a company’s oversights, you can make it harder for bad guys to know where you’re coming from online. Using a VPN is an excellent way to obscure your real IP address and gain an additional layer of security. Be sure to revisit Checklist 19: All About VPNs if you need a quick refresher.

Join our mailing list for the latest security news and deals