Checklist 114: What’s the Internet for Again?
Welcome in to another episode of The Checklist, brought to you by SecureMac. We’re diving right into the headlines again today, looking first at a new improvement to Mac security, plus a couple of the more interesting stories floating around in the world of computer security this week. So, on our list for this week, we’ve got the following stories:
- New Macs get physical with their mics
- What’s the Internet for again?
- Making laws to handle mishandled data
Let’s kick off today’s discussion with a look at an interesting new security feature built into the latest and greatest Mac hardware.
New Macs get physical with their mics
Apple is always looking for new opportunities to make its hardware and software safer, and part of that is responding to threats with new efforts that shut them out for good. That’s the case with one of the latest features reported in the most recent hardware revision for Mac computers, according to a piece from Business Insider. Both the model year 2018 MacBook Pros as well as the newly redesigned MacBook Air will now prevent the bad guys from listening in on your conversations when you least expect it by going one step further on microphone security. Now, whenever you close your laptop to turn your attention elsewhere, these new computers automatically disconnect the microphone entirely.
Wait, you might be thinking. It didn’t do that already? No, unfortunately not. Your microphone would be turned off if the computer was actually asleep or, even better, completely shut down, of course, but malware has shown in the past that it’s easy to fool users into thinking the machine is asleep. In reality, it’s awake, and the malware is secretly recording your conversations through the microphone. This was the case with the nefarious FruitFly malware that we’ve discussed several times on the show before.
Now, though, the hardware initiates a “physical” disconnection so that the microphone is completely unavailable whenever the lid is closed. The Business Insider piece says we have Apple’s special “T2” chip to thank, a processor designed strictly for security, but that’s not 100% accurate. While Apple does describe the new “hardware disconnect” feature in a security document about the T2 chip, it appears to be a separate component of the system altogether. Apple specifically notes in the document that “even the software on the T2 chip” cannot re-engage the microphone when the lid is closed.
This makes it sound somewhat similar to the light inside your refrigerator, which shuts off when the door depresses a switch inside. (What, you thought it stayed on all the time?) Somehow, Apple has set up the new Mac hardware to respond to a lid-close event by pulling the proverbial plug on the microphone, so that it won’t work even if software could somehow gain access. It’s likely very similar to the sensor used to power off the display when you close the lid. No such step applies to the built-in camera, though that should be obvious: the camera is blocked by the shell when the lid is closed anyway!
Overall, we think this is an excellent move; it’s a simple change that adds a lot of peace of mind and helps close off one potential attack route hackers can use. Business Insider, on the other hand, seems to think Apple didn’t go far enough, suggesting the company should consider adding a physical switch, similar to the iPhone’s ring/silent toggle, so users can turn off the microphone while the computer is in use with the lid open, too. Is that necessary? Probably not. After all, there is already a Mac app, called OverSight, that alerts you when your camera or microphone activates without your permission. Plus, a switch on the Mac would be a recipe for even more confusion, with people accidentally flipping it.
However, an interesting compromise would be to add another small LED to the new Macs, similar to the one that illuminates when your camera is in use. A small indicator that says “hey, your mic is on” could not only help users identify potential malware infections but maybe even save them a bit of hot-mic embarrassment in the middle of an important business call. So in the end, we say “good!” to this story. It may be a minor change, but it’s an important improvement on the road toward better overall security. The microphone may be a minor attack vector, but FruitFly proved just how invasive it can be when properly exploited. We applaud Apple for slamming the door shut here.
What’s the Internet for again?
As goes the line sung so famously in the puppet musical Avenue Q, “The Internet is for Porn” — apparently one US government employee took that adage very seriously, according to a piece published online in TechCrunch. The story goes that a government computer network suffered a severe incursion by malware. From where did it come? We’ll give you three guesses…
The initial infection stemmed from a single employee’s so-called “extensive history” of using his work computer to visit adult websites. The incident occurred in a facility operated by the US Geological Survey, the aptly-named EROS Center, a satellite facility based in South Dakota. The entire EROS Center’s computer network was infected with virulent malware after the employee apparently perused “thousands” of pornographic pages loaded up with malware. Not only that, but the employee proceeded to download reams of images to an external USB stick plugged into the computer (a serious security no-no) and his phone.
Whether it came down with the images or exploited a browser vulnerability to get onto the employee’s machine, malware made it onto the hard disk and immediately began to cause chaos in the system. It probed the network and spread throughout the facility, infecting other machines. Oh, and of course, his Android phone got its own helping of malware too. As one might expect, that employee is likely no longer helping the USGS further its mission.
Did the USGS not block the most common sites? Chances are, they did. In fact, in owning up to the incident, the USGS admitted it had not done enough to block adult sites from user access on government computers. What this likely means is that the employee tried to visit those sites, found they were blocked, then went and found others somewhere else. The Internet is vast, after all, and there is really quite a lot of adult content spread around out there. New sites pop up every day, and since so many people are out there looking for porn, the bad guys know it’s the perfect opportunity to dump malware onto vulnerable machines.
Of course, even if the biggest and most common sites are generally considered safe, they’ve also suffered from bouts of malvertising. The USGS also pointed out another operational failing it has since worked to correct: the open USB port that allowed the employee to insert an unauthorized device.
Blocking USB ports is a common practice in enterprise security, as it cuts off one of the easiest ways for malicious software to intrude on an otherwise safe network. Depending on the management software a business uses, this can be a sledgehammer or a scalpel. In other words, it could block all USB devices indiscriminately, allow only power to flow through the port, or permit devices based on their specific functions. There are also physical “data blockers” that sit between a port and a device; these pass power through while severing the data lines, so no content can cross in either direction.
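The “sledgehammer or scalpel” policies described above can be sketched in a few lines of code. This is a hypothetical illustration, not the logic of any real endpoint-management product: the policy names, the allow-list, and the `is_device_allowed` helper are all our own invention for the sake of the example.

```python
from enum import Enum, auto

class UsbPolicy(Enum):
    BLOCK_ALL = auto()      # sledgehammer: no USB devices at all
    POWER_ONLY = auto()     # allow charging, but no data devices
    BY_FUNCTION = auto()    # scalpel: allow-list by device class

# USB interface class codes (a small subset, per the USB spec)
HID_CLASS = 0x03           # keyboards, mice
MASS_STORAGE_CLASS = 0x08  # thumb drives, external disks

# Example allow-list: input devices only, no storage
ALLOWED_CLASSES = {HID_CLASS}

def is_device_allowed(policy, device_class, is_data_device=True):
    """Decide whether a newly attached USB device should be permitted."""
    if policy is UsbPolicy.BLOCK_ALL:
        return False
    if policy is UsbPolicy.POWER_ONLY:
        # Only devices that draw power without exchanging data get through
        return not is_data_device
    if policy is UsbPolicy.BY_FUNCTION:
        return device_class in ALLOWED_CLASSES
    return False
```

Under the `BY_FUNCTION` policy above, a keyboard would be accepted while a thumb drive, like the one in the USGS incident, would be refused.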
The USGS will likely be making changes to its USB policy soon, along with updating its blacklists. With how many new sites appear every day, you might think it would be almost impossible to maintain a reliable blacklist, but in practice, they can be pretty effective. In research for this show, we reviewed several blacklists meant to keep such adult content out of the workplace, one of which contained more than 2 million entries and was updated constantly by its community. While it’s impossible to keep up with the pace of change completely, it’s still a useful tool.
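At its core, the kind of blacklist filtering described above is a simple lookup: block a hostname if it, or any parent domain, appears on the list. Here is a minimal sketch; the domain names are made up, and a real deployment would load the multi-million-entry list from a file and run at the proxy or DNS layer rather than in application code.

```python
# Hypothetical miniature blacklist; real community-maintained lists,
# like the one mentioned above, run to millions of entries.
BLACKLIST = {"bad-example.test", "malware-example.test"}

def is_blocked(hostname, blacklist=BLACKLIST):
    """Return True if the host or any parent domain is blacklisted."""
    labels = hostname.lower().rstrip(".").split(".")
    # Check "cdn.bad-example.test", then "bad-example.test", then "test"
    for i in range(len(labels)):
        if ".".join(labels[i:]) in blacklist:
            return True
    return False
```

Checking parent domains matters because blocking only exact hostnames would let a trivially renamed subdomain of a listed site slip through.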
Here’s the good news in this story: there’s no “other shoe” about to drop regarding the consequences of the infection. Since all EROS does is develop and archive satellite photos of the planet’s surface, there’s no classified information on its networks — so there was never a real threat beyond the annoyance of the infection. That said, the malware variant identified in the USGS computers does have links to ransomware attacks, so it’s a very good thing nothing more sinister developed from this incident. Let this be a cautionary tale!
Making laws to handle mishandled data
Our last story for today concerns an issue that you can do something about, although not in a completely direct manner. In the more than two years that The Checklist has aired, we’ve covered countless stories about major foibles on the part of tech giants. Semiconductor manufacturers hiding vulnerabilities and backdoors inside their chips and other weaknesses built in by mistake, for example, or credit reporting agencies, retailers, and sandwich makers losing sensitive user data by the millions. And, of course, the many times that social media giants like Facebook have lost your information, or exposed it, and so on… you get the picture.
Through it all, there’s been one common thread: there are almost never any real consequences for these huge breaches. Sure, you might get a few free years of credit monitoring and regular reports, but beyond that, all we hear are well-crafted apologies and assurances that things will be different this time. Now, though, one member of the US Senate wants to change that. He wants to put executives on the hook when their companies fumble the ball with security and user data.
According to Engadget, the plan is one created by the Democratic senator from Oregon, Ron Wyden. Wyden’s Consumer Data Protection Act would first establish a baseline of regulations for the protection of customer information whether the company in question is Equifax or Facebook or anyone else. The law further stipulates severe consequences for those found knowingly and willfully abusing user data for profit or through poor security practices. The bill’s proposal is a big one: senior executives of companies that mishandle user information could face between 10 and 20 years in prison.
Who decides when that punishment is merited? The law would give that authority to the Federal Trade Commission, which would have somewhat broad powers intended to rein in irresponsible use of information. Along with penalties for executives, Wyden’s law would empower the FTC to levy hefty fines even for a first violation — it lists a maximum of 4% of the company’s annual revenue. Companies would also have to report on any potential data breaches regularly, and the law sets out categories of businesses (such as those making more than $1 billion annually) that would be subjected to compliance checks on a regular basis to ensure no mishandling has occurred.
So, is this a good idea? On the one hand, the jail penalties seem a bit steep, even with due process in place to ensure that the penalties aren’t misapplied. However, that is something that could change during the legislative process. A significant financial penalty is often a very good incentive for companies to follow the rules, and finally putting into place guidelines that actually have teeth behind them seems like a very good step forward. After all, we already protect patient data stringently through HIPAA; why shouldn’t all sensitive user data on the web require the same level of protection, especially when it often links to important things such as bank accounts?
There is one more aspect of the potential legislation worth noting: the implementation of a “Do Not Track” list, which would work somewhat like the “Do Not Call” list. You sign up, and once you’re on the list, the law prevents companies from sharing the data they hold on you with any third parties. That means no targeted ads, either. However, companies would gain the right to charge Do Not Track customers a fee for the privilege, to offset any money lost from not selling to advertisers. This section also includes a provision very similar to one found in Europe’s recently implemented GDPR, under which any user would have the right to review the data a company has collected about them, including when the data was sold, to whom it was sold, and more.
On its face, the Do Not Track list, at least, seems like a good idea. But trading your money for not having your data sold to the highest bidder? That’s not such a good idea. Not only does it mean that several previously “free” services would suddenly gain a “paid” option, but many users won’t want to shoulder the burden of paying anything, especially not for a site they already use frequently. That makes this a recipe for users clicking “OK” to grant companies the right to use their information the moment they see a dollar sign attached to the alternative.
Overall, it’s unlikely the law gains much traction in the upper chamber of the US legislature. That said, it’s a good step forward, and perhaps future legislative efforts will model themselves on this version. Companies such as DuckDuckGo have pledged their support for the bill, of course, but those with huge troves of user data lying around, such as Facebook or Google, will likely spend substantial sums lobbying against it if it does gain momentum. When these efforts arise, though, lending your voice in support of better practices for protecting your data is an important contribution.
For now, though, we’ve reached the conclusion of our discussion in today’s episode. Do you want to revisit a recent episode, or want to get better acquainted with the current state of the security world? Our archives are always open, with every episode available right here with both its complete audio and in-depth show notes, plus all the links you need to follow a story all the way down to its roots.
Have questions about a story we discussed today, like Apple’s T2 security chip, or did you recently spot a headline that sounds like it’d make for a good story? Maybe you just have some plain old questions and concerns, such as the ones that inspired our recent Hotel Wi-Fi How-To episode. Whatever the case may be, you can send an email to Checklist@SecureMac.com with your thoughts and ideas. We love hearing from listeners, so keep those emails coming!