The Checklist Podcast

SecureMac presents The Checklist. Each week, Nicholas Raba, Nicholas Ptacek, and Ken Ray hit security topics for your Mac and iOS devices. From getting an old iPhone, iPad, iPod, Mac, or other Apple gear ready to sell to the first steps to take to secure new hardware, each show contains a set of easy-to-follow steps meant to keep you safe from identity thieves, hackers, malware, and other digital downfalls. Check in each Thursday for a new Checklist!

Hacking Your Health

Aired on August 17, 2017

Today, we’re looking at the way technology has changed medicine, including the threats the healthcare industry faces, and what can be done to protect patient safety.

Technology seems to grow by leaps and bounds almost every day. Those changes touch every aspect of our lives. We’re not just seeing advances that make the smartphone in your pocket faster or the computer at your desk more powerful — we’re changing the way we approach our health, too. The healthcare industry digitizes its files now, and new medical devices appear every year. It’s no surprise, then, that hackers have followed this stream of innovation, eager for new opportunities to exploit systems and hurt others.

  • Why did medicine go digital?
  • Security concerns for medical devices
  • Device security problems in hospitals
  • Hospitals held ransom
  • The future of digital security in the medical field

Why did medicine go digital?

It might seem like ancient history, but it wasn’t so long ago that medical record keeping was still paper-based, with files taking up massive amounts of space. Today, many practices have put paper in the past. Your doctor might still break out his prescription pad from time to time, but more commonly you just head straight to your pharmacy. It’s faster and more convenient — and those are only two of the reasons to make the switch to digital patient records and medical data. The change has been rapid, though, and at least in the United States, it took federal legislation to get things started back in 1996.

You’ve probably heard of HIPAA before — at least in the context of patient privacy and the ways it restricts how doctors can share what is known as protected health information, or PHI. That’s just a part of what HIPAA did; it also established a framework for electronic communications in the medical field and began encouraging the changeover to electronic health records, or EHRs. HIPAA also contained a “Security Rule” governing this sensitive data. The government laid out three ways PHI needed protection: technically, physically, and on the administrative level. Technical safeguards included encryption, in-depth authentication procedures, and more.

Later, in 2009, the US Congress also passed legislation called HITECH, the Health Information Technology for Economic and Clinical Health Act. This law came about in response to growing threats to health data and widespread problems of misuse and poor security. Not only did HITECH clarify more about what responsibilities medical entities have with respect to patient data, but it instituted a mandatory notification period for data breaches and set the stage for harsher penalties and enforcement of violations.

There are clear reasons to transition to electronic health records, many of which were certainly inevitable — it was only a matter of time before computers ended up everywhere in the medical field. Obviously, sharing information between doctors is important, and EHRs have facilitated better care and improved record keeping. That doesn’t mean the introduction of HIPAA or even the HITECH Act went without objection. Quite the contrary, in fact: many people raised concerns about the potential privacy problems. These concerns are why there is no centralized medical record system in the US.

That doesn’t mean some centralization doesn’t exist — major hospitals, health networks, and insurers collect information on hundreds of thousands to tens of millions of patients. The entire point of the Electronic Health Record is to facilitate long-term information gathering for sharing among all of a patient’s care providers. While there’s no central database, there are plenty of prime targets out there for the bad guys to hit.

It’s not just your doctor’s office that’s at risk. Think of all the many other places that could handle your health records. Payment processors, labs, hospital partners—they all might end up with a look at your records at some point, especially if you receive any major treatment. That also makes it difficult to pinpoint the source of a breach at times. Let’s pause a second to answer an important question. Why do hackers want to get their hands on your electronic health records anyway?

More often than not, they don’t care about someone’s diabetes diagnosis or their family history of male pattern baldness — though some crafty identity thieves could use this information to conduct medical fraud. No, there’s a good reason that EHRs trade for much higher prices on Dark Web markets than other types of personal info, like your credit card numbers. They contain vital personal information that rarely, if ever, changes.

In the US, EHRs hold information like your Social Security number, billing address, and other relevant administrative information. In other words, it’s the perfect target for identity thieves or hackers looking to make a quick buck. In one experiment a few years back, researchers created a fake trove of about 1,000 health records and put them up on the Dark Web. Within no time at all, the records crisscrossed the globe, ending up in 22 countries. Some even potentially ended up in the hands of known cyber crime organizations. That kind of rapid spread showcases just why strong security is an absolute must for health information in a digital age.

Unfortunately, a lax approach to security still exists in many major corporations and hospitals. Big-name insurers and large hospitals have suffered breaches that eventually exposed millions of consumer health records. Health insurance giant Anthem was forced to disclose in 2015 that a hacking group compromised its servers and stole nearly 80 million medical records — completely unencrypted.

As an insurer, Anthem was not required under HIPAA to encrypt this information. As a result, tens of millions of its customers must contend with the potential long-term problems. The attack against Anthem used an advanced and detailed scheme involving a malware campaign, and the same group may have stolen more than a million records from another insurer later that same year.

As we mentioned, HITECH includes provisions for reporting security breaches and notifying affected individuals as soon as possible. Even so, it might be a case of closing the barn door long after the horses have left; your medical history isn’t something you can simply swap out like you would with a new credit card number. Once these records have escaped into the wild, they’re out there for good, and a proactive stance against identity theft becomes a must.

Sometimes the security flaws in medical systems are mind-boggling to behold. Just this year, a major medical company, Molina Healthcare, was found to have a critical flaw in its web systems. Anyone with the URL to one patient’s record could simply change the record identifier in the address and view any other patient’s information. That’s not just a threat to patients, it’s incompetent design — and that lack of regard for security extends into areas besides our medical data.
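This class of flaw is commonly called an insecure direct object reference: the server trusts the record ID in the URL without checking that the requester is actually authorized to view that record. A minimal sketch of the bug and its fix (the record store, IDs, and function names here are invented for illustration):

```python
# Hypothetical record store keyed by the ID that appears in the URL.
RECORDS = {
    101: {"owner": "alice", "data": "Alice's chart"},
    102: {"owner": "bob", "data": "Bob's chart"},
}

def fetch_record_insecure(record_id):
    # The flaw: anyone who can guess or increment an ID sees the record.
    return RECORDS.get(record_id)

def fetch_record_secure(record_id, requesting_user):
    # The fix: enforce an authorization rule server-side on every request.
    record = RECORDS.get(record_id)
    if record is None or record["owner"] != requesting_user:
        return None
    return record

# "alice" changing the ID in the URL from 101 to 102:
print(fetch_record_insecure(102))          # leaks Bob's chart
print(fetch_record_secure(102, "alice"))   # None: access denied
```

The fix is trivial, which is what makes flaws like this so inexcusable: the authorization check was simply never written.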

Security concerns for medical devices

It’s not just health records that are at risk from attack, though they are far and away the most actively exploited targets. We don’t use digital technology just on our desks or in the palms of our hands — we use it inside our bodies, too. From life-saving pacemakers that closely monitor a patient’s heart rate and vitals to insulin pumps that deliver hassle-free dosing to diabetic patients, there’s plenty to hack within the human body.

Hacking a pacemaker? Sabotaging personal medical devices to serve a malicious purpose? These things sound like something out of the plot of a primetime TV drama. Rather than being science fiction, though, this is our reality. The fact is, there are potentially millions of medical devices out there that feature glaring security holes. How could that be? Well, the answer is the same as it would be if you were to pose that question about why so many people suffer from easily preventable malware infections.

For many of these medical device manufacturers, security is not a front-line concern. We might even speculate that there’s a bit of the “It can’t happen to me” attitude at play in these situations. Being first on the market with a new or innovative device typically takes priority, with security efforts as an afterthought. It’s more profitable to move to market with something like an insulin pump that reports long-term glucose levels before your competitor. So how do these security lapses translate into threats?

Let’s use the insulin pump example first. There’s a vibrant community of white hat hacking surrounding insulin pumps, as people with diabetes look for ways to tweak their operation to suit personal needs. That also means others are looking at potential ways to disrupt their operation. One model, distributed by conglomerate Johnson & Johnson, uses a small wireless remote to command dosing on the pump.

The company had to disclose that these devices had a vulnerability that could allow rogue signals to operate the device. Johnson & Johnson downplayed the severity of the problem, and it is true that it would require substantial effort and expertise to create the setup necessary. The hack requires a powerful radio antenna and knowledge of the command codes accepted by the device. By getting within 25 feet of the patient, one could broadcast the correct code to the pump to induce operation. In a worst-case scenario, this could lead to an insulin overdose and death.

This attack was unique in that it did not require knowledge of the particular device serial number — in other words, it would work on any of the affected devices. Previous potential attacks were detailed in a talk at the Black Hat security conference and relied on using device-specific remotes synced to a serial number. In 2011, the well-known device hacker Barnaby Jack also demonstrated the ability to control pumps over the air with a similar method. Since none of these devices use encryption, it’s not a challenge to communicate with the equipment.
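Because the commands are broadcast in the clear with no authentication, a captured signal can simply be replayed. A minimal sketch of the kind of countermeasure these devices lack — authenticating each command with a shared-key MAC plus a monotonically increasing counter, so captured broadcasts are rejected — might look like this (the key, command names, and API are all invented for illustration):

```python
import hashlib
import hmac

# Hypothetical shared secret provisioned into remote and pump at pairing.
KEY = b"device-shared-secret"

def sign_command(command: str, counter: int) -> bytes:
    # MAC over the command and a counter, so each signed message is unique.
    msg = f"{command}|{counter}".encode()
    return hmac.new(KEY, msg, hashlib.sha256).digest()

class Pump:
    def __init__(self):
        self.last_counter = -1

    def receive(self, command: str, counter: int, tag: bytes) -> str:
        # Reject anything not signed with the shared key.
        if not hmac.compare_digest(tag, sign_command(command, counter)):
            return "rejected: bad signature"
        # Reject replays of previously seen counters (a rolling code).
        if counter <= self.last_counter:
            return "rejected: replayed command"
        self.last_counter = counter
        return f"executing: {command}"

pump = Pump()
tag = sign_command("dose_1u", 1)
print(pump.receive("dose_1u", 1, tag))   # executing: dose_1u
print(pump.receive("dose_1u", 1, tag))   # rejected: replayed command
```

Schemes like this are standard in car key fobs and garage door openers; the striking thing is that life-critical medical hardware shipped without anything comparable.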

Similar problems exist for cardiac patients with pacemakers. As recently as January of this year, the FDA issued warnings to patients with pacemakers using Wi-Fi hardware from a particular medical manufacturer. With the right equipment and knowledge, a hacker could instruct this hardware to communicate with the pacemaker to deliver a potentially fatal shock, or to increase the irregularity of someone’s heartbeat. The FDA published guidelines on how quickly companies must report and patch vulnerabilities, but concerns linger. It’s not hard to see that there’s more than one way for a determined malicious party to cause chaos with medical hacks.

Device security problems in hospitals

Visit any hospital, and it’s almost immediately apparent how much technology is an integral part of their operations. From computers on mobile carts wheeled around to patients to gather billing information to the battery of digital equipment that saves and sustains lives, there are computers everywhere. Every hospital bed in the US might have up to 15 devices connected.

We know what that means, too — where there are computers, there will be vulnerabilities. The issues we’ve just discussed about individual medical devices end up magnified many times over in a hospital setting. The risk is present there as well, and there are many more targets for a malicious attack than there are out on the street.

Patients with pacemakers and those hooked up to insulin machines inside hospitals are just as vulnerable to security threats, though someone is unlikely to get as close as they need to execute an attack. The real threat here is broader: it is the fact that the same lackadaisical attitude towards security is apparent across a wide spectrum of devices.

Back in 2012, a security researcher named Scott Erven was given permission to investigate all of a hospital’s digital systems. His mission: find out where there were security holes and determine some ways to mitigate the risk. What he found was surprising — even shocking! Almost every device in the hospital was, to some degree, hackable.

The severity of the problems ranged from hacking defibrillators to commanding drug infusion pumps to send the wrong medication doses to a patient. Even patient X-rays were accessible to someone who knew where to look. Erven found many devices that did not require authentication to use at all, or which relied on easy-to-guess default passwords like — of course — “password.”

Even if these devices don’t boast the best security, how is it that they are all so easy to break into and disrupt? While most of these devices might not connect directly to the Internet, they do link up to the hospital’s intranet — which is, by extension, connected to the Internet somewhere along the line. This network connectivity is necessary not just to share test results or data from one department to another, but for creating a patient’s health record.

That’s right: we’re talking about the same EHRs that hackers want to steal. Recording this information is a necessity, but it also opens a door for someone to probe the network for weak devices. Couple that with the vulnerabilities endemic to much of the hardware hospitals use, and we have the potential for huge security nightmares on our hands.

It doesn’t help that manufacturers continue to deny problems until they are too clear to ignore. In 2015, IV pump creator Hospira issued denials about claims that its products were vulnerable to disruptive operations, even insisting the hardware design itself made a hack impossible. That wasn’t true, and when researchers demonstrated that the pumps were open to attack from the outside, the FDA told hospitals to pull the devices ASAP. Patient safety is worth the hassle, but there shouldn’t need to be such a problem in the first place — especially not across so many systems.

Hospitals held ransom

So, the medical industry already needs to cope with the bad guys targeting our health data and trying to hack the devices we use to distribute medicine and stay healthy. That’s still not the whole picture. Hospitals aren’t just using more equipment than ever. Since many of these systems end up ultimately accessible via the Internet, hospitals have a big “bullseye” on them. They’re major targets for malware, and in recent years we’ve seen a huge spike in ransomware, holding hospital systems and their data hostage.

This threat became especially evident in 2016 when ransomware incidents spiked across the globe. Suddenly, malware authors knew that slamming encryption down onto vulnerable systems meant an enormous potential for profit. In February of last year, a hospital in California had its systems locked down by ransomware attackers who demanded an exorbitant sum, eventually settling for a $17,000 ransom. The hospital paid, unsure of what else to do. They recovered their systems, but not all are so lucky.

Later the same year, another hospital in Kansas was also struck by ransomware. The hospital opted to pay the ransom, an undisclosed amount — only to run into an all-too-familiar scenario. After paying the first ransom, they did not receive the key to decrypt all their files. Instead, they received a second demand for more money. They declined to pay this time, and instead began the arduous task of recovering their systems from scratch.

At the time, it seemed like these problems were mostly localized, but the potential for a widespread attack was evident. Many warned of the possibility, but the threat was not taken as seriously as it perhaps should have been. At least, not until a few months ago, when the WannaCry ransomware attack spread like wildfire around the globe, making headlines especially when it disrupted the UK’s National Health Service.

We covered some of the effects of the WannaCry attack in an earlier episode of The Checklist, but we didn’t go into too much detail about the type of systems it infected within the NHS hospitals. It wasn’t just check-in desk computers or those at the nurse’s station. WannaCry infected some pretty serious and vital equipment, derailing schedules and preventing patients from undergoing necessary tests. Besides forcing doctors back to tedious pen and paper record keeping and locking medical staff out of critical files, WannaCry hit computers attached to critical equipment like MRI machines. How could that happen?

Many of these high-tech medical devices may be marvels of engineering, but they run often-outdated software. The MRI machines, for example, still use Windows XP — a prime candidate for the EternalBlue exploit that WannaCry used. These aren’t just standalone computers, either; they’re systems embedded into the MRI hardware itself. That means they’re not just difficult to update to newer operating systems; they’re extraordinarily difficult to replace outright. The cost alone would be astronomical.

The best practice might be to keep these devices as isolated on the network as possible. It’s clear that hasn’t happened yet, not in many hospitals, and that is why we end up with these types of situations. As a further complication, many smaller medical establishments rely on purchasing older or used equipment — which might come bundled with this old and out-of-date software.
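The isolation principle here is simple default-deny segmentation: an embedded device like an MRI workstation should only be able to reach an explicit allowlist of internal hosts, never arbitrary addresses. A minimal sketch of that policy logic (the device and host names are invented for illustration, and a real deployment would enforce this at the firewall or VLAN level rather than in application code):

```python
# Hypothetical per-device allowlist of reachable internal destinations.
ALLOWED_DESTINATIONS = {
    "mri-01": {"ehr-server.internal", "imaging-archive.internal"},
}

def permit_connection(device: str, destination: str) -> bool:
    # Default-deny: anything not explicitly allowed is blocked,
    # including general Internet hosts a worm would need to reach.
    return destination in ALLOWED_DESTINATIONS.get(device, set())

print(permit_connection("mri-01", "ehr-server.internal"))  # True
print(permit_connection("mri-01", "update.example.com"))   # False
```

Segmentation like this would not have patched the Windows XP systems WannaCry hit, but it would have sharply limited how far the worm could spread once inside a hospital network.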

Thankfully, no one died because of the WannaCry attack, but it delayed tens of thousands of important appointments and procedures, and problems persisted for weeks after the attack subsided. The disruption alone is dangerous enough, but it’s easy to imagine a scenario where fatalities could occur due to a malware attack on a hospital — either intentionally or as collateral damage. It’s a sobering thought, but we can’t just be afraid of these threats. We must confront them head-on.

The future of digital security in the medical field

It’s obvious, not just to those within the medical industry but to many outside observers as well, that we face a serious security problem here. There’s no way to “un-ring” the bell of progress regarding medical technology; the field will continue to embrace it going into the future, and we’ll all undoubtedly benefit from many of those advancements. Those leaps can’t come at the expense of our safety, security, and privacy, however. With ransomware on the rise, vulnerable devices, and authorities that repeatedly issue warnings about the threat posed by medical hacking, it’s easy to feel like the situation is bleak or even hopeless. The good news is that it isn’t — not by a long shot. The computer security world, and even the average consumer, will play a crucial role in keeping us safe in the coming years.

First on the agenda: awareness. You can’t protect against a threat you don’t know about, or which you’ve dismissed as “not a problem.” Hopefully, the WannaCry attack was a wake-up call to the medical world that these issues are real and they aren’t going away. However, more education is just as important. With so many different attack vectors out there, it’s important to keep informing professionals of the risks they face.

Some young researchers are already well underway with efforts to do just that. This year saw the first CyberMed summit, an attempt to bring together healthcare professionals for education on the digital threats they face. Not only is it important for manufacturers to take steps to reduce the issues with their products, but doctors should understand how to cope with the problems they may face as well. Spreading awareness at CyberMed took the form of a simulated hack against a hospital to demonstrate the potential for disruption.

Efforts on the part of white hat hackers, like Scott Erven, will play a crucial role as well. With the help of ethical hackers who want to help correct problems, we can uncover serious vulnerabilities before the bad guys do. That gives the pros time to develop and implement solutions. For now, though, we’ve only just begun to address the future of security and privacy in the medical world. There is plenty of reason to be hopeful, but there is also a long road ahead to a truly secure medical environment.

The public has a part to play, too, whether it’s through boycotts of companies that don’t put patient privacy and security first or by advocating for more robust systems that aren’t easily hackable in the first place. Consumer pressure can accomplish a lot, especially over the long term. With more consumers “in the know” about the risks inherent in their devices, pressure can travel back up the chain to hopefully spur change within corporate practices.

It’s clear, especially in the wake of this year’s spate of ransomware attacks, that these problems aren’t going away. Digital innovation in the medical space will continue, too, and we’ll surely see exciting developments — hopefully, hand in hand with improvements in encryption and overall security like some of those we discussed today.

That’s all we have for you on The Checklist this week. We hope you’ve gained some interesting new perspective on the intersections between health and hacking. Join us again next Thursday for another interesting discussion.

Do you have a topic you’d like to see us cover in a future episode, or a security question in need of an answer? If you have anything to ask us, send us an email at checklist@securemac.com!
