Healthcare delivery organizations (HDOs) have been busy increasing their network and systems security in the last year, though there is still much room for improvement, according to Forescout researchers.
This is the good news: the percentage of devices running unsupported Windows operating systems fell from 71% in 2019 to 32% in 2020, and there have been improvements when it comes to timely patching and network segmentation.
The bad news? Some network segmentation issues still crop up and HDOs still use insecure protocols for both medical and non-medical network communications, as well as for external communications.
Based on two data sources – an analysis of network traffic from five large hospitals and clinics and the Forescout Device Cloud (containing data for some 3.3 million devices in hundreds of healthcare networks) – the researchers found that, between April 2019 and April 2020:
- The percentage of devices running Windows OS versions that will be supported for more than a year jumped from 29% to 68%, and the percentage of devices running Windows OS versions supported only via Extended Security Updates (ESU) fell from 71% to 32%. Unfortunately, the percentage of devices running long-unsupported Windows OSes such as Windows XP and Windows Server 2003 remained constant (though small)
- There was a decided increase in network segmentation
Unfortunately, most network segments (VLANs) still mix healthcare devices with IT, personal, or OT devices, or mix sensitive and vulnerable devices.
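Spotting that kind of mixing is straightforward once you have a device inventory. A minimal sketch, assuming a hypothetical inventory of (VLAN, device-category) pairs:

```python
from collections import defaultdict

def mixed_vlans(devices):
    """Flag VLANs whose devices span more than one category.

    devices: iterable of (vlan_id, category) pairs, e.g. (10, "medical").
    Returns the set of VLAN ids that mix categories.
    """
    categories_by_vlan = defaultdict(set)
    for vlan_id, category in devices:
        categories_by_vlan[vlan_id].add(category)
    return {v for v, cats in categories_by_vlan.items() if len(cats) > 1}

# Hypothetical inventory: VLAN 10 and 30 mix categories, VLAN 20 does not.
inventory = [
    (10, "medical"), (10, "it"),
    (20, "medical"), (20, "medical"),
    (30, "ot"), (30, "personal"),
]
print(sorted(mixed_vlans(inventory)))  # [10, 30]
```

In practice the categories would come from a device-classification tool rather than a hand-written list, but the flagging logic is the same.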
As far as communication protocols are concerned, they found that:
- 4 out of the 5 HDOs were communicating between public and private IP addresses using a medical protocol, HL7, that transports medical information in clear text
- 2 out of the 5 HDOs allowed medical devices to communicate over IT protocols with external servers reachable from outside the HDO’s perimeter
- All HDOs used obsolete versions of communication protocols, internally and externally (e.g., SSLv3, TLSv1.0, and TLSv1.1, SNMP v1 and 2, NTP v1 and 2, Telnet)
- Many of the medical and proprietary protocols used by medical equipment lack encryption and authentication, or don’t enforce their use (e.g., HL7, DICOM, POCT01, LIS02). OT and IoT devices in use have a similar problem
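To see why clear-text HL7 v2 is such a concern: a captured TCP payload can be parsed with nothing but string splitting, since no keys or decryption are involved. A sketch with a synthetic (entirely made-up) message:

```python
# Synthetic HL7 v2 message: segments separated by \r, fields by |, components by ^.
# All patient data below is fabricated for illustration.
SAMPLE_HL7 = (
    "MSH|^~\\&|LAB|HOSP|EMR|HOSP|202004011200||ORU^R01|MSG0001|P|2.3\r"
    "PID|1||123456^^^HOSP^MR||DOE^JANE||19700101|F\r"
    "OBX|1|NM|GLU^Glucose||182|mg/dL|70-110|H\r"
)

def extract_patient(raw: str) -> dict:
    """Pull identifying fields out of a raw HL7 v2 message -- no crypto needed."""
    for segment in raw.split("\r"):
        fields = segment.split("|")
        if fields[0] == "PID":
            family, given = fields[5].split("^")[:2]
            return {"mrn": fields[3].split("^")[0],
                    "family": family, "given": given}
    return {}

print(extract_patient(SAMPLE_HL7))
# {'mrn': '123456', 'family': 'DOE', 'given': 'JANE'}
```

Anyone who can sniff the traffic between two devices speaking plain HL7 can do exactly this, which is why the report recommends encrypting the transport (e.g., tunneling HL7 over TLS).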
That’s all a big deal, because attacks exploiting these security vulnerabilities could do a lot of damage, including stealing patients’ information, altering it, disrupting the normal behavior of medical devices, disrupting the normal functioning of the entire organization (e.g., via a ransomware attack), etc.
Defense strategies for better healthcare network security
The researchers advised HDOs’ cyber defenders to:
- Find a way to “see” all the devices on the network, whether they comply with company policies, and detect malicious network behavior they may exhibit
- Identify and remediate weak and default passwords
- Map the network flow of existing communications to help identify unintended external communications, prevent medical data from being exposed publicly, and to detect the use of insecure protocols
- Improve segmentation of devices (e.g., isolate fragile legacy applications and operating systems, segment groups of devices according to their purpose, etc.)
“Whenever possible, switch to using encrypted versions of protocols and eliminate the usage of insecure, clear-text protocols such as Telnet. When this is not possible, use segmentation for zoning and risk mitigation,” they noted.
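Whether a server still negotiates one of the obsolete protocol versions listed above can be checked directly. A minimal audit sketch using Python's stdlib `ssl` module, assuming a host you are authorized to scan:

```python
import socket
import ssl

# Versions the report flags as obsolete. (SSLv3 often cannot be negotiated
# by modern Python builds at all; it is listed for completeness.)
OBSOLETE = {"SSLv3", "TLSv1", "TLSv1.1"}

def is_obsolete(version: str) -> bool:
    """True if a negotiated protocol version string is on the obsolete list."""
    return version in OBSOLETE

def negotiated_version(host: str, port: int = 443, timeout: float = 5.0) -> str:
    """Report the TLS version a server negotiates with us.

    Lowering minimum_version lets the audit detect servers that still
    fall back to TLS 1.0/1.1 when offered them.
    """
    ctx = ssl.create_default_context()
    ctx.minimum_version = ssl.TLSVersion.MINIMUM_SUPPORTED
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE  # audit only: we care about the protocol, not the cert
    with socket.create_connection((host, port), timeout=timeout) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            return tls.version() or "unknown"
```

A server that answers `TLSv1.2` or `TLSv1.3` passes; anything in `OBSOLETE` is a candidate for the segmentation-based mitigation the researchers describe.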
They also warned about the danger of over-segmentation.
“Segmentation requires well-defined trust zones based on device identity, risk profiles and compliance requirements for it to be effective in reducing the attack surface and minimizing blast radius. Over-segmentation with poorly defined zones simply increases complexity without tangible security benefits,” they concluded.
71% of healthcare and medical apps have at least one serious vulnerability that could lead to a breach of medical data, according to Intertrust.
The report investigated 100 publicly available global mobile healthcare apps across a range of categories—including telehealth, medical device, health commerce, and COVID-tracking—to uncover the most critical mHealth app threats.
Cryptographic issues pose one of the most pervasive and serious threats, with 91% of the apps in the study failing one or more cryptographic tests. This means the encryption used in these medical apps can be easily broken by cybercriminals, potentially exposing confidential patient data, and enabling attackers to tamper with reported data, send illegitimate commands to connected medical devices, or otherwise use the application for malicious purposes.
Bringing medical apps security up to speed
The study’s overall findings suggest that the push to reshape care delivery under COVID-19 has often come at the expense of mobile application security.
“Unfortunately, there’s been a history of security vulnerabilities in the healthcare and medical space. Things are getting a lot better, but we still have a lot of work to do,” said Bill Horne, General Manager of the Secure Systems product group and CTO at Intertrust.
“The good news is that application protection strategies and technologies can help healthcare organizations bring the security of their apps up to speed.”
The report on healthcare and medical mobile apps is based on an audit of 100 iOS and Android applications from healthcare organizations worldwide. All 100 apps were analyzed using an array of static application security testing (SAST) and dynamic application security testing (DAST) techniques based on the OWASP mobile app security guidelines.
- 71% of tested medical apps have at least one high level security vulnerability. A vulnerability is classified as high if it can be readily exploited and has the potential for significant damage or loss.
- The vast majority of medical apps (91%) have mishandled and/or weak encryption that puts them at risk for data exposure and IP (intellectual property) theft.
- 34% of Android apps and 28% of iOS apps are vulnerable to encryption key extraction.
- The majority of mHealth apps contain multiple security issues with data storage. For instance, 60% of tested Android apps stored information in SharedPreferences, leaving unencrypted data readily readable and editable by attackers and malicious apps.
- When looking specifically at COVID-tracking apps, 85% leak data.
- 83% of the high-level threats discovered could have been mitigated using application protection technologies such as code obfuscation, tampering detection, and white-box cryptography.
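One common reason apps fail key-extraction tests is cryptographic keys embedded directly in the app binary. A sketch of how an auditor might flag candidate key material, using the standard entropy heuristic (the scanner and thresholds here are illustrative, not any specific tool's method):

```python
import math

def shannon_entropy(data: bytes) -> float:
    """Shannon entropy, in bits per byte, of a byte string."""
    if not data:
        return 0.0
    counts = {}
    for b in data:
        counts[b] = counts.get(b, 0) + 1
    n = len(data)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def find_key_candidates(blob: bytes, window: int = 32, threshold: float = 4.5):
    """Return offsets of high-entropy windows -- plausible embedded key material.

    Real audits combine this with format checks to cut false positives,
    since compressed or encrypted data is also high-entropy.
    """
    return [i for i in range(len(blob) - window + 1)
            if shannon_entropy(blob[i:i + window]) >= threshold]
```

A 256-bit AES key is effectively random, so it stands out sharply against the low-entropy strings and code that make up most of a binary. The mitigation the report points to, white-box cryptography, exists precisely so that keys never appear in extractable form.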
In a recently released report, the UK National Cyber Security Centre (NCSC) warns about active cyber attacks targeting biomedical organizations involved in the development of a COVID-19 vaccine. Its findings have been backed by Canada’s Communications Security Establishment (CSE), the US NSA, and the Cybersecurity and Infrastructure Security Agency (CISA).
On Friday, BitSight researchers shared the results of a study that looked for detectable security issues at a number of companies who play a big role in the global search for a vaccine, and found compromised systems, open ports, vulnerabilities and web application security issues.
Biomedical orgs under attack
The report details recent tactics, techniques and procedures (TTPs) used by APT29 (aka “Cozy Bear”), which the NCSC and the CSE believe to be “almost certainly part of the Russian intelligence services.”
The agencies believe that the group is after information and intellectual property relating to the development and testing of COVID-19 vaccines.
“In recent attacks (…), the group conducted basic vulnerability scanning against specific external IP addresses owned by the organisations. The group then deployed public exploits against the vulnerable services identified,” the report states.
Among the flaws exploited by the group are CVE-2019-19781 (affecting Citrix’s Application Delivery Controller (ADC) and Gateway), CVE-2019-11510 and CVE-2018-13379 (affecting Pulse Secure VPN endpoints and Fortigate SSL VPN installations, respectively) and CVE-2019-9670 (affecting the Synacor Zimbra Collaboration Suite).
The group also uses spear-phishing to obtain authentication credentials to internet-accessible login pages for target organizations.
After achieving persistence through additional tooling or legitimate credentials, APT29 uses custom malware (WellMess and WellMail) to execute arbitrary shell commands, upload and download files, and run commands or scripts, with the results sent to a hardcoded command-and-control server. The group also uses malware (SoreFang) that has previously been used by other hacking groups.
The report did not identify the targeted organizations, nor did it say whether the attacks were successful or whether any information or IP was stolen.
Biomedical orgs open to cyber attacks
As many security researchers pointed out, Russian cyber espionage groups aren’t the only ones probing these targets, so these organizations should ramp up their security efforts.
BitSight researchers have recently searched for security issues that attackers might exploit. They’ve looked at 17 companies of varying size that are involved in the search for a COVID-19 vaccine, and found:
- 25 compromised or potentially compromised machines (systems running malware/bots, potentially unwanted applications, spam-sending machines and computers behaving in abnormal ways) in the past year
- A variety of open ports (i.e., exposed insecure services that should never be exposed outside of a company’s firewall): Telnet, Microsoft RDP, printers, SMB, exposed databases, VNC, etc., which can become access points into a company’s network
- Vulnerabilities. “14 of the 17 companies have vulnerabilities and six of them have very serious vulnerabilities (CVSS score > 9). 10 companies have more than 10 different active vulnerabilities.”
- 30 web application security issues (e.g., insecure authentication via HTTP, insecure redirects from HTTPS to HTTP, etc.) that could be exploited by attackers to eavesdrop on and capture sensitive data, such as credentials, corporate email, and customer data.
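The open-ports finding is the easiest of these to check for yourself. A minimal sketch, assuming a host you are authorized to scan and a hypothetical shortlist of the risky services the researchers name:

```python
import socket

# Hypothetical audit list: risky ports mapped to the services they expose.
RISKY_PORTS = {23: "Telnet", 445: "SMB", 1433: "MSSQL", 3389: "RDP", 5900: "VNC"}

def classify(port: int):
    """Name of the risky service on this port, or None if not on the list."""
    return RISKY_PORTS.get(port)

def exposed_services(host: str, timeout: float = 1.0):
    """Try a plain TCP connect to each risky port on a host we are
    authorized to scan; return the names of services that answered."""
    found = []
    for port, name in sorted(RISKY_PORTS.items()):
        try:
            with socket.create_connection((host, port), timeout=timeout):
                found.append(name)
        except OSError:
            pass  # closed, filtered, or unreachable -- all fine for this audit
    return found
```

An empty result from `exposed_services` on your perimeter addresses is the goal; any hit is a service that, per the researchers, should sit behind the firewall or a VPN.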
“These findings are not abnormal when compared to other groups of large companies (e.g. the Fortune 1000), but given the heightened threat environment, they do provide cause for concern,” the researchers pointed out.
“It only takes a misconfigured piece of software, an inadvertently exposed port, or an insecure remote office network for a hacker to gain entry to systems that store scientific research, intellectual property, and the personal data of subjects involved in clinical trials.”
As Head of Research at CyberMDX, Elad Luz gathers and analyzes information on a variety of connected healthcare devices in order to improve the techniques used to protect them and/or report about their security issues to vendors. The research includes analyzing protocols, reverse engineering software, and conducting vulnerability tests.
Healthcare organizations are increasingly experiencing IoT-focused cyberattacks. What is the realistic worst-case scenario when it comes to such attacks?
The first and most important risk to bear in mind and protect against in our space is always patient risk. In a place like a hospital, this may happen on different levels. Care-critical devices that are directly connected to patients — infusion pumps, ventilators, anesthesia machines, patient monitors and the like — obviously represent the most critical endpoints from a security perspective. Compromises to those devices can cause serious immediate effects.
After care-critical devices, the next most critical line of defense should be drawn around diagnostic machines like radiology or lab devices, which can also cause serious short-term negative impact. Beyond that, you have to account for care-adjacent devices that pose near-term risk, such as connected sterilization machines and medication dispensers. Even devices that have little to do with the medical flow but are still necessary for the hospital to operate — like wireless tags, access controls, and connected washers — may affect the responsiveness of the medical staff, which may later affect patient health.
It’s been cited ad nauseam and for good reason — but the WannaCry attacks immediately come to mind as a really poignant example of how even administrative devices being compromised can result in patient harm. And that threat hasn’t gone away in the 3 years since WannaCry. In 2019 alone, a truly astonishing 759 ransomware attacks were launched against healthcare organizations. Of those, at least 10 forced hospitals to turn away patients due to an impaired ability to deliver care. In fact, there’s a very serious impact on care even when hospitals don’t need to turn away patients.
When researchers measured the effects of cyber attacks on patient safety they found an operational ripple effect that added — on average — 2.7 minutes to medical response times. In a health emergency like a heart attack, minutes are often the difference between life and death. To wit, the same report noted a 3.6% increase in cardiac event fatalities at hospitals that had recently suffered cyberattacks. In other words, all other things being equal, for every 30 cardiac event patients admitted to a cyber-exploited hospital, statistically, one patient who would have survived elsewhere will be lost.
How do the complex medical device supply and value chains ultimately impact the security of connected devices in the healthcare industry?
Because of the complex medical device supply and value chains, it’s not always clear who should take responsibility for security best practices. While hospital administrators tend to think device manufacturers should be responsible for the security of their devices — which if not designed securely can hardly be operated securely — device manufacturers think the responsibility lies with the hospitals who create the network conditions that largely define the attack surface. This gap in expectations makes effective medical device security all the more difficult.
It’s important that security be considered at the earliest stages and built into medical technology research, development, procurement, deployment, and management processes. This means not only thinking about security, but also testing for it so that potential issues can be identified and addressed before they graduate into real-world problems. That applies equally to medical device stakeholders in the pre-market and post-market — manufacturers and hospitals.
Today, the type of testing required is woefully neglected by both sides of the market, with only 9% of manufacturers and 5% of users saying they test medical devices at least annually.
What are the main challenges when it comes to vulnerability research of medical devices?
From a purely research perspective, there are challenges to do with access. For example, device procurement costs that can be prohibitively expensive, laws and policies that prevent vendors from selling to non-hospitals, sometimes difficult-to-accommodate spatial prerequisites, as well as installation, configuration, and calibration complexities, or even networking codependencies.
From a slightly less tactical perspective, looking more at strategy and the bigger picture, the research is only valuable insofar as it manages to improve the industry’s security. To that point, challenges can sometimes come in how vendors relate to researchers — if the relationship becomes adversarial, it will be difficult for both sides to work together to actually improve security. Of course, we need to also think about the facts on the ground in hospitals. Even if the researchers and vendors do everything right on their end, it doesn’t guarantee a positive outcome if hospitals continue using vulnerable devices without implementing patches or other mitigations.
So, there are definitely challenges in trilaterally coordinating positive real-world impact. And with the worst-case scenario for our industry always revolving around cases of cyber-physical harm, a severity scoring system (CVSS) that fundamentally ignores physical impact may itself do a disservice by misrepresenting and poorly prioritizing the risks.
It’s imperative that all the stakeholders be able to come together, share a clearly understood frame of reference and common objectives in dialing down the real-world risk exposure.
What does this type of research entail? Were you surprised by some of the findings?
Our research methodology involves some proprietary technology and tactics that I can’t discuss, but the parts that I can talk about normally begin with data collection and good old fashioned detective work.
We break down and reverse-engineer the communication protocols used by medical devices, we analyze device network behavior, we crawl the internet and scrape device references, we dig into MDS² files, and we use a good amount of inductive reasoning, trial & error, and “poking around” in the lab to follow the breadcrumbs and build the investigation.
When we “crack” a case open and discover a previously undocumented security issue, we’re often surprised by things like lack of authentication, hard-coded credentials, and other vulnerabilities that are caused less by human error and more by bad or lazy design decisions.
What’s your take on responsible disclosure? What can be done to safeguard users in case a vendor is not responsive to vulnerability reports?
Cybersecurity is still fairly new and somewhat unfamiliar territory to most healthcare organizations. In fact, the whole industry is still working on getting its arms around it, and that goes to national oversight bodies and institutionalized safeguards as well. The process is still not perfectly standardized or very granularly governed. There may not be official rules dictating who is informed of what, what controls are applied to whom, who has influence over bottom line determinations, and what can be said to whom for every stage in the process.
Similarly, the factors governing the timeline for disclosure can be somewhat opaque and, from an institutional perspective, the guiding logic for disclosure is not always clear. So, if you’re dealing with a cooperative vendor you might expect that CISA — the division of homeland security responsible for overseeing the disclosure process for matters of public infrastructure — would withhold disclosure until patches can be developed and issued for the vulnerability in question. Yet, that’s not always the case. I think it’s important that we not lose sight of the forest for the trees or reduce the task of vulnerability management to items on a static checklist. We need to maintain a view of the mission: making healthcare safer and more secure.
That said, the fact is that more often than not, the process works as designed; and improvements are being introduced all the time. So I think, all in all, responsible disclosure is very important to the long-term security health of the industry. I also think it will only get better as lessons are learned and CISA collaborates more closely with other bodies like the FDA.
To your second question, I think we should concern ourselves less with how users can protect themselves from an unresponsive vendor, and more with how the public, the demand side of the market, researchers, and national oversight bodies can work together to apply pressure as needed to make sure that vendors are always responsive to matters of cybersecurity.
What advice would you give to a healthcare CISO that wants to make sure the connected devices in use in the organization are as secure as possible?
There is obviously a need for an automated tool to do that. Otherwise we are talking about the nonstop work of securing thousands of devices across dozens of different models and deployments, each requiring its own permissions and rules, in an ever-changing environment both inside the hospital (new assets get connected, old ones disconnected) and outside (new threats and vulnerabilities are published).
The best option would be using a solution that is tailor-made for medical centers, which is what we do at CyberMDX. Our solution is already familiar with a huge collection of medical devices and their unique protocols and our researchers are always working to lock down vulnerabilities you don’t even know you have. We are THE experts when it comes to cybersecurity and clinical connectivity.
How do you expect the security of IoT medical devices to evolve in the near future?
As IoT continues to connect everyday devices, I think we’ll find, especially in the medical field, that the most basic and relied-upon devices will quickly become our biggest liabilities from a security perspective. Some evidence of this trend can be seen in the recent MDhex vulnerabilities, which revealed a number of products in the popular CARESCAPE family of patient monitoring devices to be extremely vulnerable to cyber sabotage.
The problem is that all of a sudden manufacturers are expected to be experts in something — cybersecurity — that they’ve barely had to consider until now. It’s challenging for the manufacturers because the largest variety and best quality of agent-based security solutions reside on Windows and Linux-based devices, and require frequent updates to be relevant. Meeting those requirements is usually challenging in IoT embedded devices. Therefore I expect organizations to rely more and more on centralized, third-party provided agentless solutions that monitor the network traffic and introduce security features.
1.19 billion confidential medical images are now freely available on the internet, according to Greenbone’s research into the security of Picture Archiving and Communication Systems (PACS) servers used by health providers across the world to store images of X-rays as well as CT, MRI and other medical scans. In the US alone, 786 million medical images were identified. That’s a 60% increase from the findings gathered between July and September 2019, and the exposed data includes details such as patient names and the reason for examination.
The post 1.19 billion confidential medical images available on the internet appeared first on Help Net Security.