How to build up cybersecurity for medical devices

Manufacturing medical devices with cybersecurity firmly in mind is an endeavor that, according to Christopher Gates, an increasing number of manufacturers is trying to get right.

Healthcare delivery organizations have started demanding better security from medical device manufacturers (MDMs), he says, and many have implemented secure procurement processes and contract language for MDMs that address the cybersecurity of the device itself, secure installation, cybersecurity support for the life of the product in the field, liability for breaches caused by a device not following current best practices, ongoing support for events in the field, and so on.

“For someone like myself who has been focused on cybersecurity at MDMs for over 12 years, this is excellent progress as it will force MDMs to take security seriously or be pushed out of the market by competitors who do take it seriously. Positive pressure from MDMs is driving cybersecurity forward more than any other activity,” he told Help Net Security.

Gates is a principal security architect at Velentium and one of the authors of the recently released Medical Device Cybersecurity for Engineers and Manufacturers, a comprehensive guide to medical device secure lifecycle management, aimed at engineers, managers, and regulatory specialists.

In this interview, he shares his knowledge regarding the cybersecurity mistakes most often made by manufacturers, on who is targeting medical devices (and why), his view on medical device cybersecurity standards and initiatives, and more.

[Answers have been edited for clarity.]

Are attackers targeting medical devices with a purpose other than to use them as a way into a healthcare organization’s network?

The easy answer to this is “yes,” since many MDMs perform “competitive analysis” on their competitors’ products. It is much easier and cheaper for them to have a security researcher spend a few hours extracting an algorithm from a device for analysis than to spend months or even years of R&D work to pioneer a new algorithm from scratch.

Also, there is a large, hundreds-of-millions-of-dollars industry of companies who “re-enable” consumed medical disposables. This usually requires some fairly sophisticated reverse-engineering to return the device to its factory default condition.

Lastly, the medical device industry, when grouped together with the healthcare delivery organizations, constitutes part of critical national infrastructure. Other industries in that class (such as nuclear power plants) have experienced very directed and sophisticated attacks targeting safety backups in their facilities. These attacks seem to be initial testing of a cyber weapon that may be used later.

While these are clearly nation-state level attacks, you have to wonder if these same actors have been exploring medical devices as a way to inhibit our medical response in an emergency. I’m speculating: we have no evidence that this has happened. But then again, if it has happened there likely wouldn’t be any evidence, as we haven’t been designing medical devices and infrastructure with the ability to detect potential cybersecurity events until very recently.

What are the most often exploited vulnerabilities in medical devices?

It won’t come as a surprise to anyone in security when I say “the easiest vulnerabilities to exploit.” An attacker is going to start with the obvious ones, and then increasingly get more sophisticated. Mistakes made by developers include:

Unsecured firmware updating

I personally always start with software updates in the field, as they are so frequently implemented incorrectly. An attacker’s goal here is to gain access to the firmware with the intent of reverse-engineering it back into easily-readable source code that will yield more widely exploitable vulnerabilities (e.g., one impacting every device in the world). All firmware update methods have at least three very common potential design vulnerabilities. They are:

  • Exposure of the binary executable (i.e., it isn’t encrypted)
  • Corrupting the binary executable with added code (i.e., there isn’t an integrity check)
  • A rollback attack which downgrades the version of firmware to a version with known exploitable vulnerabilities (there isn’t metadata conveying the version information).
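
As a rough illustration of how an updater can address all three, the sketch below is a minimal Python example, assuming a hypothetical apply_update routine, a simplified package layout, and a symmetric key provisioned at manufacture; real devices would more typically use asymmetric signatures and hardware-protected key storage. It checks authenticity and integrity, rejects rollbacks, and keeps the image encrypted until it reaches the device:

    import hmac
    import hashlib
    import struct

    DEVICE_KEY = b"provisioned-at-manufacture"   # hypothetical vendor/device secret
    CURRENT_VERSION = 7                          # firmware version currently running

    def apply_update(package: bytes) -> bytes:
        """Validate a package laid out as [32-byte HMAC][4-byte version][encrypted image]."""
        mac, rest = package[:32], package[32:]
        # Authenticity/integrity: reject anything not produced with the vendor key.
        expected = hmac.new(DEVICE_KEY, rest, hashlib.sha256).digest()
        if not hmac.compare_digest(mac, expected):
            raise ValueError("firmware image failed integrity check")
        # Rollback protection: refuse to install a version no newer than the current one.
        (version,) = struct.unpack(">I", rest[:4])
        if version <= CURRENT_VERSION:
            raise ValueError("rollback to vulnerable firmware rejected")
        # Confidentiality: the binary stays encrypted in transit; decryption with a
        # device-held key would happen here, just before flashing (omitted in this sketch).
        return rest[4:]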

Overlooking physical attacks

Physical attack can be mounted:

  • Through an unsecured JTAG/SWD debugging port
  • Via side-channel (power monitoring, timing, etc.) exploits to expose the values of cryptographic keys
  • By sniffing internal busses, such as SPI and I2C
  • By exploiting flash memory external to the microcontroller (a $20 cable can get it to dump all of its contents)

Manufacturing support left enabled

Almost every medical device needs certain functions to be available during manufacturing. These are usually for testing and calibration, and none of them should be functional once the device is fully deployed. Manufacturing commands are frequently documented in PDF files used for maintenance, and often only have minor changes across product/model lines inside the same manufacturer, so a little experimentation goes a long way in letting an attacker get access to all kinds of unintended functionality.
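
Below is a minimal sketch of the kind of gating that prevents this, assuming a hypothetical command dispatcher and a deployed flag; on real hardware that flag would be an OTP fuse or locked configuration bit rather than a variable:

    MANUFACTURING_COMMANDS = {"CAL_SENSOR", "SET_SERIAL", "ENTER_TEST_MODE"}   # hypothetical names
    FIELD_COMMANDS = {"READ_STATUS", "START_THERAPY", "STOP_THERAPY"}          # hypothetical names

    def dispatch(command: str, deployed: bool):
        """Route commands, refusing manufacturing/test functionality once the unit is deployed."""
        if command in MANUFACTURING_COMMANDS:
            if deployed:
                # Reject loudly (and ideally log it) instead of silently ignoring the request,
                # so probing attempts become visible to postmarket monitoring.
                raise PermissionError(f"{command} is disabled on deployed units")
            return run_manufacturing_command(command)   # hypothetical helper
        if command in FIELD_COMMANDS:
            return run_field_command(command)           # hypothetical helper
        raise ValueError(f"unknown command: {command}")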

No communication authentication

Just because a communications medium connects two devices doesn’t mean that the device being connected to is the device that the manufacturer or end-user expects it to be. No communications medium is inherently secure; it’s what you do at the application level that makes it secure.

Bluetooth Low Energy (BLE) is an excellent example of this. Immediately following a pairing (or re-pairing), a device should always, always perform a challenge-response process (which utilizes cryptographic primitives) to confirm it has paired with the correct device.
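
A minimal sketch of such a post-pairing check follows, assuming a pre-shared secret established out of band; a real design would use whatever key-establishment scheme the device’s threat model calls for:

    import os
    import hmac
    import hashlib

    SHARED_KEY = b"established-out-of-band"   # hypothetical pre-shared secret

    def make_challenge() -> bytes:
        """Central/host side: generate a fresh random challenge after every (re)pairing."""
        return os.urandom(16)

    def respond(challenge: bytes) -> bytes:
        """Peripheral side: prove knowledge of the key without ever revealing it."""
        return hmac.new(SHARED_KEY, challenge, hashlib.sha256).digest()

    def verify(challenge: bytes, response: bytes) -> bool:
        """Central/host side: keep the connection only if the response checks out."""
        expected = hmac.new(SHARED_KEY, challenge, hashlib.sha256).digest()
        return hmac.compare_digest(expected, response)

    # After pairing: the host sends make_challenge(), the peripheral answers with respond(),
    # and the host drops the connection unless verify() returns True.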

I remember attending an on-stage presentation of a new class II medical device with a BLE interface. From the audience, I immediately started to explore the device with my smartphone. This device had no authentication (or authorization), so I was able to perform all operations exposed on the BLE connection. I was engrossed in this interface when I suddenly realized there was some commotion on stage as they couldn’t get their demonstration to work: I had accidentally taken over the only connection the device supported. (I then quickly terminated the connection to let them continue with the presentation.)

What things must medical device manufacturers keep in mind if they want to produce secure products?

There are many aspects to incorporating security into your development culture. These can be broadly lumped into activities that promote security in your products, versus activities that convey a false sense of security and are actually a waste of time.

Probably the most important thing that a majority of MDMs need to understand and accept is that their developers have probably never been trained in cybersecurity. Most developers have limited knowledge of how to incorporate cybersecurity into the development lifecycle, where to invest time and effort in securing a device, what artifacts are needed for premarket submission, and how to properly utilize cryptography. Without knowing the details, many managers assume that security is being adequately included somewhere in their company’s development lifecycle; most are wrong.

To produce secure products, MDMs must follow a secure “total product life cycle,” which starts on the first day of development and ends years after the product’s end of life or end of support.

They need to:

  • Know the three areas where vulnerabilities are frequently introduced during development (design, implementation, and through third-party software components), and how to identify, prevent, or mitigate them
  • Know how to securely transfer a device to production and securely manage it once in production
  • Recognize an MDM’s place in the device’s supply chain: not at the end, but in the middle. An MDM’s cybersecurity responsibilities extend up and down the chain. They have to contractually enforce cybersecurity controls on their suppliers, and they have to provide postmarket support for their devices in the field, up through and after end-of-life
  • Create and maintain Software Bills of Materials (SBOMs) for all products, including legacy products. Doing this work now will help them stay ahead of regulation and save them money in the long run.

They must avoid mistakes like:

  • Not thinking that a medical device needs to be secured
  • Assuming their development team ‘can’ and ‘is’ securing their product
  • Not designing-in the ability to update the device in the field
  • Assuming that all vulnerabilities can be mitigated by a field update
  • Only considering the security of one aspect of the design (e.g., its wireless communication protocol). Security is a chain: for the device to be secure, all the links of the chain need to be secure. Attackers are not going to consider certain parts of the target device ‘out of bounds’ for exploiting.

Ultimately, security is about protecting the business model of an MDM. This includes the device’s safety and efficacy for the patient, which is what the regulations address, but it also includes public opinion, loss of business, counterfeit accessories, theft of intellectual property, and so forth. One mistake I see companies frequently make is doing the minimum on security to gain regulatory approval, but neglecting to protect their other business interests along the way – and those can be very expensive to overlook.

What about the developers? Any advice on skills they should acquire or brush up on?

First, I’d like to take some pressure off developers by saying that it’s unreasonable to expect that they have some intrinsic knowledge of how to implement cybersecurity in a product. Until very recently, cybersecurity was not part of traditional engineering or software development curriculum. Most developers need additional training in cybersecurity.

And it’s not only the developers. More than likely, project management has done them a huge disservice by creating a system-level security requirement that says something like, “Prevent ransomware attacks.” What is the development team supposed to do with that requirement? How is it actionable?

At the same time, involving the company’s network or IT cybersecurity team is not going to be an automatic fix either. IT Cybersecurity diverges from Embedded Cybersecurity in many respects, from detection to implementation of mitigations. No MDM is going to be putting a firewall on a device that is powered by a CR2032 battery anytime soon; yet there are ways to secure such a low-resource device.

In addition to the how-to book we wrote, Velentium will soon offer training available specifically for the embedded device domain, geared toward creating a culture of cybersecurity in development teams. My audacious goal is that within 5 years every medical device developer I talk to will be able to converse intelligently on all aspects of securing a medical device.

What cybersecurity legislation/regulation must companies manufacturing medical devices abide by?

It depends on the markets you intend to sell into. While the US has had the Food and Drug Administration (FDA) refining its medical device cybersecurity position since 2005, others are more recent entrants into this type of regulation, including Japan, China, Germany, Singapore, South Korea, Australia, Canada, France, Saudi Arabia, and the greater EU.

While all of these regulations have the same goal of securing medical devices, how they get there is anything but harmonized. Even the level of abstraction varies, with some focused on processes and others on technical activities.

But there are some common concepts represented in all these regulations, such as:

  • Risk management
  • Software bill of materials (SBOM)
  • Monitoring
  • Communication
  • “Total Product Lifecycle”
  • Testing

But if you plan on marketing in the US, the two most important documents are the FDA’s:

  • 2018 – Draft Guidance: Content of Premarket Submissions for Management of Cybersecurity in Medical Devices
  • 2016 – Final Guidance: Postmarket Management of Cybersecurity in Medical Devices

(The 2014 version of the premarket submissions guidance can be largely ignored, as it no longer represents the FDA’s current expectations for cybersecurity in new medical devices.)

What are some good standards for manufacturers to follow if they want to get cybersecurity right?

The Association for the Advancement of Medical Instrumentation’s standards are excellent. I recommend AAMI TIR57: 2016 and AAMI TIR97: 2019.

Also very good is the Healthcare & Public Health Sector Coordinating Council’s (HPH SCC) Joint Security Plan. And, to a lesser extent, the NIST Cyber Security Framework.

The work being done at the US Department of Commerce / NTIA on SBOM definition for vulnerability management and postmarket surveillance is very good as well, and worth following.

What initiatives exist to promote medical device cybersecurity?

Notable initiatives I’m familiar with include, first, the aforementioned NTIA work on SBOMs, now in its second year. There are also several excellent working groups at HSCC, including the Legacy Medical Device group and the Security Contract Language for Healthcare Delivery Organizations group. I’d also point to numerous working groups in the H-ISAC Information Sharing and Analysis Organization (ISAO), including the Securing the Medical Device Lifecycle group.

And I have to include the FDA itself here, which is in the process of revising its 2018 premarket draft guidance; we hope to see the results of that effort in early 2021.

What changes do you expect to see in the medical devices cybersecurity field in the next 3-5 years?

So much is happening at high and low levels. For instance, I hope to see the FDA get more of a direct mandate from Congress to enforce security in medical devices.

Also, many working groups of highly talented people are working on ways to improve the security posture of devices, such as the NTIA SBOM effort to improve the transparency of software “ingredients” in a medical device, allowing end-users to quickly assess their risk level when new vulnerabilities are discovered.

Semiconductor manufacturers continue to give us great mitigation tools in hardware, such as side-channel protections, cryptographic accelerators, and virtualized security cores. TrustZone is a great example.

And at the application level, we’ll continue to see more and better packaged tools, such as cryptographic libraries and processes, to help developers avoid cryptography mistakes. Also, we’ll see more and better process tools to automate the application of security controls to a design.

HDOs and other medical device purchasers are better informed than ever before about embedded cybersecurity features and best practices. That trend will continue, and will further accelerate demand for better-secured products.

I hope to see some effort at harmonization between all the federal, state, and foreign regulations that have been recently released with those currently under consideration.

One thing is certain: legacy medical devices that can’t be secured will only go away when we can replace them with new medical devices that are secure by design. Bringing new devices to market takes a long time. There’s lots of great innovation underway, but really, we’re just getting started!

Swiss-Swedish Diplomatic Row Over Crypto AG

Previously I have written about the Swedish-owned Swiss-based cryptographic hardware company: Crypto AG. It was a CIA-owned Cold War operation for decades. Today it is called Crypto International, still based in Switzerland but owned by a Swedish company.

It’s back in the news:

Late last week, Swedish Foreign Minister Ann Linde said she had canceled a meeting with her Swiss counterpart Ignazio Cassis slated for this month after Switzerland placed an export ban on Crypto International, a Swiss-based and Swedish-owned cybersecurity company.

The ban was imposed while Swiss authorities examine long-running and explosive claims that a previous incarnation of Crypto International, Crypto AG, was little more than a front for U.S. intelligence-gathering during the Cold War.

Linde said the Swiss ban was stopping “goods” — which experts suggest could include cybersecurity upgrades or other IT support needed by Swedish state agencies — from reaching Sweden.

She told public broadcaster SVT that the meeting with Cassis was “not appropriate right now until we have fully understood the Swiss actions.”

Working together to secure our expanding connected health future

Securing medical devices is not a new challenge. Former Vice President Cheney, for example, had the wireless capabilities of a defibrillator disabled when implanted near his heart in 2007, and hospital IT departments and health providers have for years secured medical devices to protect patient data and meet HIPAA requirements.

With the expansion of security perimeters, the surge in telehealth usage (particularly during COVID-19), and proliferation in the number and types of connected technologies, healthcare cybersecurity has evolved into a more complex and urgent effort.

Today, larger hospital systems have approximately 350,000 medical devices running simultaneously. On top of this, millions of additional connected devices are maintained by the patients themselves. Over the next 10 years, it’s estimated the number of connected medical devices could increase to roughly 50 billion, driven by innovations such as 5G, edge computing, and more. This rise in connectivity has increased the threat of cyberattacks not just to patient data, but also to patient safety. Vulnerabilities in healthcare technology (e.g., an MRI machine or pacemaker) can lead to patient harm if diagnoses are delayed or the right treatments don’t get to the right people.

What can the healthcare industry do to strengthen their defenses today? How can they lay the groundwork for more secure devices and networks tomorrow?

The challenges are interconnected. The solutions cannot be siloed, and collaboration between manufacturers, doctors, healthcare delivery organizations and regulators is more critical now than ever before.

Device manufacturers: Integrating security into product design

Many organizations view medical device cybersecurity as protecting technology while it is deployed as part of a local network. Yet medical devices also need to be designed and developed with mobile and cloud security in mind, with thoughtful consideration of the patient experience. Taking this step is especially important as medical technology moves beyond the four walls of the hospital and into the homes of patients. The connected device itself needs to be secure, rather than relying solely on the security of the network around it.

We also need greater visibility and transparency across the medical device supply chain: a “software bill of materials.” The multicomponent nature of many medical products, such as insulin pumps or pacemakers, makes the final product feel like a black box: hospitals and users know what it’s intended to do, but they don’t have much understanding of the individual components that make everything work. That makes it difficult to solve cybersecurity problems as they arise.

According to the 2019 HIMSS Cybersecurity Survey, just over 15% of significant security incidents originated from medical devices in hospitals or vendor-supplied medical devices. Some of these incidents escalated into ransomware attacks that exposed vulnerabilities, leaving healthcare providers and device makers scrambling to figure out which of their products were at risk while their systems were under threat. A software bill of materials would have helped them respond quickly to security, license, and operational risks.
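
To make that concrete, here is a deliberately simplified sketch of how an SBOM supports that kind of rapid triage. The component names are hypothetical, and real SBOMs would normally use a standard format such as SPDX or CycloneDX:

    # Hypothetical, simplified SBOM for a single device model.
    sbom = {
        "device": "ExampleInfusionPump",
        "firmware_version": "3.2.1",
        "components": [
            {"name": "embedded-tls-lib", "version": "2.16.3"},
            {"name": "rtos-kernel", "version": "10.4.0"},
            {"name": "ble-stack", "version": "5.1.0"},
        ],
    }

    def is_affected(sbom: dict, component: str, vulnerable_versions: set) -> bool:
        """Check whether a newly disclosed vulnerability applies to this device."""
        return any(
            c["name"] == component and c["version"] in vulnerable_versions
            for c in sbom["components"]
        )

    # Example: an advisory lists embedded-tls-lib 2.16.3 as vulnerable.
    print(is_affected(sbom, "embedded-tls-lib", {"2.16.3"}))   # True -> prioritize this device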

Healthcare delivery organizations: Prioritizing preparedness and patient education

Healthcare providers, for their part, need to strengthen their threat awareness and preparedness, thinking about security from device procurement all the way to the sunsetting of legacy devices, which can extend over years and decades.

It’s currently not uncommon for healthcare facilities to use legacy technology that is 15 to 20 years old. Many of these devices are no longer supported and their security doesn’t meet the baseline of today’s evolving threats. However, as there is no replacement technology that serves the same functions, we need to provide heightened monitoring of these devices.

Threat modeling can help hospitals and providers understand their risks and increase resilience. Training and preparedness exercises are imperative in another critical area of cybersecurity: the humans operating the devices. Such exercises can put doctors, for instance, in an emergency treatment scenario with a malfunctioning device, and the discussions that follow provide valuable opportunities to educate, build awareness of, and proactively prepare for cyber threats.

Providers might consider “cybersecurity informed consent” to educate patients. When a patient signs a form before a procedure that acknowledges potential risks like infection or side effects, cyber-informed consent could include risks related to data breaches, denial of service attacks, ransomware, and more. It’s an opportunity to both manage risk and engage patients in conversations about cybersecurity, increasing trust in the technology that is essential for their health.

Regulators: Connecting a complex marketplace

The healthcare industry in the US is tremendously complex, comprising hundreds of large healthcare systems, thousands of physician practice groups, public and private payers, medical device manufacturers, software companies, and so on.

This expanding healthcare ecosystem can make it difficult to coordinate. Groups like the Food & Drug Administration (FDA) and the Healthcare Sector Coordinating Council have been rising to the challenge.

They’ve assembled subgroups and task forces in areas like device development and the treatment of legacy technologies. They’ve been reaching out to hospitals, patients, medical device manufacturers, and others to strengthen information-sharing and preparedness, to move toward a more open, collaborative cybersecurity environment.

Last year, the FDA issued a safety communication to alert health care providers and patients about cybersecurity vulnerabilities identified in a wireless telemetry technology used for communication that impacted more than 20 types of implantable cardiac devices, programmers, and home monitors. Later in 2019, the same device maker recalled thousands of insulin pumps due to unpatchable cyber vulnerabilities.

These are but two examples of many that demonstrate not only the impact of cybersecurity to patient health but to device makers and the healthcare system at large. Connected health should give patients access to approved technologies that can save lives without introducing risks to patient safety.

As the world continues to realize the promise of connected technologies, we must monitor threats, manage risks, and increase our network resilience. Working together to incorporate cybersecurity into device design, industry regulations, provider resilience, and patient education is where we should start.

Contributing author: Shannon Lantzy, Chief Scientist, Booz Allen Hamilton.

Public cloud IT infrastructure spending exceeds that for non-cloud IT infrastructure

Vendor revenue from sales of IT infrastructure products (server, enterprise storage, and Ethernet switch) for cloud environments, including public and private cloud, increased 34.4% year over year in the second quarter of 2020 (2Q20), according to IDC. Investments in traditional, non-cloud, IT infrastructure declined 8.7% year over year in 2Q20.

These growth rates show the market response to major adjustments in business, educational, and societal activities caused by the COVID-19 pandemic and the role IT infrastructure plays in these adjustments.

Across the world, there were massive shifts to online tools in all aspects of human life, including collaboration, virtual business events, entertainment, shopping, telemedicine, and education. Cloud environments, and particularly public cloud, were a key enabler of this shift.

Spending on public cloud IT infrastructure increased 47.8% year over year in 2Q20, reaching $14.1 billion and exceeding the level of spend on non-cloud IT infrastructure for the first time. Spending on private cloud infrastructure increased 7% year over year in 2Q20 to $5 billion with on-premises private clouds accounting for 64.1% of this amount.

Hardware infrastructure market reaching the tipping point

The hardware infrastructure market has reached the tipping point and cloud environments will continue to account for an increasingly higher share of overall spending.

While IDC increased its forecast for both cloud and non-cloud IT spending for the full year 2020, investments in cloud IT infrastructure are still expected to exceed spending on non-cloud infrastructure, 54.8% to 45.2%.

Most of the increase in spending will be driven by public cloud IT infrastructure, which is expected to slow in 2H20 but increase by 16% year over year to $52.4 billion for the full year.

Spending on private cloud infrastructure will also experience softness in the second half of the year and will reach $21.5 billion for the full year, an increase of just 0.3% year over year.

As of 2019, the dominance of cloud IT environments over non-cloud already existed for compute platforms and Ethernet switches while the majority of newly shipped storage platforms were still residing in non-cloud environments.

Starting in 2020, with increased investments from public cloud providers in storage platforms, cloud environments will account for the majority of spending across all three technology domains.

Compute platforms to remain the largest segment of spending

Within cloud deployment environments in 2020, compute platforms will remain the largest segment (50.9%) of spending at $37.7 billion while storage platforms will be the fastest growing segment with spending increasing 21.2% to $27.8 billion, and the Ethernet switch segment will grow 3.9% year over year to $8.5 billion.

Spending on cloud IT infrastructure increased across all regions in 2Q20 with the two largest regions, China and the U.S., delivering the highest annual growth rates at 60.5% and 36.9% respectively. In all regions except Central & Eastern Europe and the Middle East & Africa, growth in public cloud infrastructure exceeded growth in private cloud IT.

At the vendor level, the results were mixed. Inspur more than doubled its revenue from sales to cloud environments, climbing into a tie for the second position in the vendor rankings while the group of original design manufacturers (ODM Direct) grew 63.6% year over year. Lenovo’s revenue exceeded $1 billion, growing at 49.3% year over year.

Long term, spending on cloud IT infrastructure is expected to grow at a five-year compound annual growth rate (CAGR) of 10.4%, reaching $109.3 billion in 2024 and accounting for 63.6% of total IT infrastructure spend. Public cloud datacenters will account for 69.4% of this amount, growing at a 10.9% CAGR.

Spending on private cloud infrastructure will grow at a CAGR of 9.3%. Spending on non-cloud IT infrastructure will rebound after 2020 but will continue to decline overall with a CAGR of -1.6%.

HP expands its Bug Bounty Program to focus on office-class print cartridge security vulnerabilities

HP has expanded its Bug Bounty Program to focus specifically on office-class print cartridge security vulnerabilities. The program underscores HP’s commitment to delivering defense-in-depth across all aspects of printing, including supply chain, cartridge chip, cartridge packaging, firmware and printer hardware.

As part of this program, HP has engaged with Bugcrowd to conduct a three-month program in which four professional white hat hackers have been challenged to identify vulnerabilities in HP Original print cartridges. If any of the hackers are successful, HP will award an extra $10,000 per vulnerability in addition to their base fee.

“Today, bad actors aiming to exploit printers with sophisticated malware pose an ever-present and growing threat to businesses and individuals alike,” said Shivaun Albright, HP Chief Technologist for Print Security.

“HP is committed to staying ahead of these issues by proactively hiring some of the brightest cybersecurity experts to help us uncover potential risks so they can be fixed before any harm is done.”

Over the past few years, there’s been a rise in attacks on embedded system technologies, which are often shared across connected devices and include PC firmware/BIOS as well as printer firmware.

Quocirca’s Print Security 2019 report revealed that 59 percent of businesses reported a print-related data loss in the past year. COVID-19 has only added new complexities, as many employees increased their remote printing practices, triggering even more potential vulnerabilities for their employers.

HP has engaged in bug bounty programs over the years to complement and extend the company’s own rigorous penetration testing. While white hat hacking is a widespread practice throughout the technology industry, HP has been a pioneer in extending this approach to printers, an oftentimes overlooked attack vector. For example, in 2018, HP launched the industry’s first print security Bug Bounty Program.

“HP has been a leader in print security for many years now, establishing new industry cybersecurity standards and garnering praise from third-party security testing labs for having some of the most secure printers,” said Mark Vena, senior analyst, Moor Insights & Strategy.

“Leadership in this area, particularly focused on secure hardware features and a firmware-based approach with imaging devices, could not come at a better time.”

In our increasingly connected world, any connected device can become an avenue of attack for hackers. Keeping up requires continuous investment and dedicated research. That’s why HP is committed to pursuing focused and rigorous testing, both internally and with third parties, to better protect its customers and partners.

Use an NVIDIA GPU? Check whether you need security updates

NVIDIA has released security updates for the NVIDIA GPU Display Driver and the NVIDIA Virtual GPU Manager that fix a variety of serious vulnerabilities.

The driver security update should be implemented by users of the company’s desktop, workstation and data center GPUs, while the vGPU software update is available for the Virtual GPU Manager component on Citrix Hypervisor, VMware vSphere, Red Hat Enterprise Linux KVM, and Nutanix AHV enterprise virtualization solutions.

NVIDIA GPU Display Driver security updates

Four security holes have been plugged in the Display Driver:

  • CVE‑2020‑5979 affects the Control Panel component and may lead to privilege escalation
  • CVE‑2020‑5980 affects multiple components and may lead to code execution or DoS
  • CVE‑2020‑5981 affects the DirectX11 user mode driver and can, according to NVIDIA, lead to DoS
  • CVE‑2020‑5982 affects the kernel mode layer and can lead to DoS.

CVE‑2020‑5980

CVE‑2020‑5980 was unearthed by Andy Gill of Pen Test Partners and the discovery detailed in a blog post published on Thursday.

The vulnerability allows for DLL hijacking, i.e., hijacking an application’s execution flow via externally loaded DLLs.

“If a vulnerable application is configured to run at a higher privilege level, then the malicious DLL that is loaded will also be executed at a higher level, thus achieving escalation of privilege. Often the application will behave no differently because malicious DLLs may also be configured to load the legitimate DLLs they were meant to replace or where a DLL doesn’t exist,” Gill explained.
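
One common developer-side mitigation is to restrict where Windows searches for DLLs so that a planted library is never picked up ahead of the legitimate one. Below is a minimal, Windows-only sketch using documented kernel32 calls, shown via Python’s ctypes purely for illustration; dbghelp.dll is just an example library name:

    import ctypes

    LOAD_LIBRARY_SEARCH_SYSTEM32 = 0x00000800   # documented Windows loader flag

    kernel32 = ctypes.windll.kernel32
    kernel32.LoadLibraryExW.restype = ctypes.c_void_p

    # Restrict the default DLL search path for all subsequent loads in this process,
    # so libraries resolve from System32 rather than the application or current directory.
    if not kernel32.SetDefaultDllDirectories(LOAD_LIBRARY_SEARCH_SYSTEM32):
        raise ctypes.WinError()

    # Explicit loads can pass the same flag directly.
    handle = kernel32.LoadLibraryExW("dbghelp.dll", None, LOAD_LIBRARY_SEARCH_SYSTEM32)
    if not handle:
        raise ctypes.WinError()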

CVE‑2020‑5981

CVE‑2020‑5981 was discovered by Piotr Bania of Cisco Talos. The CVE number covers multiple vulnerabilities and, Cisco claims, they could be exploited to achieve remote code execution (and not just DoS).

“An adversary could exploit these vulnerabilities by supplying the user with a malformed shader, eventually allowing them to execute code on the victim machine. These bugs could also allow the attacker to perform a guest-to-host escape through Hyper-V RemoteFX on Windows machines,” they say.

Users are advised to check which NVIDIA display driver version is currently installed on their system(s) and update it if necessary (updates are available from here).

NVIDIA vGPU Software security updates

Vulnerabilities CVE‑2020‑5983 to CVE‑2020‑5989 are found in the vGPU plugin and could lead to DoS, information disclosure, code execution, tampering, and privilege escalation.

Users are advised to upgrade to vGPU Software versions 11.1, 10.4, or 8.5 – updates are available through the NVIDIA Licensing Portal.

Hardware security: Emerging attacks and protection mechanisms

Maggie Jauregui’s introduction to hardware security is a fun story: she figured out how to spark, smoke, and permanently disable GFCI (Ground Fault Circuit Interrupter – the two button protections on plugs/sockets that prevent you from electrocuting yourself by accident with your hair dryer) wirelessly with a walkie talkie.

“I could also do this across walls with a directional antenna, and this also worked on AFCIs (Arc Fault Circuit Interrupters – part of the circuit breaker box in your garage), which meant you could drive by someone’s home and potentially turn off their lights,” she told Help Net Security.

This first foray into hardware security resulted in her first technical presentation ever at DEF CON and a follow up presentation at CanSecWest about the effects of radio waves on modern platforms.

Jauregui says she’s always been interested in hardware. She started out as an electrical engineering major but switched to computer science halfway through university, and ultimately applied to be an Intel intern in Mexico.

“After attending my first hackathon — where I actually met my husband — I’ve continued to explore my love for all things hardware, firmware, and security to this day, and have been a part of various research teams at Intel ever since,” she added. (She’s currently a member of the corporation’s Platform Armoring and Resilience team.)

What do we talk about when we talk about hardware security?

Computer systems – a category that these days includes everything from phones and laptops to wireless thermostats and other “smart” home appliances – are a combination of many hardware components (a processor, memory, I/O peripherals, etc.) that, together with firmware and software, are capable of delivering services and enabling the connected, data-centric world we live in.

Hardware-based security typically refers to the defenses that help protect against vulnerabilities targeting these devices, and its main focus is to make sure that the different hardware components working together are architected, implemented, and configured correctly.

“Hardware can sometimes be considered its own level of security because it often requires physical presence in order to access or modify specific fuses, jumpers, locks, etc,” Jauregui explained. This is why hardware is also used as a root of trust.

Hardware security challenges

But every hardware device has firmware – a tempting attack vector for many hackers. And though the industry has been making advancements in firmware security solutions, many organizations are still challenged by it and don’t know how to adequately protect their systems and data, she says.

She advises IT security specialists to be aware of firmware’s importance as an asset to their organization’s threat model, to make sure that the firmware on company devices is consistently updated, and to set up automated security validation tools that can scan for configuration anomalies within their platform and evaluate security-sensitive bits within their firmware.

“Additionally, Confidential Computing has emerged as a key strategy for helping to secure data in use,” she noted. “It uses hardware memory protections to better isolate sensitive data payloads. This represents a fundamental shift in how computation is done at the hardware level and will change how vendors can structure their application programs.”

Finally, the COVID-19 pandemic has somewhat disrupted the hardware supply chain and has brought to the fore another challenge.

“Because a computing system is typically composed of multiple components from different manufacturers, each with its own level of scrutiny in relation to potential supply chain attacks, it’s challenging to verify the integrity across all stages of its lifecycle,” Jauregui explained.

“This is why it is critical for companies to work together on a validation and attestation solution for hardware and firmware that can be conducted prior to integration into a larger system. If the industry as a whole comes together, we can create more measures to help protect a product through its entire lifecycle.”

Achieving security in low-end systems on chips

The proliferation of Internet of Things devices and embedded systems, and our reliance on them, makes the security of these systems extremely important.

As they commonly rely on systems on chips (SoCs) – integrated circuits that consolidate the components of a computer or other electronic system on a single microchip – securing these devices is a different proposition than securing “classic” computer systems, especially if they rely on low-end SoCs.

Jauregui says that there is no single blanket solution approach to implement security of embedded systems, and that while some of the general hardware security recommendations apply, many do not.

“I highly recommend readers check out the book Demystifying Internet of Things Security, written by Intel scientists and Principal Engineers. It’s an in-depth look at the threat model, secure boot, chain of trust, and the SW stack leading up to defense-in-depth for embedded systems. It also examines the different security building blocks available in Intel Architecture (IA) based IoT platforms and breaks down some of the misconceptions of the Internet of Things,” she added.

“This book explores the challenges to secure these devices and provides suggestions to make them more immune to different threats originating from within and outside the network.”

For those security professionals who are interested in specializing in hardware security, she advises being curious about how things work and doing research, following folks doing interesting things on Twitter and asking them things, and watching hardware security conference talks and trying to reproduce the issues.

“Learn by doing. And if you want someone to lead you through it, go take a class! I recommend hardware security classes by Joe FitzPatrick and Joe Grand, as they are brilliant hardware researchers and excellent teachers,” she concluded.

Vendor revenue in the worldwide server market grew 19.8% year over year

According to the IDC Worldwide Quarterly Server Tracker, vendor revenue in the worldwide server market grew 19.8% year over year to $24.0 billion during the second quarter of 2020. Worldwide server shipments grew 18.4% year over year to nearly 3.2 million units in 2Q20.

In terms of server class, volume server revenue was up 22.1% to $18.7 billion, while midrange server revenue declined 0.4% to about $3.3 billion and high-end systems grew by 44.1% to $1.9 billion.

“Global demand for enterprise servers was strong during the second quarter of 2020,” said Paul Maguranis, senior research analyst, Infrastructure Platforms and Technologies at IDC. “We certainly see areas of reduced spending, but this was offset by investments made by large cloud builders and enterprises targeting solutions that support shifting infrastructure needs caused by the global pandemic. Investments in Asia/Pacific were also particularly strong, growing 31% year over year.”

The worldwide server market ended 2Q20 with a statistical tie between HPE/New H3C Group and Dell Technologies for the number 1 position. HPE/New H3C Group finished the quarter with a market share of 14.9% while Dell Technologies captured a 13.9% share of worldwide revenues. Inspur/Inspur Power Systems took third place with 10.5% share and impressive 77% year-over-year growth.

Lenovo and IBM tied for fourth with 6.1% and 6.0% share, respectively. The ODM Direct group of vendors accounted for 28.8% of total server revenue at $6.9 billion with year-over-year growth of 63.4% and delivered 34.4% of all units shipped during the quarter.

On a geographic basis, the Asia/Pacific region performed very well this quarter, growing a combined 31% year over year. China outperformed the competitive set, growing 39.8% year over year, followed by Japan at 24.9%, and the rest of the region (Asia/Pacific excluding Japan and China) at 13.4%. The United States also grew 25.0% year over year while Canada declined 11.2%. Latin America was able to grow 15.6% while Europe, the Middle East and Africa (EMEA) declined 5.8% year over year.

Revenue generated from x86 servers increased 17.4% in 2Q20 to $21.6 billion. Non-x86 servers grew revenues 47.4% year over year to around $2.4 billion.

Worldwide AI spending to reach more than $110 billion in 2024

Global spending on AI is forecast to double over the next four years, growing from $50.1 billion in 2020 to more than $110 billion in 2024.

According to IDC, spending on AI systems will accelerate over the next several years as organizations deploy artificial intelligence as part of their digital transformation efforts and to remain competitive in the digital economy. The compound annual growth rate (CAGR) for the 2019-2024 period will be 20.1%.

“Companies will adopt AI — not just because they can, but because they must,” said Ritu Jyoti, Program VP, Artificial Intelligence at IDC.

“AI is the technology that will help businesses to be agile, innovate, and scale. The companies that become ‘AI powered’ will have the ability to synthesize information (using AI to convert data into information and then into knowledge), the capacity to learn (using AI to understand relationships between knowledge and apply the learning to business problems), and the capability to deliver insights at scale (using AI to support decisions and automation).”

Two of the leading drivers for AI adoption are delivering a better customer experience and helping employees to get better at their jobs. This is reflected in the leading use cases for AI, which include automated customer service agents, sales process recommendation and automation, automated threat intelligence and prevention, and IT automation. Combined, these four use cases will represent nearly a third of all AI spending this year. Some of the fastest growing use cases are automated human resources, IT automation, and pharmaceutical research and discovery.

AI spending forecast by industry

The two industries that will spend the most on AI solutions throughout the forecast are retail and banking. The retail industry will largely focus its AI investments on improving the customer experience via chatbots and recommendation engines while banking will include spending on fraud analysis and investigation and program advisors and recommendation systems.

Discrete manufacturing, process manufacturing, and healthcare will round out the top 5 industries for AI spending in 2020. The industries that will see the fastest growth in AI spending over the 2020-2024 forecast are media, federal/central government, and professional services.

“COVID-19 caused a slowdown in AI investments across the transportation industry as well as the personal and consumer services industry, which includes leisure and hospitality businesses. These industries will be cautious with their AI investments in 2020 as their focus will be on cost containment and revenue generation rather than innovation or digital experiences,” said Andrea Minonne, senior research analyst, Customer Insights & Analysis, IDC.

“On the other hand, AI has played a role in helping societies deal with large-scale disruptions caused by quarantines and lockdowns. Some European governments have partnered with AI start-ups to deploy AI solutions to monitor the outcomes of their social distancing rules and assess whether the public was complying with them. Also, hospitals across Europe are using AI to speed up COVID-19 diagnosis and testing, to provide automated remote consultations, and to optimize capacity at hospitals.”

“In the short term, the pandemic caused supply chain disruptions and store closures with continued impact expected to linger into 2021 and the outyears. For the most impacted industries, this has caused some delays in AI deployments,” said Stacey Soohoo, research manager, Customer Insights & Analysis, IDC.

“Elsewhere, enterprises have seen a silver lining in the current situation: an opportunity to become more resilient and agile in the long run. Artificial intelligence continues to be a key technology in the road to recovery for many enterprises and adopting artificial intelligence will help many to rebuild or enhance future revenue streams and operations.”

Software, hardware and geographical trends

Software and services will each account for a little more than one third of all AI spending this year with hardware delivering the remainder. The largest share of software spending will go to AI applications ($14.1 billion) while the largest category of services spending will be IT services ($14.5 billion).

Servers ($11.2 billion) will dominate hardware spending. Software will see the fastest growth in spending over the forecast period with a five-year CAGR of 22.5%.

On a geographic basis, the United States will deliver more than half of all AI spending throughout the forecast, led by the retail and banking industries. Western Europe will be the second largest geographic region, led by banking, retail, and discrete manufacturing.

China will be the third largest region for AI spending with state/local government, banking, and professional services as the leading industries. The strongest spending growth over the five-year forecast will be in Japan (32.1% CAGR) and Latin America (25.1% CAGR).

PinK: A new way of implementing a key-value store in SSDs

As web services, cloud storage, and big-data services continue expanding and finding their way into our lives, the gigantic hardware infrastructures they rely on, known as data centers, need to be improved to keep up with the current demand.

One promising solution for improving the performance and reducing the energy load associated with reading and writing large amounts of data is to confer storage devices with some computational capabilities and offload part of the data read/write process from CPUs.

A new way of implementing a key-value store

In a recent study, researchers from Daegu Gyeongbuk Institute of Science and Technology (DGIST), Korea, describe a new way of implementing a key-value store in solid state drives (SSDs), which offers many advantages over a more widely used method.

A key-value store (also known as key-value database) is a way of storing, managing, and retrieving data in the form of key-value pairs. The most common way to implement one is through the use of a hash function, an algorithm that can quickly match a given key with its associated stored data to achieve fast read/write access.

One of the main problems of implementing a hash-based key-value store is that the random nature of the hash function occasionally leads to long delays (latency) in read/write operations. To solve this problem, the researchers from DGIST implemented a different paradigm, called “log-structured merge-tree (LSM).” This approach relies on ordering the data hierarchically, therefore putting an upper bound on the maximum latency.
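
Below is a toy sketch of the LSM idea described above, for illustration only; production LSM engines layer on write-ahead logging, filtering structures, and compaction, and PinK additionally offloads certain sorting tasks to hardware accelerators inside the SSD:

    import bisect

    class TinyLSM:
        """Toy log-structured merge store: recent writes go to an in-memory memtable,
        which is periodically flushed into sorted, immutable runs."""

        def __init__(self, memtable_limit: int = 4):
            self.memtable = {}      # recent writes, unsorted
            self.runs = []          # sorted (keys, values) runs, newest first
            self.memtable_limit = memtable_limit

        def put(self, key, value):
            self.memtable[key] = value
            if len(self.memtable) >= self.memtable_limit:
                self._flush()

        def get(self, key):
            if key in self.memtable:
                return self.memtable[key]
            # Each run is searched with a bounded binary search, newest first; this
            # ordering is what gives LSM designs their predictable worst-case latency.
            for keys, values in self.runs:
                i = bisect.bisect_left(keys, key)
                if i < len(keys) and keys[i] == key:
                    return values[i]
            return None

        def _flush(self):
            items = sorted(self.memtable.items())
            self.runs.insert(0, ([k for k, _ in items], [v for _, v in items]))
            self.memtable = {}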

Letting storage devices compute some operations by themselves

In their implementation, nicknamed “PinK,” they addressed the most serious limitations of LSM-based key-value stores for SSDs. With its optimized memory use, guaranteed maximum delays, and hardware accelerators for offloading certain sorting tasks from the CPU, PinK represents a novel and effective take on data storage for SSDs in data centers.

Professor Sungjin Lee, who led the study, remarks: “Key-value store is a widely used fundamental infrastructure for various applications, including Web services, artificial intelligence applications, and cloud systems. We believe that PinK could greatly improve the user-perceived performance of such services.”

So far, experimental results confirm the performance gains offered by this new implementation and highlight the potential of letting storage devices compute some operations by themselves.

“We believe that our study gives a good direction of how computational storage devices should be designed and built and what technical issues we should address for efficient in-storage computing,” Prof Lee concludes.

The evolution of IoT asset tracking devices

Asset tracking is one of the highest growth application segments for the Internet of Things (IoT). According to a report by ABI Research, asset tracking device shipments will see a 51% year-on-year device shipment growth rate through 2024.

Expanding LPWAN coverage, technological maturity, and the associated miniaturization of sophisticated devices are key to moving asset tracking from traditionally high-value markets to low-value high-volume markets, which will account for most of the tracker connection and shipment numbers.

“Hardware devices for the asset tracking market are primarily dominated by the need to balance power consumption, form factor, and device cost. Balance and compromise between these three must be achieved based on the use-case and are dictated by the business case and possible return on investment for the customer,” said Tancred Taylor, Research Analyst at ABI Research.

“As these constraints are marginalized by greater volumes of adoption, by emerging technologies like eSIM or System-on-Chip, and by increasingly low-power components and connectivity, so too will the limitations on the business case.”

OEMs diversifying their hardware offerings

Expanding technological and network foundations drive the number of use-cases, and OEMs are responding by diversifying their hardware offerings. Some OEMs such as CoreKinect, Particle, Mobilogix, or Starcom Systems are innovating in this space by taking a reference-architecture or modular approach to device design for personalized solutions. Others are going to market with off-the-shelf or vertically-focused devices for quickly scalable deployments – such as BeWhere, Roambee, Sony, or FFLY4U.

Early adoption of asset tracking was in the fleet, container, and logistics industries to provide basic data on the location and condition of assets in transit. The total addressable market for these industries remains extensive, particularly as the solutions trickle down from the largest enterprises to small- and medium-sized companies.

Increased device functionality combined with component miniaturization is key to driving the next generation of low-cost tracking devices. This will enable granular tracking at the pallet, package, or item level, and open new markets and device categories, such as disposable trackers. Emerson, Sensitech, CoreKinect, and Bayer are among companies driving innovation in this field.

Product innovation accompanied by variations in business models

Innovation among product offerings is accompanied by variations in business models and go-to-market approaches. Mobile Network Operators (MNOs) are playing a significant role in driving adoption through increased verticalization, with Verizon, AT&T, and Orange among those offering subscription models for end-to-end solutions – comprising device, connectivity, software, and managed service offerings.

This model is additionally gaining traction among OEMs, with Roambee an early adopter for a subscription-only model, and others such as Mobilogix following suit. This service-based model will gain additional traction as OEMs move down the value-chain by developing in-house capabilities or partner networks to simplify the ecosystem and consumer’s solution.

“While there is extensive work to be done on the hardware side to make low-cost trackers that can be simply attached to any ‘thing’, many OEMs are shifting from a hardware-only model to more of a consultative approach to a customer’s requirements and deliver personalized end-to-end solutions. Flexibility, simplicity, and cost are crucial to gain enterprise traction,” Taylor concluded.

ATM makers fix flaws allowing illegal cash withdrawals

ATM manufacturers Diebold Nixdorf and NCR have fixed a number of software vulnerabilities that allowed attackers to execute arbitrary code with or without SYSTEM privileges, and to make illegal cash withdrawals by committing deposit forgery and issuing valid commands to dispense currency.

About the vulnerabilities

“Diebold Nixdorf ProCash 2100xe USB ATMs running Wincor Probase version 1.1.30 do not encrypt, authenticate, or verify the integrity of messages between the cash and check deposit module (CCDM) and the host computer. An attacker with physical access to internal ATM components can intercept and modify messages, such as the amount and value of currency being deposited, and send modified messages to the host computer,” the CERT Coordination Center at Carnegie Mellon University explained the root of CVE-2020-9062.

A deposit forgery attack starts with the attacker depositing actual currency and modifying messages from the CCDM to the host computer to indicate a greater amount or value than was actually deposited, and ends with the attacker making a withdrawal of this artificially increased amount or value of currency (at an ATM operated by a different financial institution).

A similar vulnerability (CVE-2020-10124) with the same attack potential has been found in NCR SelfServ ATMs running APTRA XFS 04.02.01 and 05.01.00: the software does not encrypt, authenticate, or verify the integrity of messages between the bunch note accepter (BNA) and the host computer.
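
In both cases the missing control is essentially message authentication on the link between the deposit module and the host. For illustration, here is a minimal sketch of that control, assuming a hypothetical per-device key and a simplified message format; the vendors’ actual fixes are described in their advisories:

    import hmac
    import hashlib
    import json

    LINK_KEY = b"per-device-key"   # hypothetical secret shared by deposit module and host

    def send_deposit_message(amount_cents: int, currency: str) -> bytes:
        """Deposit module side: tag the message so the host can detect tampering."""
        body = json.dumps({"amount_cents": amount_cents, "currency": currency}).encode()
        tag = hmac.new(LINK_KEY, body, hashlib.sha256).digest()
        return tag + body

    def receive_deposit_message(message: bytes) -> dict:
        """Host side: reject any message whose tag does not verify."""
        tag, body = message[:32], message[32:]
        expected = hmac.new(LINK_KEY, body, hashlib.sha256).digest()
        if not hmac.compare_digest(tag, expected):
            raise ValueError("deposit message failed authentication")
        return json.loads(body)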

Two additional flaws (CVE-2020-10125 and CVE-2020-10126), stemming from the software’s poor implementation of certificates to validate BNA software updates and improper validation of the software updates for the BNA, may allow an attacker to execute arbitrary code on the host, with or without SYSTEM privileges.

NCR SelfServ ATMs running APTRA XFS 05.01.00 or older also sport two more flaws:

  • CVE-2020-9063 stems from the lack of authentication and integrity protection of the USB HID communications between the currency dispenser and the host computer
  • CVE-2020-10123 is caused by the currency dispenser’s inadequate authentication of session key generation requests from the host computer, allowing the attacker to issue valid commands to dispense currency

Attack prevention

To exploit all of these flaws, attackers must have physical access to internal ATM components, but if they succeed, they can fiddle with the host system and steal money from banks.

Affected organizations are advised to peruse the security advisories and to implement the offered firmware and software updates, as well as make specific configuration changes.

Diebold also advised them to limit physical access to the ATM and its internal components, adjust deposit transaction business logic, and implement fraud monitoring.

Disrupting a power grid with cheap equipment hidden in a coffee cup

Cyber-physical systems security researchers at the University of California, Irvine can disrupt the functioning of a power grid using about $50 worth of equipment tucked inside a disposable coffee cup.

Mohammad Al Faruque, UCI associate professor of electrical engineering and computer science, and his team revealed that the spoofing mechanism can generate a 32 percent change in output voltage, a 200 percent increase in low-frequency harmonics power and a 250 percent boost in real power from a solar inverter.

Al Faruque’s group in UCI’s Henry Samueli School of Engineering has made a habit in recent years of finding exploitable loopholes in systems that combine computer hardware and software with machines and other infrastructure. In addition to heightening awareness about these vulnerabilities, the researchers invent new technologies that are better shielded against attacks.

Targeting electromagnetic components

For this project, Al Faruque and his team used a remote spoofing device to target electromagnetic components found in many grid-tied solar inverters.

“Without touching the solar inverter, without even getting close to it, I can just place a coffee cup nearby and then leave and go anywhere in the world, from which I can destabilize the grid,” Al Faruque said. “In an extreme case, I can even create a blackout.”

Solar inverters convert power collected by rooftop panels from direct to alternating current for use in homes and businesses. Often, the sustainably generated electricity will go into microgrids and main power networks. Many inverters rely on Hall sensors, devices that measure the strength of a magnetic field and are based on a technology that originated in 1879.

It’s this relatively ancient gizmo that makes many cyber-physical systems vulnerable to attack, Al Faruque said. Beyond solar inverters, Hall sensors can be found in cars, freight and passenger trains, and medical devices, among other applications.
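
As a purely illustrative sketch (not part of the UCI team’s work), a controller that consumes Hall-sensor readings could at least apply a plausibility check, flagging jumps that are larger or faster than the measured signal should physically allow. The thresholds and sample values below are assumptions for demonstration only.

```python
def spoofing_suspected(readings, max_step_ratio=0.05):
    """Flag consecutive readings that jump by more than max_step_ratio (e.g., 5%)."""
    for prev, curr in zip(readings, readings[1:]):
        if abs(curr - prev) > max_step_ratio * max(abs(prev), 1e-9):
            return True
    return False

normal = [230.0, 230.4, 229.8, 230.1]   # output voltage, slowly varying
spoofed = [230.0, 230.2, 305.0, 304.5]  # ~32% jump, like the change reported above
print(spoofing_suspected(normal), spoofing_suspected(spoofed))  # False True
```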


The components of the spoofing device

The spoofing apparatus assembled by Al Faruque’s team consists of an electromagnet, an Arduino Uno microcontroller board, and an ultrasonic sensor to measure the distance between the unit and the solar inverter. A Zigbee network appliance is used to control the mechanism within a range of about 100 meters, but that can be replaced by a Wi-Fi router that would enable remote operation from anywhere on the planet.

Anomadarshi Barua, a Ph.D. student in electrical engineering and computer science who led the development of this technique, said that the components of the spoofing device are so simple and straightforward that a high school student could construct it.

“Schools all around the world teach kids how to program an Arduino processor,” he said. “Even UCI has camps that teach this technology. However, they would need a little more advanced knowledge to figure out the control part of the system.”

Barua noted that such an attack could target an individual home or an entire grid. “You could use the device to shut down a shopping mall, an airport or a military installation,” he said.

For Al Faruque, this endeavor points out gaps in older technologies that even seasoned experts may have overlooked.

New defense method enables telecoms, ISPs to protect consumer IoT devices

Instead of relying on customers to protect their vulnerable smart home devices from being used in cyberattacks, Ben-Gurion University of the Negev (BGU) and National University of Singapore (NUS) researchers have developed a new method that enables telecommunications and internet service providers to monitor these devices.


An overview of the key steps in the proposed method

According to their new study, the ability to launch massive DDoS attacks via a botnet of compromised devices is an exponentially growing risk in the Internet of Things (IoT). Such attacks, possibly emerging from IoT devices in home networks, impact the attack target, as well as the infrastructure of telcos.

“Most home users don’t have the awareness, knowledge, or means to prevent or handle ongoing attacks,” says Yair Meidan, a Ph.D. candidate at BGU. “As a result, the burden falls on the telcos to handle. Our method addresses a challenging real-world problem that has already caused challenging attacks in Germany and Singapore, and poses a risk to telco infrastructure and their customers worldwide.”

Each connected device has a unique IP address. However, home networks typically use gateway routers with NAT functionality, which replaces the local source IP address of each outbound data packet with the household router’s public IP address. Consequently, detecting connected IoT devices from outside the home network is a challenging task.

The researchers developed a method that monitors the data traffic from each smart home device to detect connected, vulnerable IoT models before they are compromised. This enables telcos to verify whether specific IoT models known to be vulnerable to exploitation by malware are connected to the home network, and helps them identify potential threats to their networks and take preventive action quickly.
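
The paper describes a traffic-monitoring approach; the snippet below is only a rough sketch of the general idea, training a classifier on labeled flow features to spot traffic consistent with known-vulnerable device models. The feature names, file name, and model names are hypothetical, and this is not the researchers’ actual pipeline.

```python
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Hypothetical labeled dataset: one row per traffic flow seen at the telco,
# labeled with the IoT device model that generated it.
flows = pd.read_csv("labeled_flows.csv")
features = ["bytes_out", "bytes_in", "mean_pkt_size", "flow_duration", "dst_port"]
X, y = flows[features], flows["device_model"]

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)

# Flag flows classified as coming from models known to be vulnerable.
VULNERABLE_MODELS = {"acme_cam_v1", "examplecorp_plug_2"}  # placeholder model names
flagged = [m for m in clf.predict(X_test) if m in VULNERABLE_MODELS]
print(f"accuracy={clf.score(X_test, y_test):.2f}, flagged flows={len(flagged)}")
```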

By using the proposed method, a telco can detect vulnerable IoT devices connected behind a NAT, and use this information to take action. In the case of a potential DDoS attack, this method would enable the telco to take steps to spare the company and its customers harm in advance, such as offloading the large volume of traffic generated by an abundance of infected domestic IoT devices. In turn, this could prevent the combined traffic surge from hitting the telco’s infrastructure, reduce the likelihood of service disruption, and ensure continued service availability.

“Unlike some past studies that evaluated their methods using partial, questionable, or completely unlabeled datasets, or just one type of device, our data is versatile and explicitly labeled with the device model,” Meidan says. “We are sharing our experimental data with the scientific community as a novel benchmark to promote future reproducible research in this domain.” This dataset is available here.

This research is a first step toward dramatically mitigating the risk posed to telcos’ infrastructure by domestic NAT IoT devices. In the future, the researchers seek to further validate the scalability of the method, using additional IoT devices that represent an even broader range of IoT models, types and manufacturers.

“Although our method is designed to detect vulnerable IoT devices before they are exploited, we plan to evaluate the resilience of our method to adversarial attacks in future research,” Meidan says. “Similarly, a spoofing attack, in which an infected device performs many dummy requests to IP addresses and ports that are different from the default ones, could result in missed detection.”

The pandemic had a negative impact on data center operations

The COVID-19 pandemic has had a negative impact on organizations’ ability to manage their storage infrastructures, making it harder to ensure continued access for an increasingly remote workforce and to satisfy health protocols put in place to protect workers, according to StorONE.


The impact on data center operations

More than two-thirds of those surveyed maintain some level of on-premises storage. Because of the pandemic, almost 40 percent of those organizations have had no or critically restricted access to their data centers to address storage hardware failures or increase data protection levels, such as improved drive redundancies or snapshot intervals.

Reduced budgets mean that organizations will be unable to offer more performance and capacity to their users or will need to rely on better vendor pricing to supplement their needs.

Among the survey’s findings are:

  • 20 percent of organizations have not had any access to their data centers, meaning that any physical hardware failures have had to wait. Another 20 percent have had restricted access to only allow work done in critical instances. The remaining 60 percent have been able to maintain moderate access with established maintenance windows and limited workforces.
  • A third of respondents have been forced to go to the data center to replace drives despite health risks. 12.5 percent of respondents indicated that they have had to live with the risk of data loss due to access issues, while another 12.5 percent have leveraged hot spares for their failed drives.
  • 20 percent of organizations had restricted remote access to their storage systems during the pandemic, with 12 percent experiencing constrained remote administration capabilities due to hardware limitations. Another 12 percent had no remote administration during the pandemic, with connections that either failed or were impractical.
  • 33.3 percent of respondents said they had to count on their backup system for improved data protection levels, with 20.8 percent not able to enable any improvements to protection levels and 16.7 percent unable to afford the performance impact required of increased data protection.
  • 16.7 percent of companies cut their IT budgets by more than 50 percent, with 45.8 percent cutting budgets between 10 and 25 percent. 4.2 percent cut budgets between 25 and 50 percent, 16.7 percent cut by as much as 10 percent and 16.7 percent reported no cuts to their IT budget due to coronavirus.
  • To deal with reduced budgets, 40 percent of organizations are hoping for better pricing from their existing vendors, 30 percent will seek out other vendors that provide lower prices, and 30 percent will stand pat without increasing services to their users.

“While some organizations have been able to weather the storm of this unprecedented event, the negative impacts of COVID-19 on storage infrastructures are already being felt by a large majority of companies throughout the world,” said Gal Naor, CEO, StorONE.

“IT has long been expected to do more with less, but these survey results show that data is being left unprotected and unavailable in many instances due to lack of access to physical hardware or severe budget cuts. Companies cannot afford to risk their data regardless of the current issues at hand. Organizations need to implement a solution that will allow them to take existing servers and storage to create a near-zero additional cost system complete with data-protection services. A storage system with these capabilities ensures mission-critical information is always available, immediately recoverable and remains durable during times of crisis.”

Things to consider when selecting enterprise SSDs for critical workloads

The process of evaluating solid state drives (SSDs) for enterprise applications can present a number of challenges. You want maximum performance for the most demanding servers running mission-critical workloads.

We sat down with Scott Hamilton, Senior Director, Product Management, Data Center Systems at Western Digital, to learn more about SSDs and how they fit into current business environments and data centers.


What features do SSDs need to have in order to offer uncompromised performance for the most demanding servers running mission-critical workloads in enterprise environments? What are some of the misconceptions IT leaders are facing when choosing SSDs?

First, IT leaders must understand environmental considerations, including the application, use case and its intended workload, before committing to specific SSDs. It’s well understood that uncompromised performance is paramount to support mission critical workloads in the enterprise environment. However, performance has different meanings to different customers for their respective use cases and available infrastructure.

Uncompromised performance may focus more on latency (and associated consistency), IOPs (and queue depth) or throughput (and block size) depending on the use case and application.

Additionally, the scale of the application and solution dictates the level of emphasis, whether it be interface-, device-, or system-level performance. Similarly, mission-critical workloads may have different expectations or requirements, e.g., high availability support, disaster recovery, or performance and performance consistency. This is where IT leaders need to rationalize and test the best fit for their use case.

Today there are many different SSD segments that fit certain types of infrastructure choices and use cases. For example, PCIe SSD options are available from boot drives to performance NVMe SSDs and they come in different form factors such as M.2 (ultra-light and thin) and U.2 (standard 2.5-inch) to name a few. It’s also important to consider power/performance. Some applications do not require interface saturation, and can leverage low-power, single-port mainstream SSDs instead of dual-port, high-power, higher-endurance and higher-performance drives.

IT managers have choices today, and they should carefully rationalize, optimize and test for infrastructure elasticity and scaling to align their future system architecture strategies when choosing the best-fit SSD. My final word of advice: Sometimes it is not wise to pick the highest-performing SSD available on the market, as you do not want to pay for a rocket engine for a bike. Understanding the use case and success metrics – e.g., price-capacity, latency, price performance (either $/IOPS or $/GB/sec) – will help eliminate some of the misconceptions IT leaders face when choosing SSDs.
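
To make the success-metrics point concrete, here is a tiny sketch that compares hypothetical drives on price-capacity, price per kIOPS, and price per GB/s of throughput. The figures are placeholders, not benchmarks of any real product.

```python
# Placeholder figures for two hypothetical drives; substitute quoted prices
# and numbers measured against the workloads you actually run.
candidates = {
    "mainstream_nvme":  {"price_usd": 300, "capacity_tb": 1.92, "iops": 400_000, "gbps": 3.0},
    "performance_nvme": {"price_usd": 900, "capacity_tb": 3.84, "iops": 900_000, "gbps": 6.5},
}

for name, d in candidates.items():
    print(f"{name}: "
          f"${d['price_usd'] / d['capacity_tb']:.0f}/TB, "
          f"${d['price_usd'] / (d['iops'] / 1000):.2f} per kIOPS, "
          f"${d['price_usd'] / d['gbps']:.0f} per GB/s")
```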

How has the pandemic accelerated cloud adoption and how has that translated to digital transformation efforts and the creation of agile data infrastructures?

The rapid increase in our global online footprint is stressing IT infrastructure from virtual office, live video calls, online classes, healthcare services and content streaming to social media, instant messaging services, gaming and e-commerce. This is the new normal of our personal and professional lives. There is no doubt that the pandemic has increased dependence on cloud data centers and services. Private, public and hybrid cloud use cases will continue to co-exist due to costs, data governance and strategies, security and legacy application support.

Digital transformation continues all around us, and the pandemic accelerated these initiatives. Before the pandemic, digital transformation projects generally spanned several years, with lengthy and exhaustive cycles for businesses to go online and scale up their web footprint. However, 2020 has really surprised all of us. Tectonic shifts have happened (and are still happening), with projects now taking only weeks or months, even for businesses that are learning to scale up for the first time.

This infrastructure stress will further accelerate technological shifts as well, whether from SAS to NVMe at the endpoints or from DAS- or SAN-based solutions to NVMe over Fabrics (NVMe-oF) based solutions, to deliver greater agility to meet both the dynamic and unforeseen demands of the future.


Organizations are scrambling to update their infrastructure, and many are battling inefficient data silos and large operational expenses. How can data centers take full advantage of modern NVMe SSDs?

NVMe SSDs are playing a pivotal role in making the new reality possible for people and businesses around the world. As users transition from SAS and SATA, NVMe is not only increasing overall system performance and utilization, it’s creating next-generation flexible and agile IT infrastructure as well. Capitalizing on the power of NVMe, SSDs now enable data centers to run more services on their hardware, i.e., improved utilization. This is an important consideration for IT leaders and organizations who are looking to improve efficiencies.

NVMe SSDs are helping both public and private cloud infrastructures in various areas such as the highest performance storage, the lowest latency interface and the flexibility to support needs from boot to high-performance compute as well as infrastructure productivity. NVMe supports enterprise specifications for server and storage systems such as namespaces, virtualization support, scatter gather list, reservations, fused operations, and emerging technologies such as Zoned Namespaces (ZNS).

Additionally, NVMe-oF extends the benefits of NVMe technology and enables sharing data between hosts and NVMe-based platforms over a fabric. The ratification of the NVMe 1.4 and NVMe-oF 1.1 specifications, with the addition of ZNS, have further strengthened NVMe’s position in enterprise data centers. Therefore, by introducing NVMe SSDs into their infrastructure, organizations will have the tools to get more from their data assets.


What kind of demand for faster hardware do you expect in the next five years?

Now and into the future, data centers of all shapes and sizes are constantly striving to achieve greater scalability, efficiencies and increased productivity and responsiveness with the best TCO. Business leaders and IT decision-makers must understand and navigate through the complexities of cloud, edge and hybrid on-prem data center technologies and architectures, which are increasingly being relied upon to support a growing and complex ecosystem of workloads, applications and AI/IoT datasets.

More than a decade ago, IT systems relied on software running on dedicated general-purpose systems for their applications. This created many inefficiencies and scaling challenges, especially with large-scale system designs. Today, data dependence has been growing consistently and exponentially, which has forced data center architects to decouple applications from the underlying systems. This was the birth of the HCI market and, now, the composable disaggregated infrastructure market.

Next-generation infrastructures are moving to disaggregated, pooled resources (e.g., compute, accelerators and storage) that can be dynamically composed to meet the ever increasing and somewhat unpredictable demands of the future. All of this allows us to make efficient use of hardware to increase infrastructure agility, scalability and software control, remove various system bottlenecks and improve overall TCO.

BadPower: Fast chargers can be modified to damage mobile devices

If you needed another reason not to use a charger made available at a coffeeshop or airport or by an acquaintance, here it is: maliciously modified fast chargers may damage your phone, tablet or laptop and set it on fire.


Researchers from Tencent‘s Xuanwu Lab have demonstrated how some fast chargers may be easily and quickly modified to deliver too much power at once and effectively “overwhelm” digital devices.


How is this possible?

As our use of digital mobile devices has increased, so has the need to be able to charge them quickly. Fast chargers and power banks are no longer a rarity, and most digital devices now support fast charging.

The charging operation is performed after the power supply terminal and the power receiving device negotiate and agree on the amount of power both parties can support.

The set of programs that complete the power negotiation and control the charging process is usually stored in the firmware of the fast charge management chip at the power supply terminal and the power receiver terminal, the researchers explained.

Unfortunately, that code can be rewritten by malicious actors because “some manufacturers have designed interfaces that can read and write built-in firmware in the data channel, but they have not performed effective security verification of the read and write behavior, or there are problems in the verification process, or the implementation of the fast charge protocol has some memory corruption problems.”

Even worse: the attack (dubbed BadPower) can be performed in a way that will not raise any suspicion: the attacker may rewrite the firmware by simply connecting a mobile device loaded with attack code to the charger.

Users’ mobile devices can also be implanted with malware with BadPower attack capabilities and act as the infection agent for every fast charger they are connected to.

Possible solutions

Tencent’s researchers tested 35 of the 234 fast charging devices currently available on the market, and found that at least 18 of them (by 8 different brands) are susceptible to BadPower attacks.

They also discovered that at least 18 fast-charging chip manufacturers produce chips with the ability to update firmware after the product is built.

End users are advised to keep their devices safe by not giving their own fast charger and power bank to others and by not using those belonging to other people or establishments.

Ultimately, though, this is a problem that has to be solved by the manufacturers.

They should make sure that fast chargers’ firmware is without common software vulnerabilities and make sure that firmware can’t be modified without authorization.
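
One way to enforce “no unauthorized firmware modification” is to accept only vendor-signed images. The sketch below illustrates the idea with an Ed25519 signature check; the key handling, image format and library choice are assumptions for demonstration, not any charger vendor’s implementation.

```python
from cryptography.hazmat.primitives.asymmetric import ed25519

# For the demo we generate the key pair here; in practice only the public key
# is embedded in the charger and the private key never leaves the vendor.
vendor_private = ed25519.Ed25519PrivateKey.generate()
vendor_public = vendor_private.public_key()

def sign_firmware(image: bytes) -> bytes:
    return vendor_private.sign(image)

def accept_update(image: bytes, signature: bytes) -> bool:
    """Install only images whose signature verifies against the vendor key."""
    try:
        vendor_public.verify(signature, image)
        return True
    except Exception:
        return False

firmware = b"FASTCHG v2.1 placeholder image bytes"
sig = sign_firmware(firmware)
print(accept_update(firmware, sig))            # True
print(accept_update(firmware + b"\x00", sig))  # False: modified image rejected
```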

“At the same time, we also suggest adding technical requirements for safety verification during firmware update to the relevant national standards for fast charging technology,” the researchers added.

“It is recommended to add components such as chip fuses to non-fast charging and receiving equipment powered by the USB interface, or an overvoltage protection circuit that can withstand at least 20V. It is recommended that powered devices that support fast charging continue to check the input voltage and current after power negotiation to confirm that they meet the negotiated range.”

New wave of attacks aiming to rope home routers into IoT botnets

Trend Micro research is warning consumers of a major new wave of attacks attempting to compromise their home routers for use in IoT botnets. The report urges users to take action to stop their devices from enabling this criminal activity.


The importance of home routers for IoT botnets

There has been a recent spike in attacks targeting and leveraging routers, particularly around Q4 2019. This research indicates increased abuse of these devices will continue as attackers are able to easily monetize these infections in secondary attacks.

“With a large majority of the population currently reliant on home networks for their work and studies, what’s happening to your router has never been more important,” said Jon Clay, director of global threat communications for Trend Micro.

“Cybercriminals know that a vast majority of home routers are insecure with default credentials and have ramped up attacks on a massive scale. For the home user, that’s hijacking their bandwidth and slowing down their network. For the businesses being targeted by secondary attacks, these botnets can totally take down a website, as we’ve seen in past high-profile attacks.”

Brute force log-in attempts against routers increasing

The research revealed an increase from October 2019 onwards in brute force log-in attempts against routers, in which attackers use automated software to try common password combinations.

The number of attempts increased nearly tenfold, from around 23 million in September to nearly 249 million attempts in December 2019. As recently as March 2020, Trend Micro recorded almost 194 million brute force logins.

Another indicator that the scale of this threat has increased is devices attempting to open telnet sessions with other IoT devices. Because telnet is unencrypted, it’s favored by attackers – or their botnets – as a way to probe for user credentials.

At its peak, in mid-March 2020, nearly 16,000 devices attempted to open telnet sessions with other IoT devices in a single week.

Cybercriminals are competing with each other

This trend is concerning for several reasons. Cybercriminals are competing with each other to compromise as many routers as possible so they can be conscripted into botnets. These are then sold on underground sites either to launch DDoS attacks, or as a way to anonymize other attacks such as click fraud, data theft and account takeover.

Competition is so fierce that criminals are known to uninstall any malware they find on targeted routers, booting off their rivals so they can claim complete control over the device.

For the home user, a compromised router is likely to suffer performance issues. If attacks are subsequently launched from that device, their IP address may also be blacklisted – possibly implicating them in criminal activity and potentially cutting them off from key parts of the internet, and even corporate networks.

As explained in the report, there’s a thriving black market in botnet malware and botnets-for-hire. Although any IoT device could be compromised and leveraged in a botnet, routers are of particular interest because they are easily accessible and directly connected to the internet.


Recommendations for home users

  • Make sure you use a strong password. Change it from time to time.
  • Make sure the router is running the latest firmware.
  • Check logs to find behavior that doesn’t make sense for the network (see the sketch after this list for a simple starting point).
  • Only allow logins to the router from the local network.
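
As a starting point for the log-checking advice above, the following sketch counts failed login attempts per source address and flags likely brute forcing. The log format and file path are assumptions; adapt the pattern to whatever your router actually writes.

```python
import re
from collections import Counter

# Assumed syslog-style line; adjust the pattern to your router's log format.
FAILED_LOGIN = re.compile(r"authentication failure.*rhost=(?P<ip>[\d.]+)")

def suspicious_sources(log_lines, threshold=10):
    counts = Counter()
    for line in log_lines:
        match = FAILED_LOGIN.search(line)
        if match:
            counts[match.group("ip")] += 1
    return {ip: n for ip, n in counts.items() if n >= threshold}

with open("router-auth.log") as log:  # hypothetical log path
    for ip, hits in suspicious_sources(log).items():
        print(f"{ip}: {hits} failed logins")
```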

Investigation highlights the dangers of using counterfeit Cisco switches

An investigation, which concluded that counterfeit network switches were designed to bypass processes that authenticate system components, illustrates the security challenges posed by counterfeit hardware.


The suspected counterfeit switch (on the left) has port numbers in bright white, while the known genuine device has them in grey. The text itself is misaligned. The triangles indicating different ports are different shapes.

Counterfeit Cisco Catalyst 2960-X series switches

F-Secure Consulting’s Hardware Security team investigated two different counterfeit versions of Cisco Catalyst 2960-X series switches. The counterfeits were discovered by an IT company after a software update stopped them from working, which is a common reaction of forged/modified hardware to new software. At the company’s request, researchers performed a thorough analysis of the counterfeits to determine the security implications.

The investigators found that while the counterfeits did not have any backdoor-like functionality, they did employ various measures to fool security controls. For example, one of the units exploited what the research team believes to be a previously undiscovered software vulnerability to undermine secure boot processes that provide protection against firmware tampering.

“We found that the counterfeits were built to bypass authentication measures, but we didn’t find evidence suggesting the units posed any other risks,” said Dmitry Janushkevich, a senior consultant with F-Secure Consulting’s Hardware Security team, and lead author of the report. “The counterfeiters’ motives were likely limited to making money by selling the components. But we see motivated attackers use the same kind of approach to stealthily backdoor companies, which is why it’s important to thoroughly check any modified hardware.”

The counterfeits were physically and operationally similar to an authentic Cisco switch. The engineering of one of the units suggests that the counterfeiters either invested heavily in replicating Cisco’s original design or had access to proprietary engineering documentation to help them create a convincing copy.

According to F-Secure Consulting’s Head of Hardware Security Andrea Barisani, organizations face considerable challenges in trying to mitigate the security implications of sophisticated counterfeits such as those analyzed in the report.

“Security departments can’t afford to ignore hardware that’s been tampered with or modified, which is why they need to investigate any counterfeits that they’ve been tricked into using,” explained Barisani. “Without tearing down the hardware and examining it from the ground up, organizations can’t know if a modified device had a larger security impact. And depending on the case, the impact can be major enough to completely undermine security measures intended to protect an organization’s security, processes, infrastructure, etc.”

How to ensure you’re not using counterfeit components

F-Secure offers the following advice to help organizations avoid using counterfeit components:

  • Source all your components from authorized resellers.
  • Have clear internal processes and policies governing procurement.
  • Ensure all components run the latest available software provided by vendors (verifying the image digest, as in the sketch after this list, is a cheap additional check).
  • Make note of even physical differences between different units of the same product, no matter how subtle they may be.
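
As a small complement to the advice above, verifying a downloaded software image’s SHA-256 digest against the value published by the vendor is a cheap integrity check before installation. The file name and expected digest below are placeholders.

```python
import hashlib

def sha256_of(path: str) -> str:
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

EXPECTED = "0123abcd..."               # digest published by the vendor (placeholder)
IMAGE = "switch-software-image.bin"    # placeholder file name

if sha256_of(IMAGE) != EXPECTED:
    raise SystemExit("Image digest mismatch: do not install")
print("Image digest verified")
```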

USB storage devices: Convenient security nightmares

There’s no denying the convenience of USB media. From hard drives and flash drives to a wide range of other devices, they offer a fast, simple way to transport, share and store data. However, from a business security perspective, their highly accessible and portable nature makes them a complete nightmare, with data leakage, theft, and loss all common occurrences.

Widespread remote working appears to have compounded these issues. According to new research, there’s been a 123% increase in the volume of data downloaded to USB media by employees since the onset of COVID-19, suggesting many have used such devices to take large volumes of data home with them. As a result, there are hundreds of terabytes of potentially sensitive, unencrypted corporate data floating around at any given time, greatly increasing the risk of serious data loss.

Fortunately, effective implementation of USB control and encryption can significantly minimize that risk.

What is USB control and encryption?

USB control and encryption refers to the set of techniques and practices used to secure the access of devices to USB ports. Such techniques and practices form a key part of endpoint security and help protect both computer systems and sensitive data assets from loss, as well as security threats (e.g., malware) that can be deployed via physical plug-in USB devices.

There are numerous ways that USB control and encryption can be implemented. The most authoritarian approach is to block the use of USB devices altogether, either by physically covering endpoint USB ports or by disabling USB adapters through the operating system. While this is certainly effective, for the vast majority of businesses it simply isn’t a workable approach given the huge number of peripheral devices that rely on USB ports to function, such as keyboards, chargers, printers and so on.

Instead, a more practical approach is to combine less draconian physical measures with the use of encryption that protects sensitive data itself, meaning even if a flash drive containing such data is lost or stolen, its contents remain safe. The easiest (and usually most expensive) way to do this is by purchasing devices that already have robust encryption algorithms built into them.

A cheaper (but harder to manage) alternative is to implement and enforce specific IT policies governing the use of USB devices. This could either be one that only permits employees to use certain “authenticated” USB devices – whose file systems have been manually encrypted – or stipulating that individual files must be encrypted before they can be transferred to a USB storage device.
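
Here is a minimal sketch of the “encrypt before it leaves the endpoint” policy, using symmetric encryption on files written to removable media. The key handling and paths are assumptions for illustration, not a specific product’s behavior.

```python
from pathlib import Path
from cryptography.fernet import Fernet

# In practice the key would be issued and escrowed centrally by IT,
# not generated ad hoc on the endpoint.
key = Fernet.generate_key()
cipher = Fernet(key)

def copy_to_usb(src: Path, usb_mount: Path) -> Path:
    """Encrypt a file and write the ciphertext to the USB mount point."""
    dest = usb_mount / (src.name + ".enc")
    dest.write_bytes(cipher.encrypt(src.read_bytes()))
    return dest

# copy_to_usb(Path("quarterly-report.xlsx"), Path("/media/usb0"))  # hypothetical paths
```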

Greater control means better security

The default USB port controls offered as part of most operating systems tend to be quite limited in terms of functionality. Security teams can choose to leave them completely open, designate them as read-only, or fully disable them. However, for those wanting a more nuanced approach, a much greater level of granular control can be achieved with the help of third-party security applications and/or solutions. For instance, each plugged-in USB device is required to tell the OS exactly what kind of device it is as part of the connection protocol.

With the help of USB control applications, admins can use this information to limit or block certain types of USB devices on specific endpoint ports. A good example would be permitting the use of USB-connected mice via the port, but banning storage devices, such as USB sticks, that pose a much greater threat to security.

Some control applications go further still, allowing security teams to put rules in place that govern USB ports down to an individual level. This includes specifying exactly what kinds of files can be copied or transferred via a particular USB port or stipulating that a particular port can only be used by devices from a pre-approved whitelist (based on their serial number). Such controls can be extremely effective at preventing unauthorized data egress, as well as malicious actions like trying to upload malware via an unauthorized USB stick.
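
To illustrate how such granular rules might be expressed, here is a hypothetical policy check that allows HID peripherals on any approved port but only whitelisted storage-device serial numbers on designated ports. Real endpoint-control products hook the operating system’s device enumeration; the class codes, port names and serials here are placeholders.

```python
from dataclasses import dataclass

USB_CLASS_HID = 0x03           # mice, keyboards
USB_CLASS_MASS_STORAGE = 0x08  # flash drives, external disks

PORT_POLICY = {
    "front-usb-1": {USB_CLASS_HID},                         # peripherals only
    "back-usb-2": {USB_CLASS_HID, USB_CLASS_MASS_STORAGE},  # storage allowed here
}
STORAGE_WHITELIST = {"SN-ACME-0042", "SN-ACME-0043"}        # approved drive serials

@dataclass
class UsbDevice:
    device_class: int
    serial: str

def allow(device: UsbDevice, port: str) -> bool:
    allowed_classes = PORT_POLICY.get(port, set())
    if device.device_class not in allowed_classes:
        return False
    if device.device_class == USB_CLASS_MASS_STORAGE:
        return device.serial in STORAGE_WHITELIST
    return True

print(allow(UsbDevice(USB_CLASS_HID, "any"), "front-usb-1"))                 # True
print(allow(UsbDevice(USB_CLASS_MASS_STORAGE, "SN-EVIL-1"), "front-usb-1"))  # False
```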

A centrally controlled solution saves significant logistical headaches

It’s worth noting that a normal business network can contain hundreds, or even thousands of endpoints, each with one or more USB ports. As such, control and encryption solutions that can be managed centrally, rather than on an individual basis, are significantly easier to implement and manage. This is particularly true at this current point in time, where remote working protocols make it almost impossible to effectively manage devices any other way.

While portable USB drives and devices are seen as a quick, convenient way to transport or store data by employees, they often present a major headache for security professionals.

Fortunately, implementing USB control and encryption solutions can greatly improve the tools at a security team’s disposal to deal with such challenges and ensure both the network and sensitive company data remain protected at all times.

Attackers are bypassing F5 BIG-IP RCE mitigation – you might want to patch after all

Attackers are bypassing a mitigation for the BIG-IP TMUI RCE vulnerability (CVE-2020-5902) originally provided by F5 Networks, NCC Group’s Research and Intelligence Fusion Team has discovered.

“Early data made available to us, as of 08:05 on July 8, 2020, is showing of ~10,000 Internet exposed F5 devices that ~6,000 were made potentially vulnerable again due to the bypass,” they warned.

F5 Networks has updated the security advisory to reflect this discovery and to provide an updated version of the mitigation. The advisory has also been updated with helpful notes regarding the impact of the flaw, the various mitigations, as well as indicators of compromise.

CVE-2020-5902 exploitation attempts

CVE-2020-5902 was discovered and privately disclosed by Positive Technologies researcher Mikhail Klyuchnikov.

F5 Networks released patches and published mitigations last Wednesday and PT followed with more information.

Security researchers were quick to set up honeypots to detect exploitation attempts and, a few days later, after several exploits had been made public, the attempts started.

Some were reconnaissance attempts, some tried to deliver backdoors, DDoS bots, coin miners, web shells, etc. Some were attempts to scrape admin credentials off vulnerable devices in an automated fashion.

There’s also a Metasploit module for CVE-2020-5902 exploitation available (and in use).

What now?

Any organization that applied the original, incomplete mitigation instead of patching their F5 BIG-IP boxes should take action again: implement the updated mitigation or, better still, install the patches.

They should also check whether their devices have been compromised in the interim.