Technology solutions providers must empower end users to improve cybersecurity standards

Although many organizations are turning to outside cybersecurity specialists to protect their systems and data, bringing in a third-party provider remains just one piece of the security jigsaw. For some businesses, working with a technology solutions provider (TSP) creates a mindset that the problem is no longer theirs, and as a result their role in preventing and mitigating cybersecurity risks becomes more passive.


This is a dangerous misunderstanding, not least because it risks sidelining one of the most powerful influences on cybersecurity standards: employees. Their individual and collective role in defeating cybercriminals is well understood, and mobilizing everyone to help protect systems and data remains critical despite ongoing improvements in cybersecurity technologies. Every stakeholder, TSPs included, has a part to play in avoiding the dangers this passivity creates.

Despite the increasing sophistication of cyber attacks, TSPs that invest in key foundational, standardized approaches to training put their clients in a much stronger position. In particular, helping end users to focus on phishing and social engineering attacks, access and passwords, together with device and physical security can close the loop between TSP and end users and keep cybercriminals at bay.

Access, passwords, and connection

TSPs have an important role to play in training end users about key network vulnerabilities, including access privileges, passwords, and the network connection itself. For instance, their clients should know who has general or privileged access.

As a rule, privileged access is reserved for users who carry out administrative-level functions or hold more senior roles that require access to sensitive data. Employees should therefore be told what type of user they are, so they understand what they can and cannot access on the network.

Passwords remain a perennial challenge, and frequent reminders about the importance of unique passwords are a valuable element of a TSP’s training and communication strategy. The well-tried approach of using at least eight characters combining letters and special characters, while excluding obvious details like names and birthdays, can mitigate many potential risks.
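As a minimal sketch, the rule of thumb above can be expressed as a simple check – the function name and banned-word list here are illustrative, not part of any TSP toolkit:

```python
import string

# Illustrative password policy check: at least eight characters, a mix of
# letters and special characters, and no obvious personal details.
def meets_policy(password, banned_words=("name", "birthday")):
    has_letter = any(c.isalpha() for c in password)
    has_special = any(c in string.punctuation for c in password)
    no_obvious = not any(w in password.lower() for w in banned_words)
    return len(password) >= 8 and has_letter and has_special and no_obvious
```

A TSP might share such a check alongside a recommendation to use a password manager, which sidesteps the memorability problem entirely.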

A wide range of password management tools can also help individuals follow best practice – TSPs should be sharing that insight on a regular basis.

In addition, employees should be cautious about using network connections outside their home or workplace. Public networks – now practically ubiquitous – can expose corporate data on a personal device to real risk. It’s important to educate and encourage end users to connect only to trusted networks, or to secure the connection with proper VPN settings.

Social engineering and phishing

An attack that deceives a user or administrator into disclosing information is a form of social engineering, and phishing is one of the most common. Cybercriminals usually attempt these attacks by engaging the victim via email or chat, with the goal of uncovering sensitive information such as credit card details and passwords.

They are so successful because they appear to come from a credible source, but in many cases there are telltale clues that should make users suspicious. These include weblinks containing random numbers and letters, typos, unexpected communication from senior colleagues, or even just a sense that something feels wrong about the situation.
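The “random numbers and letters” clue can even be roughly automated. The heuristics below are invented for illustration – real phishing detection is far more sophisticated:

```python
import re

# Rough, illustrative heuristics for suspicious-looking links: a long run of
# mixed digits and letters in the hostname, or an unusual number of hyphens.
def looks_suspicious(url):
    host = re.sub(r"^https?://", "", url).split("/")[0]
    random_run = re.search(r"\d[a-z\d]{7,}", host.lower())
    too_many_hyphens = host.count("-") >= 3
    return bool(random_run) or too_many_hyphens
```

No heuristic replaces the “just don’t click” instinct, but flagging obviously machine-generated hostnames is a useful teaching aid in end-user training.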

But despite cybercriminals’ efforts to refine their social engineering, well-established preventative rules have remained effective. The first is simple: just don’t click. End users should trust their suspicion that something might not be right; they shouldn’t click on a link or attachment or give out any sensitive information. Just as important is informing the internal IT team or the TSP.

Alerting the right person or department in a timely manner is critical in preventing a phishing scam from having company-wide repercussions. TSPs should always encourage clients to seek their help to investigate or provide next steps.

Physical and device security

Online threats aren’t the only risks that need to be included in end user training – physical security is just as important to keeping sensitive information protected. For example, almost everyone will identify with the stress caused by accidentally leaving their phone or tablet unguarded. And unfortunately, many of us know what it’s like to lose a phone or have one stolen – the first worry that usually comes to mind is about the safety of data.

The same risks apply to workplace data if a device is left unattended, lost or stolen, but there are ways to help end users minimize the risk:

1. Keep devices locked when not in use. For many smartphone users, this is an automatic setting or a good habit they have acquired, but it also needs to be applied to desktop computers and laptops, where the same approach isn’t always applied.

2. Secure physical documents. Despite the massive surge in digital document creation and sharing, many organizations still need to use physical copies of key documents. They should be locked away when not needed, particularly outside of working hours.

3. Destroy old and unwanted information. Data protection extends to shredding documents that are no longer needed, and TSPs should be including reminders about these risks as an important addendum to their training on digital security.

This also extends to the impact BYOD policies can have on network security. For TSPs, this is a critical consideration as the massive growth in personal devices connected to corporate networks significantly increases their vulnerability to attack.

BYOD devices are susceptible to the same threats as company desktops and, without pre-installed endpoint protection, can be even less secure. Mobile devices must therefore be securely connected to the corporate network and remain in the employee’s possession. Helping employees manage device security will also help TSP security teams maintain the highest levels of vigilance.

Empowering end users to guard against the most common risks might feel intangible to employers and TSPs alike, and in reality, they may never be able to measure how many attacks they have defeated. But for TSPs, employees should form a central part of their overall security service, because failing to work with them risks failing their clients.

Moving past the madness of manually updated X.509 certificates

Microsoft’s Active Directory (AD) is by far the most widely used enterprise repository for digital identities. Microsoft Active Directory Certificate Services (ADCS) is an integrated, optional component of Windows Server designed to issue digital certificates.

Since it’s a built-in and “free” option, it’s no surprise that ADCS has been widely embraced by enterprises around the world. An on-premises or cloud/hybrid public key infrastructure (PKI) has proven to be more secure and easier to use than passwords, and is far easier to deal with when automated certificate management is integrated into AD.

For organizations operating an AD environment, the ability to leverage the certificate template information already included in Microsoft certificate services can make running a Microsoft CA extremely appealing. Since AD and Microsoft certificate services are connected, you can seamlessly register and provision certificates to all domain-connected objects based on group policies. Auto enrollment and silent installation make certificate enrollment and renewal easy for IT administrators and end users alike.

One of the greatest advantages of the Microsoft CA is automation, but that advantage does not extend to endpoints outside the Windows environment. Unfortunately, there are no free or open source Linux, UNIX or Mac tools available today that provide auto-enrollment or integrate with the Microsoft CA. The only “free” option is to manually create and renew certificates from a Microsoft CA using complicated and error-prone commands.

Within enterprise networks, Linux is often used for critical services that require X.509 trusted certificates. One typical need is for an SSL/TLS Server Authentication certificate, or web server certificate, on Red Hat Enterprise Linux (RHEL), Ubuntu server, or other Linux distributions. Certificates are also often needed on many other endpoints, including macOS-based systems, and to provide trusted security for enterprise applications.

The traditional process of manually creating an X.509 certificate on Linux, UNIX or macOS requires a working knowledge of PKI and can take three to six hours to complete. You have to generate a key pair, create a Certificate Signing Request (CSR), submit it to a Certificate Authority (CA), wait for a PKI administrator to issue a certificate, download the certificate, configure and update the application using the service, and finally verify that it’s active. An example of what’s involved in creating a CSR using OpenSSL is shown in Figure 1.


Figure 1. Using OpenSSL to create a CSR requires knowledge of enterprise PKI policies for keys and certificates and filling in lots of details by hand.
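In place of the missing screenshot, here is a sketch of the kind of OpenSSL invocations Figure 1 refers to, assembled as argument lists rather than executed – the subject fields, filenames and key size are invented for the example:

```python
# Illustrative only: the two OpenSSL commands a manual CSR request typically
# starts with, before submission to the CA and installation of the result.
key_cmd = ["openssl", "genrsa", "-out", "server.key", "2048"]
csr_cmd = [
    "openssl", "req", "-new",
    "-key", "server.key",
    "-out", "server.csr",
    "-subj", "/C=US/O=Example Corp/OU=IT/CN=www.example.com",
]
# Remaining manual steps: submit server.csr to the CA, wait for issuance,
# download the certificate, install it, and verify the service is active.
```

Every field in the `-subj` string has to match enterprise PKI policy by hand, which is exactly the knowledge burden automated enrollment removes.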

A few years ago, when multi-year certificate lifespans were the norm and certificate volumes were lower, a few manually issued certificates weren’t seen as a big problem. When exceptions like a Linux or Mac system came up, they were handled on a one-off basis, which in turn led to the creation of manual processes. Those processes were easy to justify: the certificates could be tracked in a spreadsheet, and since the numbers were small and renewals were years apart, it wasn’t worth buying a product to solve a problem the existing approach seemed to handle.

Now, however, that has most definitely changed. Apple, Google and Mozilla have implemented policies, in effect since Sept. 1, 2020, stating that any new website certificate valid for more than 398 days will not be trusted by their respective browsers and will instead be rejected. Adding to the certificate management problem is the dramatic increase in the number of devices and applications that require certificates, often numbering into the thousands or more.

Looking across the industry landscape, there are third-party certificate management systems (CMS) that can automate enterprise certificate processes more broadly, but such systems require a large-scale “rip and replace”. You have to deeply integrate each Linux system with Active Directory, including switching over your system user authentication as well.

This requires extensive changes to existing Linux and Microsoft infrastructure, staff re-training, and additional software licensing costs. The “rip and replace” approach also requires implementation time frames ranging from a few months to a few years for complex PKIs.

The advantage of such an approach – once you’ve worked through the implementation process – is end-to-end automation. When a CMS is used to create a certificate, it has all the data it needs to not only monitor the certificate for expiration but automatically provision a replacement certificate without human intervention.

While some CMS solutions can be incredibly complex, expensive, and time-consuming to install, there is a new generation of CMS offerings designed to simply extend the Microsoft-based PKI to encompass certificates on systems and applications outside the purview of Active Directory.

Such systems install into an existing Microsoft ADCS environment and mirror Windows certificate auto-enrollment onto Linux, UNIX or macOS systems. Certificates can be created automatically for registered computers from a central management console. Alternatively, as shown in Figure 2, the administrator of an end node can request a certificate with one simple command, without knowing how to create a key pair, generate a certificate request, or translate hard-to-understand enrollment templates, formats and attribute requirements.


Figure 2. Creating a CSR is greatly simplified through automation and requires no PKI expertise.

Once run, this command automatically creates a CSR, submits it to the enterprise Microsoft CA, and installs the certificate after it has been issued. This is done using pre-configured PKI policies stored on the enterprise CA, set up based on the role or purpose of the certificate. Because these policies are pre-defined, the system admin doesn’t need to know anything beyond the role or purpose of the system being set up.

Once the certificate has been installed, the admin can move on to other tasks with the knowledge that the enterprise CA will automatically renew the certificate without further intervention. And because the certificate was created by the enterprise CA and not self-signed, the certificate is automatically verifiable by any application in the enterprise.

With certificate lifetimes already getting shorter, and likely to continue shrinking, the need for automated certificate management has never been greater. While the Microsoft CA addresses the automation challenge within the Active Directory environment, far too many enterprises still rely on manual processes for non-Microsoft systems and applications. Rather than abandon the Microsoft CA at considerable expense and disruption, it may be time to consider a certificate management option that brings the benefits of ADCS to the entire enterprise.

What’s next for cloud backup?

Cloud adoption was already strong heading into 2020: according to a study by O’Reilly, 88% of businesses were using the cloud in some form in January 2020, and the global pandemic only accelerated the move to SaaS tools. This seismic shift in where businesses live day-to-day means a massive amount of business data is making its way into the cloud.


All this data is absolutely critical for core business functions. However, it is all too often mistakenly considered “safe” thanks to blind trust in the SaaS platform. But human error, cyberattacks, platform updates and software integrations can all easily compromise or erase that data … and totally destroy a business.

According to Microsoft, 94% of businesses report security benefits since moving to the cloud. Although there are definitely benefits, data is by no means fully protected – and the threat to cloud data continues to rise, especially as it ends up spread across multiple applications.

Organizations continue to overlook the simple steps they can take to better protect cloud data and their business. In fact, our 2020 Ecommerce Data Protection Survey found that one in four businesses has already experienced data loss that immediately impacted sales and operations.

Cloud data security illusions

Many companies confuse cloud storage with cloud backup. Cloud storage is just that – you’ve stored your data in the cloud. But what if, three years later, you need a record of that data and how it was moved or changed for an audit? What if you are the target of a cyberattack and suddenly your most important data is no longer accessible? What if you or an employee accidentally delete all the files tied to your new product line?

Simply storing data in the cloud does not mean it is fully protected. The ubiquity of cloud services like Box, Dropbox, Microsoft 365, Google G Suite/Drive, etc., has created the illusion that cloud data is protected and easily accessible in the event of a data loss event. Yet even the most trusted providers manage data by following the Shared Responsibility Model.

The same goes for increasingly popular business apps like BigCommerce, GitHub, Shopify, Slack, Trello, QuickBooks Online, Xero, Zendesk and thousands of other SaaS applications. Cloud service providers only fully protect system-level infrastructure and data. So while they ensure reliability and recovery for system-wide failures, the cloud app data of individual businesses is still at risk.

In the current business climate, human errors are even more likely. With the pandemic increasing the amount of remote work, employees are navigating constant distractions tied to health concerns, increasing family needs and an inordinate amount of stress.

Complicating things further, many online tools do not play nicely with each other. APIs and integrations can be a challenge when trying to move or share data between apps. Without a secure backup, one cyberattack, failed integration, faulty update or click of the mouse could wipe out the data a business needs to survive.

While top SaaS platforms continue to expand their security measures, data backup and recovery is missing from the roadmap. Businesses need to take matters into their own hands.

Current cloud backup best practices

In its most rudimentary form, a traditional cloud backup simply makes a copy of cloud data to support business continuity and disaster recovery initiatives. Proactively protecting cloud data ensures that if business-critical data is compromised, corrupted, deleted or made inaccessible, the business still has immediate access to a comprehensive, usable copy of the data needed to avoid disruption.

From multi-level user access restrictions and password managers to regularly scheduled manual downloads, there are many basic (if tedious) ways for businesses to better protect their cloud data. Some companies have invested in building more robust backup solutions to keep their cloud business data safe. However, homegrown backup solutions are costly and time-intensive, as they require constant updates to keep pace with ever-changing APIs.
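As a minimal sketch of the “regularly scheduled download” approach, the routine below writes a timestamped JSON export of whatever records an app’s API returns – the function name, directory and record shape are all invented for the example:

```python
import json
import pathlib
import time

# Illustrative timestamped export: each run produces an independent,
# restorable copy rather than overwriting a single file.
def snapshot(records, out_dir="backups"):
    path = pathlib.Path(out_dir)
    path.mkdir(exist_ok=True)
    target = path / time.strftime("export-%Y%m%d-%H%M%S.json")
    target.write_text(json.dumps(records, indent=2))
    return target
```

A real third-party backup service layers versioning, encryption and API pagination on top of this same basic idea – which is exactly why homegrown versions become expensive to maintain.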

In contrast, third-party backup solutions can provide an easier to manage, cost/time-efficient way to protect cloud data. There is a wide range of offerings though – some more reputable and secure than others. Any time business data is entrusted to a third party, reputability and security of that vendor must take center stage. If they have your data, they need to protect it.

Cloud backup providers need to meet stringent security and regulatory requirements, so look for explicit details about how they secure your data. As business data continues to move to the cloud, storage limits, increasingly complex integrations and new security concerns will heighten the need for comprehensive cloud data protection.

Evolution

The trend of business operations moving to the cloud started long before the quarantine. Nevertheless, the cloud storage and security protocols most businesses currently rely on to protect cloud data are woefully insufficient.

Critical business data used to be stored (and secured) in a central location. Companies invested significant resources to manage walls of servers. With SaaS, everything is in the cloud and distributed – apps running your store, your account team, your mailing list, your website, etc. Business data in the backend of each SaaS tool looks very different and isn’t easily transferable.

All the data has become decentralized, and most backups can’t keep pace. It isn’t a matter of “if” a business will one day have a data loss event, it’s “when”. We need to evolve cloud backups into a comprehensive, distributed cloud data protection platform that secures as much business-critical data as possible across various SaaS platforms.

As businesses begin to rethink their approach to data protection in the cloud era, backups will need to alleviate the worry tied to losing data – even in the cloud. True business data protection means not worrying about whether an online store will be taken out, whether a third-party app will cause problems, whether an export is fully up to date, where your data is stored, whether it is compliant, or whether you have all the information needed to fully (and easily) get apps back up and running after an issue.

Delivering cohesive cloud data protection, regardless of which application it lives in, will help businesses break free from backup worry. The next era of cloud data protection needs to let business owners and data security teams sleep easier.

Can automated penetration testing replace humans?

In the past few years, the use of automation in many spheres of cybersecurity has increased dramatically, but penetration testing has remained stubbornly immune to it.


While crowdsourced security has evolved as an alternative to penetration testing over the past 10 years, it’s based not on automation but on simply throwing more humans at a problem (and in the process creating its own set of weaknesses). Recently, though, tools that can automate penetration testing under certain conditions have surfaced – but can they replace human penetration testers?

How do automated penetration testing tools work?

To answer this question, we need to understand how they work, and crucially, what they can’t do. While I’ve spent a great deal of the past year testing these tools and comparing them in like-for-like tests against a human pentester, the big caveat here is that these automation tools are improving at a phenomenal rate, so depending on when you read this, it may already be out of date.

First of all, the “delivery” of the pen test is done by either an agent or a VM, which effectively simulates the pentester’s laptop and/or attack proxy plugging into your network. So far, so normal. The pentesting bot then performs reconnaissance on its environment using the same scans a human would: often a vulnerability scan with a tool of choice, or just a ports-and-services sweep with Nmap or Masscan. Once it has established where it sits within the environment, it filters through what it has found – and this is where its similarity to vulnerability scanners ends.

Vulnerability scanners simply list a series of vulnerabilities and potential vulnerabilities with no context as to their exploitability, regurgitating CVE references and CVSS scores. They will sometimes paste “proof” that a system is vulnerable, but they don’t cater well to false positives.

Automated penetration testing tools then choose from this list of targets the “best” system to take over, making decisions based on ease of exploit, noise and other such factors. For example, if presented with a Windows machine vulnerable to EternalBlue, a tool may favor it over brute-forcing an open SSH port that authenticates with a password, as it’s a known quantity and much faster and easier to exploit.
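That prioritization step can be pictured as a simple scoring function – the weights and attribute names below are invented for illustration; real tools use far richer models:

```python
# Illustrative target ranking: favor known, quiet, reliable exploits
# (e.g. EternalBlue) over slow and noisy brute forcing.
def score(target):
    s = 0
    if target.get("known_exploit"):   # a known quantity, fast to exploit
        s += 3
    if target.get("noisy"):           # brute forcing draws attention
        s -= 2
    if target.get("no_auth_needed"):
        s += 1
    return s

targets = [
    {"host": "10.0.0.5", "known_exploit": True, "no_auth_needed": True},
    {"host": "10.0.0.9", "noisy": True},  # e.g. SSH password brute force
]
best = max(targets, key=score)  # picks 10.0.0.5
```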

Once it gains a foothold, it propagates itself through the network, mimicking the way a pentester or attacker would – the only difference being that it installs a version of its own agent on the exploited machine and continues its pivot from there (there are variations in how different vendors do this).

It then starts the process again from scratch, but this time will also make sure it forensically investigates the machine it has landed on to give it more ammunition to continue its journey through your network. This is where it will dump password hashes if possible or look for hardcoded credentials or SSH keys. It will then add this to its repertoire for the next round of its expansion. So, while previously it may have just repeated the scan/exploit/pivot, this time it will try a pass-the-hash attack or try connecting to an SSH port using the key it just pilfered. Then, it pivots again from here and so on and so forth.
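The expanding scan/exploit/pivot cycle, with harvested credentials feeding later rounds, can be sketched as a graph walk – the network model and field names here are invented for the example:

```python
# Illustrative pivot loop: each compromised host yields "loot" (hashes, keys)
# that may unlock neighbors a direct exploit cannot reach.
def pivot(network, start):
    creds = set()
    compromised = {start}
    frontier = [start]
    while frontier:
        host = frontier.pop()
        creds |= network[host]["loot"]  # dump hashes, keys, hardcoded creds
        for nbr in network[host]["links"]:
            if nbr in compromised:
                continue
            # a neighbor falls to a direct exploit or to stolen credentials
            if network[nbr]["vulnerable"] or creds & network[nbr]["accepts"]:
                compromised.add(nbr)
                frontier.append(nbr)
    return compromised, creds
```

In this sketch a pass-the-hash pivot is just the `creds & accepts` intersection: a host may be fully patched yet still fall to a hash dumped elsewhere.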

If you notice a lot of similarities to how a human pentester behaves, you’re absolutely right: much of this is exactly how pentesters (and, to a lesser extent, attackers) behave. The toolsets are similar, and the techniques and vectors used to pivot are identical in many ways.

So, what’s different?

First of all, automation offers a few advantages over the ageing pentesting methodology (and the equally chaotic crowdsourced methodology). The speed of the test and reporting is many magnitudes faster, and the reports are actually surprisingly readable (after checking with some QSAs, they will also satisfy the various PCI DSS pentesting requirements).

No more waiting days or weeks for a report drafted by human hands and passed through a few rounds of QA before delivery. This points to one of the primary weaknesses of human pen tests: with continuous delivery, many reports are out of date as soon as they arrive, because the environment tested has been updated multiple times since – potentially introducing vulnerabilities and misconfigurations that weren’t present at the time of the test. This is why traditional pentesting is more akin to a snapshot of your security posture at a particular point in time.

Automated penetration testing tools get around this limitation by being able to run tests daily, or twice daily, or on every change, and deliver a report almost instantly.

The second advantage is the entry point. A human pentester may be given a specific entry point into your network, while an automated pentesting tool can run the same pen test multiple times from different entry points, uncovering vulnerable vectors within your network and monitoring various impact scenarios depending on the entry point. While this is theoretically possible with a human, it would require a huge budgetary investment, since each test would have to be paid for separately.

What are the downsides?

1. Automated penetration testing tools don’t understand web applications – at all. While they will detect something like a web server at the ports/services level, they won’t understand that you have an IDOR vulnerability in your internal API or an SSRF in an internal web page that can be used to pivot further. The web stack today is complex and, to be fair, even specialist tools like web application scanners have a hard time detecting vulnerabilities that aren’t low-hanging fruit (such as XSS or SQLi).

2. You can only use automated pentesting tools “inside” the network. As most exposed company infrastructure is web-based and automated pentesting tools don’t understand web applications, you’ll still need a good old-fashioned human pentester for testing from the outside.

To conclude, the technology shows a lot of promise, but it’s still early days: these tools aren’t ready to make human pentesters redundant just yet, though they do have a role in meeting today’s offensive security challenges that can’t be met without automation.

What the IoT Cybersecurity Improvement Act of 2020 means for the future of connected devices

Connected devices are becoming more ingrained in our daily lives and the burgeoning IoT market is expected to grow to 41.6 billion devices by 2025. As a result of this rapid growth and adoption at the consumer and commercial level, hackers are infiltrating these devices and mounting destructive hacks that put sensitive information and even lives at risk.


These attacks and potential dangers have kept security at top of mind for manufacturers, technology companies and government organizations, which ultimately led to the U.S. House of Representatives passing the IoT Cybersecurity Improvement Act of 2020.

The bill focuses on increasing the security of federal devices through standards provided by the National Institute of Standards and Technology (NIST), covering devices from development to final product. The bill also requires Homeland Security to revisit the legislation every five years and revise it as necessary, keeping it up to date with the latest technology and any new standards that come along with it.

Additional considerations

Although the bill is a step in the right direction in tightening security for federal devices, it only scratches the surface of what the IoT industry needs as a whole. However, as it is the first of its kind to be passed by the House, it’s worth considering how it will help shape the future of IoT security:

Better transparency throughout the device lifecycle

With a constant focus on innovation in the IoT industry, security is often overlooked in the rush to get a product onto shelves. By the time devices are ready to be purchased, important details like vulnerabilities may not have been disclosed throughout the supply chain, leaving sensitive data exposed to exploitation. To date, many companies have been hesitant to publish these weak spots in their device security, preferring to keep them under wraps and their competition and hackers at bay.

Now, however, the bill requires contractors and subcontractors involved in developing and selling IoT products to the government to have a program in place for reporting vulnerabilities and their subsequent resolutions. This is key to increasing end-user transparency and will better inform the government about risks found in the supply chain, so guidelines in the bill can be updated as needed.

For the future of securing connected devices, multiple stakeholders throughout the supply chain need to be held accountable for better visibility and security to guarantee adequate protection for end-users.

Public-private partnerships on the rise

Per the bill, the government will need to consult with cybersecurity experts when developing the security guidelines, to align on industry standards and best practices for better IoT device protection.

Working with industry-led organizations can provide accurate insight and allow the government to see current loopholes to create standards for real-world application. Encouraging these public-private partnerships is essential to advancing security in a more holistic way and will ensure guidelines and standards aren’t created in a silo.

Shaping consumer security from a federally focused bill

The current bill focuses only on securing devices at the federal level, but with manufacturers and technology companies working in both the commercial/government and consumer spaces, its effects will naturally reach the consumer device market too. It’s not practical for a manufacturer to follow two separate sets of guidelines for two categories of products, so the standards in place for government-contracted devices will likely be applied to all devices on the assembly line.

As the focus shifts to consumer safety after this bill, the industry has raised the challenge manufacturers could eventually face in testing products against two bills – one with federal and one with consumer standards. The only remedy is global, adoptable and scalable standards across all industries that streamline security and provide appropriate protection for every device category.

Universal standards – Are we there yet?

While this bill is a great start for the IoT industry and may serve as the catalyst for future IoT bills, there is still some room for improvement for the future of connected device security. In its current form, the bill does not explicitly define the guidelines for security, which can be frustrating and confusing for IoT device stakeholders who need to comply with them. With multiple government organizations and industry-led programs creating their own set of standards, the only way to truly propel this initiative forward is to harmonize and clearly define standards for universal adoption.

While the IoT bill signals momentum from the US government to prioritize IoT security, an international effort to establish global standards and protect connected devices must also be made, as the IoT knows no boundaries. Syncing these standards and enforcing them through trusted certification programs will hold manufacturers and tech companies accountable for security and provide transparency for all end users on a global scale.

Conclusion

The IoT Cybersecurity Improvement Act of 2020 is a landmark accomplishment for the IoT industry but is only just the beginning. As the world grows more integrated through connected devices, security standards will need to evolve to keep up with the digital transformation occurring in nearly every industry.

Because security remains a key concern for device manufacturers, tech companies, consumers and government organizations, the need for global standards remains in focus, and making them a priority will likely require an act of Congress.

Political campaigns adopt surveillance capitalism at their own peril

Since the middle of the 20th century, commercial advertising and marketing techniques have made their way into the sphere of political campaigns. The tactics associated with surveillance capitalism – the commodification of personal data for profit as mastered by companies like Google and Facebook – have followed the same path.

The race between competing political campaigns to out-collect, out-analyze and out-leverage voter data has raised concerns about the damaging effects it has on privacy and democratic participation, but also about the fact that all of this data, if seized by adversarial nation-states, opens up opportunities for affecting an election and sowing electoral chaos.

Let’s start by looking at the information available to political campaigns. Typically, everything begins and ends with the voter file, which is a compendium of information that’s rooted in public data about an individual voter, including their party affiliation and voting frequency. The goal for political operatives is to continually enrich this information and to do so better and faster than their political rivals.

Campaign field workers add to voter files with written notes reflecting conversations with and observations of actual voters. But the real magic happens when this data is augmented with other datasets that are purchased directly from a data broker or shared from outside political groups through the national party’s data exchange.

Consumer information supplied by data brokers typically draws from voters’ digital activities (such as smartphone app activity) as well as offline activities (like credit card purchases), often presenting hundreds of attributes. In addition to data on things like income and occupation, additional datapoints enable campaigns to infer a variety of lifestyle preferences and attitudes.

Within this category of consumer information, voters’ location histories have an outsized value to campaigns. For monetization purposes, many popular smartphone apps, with users’ permission, track their locations and then make this data available to data brokers or advertisers. This location data can reveal extremely private information, including where an individual lives and how often they attend religious services. Though the data is meant to be anonymous, companies can tie the data to an individual’s identity by matching their smartphone’s advertising ID number or their presumed home address with other information.

In addition to purchased data, presidential campaigns have another tool for getting information directly from supporters: the campaign app. These apps allow candidates to speak directly to voters and are intended to increase engagement through gamification or other means. But perhaps the more important driver is that these apps can serve as a huge source of data. The Trump 2020 app, for example, makes extensive permission requests, including for access to a smartphone’s identity and Bluetooth. The app can potentially sniff out much of the information on a user’s device, including their app usage.

With this trove of data at their disposal, the next step for campaigns is to combine the various datasets together into a single voter list, matching specific voters to the commercial data provided. The data is then run through custom-built models, the end result of which is that voters are put into granular segments and scored on certain issues.
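To make the matching step concrete, here is a hypothetical Python sketch of how a campaign might join a voter file with purchased consumer data on a shared key. The field names and records are invented for illustration; real list-matching systems use far more sophisticated identity resolution.

```python
# Hypothetical illustration of enriching a voter file with broker data
# by joining on a normalized name + ZIP code key.
voters = [
    {"name": "jane doe", "zip": "78701", "party": "IND", "vote_freq": 4},
]
broker = [
    {"name": "jane doe", "zip": "78701", "income_band": "75-100k", "owns_pet": True},
]

def merge_on(key_fields, left, right):
    """Left-join two lists of records on a tuple of shared key fields."""
    index = {tuple(r[k] for k in key_fields): r for r in right}
    out = []
    for row in left:
        match = index.get(tuple(row[k] for k in key_fields), {})
        out.append({**row, **match})  # voter record plus any matched attributes
    return out

enriched = merge_on(("name", "zip"), voters, broker)
print(enriched[0]["income_band"])  # 75-100k
```

The enriched record now carries both the public voter-file fields and the purchased consumer attributes, which is what downstream scoring models consume.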

Armed with these insights, campaigns can then find the voters they need to target, including voters who are potentially receptive but currently disengaged and voters who previously supported the candidate or party but have lost enthusiasm. Campaigns can also use their data learnings to boost turnout among decided voters, to register unregistered voters and even to suppress support for the opposition candidate.

But despite the value of this data to campaigns, securing it isn’t always a priority. The reality is that political campaigns are fast-moving operations where the focus is on reaching voters and raising money, not cybersecurity. As just one example of this poor data stewardship, close to 15 million records on Texas voters were found on an exposed and unsecured server just months before the 2018 midterm elections.

If another country were looking to meddle in our elections, such data could potentially be stolen and then weaponized in ways that could tip the scales for one preferred candidate or simply undermine democratic principles.

Some scenarios include:

  • The adversarial country dumps the stolen voter data online, creating a liability for the campaign from which the data was stolen (or at the very least, creating a distraction from the campaign’s messaging).
  • In an attempt to silence the opposing campaign’s high-profile supporters, the adversary doxes them using embarrassing or intensely private details gleaned from the stolen data.
  • The adversary spoofs the opposing campaign through text message, sharing disinformation about the candidate or the voting process directly to the candidate’s cadre of supporters.
  • Using a political action committee as a front, the adversary sets up a massive digital advertising scheme microtargeted to the opposition candidate’s softer supporters with messages designed to chip away at their enthusiasm for voting.
  • Leveraging psychometric insights from the stolen data, the adversary finds the opposing campaign’s ardent supporters who may be most susceptible to manipulation and then, posing as the campaign, lures the supporters into actions designed to make the campaign seem guilty by association once publicized.

In retrospect, the harvesting of data popularly associated with Cambridge Analytica wasn’t an aberration so much as it was a harbinger of the digital arms race to come in electoral politics, a race to gather as much information about citizens’ locations, habits and beliefs as possible for the purposes of better informing campaign strategies and delivering optimized messaging to individual voters.

In the absence of a national data privacy law or stricter campaign data regulations, there’s very little that any one of us can do, short of living off the grid, to prevent our personal data from being fodder for campaigns and threat actors alike. In the meantime, you may choose to reward the candidates who most respect your data and your privacy by giving them your vote.

How to apply data protection best practices to the 2020 presidential election

It’s safe to assume that we need to protect presidential election data, since it’s one of the most critical sets of information available. Not only does it ensure the legitimacy of elections and the democratic process, but also may contain personal information about voters. Given its value and sensitivity, it only makes sense that this data would be a target for cybercriminals looking for some notoriety – or a big ransom payment.

In 2016, more should have been done to protect the election and its data from foreign interference and corruption. This year, both stringent cybersecurity measures and backup and recovery protocols should be implemented in anticipation of sophisticated foreign interference.

Cybersecurity professionals in government and the public sector should look to the corporate world and mimic – and if possible improve upon – the policies and procedures being applied to keep data safe. Particularly as voting systems become more digitized, the likelihood of IT issues increases, so it’s essential to have a data protection plan in place to account for these challenges and emerging cyber threats.

The risk of ransomware in 2020

Four years ago, ransomware attacks impacting election data were significantly less threatening. Today, however, the thought of cybercriminals holding election data hostage in exchange for a record-breaking sum of money sounds entirely plausible. A recent attack on Tyler Technologies, a software provider for local governments across the US, highlighted the concerns held across the nation and left many to wonder if the software providers in charge of presidential election data might suffer a similar fate.

Regardless of whether data is recoverable, ransomware attacks typically cause IT downtime as security teams attempt to prevent the attack from spreading. While this is the best practice to follow to contain the malware, the impacts of system downtime on the day of the election could be catastrophic. To combat this, government officials should look for solutions that offer continuous availability technology.

The best defense also integrates cybersecurity and data protection, as removing segmentation streamlines the process of detecting and responding to attacks, while simultaneously recovering systems and data. This will simplify the process for stressed-out government IT teams already tasked with dealing with the chaos of election day.

Developing a plan to protect the presidential election

While ransomware is a key concern, it isn’t the only threat election data faces. The 2016 election revealed the degree to which party election data could be interfered with. Now that we know the risks, we also know that focusing solely on cybersecurity, without a backup plan in place, isn’t enough to keep this critical data secure.

The first step to any successful data protection plan is a robust backup strategy. Since the databases or cloud platforms that compile voter data are likely to be big targets, government security pros should store copies of that data in multiple locations to reduce the chance that one attack takes down an entire system. Ideally, they should follow the 3-2-1 rule by keeping three copies of data, in two locations, with one offsite or in the cloud.
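The 3-2-1 rule lends itself to a simple automated check. The following Python sketch, with invented field names, validates a backup inventory against the rule exactly as stated above: at least three copies, in at least two locations, with at least one offsite or in the cloud.

```python
from dataclasses import dataclass

@dataclass
class BackupCopy:
    location: str   # e.g., "primary-dc", "secondary-dc", "cloud"
    offsite: bool   # True if stored offsite or in the cloud

def satisfies_3_2_1(copies: list[BackupCopy]) -> bool:
    """3-2-1 rule: >= 3 copies, in >= 2 locations, >= 1 offsite."""
    return (
        len(copies) >= 3
        and len({c.location for c in copies}) >= 2
        and any(c.offsite for c in copies)
    )

inventory = [
    BackupCopy("primary-dc", offsite=False),
    BackupCopy("secondary-dc", offsite=False),
    BackupCopy("cloud", offsite=True),
]
print(satisfies_3_2_1(inventory))  # True
```

A check like this could run as part of routine backup monitoring, alerting teams the moment an inventory drifts out of compliance.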

It’s also important to protect these backups with the same level of care as critical IT infrastructure. Backups are only helpful if they’re clean and easily accessible; particularly in a time-sensitive situation like the presidential election, it’s essential to be able to recover backed-up data as quickly as possible. The last thing government officials need is missing or inaccessible votes on election day.

The need to protect this data doesn’t end when voting does, however. Government IT pros also must consider implementing a strategy for protecting stored voter data long-term. Compliance with data privacy regulations surrounding voter data is key to maintaining a fair democratic process, so they should make sure to consider any local regulations that may dictate how this data is stored and accessed. Protection that extends after the election will also be important for safeguarding against cyberattacks that might target this data down the line.

Not only could cyberattacks hold voter data hostage, they may also affect how quickly the results of the election can be determined. Voter data that is lost altogether could cause an entire election to be called into question as fraudulent. This would have a far-reaching impact on people across America and on our democratic process as a whole. Luckily, this is avoidable with a data protection and ransomware response plan that prepares government officials for when an attack happens.

Work from home strategies leave many companies in regulatory limbo

Like most American businesses, middle market companies have been forced to rapidly implement a variety of work-from-home strategies to sustain productivity and keep employees safe during the COVID-19 pandemic. This shift, in most cases, was conducted with little chance for appropriate planning and due diligence.

This is especially true in regard to the security and compliance of remote work solutions, such as new cloud platforms, remote access products and outsourced third parties. Many middle market companies lacked the resources of their larger counterparts to diagnose and address potential gaps in a timely manner, and the pressure to make these changes to continue operations meant that many of these shortcomings were not even considered at the time.

Perhaps more important than the potential security risks of these hastily deployed solutions is the risk that an organization later realizes the mechanisms it deployed lack controls required by a variety of regulatory and industry standards.

The dilemma

Take medical and financial records as an example. In a normal scenario, an organization typically walls off systems that touch such sensitive data, creating a segmented environment where few systems or people can interact with that data, and even then, only under tightly controlled conditions. However, when many companies set up work-from-home solutions, they quickly realized that their new environment did not work with the legacy architecture protecting the data. Employees could not effectively do their jobs, so snap decisions were made to allow the business to operate.

In this situation, many companies took actions, such as removing segmentation to allow the data and systems to be accessible by remote workers, which unfortunately exposed sensitive information directly to the main corporate environment. Many companies also shifted data and processes into cloud platforms without determining if they were approved for sensitive data. In the end, these workarounds may have violated any number of regulatory, industry or contractual obligations.

In the vast majority of these circumstances, there is no evidence of any type of security event or a data breach, and the control issues have been identified and addressed. However, companies are now in a position where they know that, for a period of time (as short as a few days or months in some cases), they were technically non-compliant.

Many middle market companies now face a critical dilemma: as the time comes to perform audits or self-attestation reports, do they report these potential lapses to regulatory or industry entities, such as the SEC, PCI Council, HHS, DoD or FINRA, knowing that could ultimately result in significant reputational and financial damages and, if so, to what extent?

A temporary regulatory grace period is needed, and soon

The decision is a pivotal one for a significant number of middle market companies. To date, regulators have not been showing much sympathy during the pandemic, and a large segment of the middle market finds itself in a no man’s land. If they had not made these decisions to continue business operations as best they could, they would have gone out of business. But now, if they do report these violations, the related fines and penalties will likely result in the same fate.

A solution for this crucial predicament is a potential temporary regulatory grace period. Regulatory bodies or lawmakers could establish a window of opportunity for organizations to self-identify the type and duration of their non-compliance, what investigations were done to determine that no harm came to pass, and what steps were, or will be, taken to address the issue.

Currently, the concept of a regulatory grace period is slowly gaining traction in Washington, but time is of the essence. Middle market companies are quickly approaching the time when they will have to determine just what to disclose during these upcoming attestation periods.

Companies understand that mistakes were made, but those issues would not have arisen under normal circumstances. The COVID-19 pandemic is an unprecedented event that companies could never have planned for. Business operations and personal safety initially consumed management’s attention as companies scrambled to keep the lights on.

Ultimately, many companies made the right decisions from a business perspective to keep people working and avoid suffering a data breach, even in a heightened environment of data security risks. Any grace period would not absolve the organization of responsibility for any regulatory exposures. For example, if a weakness has not already been identified and addressed, the company could still be subject to fines and other penalties at the conclusion of the amnesty window.

Even a proposed grace period would not mean that middle market companies would be completely out of the woods. Companies often must comply with a host of non-regulatory obligations, and while a grace period may provide some relief from government regulatory agencies, it would not solve similar challenges that may arise related to industry regulations, such as PCI or lapses in third-party agreements.

But a grace period from legislators could be a significant positive first step and potentially represent a blueprint for other bodies. Without some kind of lifeline, many middle market companies that disclose their temporary compliance gaps would likely be unable to continue operations, and a significant number of jobs may subsequently be lost.

MDR service essentials: Market trends and what to look for

Mark Sangster, VP and Industry Security Strategist at eSentire, is a cybersecurity evangelist who has spent significant time researching and speaking to peripheral factors influencing the way that legal firms integrate cybersecurity into their day-to-day operations. In this interview, he discusses MDR services and the MDR market.

What are the essential building blocks of a robust MDR service?

Managed Detection and Response (MDR) must combine two elements. The first is an aperture that can collect the full spectrum of telemetry. This means not only monitoring the network through traditional logging and perimeter defenses but also collecting security telemetry from endpoints, cloud services and connected IoT devices.

The wider the aperture, the more light – or signal – is captured. This creates the need to ingest a growing volume of data in near real time to aid rapid detection.

The second element is the ability to respond beyond simple alerting. This means the ability to disrupt north-south traffic at the TCP/IP, DNS and geo-fencing levels, and to disrupt application-layer traffic or at least block specific applications. It also encompasses the ability to perform endpoint forensics to determine the integrity of accessed data and systems, and to quarantine devices ranging from endpoints to industrial IoT devices and other operational systems, such as medical diagnosis and patient-management systems.

What makes an MDR service successful?

MDR services require hyper-vigilance and the ability to scale and rapidly adapt to secure emerging technology. This includes OT-based systems beyond the typical auspices of IT. It also requires an ecosystem of talent: working with universities to guide curricula, training programs, certification maintenance and career paths through the Security Operations Center (SOC) and into threat intelligence and lab work.

The MDR market is becoming more competitive and the number of providers continues to grow. What is the best approach for choosing an MDR provider?

Like any vendor selection, it is more about determining your requirements than picking vendors based on boasts or comprehensive data sheets. It means testing vendor capabilities and carefully matching them to your requirements. For example, if you don’t have internal forensics capabilities, then a vendor that is good at detection but only provides alerts won’t solve your problem.

Find a vendor that provides full services and matches your internal capabilities.

How do you see the MDR market evolving in the near future? What are organizations looking for?

More and more, companies will move to outsourced SOC-like services. This means MDR firms need to up their game, and a tighter definition must come into play to weed out pretender firms. Too much rests on their capabilities.

MDR vendors also need to focus on emerging tech (5G, IIoT, etc.) and be prepared to defend against larger adversaries, like organized criminal elements and state-sponsored actors who now troll the midmarket space.

5 tips to reduce the risk of email impersonation attacks

Email attacks have moved past standard phishing and become more targeted over the years. In this article, I will focus on email impersonation attacks, outline why they are dangerous, and provide some tips to help individuals and organizations reduce their risk exposure to impersonation attacks.

What are email impersonation attacks?

Email impersonation attacks are malicious emails where scammers pretend to be a trusted entity to steal money and sensitive information from victims. The trusted entity being impersonated could be anyone – your boss, your colleague, a vendor, or a consumer brand you get automated emails from.

Email impersonation attacks are tough to catch and worryingly effective because we tend to take quick action on emails from known entities. Scammers use impersonation in concert with other techniques to defraud organizations and steal account credentials, sometimes without victims realizing they have been defrauded until days after the fact.

Fortunately, we can all follow some security hygiene best practices to reduce the risk of email impersonation attacks.

Tip #1 – Look out for social engineering cues

Email impersonation attacks are often crafted with language that induces a sense of urgency or fear in victims, coercing them into taking the action the email wants them to take. Not every email that makes us feel these emotions will be an impersonation attack, of course, but it’s an important factor to keep an eye out for, nonetheless.

Here are some common phrases and situations you should look out for in impersonation emails:

  • Short deadlines given at short notice for processes involving the transfer of money or sensitive information.
  • Unusual purchase requests (e.g., iTunes gift cards).
  • Employees requesting sudden changes to direct deposit information.
  • A vendor sharing new bank account details for an upcoming invoice payment.

This email impersonation attack exploits the COVID-19 pandemic to make an urgent request for gift card purchases.

Tip #2 – Always do a context check on emails

Targeted email attacks bank on victims being too busy and “doing before thinking” instead of stopping and engaging with the email rationally. While it may take a few extra seconds, always ask yourself if the email you’re reading – and what the email is asking for – make sense.

  • Why would your CEO really ask you to purchase iTunes gift cards at two hours’ notice? Have they done it before?
  • Why would Netflix emails come to your business email address?
  • Why would the IRS ask for your SSN and other sensitive personal information over email?

To sum up this tip, I’d say: be a little paranoid while reading emails, even if they’re from trusted entities.

Tip #3 – Check for email address and sender name deviations

To stop email impersonation, many organizations have deployed keyword-based protection that catches emails where the email addresses or sender names match those of key executives (or other related keywords). To get past these security controls, impersonation attacks use email addresses and sender names with slight deviations from those of the entity the attacks are impersonating. Some common deviations to look out for are:

  • Changes to the spelling, especially ones that are missed at first glance (e.g., “ei” instead of “ie” in a name).
  • Changes based on visual similarities to trick victims (e.g. replacing “rn” with “m” because they look alike).
  • Business emails sent from personal accounts like Gmail or Yahoo without advance notice. It’s advisable to validate the identity of the sender through secondary channels (text, Slack, or phone call) if they’re emailing you with requests from their personal account for the first time.
  • Descriptive changes to the name, even if the changes fit in context. For example, attackers impersonating a Chief Technology Officer named Ryan Fraser may send emails with the sender name as “Ryan Fraser, Chief Technology Officer”.
  • Changes to the components of the sender name (e.g., adding or removing a middle initial, abbreviating Mary Jane to MJ).
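The deviation checks above can be partially automated. The Python sketch below uses a hypothetical directory of known senders, normalizes a few common visual look-alike substitutions, and flags near-matches; a real control would use a curated confusables table and tuned similarity thresholds rather than these illustrative values.

```python
from difflib import SequenceMatcher

# Hypothetical directory of known senders (display name -> canonical address)
KNOWN_SENDERS = {"Ryan Fraser": "ryan.fraser@example.com"}

# A few common visual look-alike substitutions (illustrative, not exhaustive)
CONFUSABLES = [("rn", "m"), ("vv", "w"), ("0", "o"), ("1", "l")]

def normalize(text: str) -> str:
    """Lowercase and collapse look-alike substitutions before comparing."""
    text = text.lower()
    for fake, real in CONFUSABLES:
        text = text.replace(fake, real)
    return text

def looks_like_impersonation(sender_name: str, sender_addr: str) -> bool:
    for name, addr in KNOWN_SENDERS.items():
        name_sim = SequenceMatcher(None, normalize(sender_name), normalize(name)).ratio()
        addr_sim = SequenceMatcher(None, normalize(sender_addr), normalize(addr)).ratio()
        # A near-match on the display name but a different address is suspicious
        if name_sim > 0.8 and sender_addr.lower() != addr:
            return True
        # A near-but-not-exact address (e.g., one swapped letter) is suspicious
        if 0.85 < addr_sim < 1.0:
            return True
    return False

print(looks_like_impersonation("Ryan Fraser", "ryan.fraser@gmaiI.example"))  # True
```

Heuristics like this catch the low-effort deviations; they complement, rather than replace, human scrutiny of unexpected requests.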

Tip #4 – Learn the “greatest hits” of impersonation phrases

Email impersonation has been around for long enough that there are well-known phrases and tactics we need to be aware of. The emails don’t always have to be directly related to money or data – the first email is sometimes a simple request, just to see who bites and buys into the email’s faux legitimacy. Be aware of the following phrases/context:

  • “Are you free now?”, “Are you at your desk?” and related questions are frequent opening lines in impersonation emails. Because they seem like harmless emails with simple requests, they get past email security controls and lay the bait.
  • “I need an urgent favor”, “Can you do something for me within the next 15 minutes?”, and other phrases implying the email is of a time-sensitive nature. If you get this email from your “CEO”, your instinct might be to respond quickly and be duped by the impersonation in the process.
  • “Can you share your personal cell phone number?”, “I need your personal email”, and other out-of-context requests for personal information. The objective of these requests is to harvest information and build out a profile of the victim; once adversaries have enough information, they have another entity to impersonate.
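As a toy illustration of how these phrases can serve as signals, the Python sketch below counts known impersonation cues in an email body. The phrase list is illustrative only; a production filter would draw on a much larger, curated corpus and combine this signal with sender analysis.

```python
import re

# Illustrative cues drawn from the common impersonation phrases above
SUSPICIOUS_PHRASES = [
    r"are you (free|at your desk)",
    r"urgent favor",
    r"within the next \d+ minutes",
    r"personal (cell phone number|email)",
    r"gift cards?",
]

def urgency_score(body: str) -> int:
    """Count how many known impersonation cues appear in an email body."""
    body = body.lower()
    return sum(1 for pattern in SUSPICIOUS_PHRASES if re.search(pattern, body))

email = ("Are you at your desk? I need an urgent favor - "
         "buy 5 gift cards within the next 15 minutes.")
print(urgency_score(email))  # 4
```

A score above some threshold could route the message for extra scrutiny rather than outright blocking, since legitimate emails occasionally trip one or two of these cues.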

Tip #5 – Use secondary channels of authentication

Enterprise adoption of two-factor authentication (2FA) has grown considerably over the years, helping safeguard employee accounts and reduce the impact of account compromise.

Individuals should try to replicate this best practice for any email that makes unusual requests related to money or data. For example:

  • Has a vendor emailed you with a sudden change in their bank account details, right when an invoice is due? Call or text the vendor and confirm that they sent the email.
  • Did your manager email you asking for gift card purchases? Send them a Slack message (or whatever productivity app you use) to confirm the request.
  • Did your HR representative email you a COVID resource document that needs email account credentials to be viewed? Check the veracity of the email with the HR rep.

Even if you’re reaching out to very busy people for this additional authentication, they will understand and appreciate your caution.

These tips are meant as starting points for individuals and organizations to better understand email impersonation and start addressing its risk factors. But effective protection against email impersonation can’t come down to eye tests alone. Enterprise security teams should conduct a thorough audit of their email security stack and explore augmenting native email security with tools that offer specific protection against impersonation.

With email more important to our digital lives than ever, it’s vital that we are able to believe people are who their email says they are. Email impersonation attacks exploit this sometimes-misplaced belief. Stopping email impersonation attacks will require a combination of security hygiene, email security solutions that provide specific impersonation protection, and some healthy paranoia while reading emails – even if they seem to be from people you trust.

Data protection predictions for 2021

2020 presented us with many surprises, but the world of data privacy somewhat bucked the trend. Many industry verticals suffered losses, uncertainty and closures, but the protection of individuals and their information continued to truck on.

After many websites simply blocked access unless you accepted their cookies (a practice now deemed unlawful), we received clarity on cookies from the European Data Protection Board (EDPB). With the ending of Privacy Shield, we witnessed the cessation of a legal basis for cross-border data transfers.

Severe fines levied for General Data Protection Regulation (GDPR) non-compliance showed organizations that the regulation is far from toothless and that data protection authorities are not easing up just because there is an ongoing global pandemic.

What can we expect in 2021? Undoubtedly, the number of data privacy cases brought before the courts will continue to rise. That’s not necessarily a bad thing: with each case comes additional clarity and precedent on many different areas of the regulation that, to date, are open to interpretation and conjecture.

The last time I spoke to the UK Information Commissioner’s Office regarding a technicality surrounding data subject access requests (DSARs) submitted by a representative, I was told that I was far from the only person enquiring about it. This illustrates some of the ambiguities faced by those responsible for implementing and maintaining compliance.

Of course, this is just the GDPR. There are many other data privacy legislative frameworks to consider. We fully expect 2021 to bring full and complete alignment of the ePrivacy Regulation with the GDPR, eradicating the conflict that exists today, particularly around consent, the soft opt-in, etc., where the GDPR is very clear but the current Privacy and Electronic Communications Regulations (PECR) are not quite so much.

These developments are just inside Europe; across the globe, we’re seeing continued development of data localization laws that organizations are mandated to adhere to. In the US, the California Consumer Privacy Act (CCPA) has kickstarted a swathe of data privacy reforms within many states, with many calls for something similar at the federal level.

The following year(s) will see that build and, much like with the GDPR, precedent-setting cases are needed to provide more clarity regarding the rules. Will Americans look to replace the shattered Privacy Shield framework, or will they adopt Standard Contractual Clauses (SCCs) more widely? SCCs are a very strong legal basis, providing the clauses are updated to align with the GDPR (something else we’d expect to see in 2021), and I suspect the US will take this road as the realization of the importance of trade with the EU grows.

Other noteworthy movements in data protection laws are happening in Russia, with amendments to the Federal Law on Personal Data that take a closer look at TLS as a protective measure, and in the Philippines, where the Data Privacy Act of 2012 is set to be replaced by a new bill (currently a work in progress, but it’s coming).

One of the biggest events of 2021 will be the UK leaving the EU. The British implementation of the GDPR comes in the form of the Data Protection Act 2018. Aside from a few derogations, it’s the GDPR, and that’s great… as far as it goes. Having strong local data privacy laws is good, but after enjoying 47 years (at the time of writing) of free movement within the Union, how will being outside of the EU impact British business?

It is thought and hoped that the UK will be granted an adequacy decision fairly swiftly, given that local UK laws have historically aligned with those inside the Union, but there is no guarantee. The uncertainty around how data transfers will look in the future might result in British industry using more SCCs. The currently low-priority plans to make Binding Corporate Rules (BCRs) easier and more affordable will come sharply to the fore as demand for them goes up.

One thing is certain, it’s going to be a fascinating year for data privacy and we are excited to see clearer definitions, increased certification, precedent-setting case law and whatever else unfolds as we continue to navigate a journey of governance, compliance and security.

Moving to the cloud with a security-first, zero trust approach

Many companies jump into the cloud before thinking about security. They may believe they’ve considered security, but when moving to the cloud, the whole concept of security changes, and the security model must transform as well.

Moving to the cloud and staying secure

Most companies maintain a “castle, moat, and drawbridge” attitude to security. They put everything inside the “castle” (the datacenter); establish a moat around it, complete with sharks, alligators, and guns on turrets; and control access by raising the drawbridge. Access requests are vetted against firewall rules, which grant or deny entry. That’s perimeter security.

When moving to the cloud, perimeter security is still important, but identity-based security is available to strengthen the security posture. That’s where a cloud partner skilled at explaining and operating a different security model is needed.

Anybody can grab a virtual machine, build the machine in the cloud, and be done, but establishing a VM and transforming the machine to a service with identity-based security is a different prospect. When identity is added to security, the model looks very different, resulting in cost savings and an increased security posture.
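To make the contrast concrete, here is a minimal sketch of the difference between a perimeter decision and an identity-based decision. All names and policies here are hypothetical, not any cloud provider's actual API:

```python
# Illustrative sketch: perimeter (network-location) vs identity-based access.
from ipaddress import ip_address, ip_network

TRUSTED_NET = ip_network("10.0.0.0/8")  # the "castle" perimeter

def perimeter_allows(source_ip):
    # Perimeter model: trust is a function of network location only.
    return ip_address(source_ip) in TRUSTED_NET

def identity_allows(claims, resource):
    # Identity model: trust is a function of who is asking and for what,
    # regardless of where the request originates.
    return resource in claims.get("allowed_resources", [])

# A compromised machine inside the perimeter passes the first check,
# but without the right identity claims it still fails the second.
print(perimeter_allows("10.1.2.3"))   # True
print(identity_allows({"sub": "svc-billing",
                       "allowed_resources": ["billing-db"]}, "payroll-db"))  # False
```

The point of the sketch: adding identity to the decision shrinks the blast radius of a perimeter breach, which is where the cost and posture gains come from.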

Advanced technology, cost of security, and lack of cybersecurity professionals place a strain on organizations. Cloud providers invest heavily in infrastructure, best-in-class tools, and a workforce uniquely focused on security. As a result, organizations win operationally, financially, and from a security perspective, when moving to the cloud. To be clear, moving applications and servers, as is, to the cloud does not make them secure.

Movement to the cloud should be a standardized process, ideally run through a Cloud Center of Excellence (CCoE) or Cloud Business Office (CBO). When that process puts security first, organizations can reap the security benefits.

Shared responsibility

Although security is marketed as a shared responsibility in the cloud, ultimately the owner of the data (the customer) is responsible, and that responsibility is non-transferable. In short, the customer must understand the responsibility matrix (RACI) involved to accomplish their end goals. Every cloud provider has a shared responsibility matrix, but organizations often misunderstand the responsibilities, or the lines fall into a grey area. Regardless of responsibility models, the data owner has a responsibility to protect the information and systems. As a result, the enterprise must maintain a clear understanding of all stakeholders, their responsibilities, and their status.

When choosing a partner, it’s vital for companies to identify their exact needs, their weaknesses, and even their culture. No cloud vendor will cover it all from the beginning, so it’s essential that organizations take control and ask the right questions (see Cloud Security Alliance’s CAIQ), in order to place trust in any cloud provider. If it’s to be a managed service, for example, it’s crucial to ask detailed questions about how the cloud provider intends to execute the offering.

It’s important to develop a standard security questionnaire and probe multiple layers deep into the service model until the provider is unable to meet the need. Looking through a multilayer deep lens allows the customer and service provider to understand the exact lines of responsibility and the details around task accomplishment.

Trust-as-a-Service

It might sound obvious, but it’s worth stressing: trust is a shared responsibility between the customer and cloud provider. Trust is also earned over time and is critical to the success of the customer-cloud provider relationship. Zero trust, by contrast, is a technical term: from a technology viewpoint, assume danger and breach. Organizations must trust their cloud provider but should avoid blind trust and validate. Trust-as-a-Service (TaaS) is a newer term that refers to third-party endorsement of a provider’s security practices.

Key influencers of a customer’s trust in their cloud provider include:

  • Data location
  • Investigation status and location of data
  • Data segregation (keeping cloud customers’ data separated from others)
  • Availability
  • Privileged access
  • Backup and recovery
  • Regulatory compliance
  • Long-term viability

A TaaS example: Google Cloud

Google has taken great strides to earn customer trust, designing the Google Cloud Platform with a keen eye on zero trust and implementing the model through BeyondCorp. For example, Google has implemented two core concepts:

  • Delivery of services and data: ensuring that people with the correct identity and the right purpose can access the required data every time
  • Prioritization and focus: access and innovation are placed ahead of threats and risks, meaning that as products are innovated, security is built into the environment

Transparency is very important to the trust relationship. Google has enabled transparency through strong visibility and control of data. When evaluating cloud providers, understanding their transparency related to access and service status is crucial. Google ensures transparency by using specific controls including:

  • Limited data center access from a physical standpoint, adhering to strict access controls
  • Disclosing how and why customer data is accessed
  • Incorporating a process of access approvals

Multi-layered security for a trusted infrastructure

Finally, cloud services must provide customers with an understanding of how each layer of the infrastructure works, with rules built into each: operational and device security, encryption of data at rest, multiple layers of identity, and finally storage services, all multi-layered and secure by default.

Cloud native companies have a security-first approach and naturally have a higher security understanding and posture. That said, when choosing a cloud provider, enterprises should always understand, identify, and ensure that their cloud solution addresses each one of their security needs, and who’s responsible for what.

Essentially, every business must find a cloud partner that can answer all the key questions, provide transparency, and establish a trusted relationship in the zero trust world where we operate.

Preventing cybersecurity’s perfect storm

Zerologon might have been cybersecurity’s perfect storm: that moment when multiple conditions collide to create a devastating disaster. Thanks to Secura and Microsoft’s rapid response, it wasn’t.

Zerologon scored a perfect 10 CVSS score. Threats rating a perfect 10 are easy to execute and have deep-reaching impact. Fortunately, they aren’t frequent, especially in prominent software brands such as Windows. Still, organizations that perpetually lag when it comes to patching become prime targets for cybercriminals. Flaws like Zerologon are rare, but there’s no reason to assume that the next attack will not be using a perfect 10 CVSS vulnerability, this time a zero-day.

Zerologon: Unexpected squall

Zerologon escalates a domain user beyond their current role and permissions to a Windows Domain Administrator. This vulnerability is trivially easy to exploit. While it seems that the most obvious threat is a disgruntled insider, attackers may target any average user. The most significant risk comes from a user with an already compromised system.

In this scenario, a bad actor has already taken over an end user’s system but is constrained only to their current level of access. By executing this exploit, the bad actor can break out of their existing permissions box. This attack grants them the proverbial keys to the kingdom in a Windows domain to access whatever Windows-based devices they wish.

Part of why Zerologon is problematic is that many organizations rely on Windows as an authoritative identity for a domain. To save time, they promote their Windows Domain Administrators to an Administrator role throughout the organizational IT ecosystem and assign bulk permissions, rather than adding them individually. This method eases administration by removing the need to update the access permissions frequently as these users change jobs. This practice violates the principle of least privilege, leaving an opening for anyone with a Windows Domain Administrator role to exercise broad-reaching access rights beyond what they require to fulfill the role.

Beware of sharks

Advanced preparation for attacks like these requires a fundamental paradigm shift in organizational boundary definitions, away from a legacy mentality to a more modern cybersecurity mindset. The traditional castle model assumes all threats remain outside the firewall boundary and trusts, to some degree, everything that is either natively internal or connected via VPN.

Modern cybersecurity professionals understand the advantage of controls like zero standing privilege (ZSP), which authorizes no one by default and requires that each user request access and be evaluated before privileged access is granted. Think of it much like the security check at an airport. To get in, everyone (passenger, pilot, even store staff) needs to be inspected, prove they belong, and show they have nothing questionable in their possession.

This continual re-certification prevents users from gaining access once they’ve experienced an event that alters their eligibility, such as leaving the organization or changing positions. Checking permissions before approving them ensures only those who currently require a resource can access it.

My hero zero (standing privilege)

Implementing the design concept of zero standing privilege is crucial to hardening against privilege escalation attacks, as it removes the administrator’s vast standing power and access. Users acquire these rights for a limited period and only on an as-needed basis. This Just-In-Time (JIT) method of provisioning creates a better access review process: requests are either granted time-bound access or flagged for escalation to a human approver, ensuring human oversight of the automation.
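The JIT flow described above can be sketched as follows; the privilege names, auto-approval policy, and TTL are illustrative assumptions, not any product's behavior:

```python
# Hypothetical sketch of Just-In-Time, time-bound grants under zero standing
# privilege.
import time

GRANTS = {}  # (user, privilege) -> expiry timestamp
AUTO_APPROVABLE = {"read-logs"}  # low-risk requests an automated policy may grant

def request_access(user, privilege, ttl_seconds=900):
    """Grant time-bound access, or flag the request for a human approver."""
    if privilege not in AUTO_APPROVABLE:
        return "escalated-to-approver"  # human oversight of the automation
    GRANTS[(user, privilege)] = time.time() + ttl_seconds
    return "granted"

def has_access(user, privilege):
    # No standing privilege: access exists only inside an unexpired grant.
    return GRANTS.get((user, privilege), 0) > time.time()

print(request_access("alice", "read-logs"))     # granted (expires in 15 min)
print(has_access("alice", "read-logs"))         # True while the grant is live
print(request_access("alice", "domain-admin"))  # escalated-to-approver
```

Because every grant expires, an attacker who compromises an account inherits no standing power; they would have to pass the request-and-approve gate like everyone else.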

An essential component of zero standing privilege is avoiding super-user roles and access. Old-school practitioners may find this odd and question the impact on the daily administrative tasks that keep the ecosystem running. Users manage these tasks through heavily logged, time-limited permission assignments. Reliable user behavior analytics, combined with risk-based privileged access management (PAM) and machine learning-supported log analysis, offers organizations better contextual identity information. Understanding how privileged access is leveraged and identifying access misuse before it takes root is vital to preventing a breach.

Peering into the depths

To even start with zero standing privilege, an organization must understand what assets they consider privileged. The categorization of digital assets begins the process. The next step is assigning ownership of these resources. Doing this allows organizations to configure the PAM software to accommodate the policies and access rules defined organizationally, ensuring access rules meet governance and compliance requirements.

The PAM solution requires in-depth visibility of each individual’s full access across all cloud and SaaS environments, as well as throughout the internal IT infrastructure. This information improves the identification of toxic combinations, where granted permissions create compliance issues such as segregation of duties (SoD) violations.

AI & UEBA to the rescue

Zero standing privilege generates a large number of user logs and behavioral information over time. Manual log review becomes unsustainable very quickly. Leveraging the power of AI and machine learning to derive intelligent analytics allows organizations to identify risky behaviors and locate potential breaches far faster than human users.

Integrating user and entity behavior analytics (UEBA) software establishes baselines of behavior, triggering alerts when deviations occur. UEBA systems detect insider threats and advanced persistent threats (APTs) while generating contextual identity information.

UEBA systems track all behavior linked back to an entity and identify anomalous behaviors, such as spikes in access requests, requests for data that would typically not be allowed for that user’s roles, or systematic access to numerous items. Contextual information helps organizations identify situations that might indicate a breach or point to unauthorized exfiltration of data.
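The spike detection at the heart of such analytics can be illustrated with a simple statistical baseline. Real UEBA products use far richer models; this sketch only shows the core idea:

```python
# Toy illustration of a behavioral baseline: flag days that deviate sharply
# from an entity's historical access-request counts.
from statistics import mean, stdev

def anomalous(history, today, threshold=3.0):
    """Flag `today` if it sits more than `threshold` std devs above baseline."""
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return today != mu
    return (today - mu) / sigma > threshold

baseline = [12, 9, 11, 10, 13, 8, 11, 12, 10, 11]  # typical daily access requests
print(anomalous(baseline, 11))  # an ordinary day -> False
print(anomalous(baseline, 95))  # a spike in access requests -> True
```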

Your compass points to ZTA

Protecting against privilege escalation threats requires more than merely staying up to date on patches. Part of stopping attacks like Zerologon is to re-imagine how security is architected in an organization. Centering identity as the new security perimeter and implementing zero standing privilege are essential to the foundation of a security model known as zero trust architecture (ZTA).

Zero trust architecture has existed for a while in the corporate world. It is gaining attention from the public sector since NIST’s recent publication of SP 800-207, which outlines ZTA and how government agencies can leverage it. NIST’s sanctification of ZTA opened the doors for government entities and civilian contractors to incorporate it into their security models. Taking this route helps to close the privilege escalation pathway, providing your organization safe harbor in the event of another cybersecurity perfect storm.

Can we trust passwordless authentication?

We are beginning to shift away from what has long been our first and last line of defense: the password. It’s an exciting time. Since the beginning, passwords have aggravated people. Meanwhile, passwords have become the de facto first step in most attacks. Yet I can’t help but think, what will the consequences of our actions be?

Intended and unintended consequences

Back when overhead cameras came to the express toll routes in Ontario, Canada, it wasn’t long before the SQL injection to drop tables made its way onto bumper stickers. More recently in California, researcher Joe Tartaro purchased a license plate that said NULL. With the bumper stickers, the story goes, everyone sharing the road would get a few hours of toll-free driving. But with the NULL license plate? Tartaro ended up on the hook for every traffic ticket with no plate specified, to the tune of thousands of dollars.

One organization I advised recently completed an initiative to reduce the number of agents on the endpoint. In a year when many are extending the lifespan and performance of endpoints while eliminating location-dependent security controls, this shift makes strategic sense.

Another CISO I spoke with recently consolidated multi-factor authenticators onto a single platform. Standardizing the user experience and reducing costs is always a pragmatic move. Yet both changes limited future moves: in each case, any initiative by the security team that changed authenticators or added agents ended up stuck in park, waiting for a green light.

Be careful not to limit future moves

To make moves that open up possibilities, security teams think along two lines: usability and defensibility. That is, how will the change impact the workforce, near term and long term? And from the opposite angle, how will the change affect criminal behavior, near term and long term?

Whether decreasing the number of passwords required through single sign-on (SSO) or eliminating the password altogether in favor of a strong authentication factor (passwordless), the priority is on the workforce experience. The number one reason security leaders give for tackling the password problem is improving the user experience. It is a rare security control that makes people’s lives easier, and leadership wants to take full advantage.

There are two considerations when planning for usability. The first is ensuring the tactic addresses the common friction points. For example, with passwordless, does the approach provide access to the devices and applications people work with? Is it more convenient and faster than what they do today? The second consideration is evaluating what the tactic allows the security team to do next. Does the approach to passwordless or SSO block a future initiative due to lock-in? Or will the change enable us to take future steps to secure authentication?

Foiling attackers

The one thing we know for certain is that, whatever steps we take, criminals will take steps to get around us. In the sixty years since the first password leak, we’ve done everything we can, using both machine and man. We’ve encrypted passwords. We’ve hashed them. We’ve increased key length and algorithm strength. At the same time, we’ve asked users to create longer passwords, more complex passwords, unique passwords. We’ve provided security awareness training. None of these steps were taken in a vacuum. Criminals cracked files, created rainbow tables, brute-forced and phished credentials. Sixty years of experience suggests any advancement we make will be met with an advanced attack.
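The hashing-side hardening mentioned above can be sketched with the standard library: a unique salt plus an iterated hash (PBKDF2) defeats rainbow tables and slows brute force in a way a bare digest does not. The iteration count here is illustrative; tune it to your hardware:

```python
# Salted, iterated password hashing with the standard library.
import hashlib, hmac, os

ITERATIONS = 200_000  # illustrative; higher is slower for attackers too

def hash_password(password, salt=None):
    salt = salt or os.urandom(16)  # per-password salt defeats rainbow tables
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, digest

def verify(password, salt, expected):
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return hmac.compare_digest(candidate, expected)  # constant-time comparison

salt, stored = hash_password("correct horse battery staple")
print(verify("correct horse battery staple", salt, stored))  # True
print(verify("hunter2", salt, stored))                       # False
```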

We must increase the trust in authentication while increasing usability, and we must take steps that open up future options. Security teams can increase trust by pairing user authentication with device authentication. Now the adversary must both compromise the authentication and gain access to the device.

To reduce the likelihood of device compromise, set policies to prevent unpatched, insecure, infected, or compromised devices from authenticating. The likelihood can be even further reduced by capturing telemetry, modeling activity, and comparing activity to the user’s baseline. Now the adversary must compromise authentication, gain access to the endpoint device, avoid endpoint detection, and avoid behavior analytics.
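A minimal sketch of that layered decision follows; the posture fields and tolerance are illustrative assumptions, not any vendor's schema:

```python
# Hypothetical layered authentication: credentials, device posture, and
# behavioral baseline must all pass before access is granted.

def device_trusted(device):
    # Policy: unpatched or infected devices may not authenticate.
    return device["patched"] and not device["infected"]

def behavior_normal(activity, baseline, tolerance=3):
    # Compare current activity to the user's modeled baseline.
    return abs(activity - baseline) <= tolerance

def authenticate(credentials_ok, device, activity, baseline):
    # Each layer multiplies the adversary's cost: they must compromise the
    # credentials, the device, and still look behaviorally normal.
    return credentials_ok and device_trusted(device) and behavior_normal(activity, baseline)

healthy = {"patched": True, "infected": False}
print(authenticate(True, healthy, activity=10, baseline=9))              # True
print(authenticate(True, {"patched": False, "infected": False}, 10, 9))  # False
```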

Conclusion

Technology is full of unintended consequences. Some lead to toll-free drives and others lead to unexpected fees. Some open new opportunities, others new vulnerabilities. Today, many are moving to improve user experience by reducing or removing passwords. The consequences won’t be known immediately. We must ensure our approach meets the use cases the workforce cares about while positioning us to address longer-term goals and challenges.

Additionally, we must get ahead of adversaries and criminals. With device trust and behavior analytics, we must increase trust in passwordless authentication. We can’t predict what is to come, but these are steps security teams can take today to better position and protect our organizations.

Review: Netsparker Enterprise web application scanner

Vulnerability scanners can be a very useful addition to any development or operations process. Since a typical vulnerability scanner needs to detect vulnerabilities in deployed software, it is (generally) not dependent on the language or technology used in the application it is scanning.

This often means they aren’t the top choice for detecting subtle bugs or business logic issues, but it makes them great and very common tools for testing a large number of diverse applications, where such dynamic application security testing (DAST) tools are indispensable. This includes testing for security defects in software currently under development as part of an SDLC process, reviewing third-party applications deployed inside one’s network (as part of a due diligence process) or, most commonly, finding issues in all kinds of internally developed applications.

We reviewed Netsparker Enterprise, which is one of the industry’s top choices for web application vulnerability scanning.

Netsparker Enterprise is primarily a cloud-based solution, which means it will focus on applications that are publicly available on the open internet, but it can also scan in-perimeter or isolated applications with the help of an agent, which is usually deployed in a pre-packaged Docker container or a Windows or Linux binary.

To test this product, we wanted to know how Netsparker handles a few things:

1. Scanning workflow
2. Scan customization options
3. Detection accuracy and results
4. CI/CD and issue tracking integrations
5. API and integration capabilities
6. Reporting and remediation efforts

To assess the tool’s detection capabilities, we needed a few targets to scan and assess.

After some thought, we decided on the following targets:

1. DVWA (Damn Vulnerable Web Application) – an old-school, extremely vulnerable application written in PHP. The vulnerabilities in this application should be detected without an issue.
2. OWASP Juice Shop – simulates a modern single-page web application with a REST API backend. It has a JavaScript-heavy interface, WebSockets, a REST API in the backend, and many interesting points and vulnerabilities for testing.
3. VulnAPI – a Python 3-based vulnerable REST API written in the FastAPI framework running on Starlette ASGI, featuring a number of API-based vulnerabilities.

Workflow

After logging in to Netsparker, you are greeted with a tutorial and a “hand-holding” wizard that helps you set everything up. If you’ve worked with a vulnerability scanner before, you might know what to do, but this feature is useful for people who don’t have that experience, e.g., software or DevOps engineers, who should definitely use such tools in their development processes.

Initial setup wizard

Scanning targets can be added manually or through a discovery feature that tries to find them by matching the domain from your email, websites, reverse IP lookups, and other methods. This is a useful feature if your organization doesn’t use other methods of asset management and you can’t easily locate all your assets.

New websites or assets for scanning can be added directly or imported via a CSV or a TXT file. Sites can be organized into groups, which helps with internal structuring on a per-project or per-department basis.

Adding websites for scanning

Scans can be defined per group or per specific host. Scans can be either defined as one-off scans or be regularly scheduled to facilitate the continuous vulnerability remediation process.

To better guide the scanning process, the classic scan scope features are supported. For example, you can define specific URLs as “out-of-scope” either by supplying a full path or a regex pattern – a useful option if you want to skip specific URLs (e.g., logout, user delete functions). Specific HTTP methods can also be marked as out-of-scope, which is useful if you are testing an API and want to skip DELETE methods on endpoints or objects.
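As an illustrative sketch (the patterns below are our own examples, not Netsparker's rule syntax), regex-based out-of-scope filtering amounts to:

```python
# Skip URLs matching any out-of-scope pattern, e.g. logout or delete actions.
import re

OUT_OF_SCOPE = [re.compile(p) for p in (r"/logout\b", r"/users/\d+/delete")]

def in_scope(url):
    # A URL is scanned only if no out-of-scope pattern matches it.
    return not any(pattern.search(url) for pattern in OUT_OF_SCOPE)

print(in_scope("https://example.com/products/42"))     # True - gets scanned
print(in_scope("https://example.com/logout"))          # False - skipped
print(in_scope("https://example.com/users/7/delete"))  # False - skipped
```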

Initial scan configuration

Scan scope options

One feature we quite liked is the support for uploading the “sitemap” or specific request information into Netsparker before scanning. This feature can be used to import a Postman collection or an OpenAPI file to facilitate scanning and improve detection capabilities for complex applications or APIs. Other formats such as CSV, JSON, WADL, WSDL and others are also supported.

For the red team, loading links and information from Fiddler, Burp or ZAP session files is supported, which is useful if you want to expand your automated scanning toolbox. One limitation we encountered is the inability to point to a URL containing an OpenAPI definition – a capability that would be extremely useful for automated and scheduled scanning workflows for APIs that have Swagger web UIs.

Scan policies can be customized and tuned in a variety of ways, from the languages used in the application (ASP/ASP.NET, PHP, Ruby, Java, Perl, Python, Node.js and other), to database servers (Microsoft SQL Server, MySQL, Oracle, PostgreSQL, Microsoft Access and others), to the standard choice of Windows- or Linux-based OSes. Scan optimizations should improve the detection capability of the tool, shorten scanning times, and give us a glimpse of where the tool should perform best.

Integrating Netsparker

The next important question is: does it blend… or integrate? From an integration standpoint, sending emails and SMS messages about scan events is standard, but support for various issue tracking systems like Jira, Bitbucket, GitLab, PagerDuty and TFS is available, and so is support for Slack and CI/CD integration. For everything else, there is a raw API that can be used to tie Netsparker into other solutions, if you are willing to write a bit of integration scripting.

Integration options

One really well-implemented feature is the support for logging into the application under test, as the inability to hold a session and scan from an authenticated context can lead to poor scanning coverage.

Netsparker supports classic form-based login, but 2FA login flows that require TOTP or HOTP are also supported. This is a great feature: you can add the OTP seed and define the period in Netsparker, and you are all set to scan OTP-protected logins. No more shimming or adding code to bypass the 2FA method in order to scan the application.
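This works because TOTP is fully deterministic given the shared seed and the clock. A minimal RFC 6238 sketch using only the standard library shows why a scanner that knows the seed can mint valid codes on its own:

```python
# Minimal RFC 6238 TOTP implementation (SHA-1, 6 digits).
import base64, hashlib, hmac, struct, time

def totp(secret_b32, period=30, digits=6, at=None):
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int((time.time() if at is None else at) // period)
    mac = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F  # dynamic truncation per RFC 4226
    code = (struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF) % 10 ** digits
    return str(code).zfill(digits)

# RFC 6238 test vector: ASCII seed "12345678901234567890", T = 59 seconds.
print(totp("GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ", at=59))  # 287082
```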

Authentication methods

What’s more, Netsparker enables you to create a custom script for complex login flows or JavaScript/CSS-heavy login pages. I was pleasantly surprised that instead of reading complex documentation, I just needed to right-click on the DOM elements, add them to the script, and press Next.

Custom scripting workflow for authentication

If we had to nitpick, we might point out that it would be great if Netsparker also supported U2F / FIDO2 implementations (by software emulating the CTAP1 / CTAP2 protocol), since that would cover the most secure 2FA implementations.

In addition to form-based authentication, Basic, NTLM/Kerberos, header-based (for JWTs), client certificate and OAuth2-based authentication are also supported, which makes it easy to authenticate to almost any enterprise application. The login/logout flow is also verified and supported through a custom dialog, where you can verify that the supplied credentials work and configure how to retain the session.

Login verification helper

Scanning accuracy

And now for the core of this review: what Netsparker did and did not detect.

In short, everything from DVWA was detected, except broken client-side security, which by definition is almost impossible to detect with security scanning if custom rules aren’t written. So, from a “classic” application point of view, the coverage is excellent, even the out-of-date software versions were flagged correctly. Therefore, for normal, classic stateful applications, written in a relatively new language, it works great.

From a modern JavaScript-heavy single-page application point of view, Netsparker correctly discovered the backend API interface from the user interface, and detected a decently complex SQL injection vulnerability, where it was not enough to trigger a ' or 1=1 type of vector; the vector had to be adjusted to properly escape out of the initial query.
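As an illustration of that kind of adjustment (this is our own toy example, not Juice Shop's actual code), a payload must escape the exact query context it lands in:

```python
# Why a naive ' OR 1=1 payload can fail while a context-aware vector succeeds.
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE products (name TEXT, hidden INTEGER)")
db.execute("INSERT INTO products VALUES ('apple', 0), ('secret', 1)")

def search(term):
    # Vulnerable string concatenation; input lands inside (...) AND hidden = 0
    sql = f"SELECT name FROM products WHERE (name LIKE '%{term}%') AND hidden = 0"
    return [row[0] for row in db.execute(sql)]

try:
    search("' OR 1=1 --")  # leaves an unbalanced parenthesis -> syntax error
except sqlite3.OperationalError:
    print("naive payload fails")

# The adjusted vector first closes both the open quote and the parenthesis:
print(search("%') OR 1=1 --"))  # ['apple', 'secret'] - the hidden row leaks
```

Parameterized queries (e.g., `db.execute("... LIKE ?", (pattern,))`) remove the injection entirely, which is what the scanner's finding ultimately calls for.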

Netsparker correctly detected a stored XSS vulnerability in the reviews section of the Juice Shop product screen. The vulnerable application section is a JavaScript-heavy frontend, with a RESTful API in the backend that facilitates the vulnerability. Even the DOM-based XSS vulnerability was detected, although the specific vulnerable endpoint was marked as the search API and not the sink that is the entry point for DOM XSS. On the positive side, the vulnerability was marked as “Possible” and a manual security review would find the vulnerable sink.

One interesting point for vulnerability detection is that Netsparker uses an engine that tries to verify if the vulnerability is exploitable and will try to create a “proof” of vulnerability, which reduces false positives.

On the negative side, no vulnerabilities in WebSocket-based communications were found, and neither was the API endpoint that implemented insecure YAML deserialization with PyYAML. By reviewing the Netsparker knowledge base, we also found that there is no support for WebSocket or deserialization vulnerabilities.
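For context, the PyYAML pitfall referenced here is calling yaml.load() on untrusted input without SafeLoader (yaml.safe_load avoids it). Since PyYAML is a third-party package, this standard-library sketch uses pickle to demonstrate the same class of flaw, deserialization that executes attacker-chosen code:

```python
# Insecure deserialization in miniature: unpickling untrusted bytes runs code.
import pickle

class Payload:
    def __reduce__(self):
        # An attacker-controlled object may name any callable to run on load.
        return (eval, ("6 * 7",))

malicious = pickle.dumps(Payload())
result = pickle.loads(malicious)  # runs eval("6 * 7") instead of rebuilding Payload
print(result)  # 42 - the attacker's expression executed
```

A black-box scanner has little to observe here unless the executed payload produces a visible side effect, which is part of why this vulnerability class is hard for DAST tools.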

That’s certainly not a dealbreaker, but something that needs to be taken into account. This also reinforces the need to use a SAST-based scanner (even if just a free, open-source one) in the application security scanning stack to improve test coverage, in addition to other, manual security review processes.

Reporting capability

Multiple levels of detail (from extensive, through executive summary, to PCI DSS level) are supported, in both PDF and HTML export options. One nice feature we found is the ability to create F5 and ModSecurity rules for virtual patching. Also, scanned and crawled URLs can be exported from the reporting section, so it’s easy to review whether your scanner hit any specific endpoints.

Scan results dashboard

Scan result details

Instead of describing the reports, we decided to export a few and attach them to this review for your enjoyment and assessment. All of them have been submitted to VirusTotal for our more cautious readers.

Netsparker’s reporting capabilities satisfy our requirements: the reports contain everything a security or AppSec engineer or a developer needs.

Since Netsparker integrates with JIRA and other ticketing systems, the general vulnerability management workflow for most teams will be supported. For lone security teams, or where modern workflows aren’t integrated, Netsparker also has an internal issue tracking system that will let the user track the status of each found issue and run rescans against specific findings to see if mitigations were properly implemented. So even if you don’t have other methods of triage or processes set up as part of a SDLC, you can manage everything through Netsparker.

Verdict

Netsparker is extremely easy to set up and use. The wide variety of integrations allow it to be integrated into any number of workflows or management scenarios, and the integrated features and reporting capabilities have everything you would want from a standalone tool. As far as features are concerned, we have no objections.

The login flow support – from the simple interface and 2FA support all the way to the scripting interface that makes it easy to authenticate even in more complex environments – together with the option to report on the scanned and crawled endpoints, helps users discover their scanning coverage.

Taking into account the fact that this is an automated scanner that relies on “black boxing” a deployed application, without any instrumentation of the deployed environment or source code scanning, we think it is very accurate, though it could be improved (e.g., by adding the capability to detect deserialization vulnerabilities). Following the review, Netsparker confirmed that adding this capability is included in the product development plans.

Nevertheless, we can highly recommend Netsparker.

New research shows risk in healthcare supply chain

Exposures and cybersecurity challenges can turn out to be costly: according to statistics from the US Department of Health and Human Services (HHS), 861 breaches of protected health information have been reported over the last 24 months.

New research from RiskRecon and the Cyentia Institute pinpointed risk in the third-party healthcare supply chain and showed that healthcare’s high exposure rate indicates that managing even a comparatively small internet footprint is a big challenge for many organizations in the sector.

But there is a silver lining: gaining the visibility needed to pinpoint and rectify exposures in the healthcare risk surface is feasible.

Key findings

The research and report are based on RiskRecon’s assessment of more than five million internet-facing systems across approximately 20,000 organizations, focusing exclusively on the healthcare sector.

Highest rate

Healthcare has one of the highest average rates of severe security findings relative to other industries. Furthermore, those rates vary hugely across institutions, meaning the worst exposure rates in healthcare are worse than the worst exposure rates in other sectors.

Size matters

The rate of severe security findings decreases as employee count increases. For example, the rate of severe security findings in the smallest healthcare providers is 3x higher than that of the largest providers.

Sub sectors vary

Sub sectors within healthcare reveal different risk trends. The research shows that hospitals have a much larger Internet surface area (hosts, providers, countries), but maintain relatively low rates of security findings. Additionally, the nursing and residential care sub-sector has the smallest Internet footprint yet the highest levels of exposure. Outpatient (ambulatory) and social services mostly fall in between hospitals and nursing facilities.

Cloud deployment impacts

As digital transformation ushers in a plethora of changes, critical areas of risk exposure are also changing and expanding. While most healthcare firms host a majority of their Internet-facing systems on-prem, they do also leverage the cloud. We found that healthcare’s severe finding rate for high-value assets in the cloud is 10 times that of on-prem. This is the largest on-prem versus cloud exposure imbalance of any sector.

It must also be noted that not all cloud environments are the same. A previous RiskRecon report on the cloud risk surface discovered an average 12 times the difference between cloud providers with the highest and lowest exposure rates. This says more about the users and use cases of various cloud platforms than intrinsic security inequalities. In addition, as healthcare organizations look to migrate to the cloud, they should assess their own capabilities for handling cloud security.

The healthcare supply chain is at risk

It’s important to realize that the broader healthcare ecosystem spans numerous industries, and these entities often have deep connections into healthcare providers’ facilities, operations, and information systems. This means those organizations can have significant ramifications for third-party risk management.

When you dig into it, even though big pharma has the biggest footprint (hosts, third-party service providers, and countries of operation), they keep it relatively hygienic. Manufacturers of various types of healthcare apparatus and instruments show a similar profile of extensive assets yet fewer findings. Unfortunately, the information-heavy industries of medical insurance, EHR systems providers, and collection agencies occupy three of the top four slots for the highest rate of security findings.

“In 2020, Health Information Sharing and Analysis Center (H-ISAC) members across healthcare delivery, big pharma, payers and medical device manufacturers saw increased cyber risks across their evolving and sometimes unfamiliar supply chains,” said Errol Weiss, CSO at H-ISAC.

“Adjusting to the new operating environment presented by COVID-19 forced healthcare companies to rapidly innovate and adopt solutions like cloud technology that also added risk with an expanded digital footprint to new suppliers and partners with access to sensitive patient data.”

Three best practices for responsible open source usage in the COVID-19 era

COVID-19 has forced developer agility into overdrive, as the tech industry’s quick push to adapt to changing dynamics has accelerated digital transformation efforts and necessitated the rapid introduction of new software features, patches, and functionalities.


During this time, organizations across both the private and public sector have been turning to open source solutions as a means to tackle emerging challenges while retaining the rapidity and agility needed to respond to evolving needs and remain competitive.

Since well before the pandemic, software developers have leveraged open source code as a means to speed development cycles. The ability to leverage pre-made packages of code rather than build software from the ground up has enabled them to save valuable time. However, the rapid adoption of open source has not come without its own security challenges, which developers and organizations should resolve safely.

Here are some best practices developers should follow when implementing open source code to promote security:

Know what and where open source code is in use

First and foremost, developers should create and maintain a record of where open source code is being used across the software they build. Applications today are usually designed using hundreds of unique open source components, which then reside in their software and workspaces for years.

As these open source packages age, there is an increasing likelihood of vulnerabilities being discovered in them and publicly disclosed. If the use of components is not closely tracked against the countless new vulnerabilities discovered every year, software leveraging these components becomes open to exploitation.

Attackers understand all too well how often teams fall short in this regard, and software intrusions via known open source vulnerabilities are a highly common source of breaches. Tracking open source code usage along with vigilance around updates and vulnerabilities will go a long way in mitigating security risk.
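As a minimal sketch of such tracking, matching an inventory of deployed component versions against a feed of known-vulnerable versions is enough to surface exploitable software. The package names, versions, and advisory data below are hypothetical; in practice the inventory would come from a software bill of materials (SBOM) and the advisories from a vulnerability database.

```python
# Sketch: flag inventoried open source components affected by known advisories.
# All package names, versions, and advisory entries are illustrative examples.

# Component inventory: package name -> deployed version (hypothetical SBOM data)
inventory = {
    "libxml-wrapper": "2.9.1",
    "json-parse": "1.4.0",
    "old-crypto": "0.8.2",
}

# Advisories: package name -> set of affected versions (hypothetical feed data)
advisories = {
    "old-crypto": {"0.8.1", "0.8.2"},
    "json-parse": {"1.3.9"},
}

def vulnerable_components(inventory, advisories):
    """Return sorted (package, version) pairs whose deployed version is affected."""
    return sorted(
        (pkg, ver)
        for pkg, ver in inventory.items()
        if ver in advisories.get(pkg, set())
    )

print(vulnerable_components(inventory, advisories))
# -> [('old-crypto', '0.8.2')]
```

Even this naive exact-version match illustrates the payoff: without the inventory, the vulnerable `old-crypto` deployment would go unnoticed.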

Understand the risks before adopting open source

Aside from tracking vulnerabilities in the code that’s already in use, developers must do their research on open source components before adopting them to begin with. While an obvious first step is ensuring that there are no known vulnerabilities in the component in question, other factors should be considered focused on the longevity of the software being built.

Teams should carefully consider the level of support offered for a given component. It’s important to get satisfactory answers to questions such as:

  • How often is the component patched?
  • Are the patches of high quality and do they address the most pressing security issues when released?
  • Once implemented, are they communicated effectively and efficiently to the user base?
  • Is the group or individual who built the component a trustworthy source?

Leverage automation to mitigate risk

It’s no secret that COVID-19 has altered developers’ working conditions. In fact, 38% of developers are now releasing software monthly or faster, up from 27% in 2018. But this increased pace often comes paired with unwanted budget cuts and organizational changes. As a result, the imperative to “do more with less” has become a rallying cry for business leaders. In this context, it is indisputable that automation across the entire IT security portfolio has skyrocketed to the top of the list of initiatives designed to improve operational efficiency.

While already an important asset for achieving true DevSecOps agility, automated scanning technology has become near-essential for any organization attempting to stay secure while leveraging open source code. Manually tracking and updating open source vulnerabilities across an organization’s entire software suite is hard work that only increases in difficulty with the scale of an organization’s software deployments. And what was inefficient in normal times has become unfeasible in the current context.

Automated scanning technologies alleviate the burden of open source security by handling processes that would otherwise take up precious time and resources. These tools are able to detect and identify open source components within applications, provide detailed risk metrics regarding open source vulnerabilities, and flag outdated libraries for developers to address. Furthermore, they provide detailed insight into thousands of public open source vulnerabilities, security advisories and bugs, to ensure that when components are chosen they are secure and reputable.

Finally, these tools help developers prioritize and triage remediation efforts once vulnerabilities are identified. Equipped with the knowledge of which vulnerabilities present the greatest risk, developers are able to allocate resources most efficiently to ensure security does not get in the way of timely release cycles.

Confidence in a secure future

When it comes to open source security, vigilance is the name of the game. Organizations must be sure to reiterate the importance of basic best practices to developers as they push for greater speed in software delivery.

While speed has long been understood to come at the cost of software security, this type of outdated thinking cannot persist, especially when technological advancements in automation have made such large strides in eliminating this classically understood tradeoff. By following the above best practices, organizations can be more confident that their COVID-19 driven software rollouts will be secure against issues down the road.

The brain of the SIEM and SOAR

SIEM and SOAR solutions are important tools in a cybersecurity stack. They gather a wealth of data about potential security incidents throughout your system and store that info for review. But just like nerve endings in the body sending signals, what good are these signals if there is no brain to process, categorize and correlate this information?


A vendor-agnostic XDR (Extended Detection and Response) solution is a necessary component for solving the data overload problem – a “brain” that examines all of the past and present data collected and assigns a collective meaning to the disparate pieces. Without this added layer, organizations are unable to take full advantage of their SIEM and SOAR solutions.

So, how do organizations implement XDR? Read on.

SIEM and SOAR act like nerves

It’s easy for solutions with acronyms to cause confusion. SOAR and SIEM are perfect examples, as they are two very different technologies that often get lumped together. They aren’t the same thing, and they do bring complementary capabilities to the security operations center, but they still don’t completely close the automation gap.

The SIEM is a decades-old solution that uses technology from that era to solve specific problems. At their core, SIEMs are data collection, workflow and rules engines that enable users to sift through alerts and group things together for investigation.

In the last several years, SOAR has been the favorite within the security industry’s marketing landscape. Just as the SIEM runs on rules, the SOAR runs on playbooks. These playbooks let an analyst automate steps in the event detection, enrichment, investigation and remediation process. And just like with SIEM rules, someone has to write and update them.

Because many organizations already have a SIEM, it seemed reasonable for the SOAR providers to start with automating the output from the SIEM tool or security platform console. So: Security controls send alerts to a SIEM > the SIEM uses rules written by the security team to filter down the number of alerts to a much smaller number, usually 1,000,000:1 > SIEM events are sent to the SOAR, where playbooks written by the security team use workflow automation to investigate and respond to the alerts.
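The funnel just described can be sketched as follows. The alert fields, severity threshold, and playbook actions are illustrative inventions, not taken from any particular SIEM or SOAR product.

```python
# Sketch of the alert funnel: controls emit alerts, SIEM rules filter them
# down, and a SOAR playbook decides a response for each surviving event.
# All field names and rule logic are illustrative.

alerts = [
    {"source": "edr", "severity": 9, "host": "db01"},
    {"source": "firewall", "severity": 2, "host": "ws17"},
    {"source": "edr", "severity": 8, "host": "web03"},
    {"source": "proxy", "severity": 3, "host": "ws22"},
]

# SIEM stage: rules written by the security team (here: a severity threshold)
def siem_filter(alerts, min_severity=7):
    return [a for a in alerts if a["severity"] >= min_severity]

# SOAR stage: a playbook enriches each event and picks a response action
def soar_playbook(event):
    action = "isolate_host" if event["source"] == "edr" else "open_ticket"
    return {**event, "action": action}

events = siem_filter(alerts)            # 4 alerts in, 2 events out
responses = [soar_playbook(e) for e in events]
print(responses)
```

Note that every step depends on hand-written logic (the threshold, the playbook branch), which is exactly the gap the article says trained staff must fill.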

SOAR investigation playbooks attempt to contextualize the events with additional data – often the same data that the SIEM has filtered out. Writing these investigation playbooks can occupy your security team for months, and even then, they only cover a few scenarios and automate simple tasks like VirusTotal lookups.

The verdict is that SOARs and SIEMs purport to perform all the actions necessary to automate the screening of alerts, but the technology in itself cannot do this. It requires trained staff to bring forth this capability by writing rules and playbooks.

Coming back to the analogy, this data can be compared to the nerves flowing through the human body. They fire off alerts that something has happened – alerts that mean nothing without a processing system that can gather context and explain what has happened.

Giving the nerves a brain

What the nerves need is a brain that can receive and interpret their signals. An XDR engine, powered by Bayesian reasoning, is a machine-powered brain that can investigate any output from the SIEM or SOAR at speed and scale. This replaces traditional Boolean logic (that is, searching for things that IT teams already know to be somewhat suspicious) with a much richer way to reason about the data.

This additional layer of understanding will work out of the box with the products an organization already has in place to provide key correlation and context. For instance, imagine that a malicious act occurs. That malicious act is going to be observed by multiple types of sensors. All of that information needs to be put together, along with the context of the internal systems, the external systems and all of the other things that integrate at that point. This gives the system the information needed to know the who, what, when, where, why and how of the event.

This is what the system’s brain does. It boils all of the data down to: “I see someone bad doing something bad. I have discovered them. And now I am going to manage them out.” What the XDR brain is going to give the IT security team is more accurate, consistent results, fewer false positives and faster investigation times.
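A toy illustration of the difference: a Boolean rule fires only when a specific condition matches, whereas a Bayesian engine combines several weak signals into a posterior probability. The signal names and likelihood values below are invented purely for illustration.

```python
# Sketch: Bayesian scoring of an event observed by multiple sensors.
# The prior and the per-signal likelihoods are made-up illustrative numbers.

prior = 0.01  # prior probability that any given event is malicious

# Each observed signal: (name, P(signal | malicious), P(signal | benign))
signals = [
    ("odd_hour_login", 0.6, 0.10),
    ("new_geo",        0.5, 0.05),
    ("rare_process",   0.7, 0.02),
]

def posterior(prior, signals):
    """Naive-Bayes style update: fold each likelihood ratio into the odds."""
    odds = prior / (1 - prior)
    for _name, p_mal, p_ben in signals:
        odds *= p_mal / p_ben
    return odds / (1 + odds)

p = posterior(prior, signals)
print(f"P(malicious | signals) = {p:.3f}")
```

No single signal here would trip a sensible Boolean rule, yet together they push a 1% prior above 95%, which is the kind of correlation the "brain" contributes.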

How to apply an XDR brain

To get started with integrating XDR into your current system, take these three steps:

1. Deploy a solution that is vendor-agnostic and works out of the box. This XDR layer of security doesn’t need playbooks or rules. It changes the foundation of your security program and how your staff do their work. This reduces your commitment in time and budget for security engineering, or at least enables you to redirect it.

2. Turn your sensors all the way up. It has become much easier in the last several years to collect, store and – to some extent – analyze data; in particular, cloud architectures offer simple and cost-effective options for collecting and storing vast quantities of it. For this reason, it’s now possible to capture everything rather than letting in just a small stream of data.

3. Decide which risk reduction projects are critical for the team. Automation should release security professionals from mundane tasks so they can focus on high-value actions that truly reduce risk, like incident response, hunting and tuning security controls. There may also be budget that is freed up for new technology or service purchases.

Reading the signals

To make the most of SOARs and SIEMs, you need XDR – a tool that will take the data collected and add the context needed to turn thousands of alerts into one complete situation that is worth investigating.

The XDR layer is an addition to a company’s cybersecurity strategy that will most effectively use SIEM and SOAR, giving all those nerve signals a genius brain that can sort them out and provide the context needed in today’s cyber threat landscape.

In the era of AI, standards are falling behind

According to a recent study, only a minority of software developers are actually working in a software development company. This means that nowadays literally every company builds software in some form or another.


As a professional in the field of information security, it is your task to protect information, assets, and technologies. Obviously, the software built by or for your company that is collecting, transporting, storing, processing, and finally acting upon your company’s data, is of high interest. Secure development practices should be enforced early on and security must be tested during the software’s entire lifetime.

Within the (ISC)² common body of knowledge for CISSPs, software development security is listed as an individual domain. Several standards and practices covering security in the Software Development Lifecycle (SDLC) are available: ISO/IEC 27034:2011, ISO/IEC TR 15504, or NIST SP 800-64 Revision 2, to name some.

All of the above ask for continuous assessment and control of artifacts on the source-code level, especially regarding coding standards and Common Weakness Enumerations (CWE), but only briefly mention static application security testing (SAST) as a possible way to address these issues. In the search for possible concrete tools, NIST provides SP 500-268 v1.1 “Source Code Security Analysis Tool Function Specification Version 1.1”.

In May 2019, NIST withdrew the aforementioned SP800-64 Rev2. NIST SP 500-268 was published over nine years ago. This seems to be symptomatic for an underlying issue we see: the standards cannot keep up with the rapid pace of development and change in the field.

A good example is the rise of the programming language Rust, which addresses a major source of security issues in the classically used language C++ – namely, memory management. Major players in the field such as Microsoft and Google saw great advantages and announced that they would steer future development towards Rust. Yet while the standards acknowledge that some development languages are superior to others, neither the mechanisms used by Rust nor Rust itself is mentioned.

In the field of Static Code Analysis, the information in NIST SP 500-268 is not wrong, but the paper simply does not mention advances in the field.

Let us briefly discuss two aspects: First, the wide use of open source software gave us insight into a vast quantity of source code changes and the reasoning behind them (security, performance, style). On top of that, we have seen increasing capacities of CPU power to process this data, accompanied by algorithmic improvements. Nowadays, we have a large lake of training data available. To use our company as an example, in order to train our underlying model for C++ alone, we are scanning changes in over 200,000 open source projects with millions of files containing rich history.

Secondly, in the past decade, we’ve witnessed tremendous advances in machine learning. We see tools like GPT-3 and their applications in source code being discussed widely. Classically, static source code analysis was the domain of Symbolic AI—facts and rules applied to source code. The realm of source code is perfectly suited for this approach since software source code has a well-defined syntax and grammar. The downside is that these rules were developed by engineers, which limits the pace in which rules can be generated. The idea would be to automate the rule construction by using machine learning.

Recently, we see research in the field of machine learning being applied to source code. Again, let us use our company as an example: By using the vast amount of changes in open source, our system looks out for patterns connected to security. It presents possible rules to an engineer together with found cases in the training set—both known and fixed, as well as unknown.

Also, the system supports parameters in the rules. Possible values for these parameters are collected by the system automatically. As a practical example, taint analysis follows incoming data to its use inside of the application to make sure the data is sanitized before usage. The system automatically learns possible sources, sanitization, and sink functions.
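A toy version of this source/sanitizer/sink model can be sketched as follows. The function names and the tiny straight-line program representation are invented for illustration; they stand in for the kinds of functions the article says a system can learn automatically.

```python
# Toy taint analysis over a tiny straight-line program, in the spirit of the
# source/sanitizer/sink model described above. All function names are
# illustrative stand-ins for automatically learned sets.

SOURCES = {"read_request_param"}   # functions that introduce tainted data
SANITIZERS = {"escape_html"}       # functions that clean tainted data
SINKS = {"render_page"}            # functions where tainted data is dangerous

# Program as (target_var, function, argument_var) triples
program = [
    ("q", "read_request_param", None),   # q becomes tainted
    ("r", "escape_html", "q"),           # r holds a sanitized copy of q
    (None, "render_page", "q"),          # BUG: tainted q reaches a sink
    (None, "render_page", "r"),          # ok: sanitized value reaches the sink
]

def find_taint_violations(program):
    tainted = set()
    violations = []
    for line_no, (target, func, arg) in enumerate(program):
        if func in SOURCES:
            tainted.add(target)            # source output is tainted
        elif func in SANITIZERS and arg in tainted:
            pass                           # target stays untainted
        elif func in SINKS and arg in tainted:
            violations.append((line_no, func, arg))
    return violations

print(find_taint_violations(program))
# -> [(2, 'render_page', 'q')]
```

The hard part in practice is not this propagation loop but discovering which functions belong in the three sets, which is exactly what the described system learns from open source history.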

Back to the NIST Special Papers: With the withdrawal of SP 800-64 Rev 2, users were pointed to NIST SP 800-160 Vol 1 for the time being until a new, updated white paper is published. This was at the end of May 2019. The nature of these papers is to only describe high-level best practices, list some examples, and stay rather vague in concrete implementation. Yet, the documents are the basis for reviews and audits. Given the importance of the field, it seems as if a major component is missing. It is also time to think about processes that would help us to keep up with the pace of technology.

CPRA: More opportunity than threat for employers

Increasingly demanded by consumers, data privacy laws can create onerous burdens on even the most well-meaning businesses. California presents plenty of evidence to back up this statement, as more than half of organizations that do business in California still aren’t compliant with the California Consumer Privacy Act (CCPA), which went into effect earlier this year.


As companies struggle with their existing compliance requirements, many fear that a new privacy ballot initiative – the California Privacy Rights Act (CPRA) – could complicate matters further. While it’s true that if passed this November, the CPRA would fundamentally change the way businesses in California handle both customer and employee data, companies shouldn’t panic. In fact, this law presents an opportunity for organizations to change their relationship with employee data to their benefit.

CPRA, the Californian GDPR?

Set to appear on the November 2020 ballot, the CPRA, also known as CCPA 2.0 or Prop 24 (its name on the ballot), builds on what is already the most comprehensive data protection law in the US. In essence, the CPRA will bring data protection in California nearer to the current European legal standard, the General Data Protection Regulation (GDPR).

In the process of “getting closer to GDPR,” the CCPA would gain substantial new components. Besides enhancing consumer rights, the CPRA also creates new provisions for employee data as it relates to their employers, as well as data that businesses collect from B2B business partners.

Although controversial, the CPRA is likely to pass. August polling shows that more than 80% of voters support the measure. However, many businesses do not. This is because, at first glance, the CPRA appears to create all kinds of legal complexities in how employers can and cannot collect information from workers.

Fearful of having to meet the same demanding requirements as their European counterparts, many organizations’ natural reaction towards the prospect of CPRA becoming law is fear. However, this is unfounded. In reality, if the CPRA passes, it might not be as scary as some businesses think.

CPRA and employment data

The CPRA is actually a lot more lenient than the GDPR in how it polices the relationship between employers and employees’ data. Unlike its EU equivalent, the proposed Californian law already contains many exceptions acknowledging that worker-employer relations are not like consumer-vendor relations.

Moreover, the CPRA extends the CCPA exemption for employers, set to end on January 1, 2021. This means that if the CPRA passes into law, employers would be released from both their existing and potential new employee data protection obligations for two more years, until January 1, 2023. This exemption would apply to most provisions under the CPRA, including the personal information collected from individuals acting as job applicants, staff members, employees, contractors, officers, directors, and owners.

However, employers would still need to provide notice of data collection and maintain safeguards for personal information. It’s highly likely that during this two-year window, additional reforms would be passed that might further ease employer-employee data privacy requirements.

Nonetheless, employers should act now

While the CPRA won’t change much overnight, impacted organizations shouldn’t wait to take action, but should take this time to consider what employee data they collect, why they do so, and how they store this information.

This is especially pertinent now that businesses are collecting more data than ever on their employees. With workplace monitoring companies like Prodoscore reporting that interest from prospective customers has risen by 600% since the pandemic began, we are seeing rapid growth in companies looking to monitor how, where, and when their employees work.

This trend emphasizes the fact that the information flow between companies and their employees is mostly one-sided (i.e., from the worker to the employer). Currently, businesses have no legal requirement to be transparent about this information exchange. That will change for California-based companies if the CPRA comes into effect and they will have no choice but to disclose the type of data they’re collecting about their staff.

The only sustainable solution for impacted businesses is to be transparent about their data collection with employees and work towards creating a “culture of privacy” within their organization.

Creating a culture of privacy

Rather than viewing employee data privacy as some perfunctory obligation where the bare minimum is done for the sake of appeasing regulators, companies need to start thinking about worker privacy as a benefit. Presented as part of a benefits package, comprehensive privacy protection is a perk that companies can offer prospective and existing employees.

Privacy benefits can include access to privacy protection services that give employees privacy benefits beyond the workplace. Packaged alongside privacy awareness training and education, these can create privacy plus benefits that can be offered to employees alongside standard perks like health or retirement plans. Doing so will build a culture of privacy which can help companies ensure they’re in regulatory compliance, while also making it easier to attract qualified talent and retain workers.

It’s also worth bearing in mind that creating a culture of privacy doesn’t necessarily mean that companies have to stop monitoring employee activity. In fact, employees are less worried about being watched than they are about the possibility of their employers misusing their data. Their fears are well-founded. Although over 60% of businesses today use workforce data, only 3 in 10 business leaders are confident that this data is treated responsibly.

For this reason, companies that want to keep employee trust and avoid bad PR need to prioritize transparency. This could mean drawing up a “bill of rights” that lets employees know what data is being collected and how it will be used.

Research into employee satisfaction backs up the value of transparency. Studies show that while only 30% of workers are comfortable with their employer monitoring their email, the number of employees open to the use of workforce data goes up to 50% when the employer explains the reasons for doing so. This number further jumps to 92% if employees believe that data collection will improve their performance or well-being or come with other personal benefits, like fairer pay.

On the other hand, most employees would leave an organization if its leaders did not use workplace data responsibly. Moreover, 55% of candidates would not even apply for a job with such an organization in the first place.

Final thoughts

With many exceptions for workplace data management already built-in and more likely to come down the line, most employers should be able to easily navigate the stipulations CPRA entails.

That being said, if it becomes law this November, employers shouldn’t misuse the two-year window they have to prepare for new compliance requirements. Rather than seeing this time as breathing space before a regulatory crackdown, organizations should instead use it to be proactive in their approach to how they manage their employees’ data. As well as just ensuring they comply with the law, businesses should look at how they can turn employee privacy into an asset.

As data privacy stays at the forefront of employees’ minds, businesses that can show they have a genuine privacy culture will be able to gain an edge when it comes to attracting and retaining talent and, ultimately, coming out on top.

How to build up cybersecurity for medical devices

Manufacturing medical devices with cybersecurity firmly in mind is an endeavor that, according to Christopher Gates, an increasing number of manufacturers is trying to get right.


Healthcare delivery organizations have started demanding better security from medical device manufacturers (MDMs), he says, and many have implemented secure procurement processes and contract language for MDMs that address the cybersecurity of the device itself, secure installation, cybersecurity support for the life of the product in the field, liability for breaches caused by a device not following current best practice, ongoing support for events in the field, and so on.

“For someone like myself who has been focused on cybersecurity at MDMs for over 12 years, this is excellent progress as it will force MDMs to take security seriously or be pushed out of the market by competitors who do take it seriously. Positive pressure from MDMs is driving cybersecurity forward more than any other activity,” he told Help Net Security.

Gates is a principal security architect at Velentium and one of the authors of the recently released Medical Device Cybersecurity for Engineers and Manufacturers, a comprehensive guide to medical device secure lifecycle management, aimed at engineers, managers, and regulatory specialists.

In this interview, he shares his knowledge regarding the cybersecurity mistakes most often made by manufacturers, on who is targeting medical devices (and why), his view on medical device cybersecurity standards and initiatives, and more.

[Answers have been edited for clarity.]

Are attackers targeting medical devices with a purpose other than to use them as a way into a healthcare organization’s network?

The easy answer to this is “yes,” since many MDMs in the medical device industry perform “competitive analysis” on their competitors’ products. It is much easier and cheaper for them to have a security researcher spend a few hours extracting an algorithm from a device for analysis than to spend months or even years of R&D work to pioneer a new algorithm from scratch.

Also, there is a large, hundreds-of-millions-of-dollars industry of companies who “re-enable” consumed medical disposables. This usually requires some fairly sophisticated reverse-engineering to return the device to its factory default condition.

Lastly, the medical device industry, when grouped together with the healthcare delivery organizations, constitutes part of critical national infrastructure. Other industries in that class (such as nuclear power plants) have experienced very directed and sophisticated attacks targeting safety backups in their facilities. These attacks seem to be initial testing of a cyber weapon that may be used later.

While these are clearly nation-state level attacks, you have to wonder if these same actors have been exploring medical devices as a way to inhibit our medical response in an emergency. I’m speculating: we have no evidence that this has happened. But then again, if it has happened there likely wouldn’t be any evidence, as we haven’t been designing medical devices and infrastructure with the ability to detect potential cybersecurity events until very recently.

What are the most often exploited vulnerabilities in medical devices?

It won’t come as a surprise to anyone in security when I say “the easiest vulnerabilities to exploit.” An attacker is going to start with the obvious ones, and then increasingly get more sophisticated. Mistakes made by developers include:

Unsecured firmware updating

I personally always start with software updates in the field, as they are so frequently implemented incorrectly. An attacker’s goal here is to gain access to the firmware with the intent of reverse-engineering it back into easily-readable source code that will yield more widely exploitable vulnerabilities (e.g., one impacting every device in the world). All firmware update methods have at least three very common potential design vulnerabilities. They are:

  • Exposure of the binary executable (i.e., it isn’t encrypted)
  • Corrupting the binary executable with added code (i.e., there isn’t an integrity check)
  • A rollback attack which downgrades the firmware to an older version with known exploitable vulnerabilities (i.e., there isn’t metadata conveying the version information)
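The three design vulnerabilities above can be illustrated with a minimal sketch of update-image verification. This is an illustration only, not any particular vendor's scheme: it assumes a symmetric device key for brevity, where a real design would typically encrypt the payload for confidentiality and use asymmetric signatures with a hardware-protected key.

```python
import hashlib
import hmac
import struct

# Hypothetical shared key for illustration; production devices would use
# asymmetric signatures and a hardware-protected key store.
DEVICE_KEY = b"example-device-key-not-for-production"

def package_firmware(binary: bytes, version: int) -> bytes:
    """Build an update image: version metadata + payload + HMAC tag."""
    header = struct.pack(">I", version)          # version metadata (anti-rollback)
    tag = hmac.new(DEVICE_KEY, header + binary, hashlib.sha256).digest()
    return header + binary + tag

def verify_update(image: bytes, installed_version: int) -> bytes:
    """Reject tampered images and downgrades; return the payload if valid."""
    header, payload, tag = image[:4], image[4:-32], image[-32:]
    expected = hmac.new(DEVICE_KEY, header + payload, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("integrity check failed")   # corrupted/modified image
    (version,) = struct.unpack(">I", header)
    if version <= installed_version:
        raise ValueError("rollback attempt")         # downgrade blocked
    return payload
```

The tag covers both the header and the payload, so an attacker cannot splice a valid old payload onto a newer version number; the explicit version comparison is what closes the rollback hole.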

Overlooking physical attacks

Physical attacks can be mounted:

  • Through an unsecured JTAG/SWD debugging port
  • Via side-channel (power monitoring, timing, etc.) exploits to expose the values of cryptographic keys
  • By sniffing internal busses, such as SPI and I2C
  • By exploiting flash memory external to the microcontroller (a $20 cable can get it to dump all of its contents)

Manufacturing support left enabled

Almost every medical device needs certain functions to be available during manufacturing. These are usually for testing and calibration, and none of them should be functional once the device is fully deployed. Manufacturing commands are frequently documented in PDF files used for maintenance, and often only have minor changes across product/model lines inside the same manufacturer, so a little experimentation goes a long way in letting an attacker get access to all kinds of unintended functionality.

No communication authentication

Just because a communications medium connects two devices doesn’t mean that the device being connected to is the device that the manufacturer or end-user expects it to be. No communications medium is inherently secure; it’s what you do at the application level that makes it secure.

Bluetooth Low Energy (BLE) is an excellent example of this. Immediately following a pairing (or re-pairing), a device should always, always perform a challenge-response process (which utilizes cryptographic primitives) to confirm it has paired with the correct device.
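The post-pairing challenge-response described above can be sketched as follows. This is a simplified illustration, assuming a pre-shared secret provisioned at manufacture (the names and key-distribution scheme are illustrative, not a specific BLE profile); the point is that pairing alone never proves the peer's identity at the application level.

```python
import hashlib
import hmac
import os

# Hypothetical secret provisioned at manufacture; BLE pairing by itself
# does not prove you are talking to the device you expect.
SHARED_SECRET = b"provisioned-at-manufacture-example"

def make_challenge() -> bytes:
    """Central side: generate a fresh random nonce after every (re)pairing."""
    return os.urandom(16)

def respond(challenge: bytes) -> bytes:
    """Peripheral side: prove knowledge of the secret without revealing it."""
    return hmac.new(SHARED_SECRET, challenge, hashlib.sha256).digest()

def verify(challenge: bytes, response: bytes) -> bool:
    """Central side: accept the peer only if the response matches."""
    expected = hmac.new(SHARED_SECRET, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)
```

Because the challenge is a fresh random nonce each time, a recorded response from an earlier session cannot be replayed by an impostor device.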

I remember attending an on-stage presentation of a new class II medical device with a BLE interface. From the audience, I immediately started to explore the device with my smartphone. This device had no authentication (or authorization), so I was able to perform all operations exposed on the BLE connection. I was engrossed in this interface when I suddenly realized there was some commotion on stage as they couldn’t get their demonstration to work: I had accidentally taken over the only connection the device supported. (I then quickly terminated the connection to let them continue with the presentation.)

What things must medical device manufacturers keep in mind if they want to produce secure products?

There are many aspects to incorporating security into your development culture. These can be broadly lumped into activities that promote security in your products, versus activities that convey a false sense of security and are actually a waste of time.

Probably the most important thing that a majority of MDMs need to understand and accept is that their developers have probably never been trained in cybersecurity. Most developers have limited knowledge of how to incorporate cybersecurity into the development lifecycle, where to invest time and effort into securing a device, what artifacts are needed for premarket submission, and how to properly utilize cryptography. Without knowing the details, many managers assume that security is being adequately included somewhere in their company’s development lifecycle; most are wrong.

To produce secure products, MDMs must follow a secure “total product life cycle,” which starts on the first day of development and ends years after the product’s end of life or end of support.

They need to:

  • Know the three areas where vulnerabilities are frequently introduced during development (design, implementation, and through third-party software components), and how to identify, prevent, or mitigate them
  • Know how to securely transfer a device to production and securely manage it once in production
  • Recognize an MDM’s place in the device’s supply chain: not at the end, but in the middle. An MDM’s cybersecurity responsibilities extend up and down the chain. They have to contractually enforce cybersecurity controls on their suppliers, and they have to provide postmarket support for their devices in the field, up through and after end-of-life
  • Create and maintain Software Bills of Materials (SBOMs) for all products, including legacy products. Doing this work now will help them stay ahead of regulation and save them money in the long run.
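For a sense of what an SBOM contains, here is a minimal sketch in the CycloneDX format (one common SBOM format; the component names, versions, and identifiers below are purely illustrative):

```json
{
  "bomFormat": "CycloneDX",
  "specVersion": "1.4",
  "version": 1,
  "components": [
    {
      "type": "library",
      "name": "example-crypto-library",
      "version": "2.16.3"
    },
    {
      "type": "operating-system",
      "name": "example-rtos",
      "version": "10.3.1"
    }
  ]
}
```

Even a listing this small lets a hospital or regulator check each named component against vulnerability databases when a new CVE is published.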

They must avoid mistakes like:

  • Not thinking that a medical device needs to be secured
  • Assuming their development team ‘can’ and ‘is’ securing their product
  • Not designing-in the ability to update the device in the field
  • Assuming that all vulnerabilities can be mitigated by a field update
  • Only considering the security of one aspect of their design (e.g., its wireless communication protocol). Security is a chain: for the device to be secure, all the links of the chain need to be secure. Attackers are not going to consider certain parts of the target device ‘out of bounds’ for exploiting.

Ultimately, security is about protecting the business model of an MDM. This includes the device’s safety and efficacy for the patient, which is what the regulations address, but it also includes public opinion, loss of business, counterfeit accessories, theft of intellectual property, and so forth. One mistake I see companies frequently make is doing the minimum on security to gain regulatory approval, but neglecting to protect their other business interests along the way – and those can be very expensive to overlook.

What about the developers? Any advice on skills they should acquire or brush up on?

First, I’d like to take some pressure off developers by saying that it’s unreasonable to expect that they have some intrinsic knowledge of how to implement cybersecurity in a product. Until very recently, cybersecurity was not part of traditional engineering or software development curriculum. Most developers need additional training in cybersecurity.

And it’s not only the developers. More than likely, project management has done them a huge disservice by creating a system-level security requirement that says something like, “Prevent ransomware attacks.” What is the development team supposed to do with that requirement? How is it actionable?

At the same time, involving the company’s network or IT cybersecurity team is not going to be an automatic fix either. IT cybersecurity diverges from embedded cybersecurity in many respects, from detection to implementation of mitigations. No MDM is going to be putting a firewall on a device that is powered by a CR2032 battery anytime soon; yet there are ways to secure such a low-resource device.

In addition to the how-to book we wrote, Velentium will soon offer training available specifically for the embedded device domain, geared toward creating a culture of cybersecurity in development teams. My audacious goal is that within 5 years every medical device developer I talk to will be able to converse intelligently on all aspects of securing a medical device.

What cybersecurity legislation/regulation must companies manufacturing medical devices abide by?

It depends on the markets you intend to sell into. While the US has had the Food and Drug Administration (FDA) refining its medical device cybersecurity position since 2005, others are more recent entrants into this type of regulation, including Japan, China, Germany, Singapore, South Korea, Australia, Canada, France, Saudi Arabia, and the greater EU.

While all of these regulations have the same goal of securing medical devices, how they get there is anything but harmonized. Even the level of abstraction varies, with some focused on processes and others on technical activities.

But there are some common concepts represented in all these regulations, such as:

  • Risk management
  • Software bill of materials (SBOM)
  • Monitoring
  • Communication
  • “Total Product Lifecycle”
  • Testing

But if you plan on marketing in the US, the two most important documents are FDA’s:

  • 2018 – Draft Guidance: Content of Premarket Submissions for Management of Cybersecurity in Medical Devices
  • 2016 – Final Guidance: Postmarket Management of Cybersecurity in Medical Devices

(The 2014 version of the guidance on premarket submissions can be largely ignored, as it no longer represents the FDA’s current expectations for cybersecurity in new medical devices.)

What are some good standards for manufacturers to follow if they want to get cybersecurity right?

The Association for the Advancement of Medical Instrumentation’s standards are excellent. I recommend AAMI TIR57: 2016 and AAMI TIR97: 2019.

Also very good is the Healthcare & Public Health Sector Coordinating Council’s (HPH SCC) Joint Security Plan. And, to a lesser extent, the NIST Cyber Security Framework.

The work being done at the US Department of Commerce / NTIA on SBOM definition for vulnerability management and postmarket surveillance is very good as well, and worth following.

What initiatives exist to promote medical device cybersecurity?

Notable initiatives I’m familiar with include, first, the aforementioned NTIA work on SBOMs, now in its second year. There are also several excellent working groups at HSCC, including the Legacy Medical Device group and the Security Contract Language for Healthcare Delivery Organizations group. I’d also point to numerous working groups in the H-ISAC Information Sharing and Analysis Organization (ISAO), including the Securing the Medical Device Lifecycle group.

And I have to include the FDA itself here, which is in the process of revising its 2018 premarket draft guidance; we hope to see the results of that effort in early 2021.

What changes do you expect to see in the medical devices cybersecurity field in the next 3-5 years?

So much is happening at high and low levels. For instance, I hope to see the FDA get more of a direct mandate from Congress to enforce security in medical devices.

Also, many working groups of highly talented people are working on ways to improve the security posture of devices, such as the NTIA SBOM effort to improve the transparency of software “ingredients” in a medical device, allowing end-users to quickly assess their risk level when new vulnerabilities are discovered.

Semiconductor manufacturers continue to give us great mitigation tools in hardware, such as side-channel protections, cryptographic accelerators, and virtualized security cores. TrustZone is a great example.

And at the application level, we’ll continue to see more and better packaged tools, such as cryptographic libraries and processes, to help developers avoid cryptography mistakes. Also, we’ll see more and better process tools to automate the application of security controls to a design.

HDOs and other medical device purchasers are better informed than ever before about embedded cybersecurity features and best practices. That trend will continue, and will further accelerate demand for better-secured products.

I hope to see some effort at harmonization between all the federal, state, and foreign regulations that have been recently released with those currently under consideration.

One thing is certain: legacy medical devices that can’t be secured will only go away when we can replace them with new medical devices that are secure by design. Bringing new devices to market takes a long time. There’s lots of great innovation underway, but really, we’re just getting started!