5 tips to reduce the risk of email impersonation attacks

Email attacks have moved past standard phishing and become more targeted over the years. In this article, I will focus on email impersonation attacks, outline why they are dangerous, and provide some tips to help individuals and organizations reduce their risk exposure to impersonation attacks.

What are email impersonation attacks?

Email impersonation attacks are malicious emails where scammers pretend to be a trusted entity to steal money and sensitive information from victims. The trusted entity being impersonated could be anyone – your boss, your colleague, a vendor, or a consumer brand you get automated emails from.

Email impersonation attacks are tough to catch and worryingly effective because we tend to take quick action on emails from known entities. Scammers use impersonation in concert with other techniques to defraud organizations and steal account credentials, sometimes without victims realizing their fate for days after the fraud.

Fortunately, we can all follow some security hygiene best practices to reduce the risk of email impersonation attacks.

Tip #1 – Look out for social engineering cues

Email impersonation attacks are often crafted with language that induces a sense of urgency or fear in victims, coercing them into taking the action the email wants them to take. Not every email that makes us feel these emotions will be an impersonation attack, of course, but it’s an important factor to keep an eye out for, nonetheless.

Here are some common phrases and situations you should look out for in impersonation emails:

  • Tight deadlines imposed at short notice for processes involving the transfer of money or sensitive information.
  • Unusual purchase requests (e.g., iTunes gift cards).
  • Employees requesting sudden changes to direct deposit information.
  • Vendors sharing new bank account details just before an invoice payment is due.

This email impersonation attack exploits the COVID-19 pandemic to make an urgent request for gift card purchases.

Tip #2 – Always do a context check on emails

Targeted email attacks bank on victims being too busy and “doing before thinking” instead of stopping and engaging with the email rationally. While it may take a few extra seconds, always ask yourself if the email you’re reading – and what the email is asking for – make sense.

  • Why would your CEO really ask you to purchase iTunes gift cards at two hours’ notice? Have they done it before?
  • Why would Netflix emails come to your business email address?
  • Why would the IRS ask for your SSN and other sensitive personal information over email?

To sum up this tip, I’d say: be a little paranoid while reading emails, even if they’re from trusted entities.

Tip #3 – Check for email address and sender name deviations

To stop email impersonation, many organizations have deployed keyword-based protection that catches emails where the email addresses or sender names match those of key executives (or other related keywords). To get past these security controls, impersonation attacks use email addresses and sender names with slight deviations from those of the entity the attacks are impersonating. Some common deviations to look out for are:

  • Changes to the spelling, especially ones that are missed at first glance (e.g., “ei” instead of “ie” in a name).
  • Changes based on visual similarities to trick victims (e.g., replacing “rn” with “m” because they look alike).
  • Business emails sent from personal accounts like Gmail or Yahoo without advance notice. It’s advisable to validate the identity of the sender through secondary channels (text, Slack, or phone call) if they’re emailing you with requests from their personal account for the first time.
  • Descriptive changes to the name, even if the changes fit in context. For example, attackers impersonating a Chief Technology Officer named Ryan Fraser may send emails with the sender name as “Ryan Fraser, Chief Technology Officer”.
  • Changes to the components of the sender name (e.g., adding or removing a middle initial, abbreviating Mary Jane to MJ).
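Such deviations can also be caught programmatically. The sketch below is a minimal illustration, not a production email filter: the confusable pairs, addresses, and similarity threshold are all assumptions chosen for the example. It normalizes visually confusable sequences and flags senders that are close to, but not identical to, a trusted address.

```python
import difflib

# A small illustrative subset of visually confusable character sequences.
CONFUSABLES = [("rn", "m"), ("vv", "w"), ("cl", "d"), ("0", "o"), ("1", "l")]

def normalize(name: str) -> str:
    """Collapse visually similar sequences so lookalikes compare as equal."""
    name = name.lower()
    for fake, real in CONFUSABLES:
        name = name.replace(fake, real)
    return name

def is_suspicious(sender: str, trusted: str, threshold: float = 0.85) -> bool:
    """Flag senders that nearly match, but do not exactly match, a trusted name."""
    if sender.lower() == trusted.lower():
        return False  # exact match: not a deviation
    ratio = difflib.SequenceMatcher(
        None, normalize(sender), normalize(trusted)
    ).ratio()
    return ratio >= threshold  # near-match after normalization is a red flag

print(is_suspicious("ryan.frasier@example.com", "ryan.fraser@example.com"))  # True
print(is_suspicious("rn.jones@example.com", "m.jones@example.com"))          # True
print(is_suspicious("ryan.fraser@example.com", "ryan.fraser@example.com"))   # False
```

Real impersonation filters also weigh display names, reply-to mismatches, and sending history, but the core idea is the same: exact matches are fine, near-misses are the danger zone.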

Tip #4 – Learn the “greatest hits” of impersonation phrases

Email impersonation has been around for long enough that there are well-known phrases and tactics we need to be aware of. The emails don’t always have to be directly related to money or data – the first email is sometimes a simple request, just to see who bites and buys into the email’s faux legitimacy. Be aware of the following phrases/context:

  • “Are you free now?”, “Are you at your desk?” and related questions are frequent opening lines in impersonation emails. Because they seem like harmless emails with simple requests, they get past email security controls and lay the bait.
  • “I need an urgent favor”, “Can you do something for me within the next 15 minutes?”, and other phrases implying the email is of a time-sensitive nature. If you get this email from your “CEO”, your instinct might be to respond quickly and be duped by the impersonation in the process.
  • “Can you share your personal cell phone number?”, “I need your personal email”, and other out-of-context requests for personal information. The objective of these requests is to harvest information and build out a profile of the victim; once adversaries have enough information, they have another entity to impersonate.

Tip #5 – Use secondary channels of authentication

Enterprise adoption of two-factor authentication (2FA) has grown considerably over the years, helping safeguard employee accounts and reduce the impact of account compromise.

Individuals should try to replicate this best practice for any email that makes unusual requests related to money or data. For example:

  • Has a vendor emailed you with a sudden change in their bank account details, right when an invoice is due? Call or text the vendor and confirm that they sent the email.
  • Did your manager email you asking for gift card purchases? Send them a Slack message (or whatever productivity app you use) to confirm the request.
  • Did your HR representative email you a COVID resource document that needs email account credentials to be viewed? Check the veracity of the email with the HR rep.

Even if you’re reaching out to very busy people for this additional authentication, they will understand and appreciate your caution.

These tips are meant as starting points for individuals and organizations to better understand email impersonation and start addressing its risk factors. But effective protection against email impersonation can’t be down to eye tests alone. Enterprise security teams should conduct a thorough audit of their email security stack and explore augments to native email security that offer specific protection against impersonation.

With email more important to our digital lives than ever, it’s vital that we are able to believe people are who their email says they are. Email impersonation attacks exploit this sometimes-misplaced belief. Stopping email impersonation attacks will require a combination of security hygiene, email security solutions that provide specific impersonation protection, and some healthy paranoia while reading emails – even if they seem to be from people you trust.

Cybersecurity is failing due to ineffective technology

A failing cybersecurity market is contributing to the ineffective performance of cybersecurity technology, research from Debate Security reveals.

Based on over 100 comprehensive interviews with business and cybersecurity leaders from large enterprises, together with vendors, assessment organizations, government agencies, industry associations and regulators, the research shines a light on why technology vendors are not incentivized to deliver products that are more effective at reducing cyber risk.

The report supports the view that efficacy problems in the cybersecurity market are primarily due to economic issues, not technological ones. The research addresses three key themes and ultimately arrives at a consensus for how to approach a new model.

Cybersecurity technology is not as effective as it should be

90% of participants reported that cybersecurity technology is not as effective as it should be when it comes to protecting organizations from cyber risk. Trust in technology to deliver on its promises is low, and yet when asked how organizations evaluate cybersecurity technology efficacy and performance, there was not a single common definition.

Pressure has been placed on improving people- and process-related issues, but ineffective technology has become accepted as normal – and, shamefully, inevitable.

The underlying problem is one of economics, not technology

92% of participants reported that there is a breakdown in the market relationship between buyers and vendors, with many seeing deep-seated information asymmetries.

Outside government, few buyers today use detailed, independent cybersecurity efficacy assessment as part of their cybersecurity procurement process, and not even the largest organizations reported having the resources to conduct all the assessments themselves.

As a result, vendors are incentivized to focus on other product features, and on marketing, deprioritizing cybersecurity technology efficacy – one of several classic signs of a “market for lemons”.

Coordinated action between stakeholders only achieved through regulation

Unless buyers demand greater efficacy, regulation may be the only way to address the issue. Overcoming first-mover disadvantages will be critical to fixing the broken cybersecurity technology market.

Many research participants believe that coordinated action between all stakeholders can only be achieved through regulation – though some hold out hope that coordination could be achieved through sectoral associations.

In either case, 70% of respondents feel that independent, transparent assessment of technology would help solve the market breakdown. Setting standards for technology assessment, rather than for the technology itself, would avoid stifling innovation.

Defining cybersecurity technology efficacy

Participants in this research broadly agree that four characteristics are required to comprehensively define cybersecurity technology efficacy.

To be effective, cybersecurity solutions need to:

  • Have the capability to deliver the stated security mission (be fit-for-purpose)
  • Have the practicality that enterprises need to implement, integrate, operate, and maintain them (be fit-for-use)
  • Have the quality in design and build to avoid vulnerabilities and negative impact
  • Have provenance in the vendor company, its people, and its supply chain, such that these do not introduce additional security risk

“In cybersecurity right now, trust doesn’t always sell, and good security doesn’t always sell and isn’t always easy to buy. That’s a real problem,” said Ciaran Martin, advisory board member, Garrison Technology.

“Why we’re in this position is a bit of a mystery. This report helps us understand it. Fixing the problem is harder. But our species has fixed harder problems and we badly need the debate this report calls for, and industry-led action to follow it up.”

“Company boards are well aware that cybersecurity poses potentially existential risk, but are generally not well equipped to provide oversight on matters of technical detail,” said John Cryan, Chairman of Man Group.

“Boards are much better equipped when it comes to the issues of incentives and market dynamics revealed by this research. Even if government regulation proves inevitable, I would encourage business leaders to consider these findings and to determine how, as buyers, corporates can best ensure that cybersecurity solutions offered by the market are fit for purpose.”

“As a technologist and developer of cybersecurity products, I really feel for cybersecurity professionals who are faced with significant challenges when trying to select effective technologies,” said Henry Harrison, CSO of Garrison Technology.

“We see two noticeable differences when selling to our two classes of prospects. For security-sensitive government customers, technology efficacy assessment is central to buying behavior – but we rarely see anything similar when dealing with even the most security-sensitive commercial customers. We take from this study that in many cases this has less to do with differing risk appetites and more to do with structural market issues.”

Moving to the cloud with a security-first, zero trust approach

Many companies tend to jump into the cloud before thinking about security. They may think they’ve thought about security, but when moving to the cloud, the whole concept of security changes. The security model must transform as well.

Moving to the cloud and staying secure

Most companies maintain a “castle, moat, and drawbridge” attitude to security. They put everything inside the “castle” (datacenter); establish a moat around it, with sharks and alligators, guns on turrets; and control access by raising the drawbridge. The access protocol involves a request for access, vetting through firewall rules where the access is granted or denied. That’s perimeter security.

When moving to the cloud, perimeter security is still important, but identity-based security is available to strengthen the security posture. That’s where a cloud partner skilled at explaining and operating a different security model is needed.

Anybody can grab a virtual machine, build the machine in the cloud, and be done, but establishing a VM and transforming the machine to a service with identity-based security is a different prospect. When identity is added to security, the model looks very different, resulting in cost savings and an increased security posture.

Advanced technology, cost of security, and lack of cybersecurity professionals place a strain on organizations. Cloud providers invest heavily in infrastructure, best-in-class tools, and a workforce uniquely focused on security. As a result, organizations win operationally, financially, and from a security perspective, when moving to the cloud. To be clear, moving applications and servers, as is, to the cloud does not make them secure.

Movement to the cloud should be a standardized process, ideally run through a Cloud Center of Excellence (CCoE) or Cloud Business Office (CBO). When that process puts security first, organizations can reap the security benefits.

Shared responsibility

Although security is marketed as a shared responsibility in the cloud, ultimately the customer, as the owner of the data, is responsible, and that responsibility is non-transferable. In short, the customer must understand the responsibility matrix (RACI) involved in accomplishing their end goals. Every cloud provider publishes a shared responsibility matrix, but organizations often misunderstand the responsibilities, or the lines fall into a grey area. Regardless of the responsibility model, the data owner must protect the information and systems. As a result, the enterprise must maintain an understanding of all stakeholders, their responsibilities, and their status.

When choosing a partner, it’s vital for companies to identify their exact needs, their weaknesses, and even their culture. No cloud vendor will cover it all from the beginning, so it’s essential that organizations take control and ask the right questions (see Cloud Security Alliance’s CAIQ), in order to place trust in any cloud provider. If it’s to be a managed service, for example, it’s crucial to ask detailed questions about how the cloud provider intends to execute the offering.

It’s important to develop a standard security questionnaire and probe multiple layers deep into the service model until the provider is unable to meet the need. Looking through a multilayer deep lens allows the customer and service provider to understand the exact lines of responsibility and the details around task accomplishment.

Trust-as-a-Service

It might sound obvious, but it’s worth stressing: trust is a shared responsibility between the customer and cloud provider. Trust is also earned over time and is critical to the success of the customer-cloud provider relationship. That said, zero trust is a technical term that means, from a technology viewpoint, assume danger and breach. Organizations must trust their cloud provider but should avoid blind trust and validate. Trust as a Service (TaaS) is a newer acronym that refers to third-party endorsement of a provider’s security practices.

Key influencers of a customer’s trust in their cloud provider include:

  • Data location
  • Investigation status and location of data
  • Data segregation (keeping cloud customers’ data separated from others)
  • Availability
  • Privileged access
  • Backup and recovery
  • Regulatory compliance
  • Long-term viability

A TaaS example: Google Cloud

Google has taken great strides to earn customer trust, designing the Google Cloud Platform with zero trust firmly in mind and implementing the model through BeyondCorp. For example, Google has implemented two core concepts:

  • Delivery of services and data: ensuring that people with the correct identity and the right purpose can access the required data every time
  • Prioritization and focus: access and innovation are placed ahead of threats and risks, meaning that as products are innovated, security is built into the environment

Transparency is very important to the trust relationship. Google has enabled transparency through strong visibility and control of data. When evaluating cloud providers, understanding their transparency related to access and service status is crucial. Google ensures transparency by using specific controls including:

  • Limited data center access from a physical standpoint, adhering to strict access controls
  • Disclosing how and why customer data is accessed
  • Incorporating a process of access approvals

Multi-layered security for a trusted infrastructure

Finally, cloud services must provide customers with an understanding of how each layer of infrastructure works, with rules built into each: operational and device security, encryption of data at rest, multiple layers of identity, and storage services that are multi-layered and secure by default.

Cloud native companies have a security-first approach and naturally have a higher security understanding and posture. That said, when choosing a cloud provider, enterprises should always understand, identify, and ensure that their cloud solution addresses each one of their security needs, and who’s responsible for what.

Essentially, every business must find a cloud partner that can answer all the key questions, provide transparency, and establish a trusted relationship in the zero trust world where we operate.

With database attacks on the rise, how can companies protect themselves?

Misconfigured or unsecured databases exposed on the open web are a fact of life. We hear about some of them because security researchers tell us how they discovered them, pinpointed their owners and alerted them, but many others are found by attackers first.

It used to take months to scan the Internet looking for open systems, but attackers now have access to free and easy-to-use scanning tools that can find them in less than an hour.

“As one honeypot experiment showed, open databases are targeted hundreds of times within a few hours,” Josh Bressers, product security lead at Elastic, told Help Net Security.

“There’s no way to leave unsecured data online without opening the data up to attack. This is why it’s crucial to always enable security and authentication features when setting up databases, so that your organization avoids this risk altogether.”

What do attackers do with exposed databases?

Bressers has been involved in the security of products and projects – especially open-source – for a very long time. Over the past two decades, he created the product security division at Progeny Linux Systems, managed the Red Hat product security team, and headed security strategy in Red Hat’s Platform Business Unit.

He now manages bug bounties, penetration testing and security vulnerability programs for Elastic’s products, as well as the company’s efforts to improve application security, add new and improve existing security features as needed or requested by customers.

The problem with exposed Elasticsearch (MariaDB, MongoDB, etc.) databases, he says, is that they are often left unsecured by developers by mistake and companies don’t discover the exposure quickly.

“The scanning tools do most of the work, so it’s up to the attacker to decide if the database has any data worth stealing,” he noted, and pointed out that this isn’t hacking, exactly – it’s mining of open services.

Attackers can quickly exfiltrate the accessible data, hold it for ransom, sell it to the highest bidder, modify it or simply delete it all.

“Sometimes there’s no clear advantage or motive. For example, this summer saw a string of cyberattacks called the Meow Bot attacks that have affected at least 25,000 databases so far. The attacker replaced the contents of every afflicted database with the word ‘meow’ but has not been identified, nor has the purpose behind the attacks been revealed,” he explained.

Advice for organizations that use clustered databases

Open-source database platforms such as Elasticsearch have built-in security to prevent attacks of this nature, but developers often disable those features in haste or due to a lack of understanding that their actions can put customer data at risk, Bressers says.

“The most important thing to keep in mind when trying to secure data is having a clear understanding of what you are securing and what it means to your organization. How sensitive is the data? What level of security needs to be applied? Who should have access?” he explained.

“Sometimes working with a partner who is an expert at running a modern database is a more secure alternative than doing it yourself. Sometimes it’s not. Modern data management is a new problem for many organizations; make sure your people understand the opportunities and challenges. And most importantly, make sure they have the tools and training.”

Secondly, he says, companies should set up external scanning systems that continuously check for exposed databases.

“These may be the same tools used by attackers, but they immediately notify security teams when a developer has mistakenly left sensitive data unlocked. For example, a free scanner is available from Shadowserver.”
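At its simplest, such a check boils down to two questions: is the port reachable, and does the API answer without credentials? The sketch below is a minimal illustration using only Python's standard library; the three-way classification and the default Elasticsearch port are assumptions for the example, and a real scanning program would cover many hosts, ports, and database products.

```python
import socket
import urllib.error
import urllib.request

def check_elasticsearch_exposure(host: str, port: int = 9200,
                                 timeout: float = 3.0) -> str:
    """Classify a host as "closed", "secured", or "exposed"."""
    # Step 1: is the port reachable at all?
    try:
        with socket.create_connection((host, port), timeout=timeout):
            pass
    except OSError:
        return "closed"

    # Step 2: does the HTTP API answer without credentials?
    try:
        with urllib.request.urlopen(f"http://{host}:{port}/",
                                    timeout=timeout) as resp:
            if resp.status == 200:
                return "exposed"  # cluster info served with no authentication
    except urllib.error.HTTPError as err:
        if err.code == 401:
            return "secured"  # authentication is enforced
    except OSError:
        pass
    return "secured"
```

Run from an external network against your own address ranges, a loop over hosts calling this function gives security teams the same view an attacker's scanner would produce.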

Elastic offers information and documentation on how to enable the security features of Elasticsearch databases and prevent exposure, he adds, pointing out that security is enabled by default in the company’s Elasticsearch Service on Elastic Cloud and cannot be disabled.

Defense in depth

No organization will ever be 100% safe, but steps can be taken to decrease a company’s attack surface. “Defense in depth” is the name of the game, Bressers says, and in this case, it should include the following security layers:

  • Discovery of data exposure (using the previously mentioned external scanning systems)
  • Strong authentication (SSO or usernames/passwords)
  • Prioritization of data access (e.g., HR may only need access to employee information and the accounting department may only need access to budget and tax data)
  • Deployment of monitoring infrastructures and automated solutions that can quickly identify potential problems before they become emergencies, isolate infected databases, and flag to support and IT teams for next steps
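The "prioritization of data access" layer, for instance, reduces to deny-by-default policy checks. A minimal sketch (the department and dataset names are invented for the example; real deployments would use the database's built-in role-based access control):

```python
# Map departments to the datasets they may access (illustrative values only).
ACCESS_POLICY = {
    "hr": {"employee_records"},
    "accounting": {"budget", "tax"},
}

def can_access(department: str, dataset: str) -> bool:
    """Deny by default; allow only datasets explicitly granted to a department."""
    return dataset in ACCESS_POLICY.get(department, set())

print(can_access("hr", "employee_records"))  # True
print(can_access("hr", "tax"))               # False
```

The important property is the default: an unknown department or an unlisted dataset gets no access rather than falling through to an allow.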

He also advises organizations that don’t have the internal expertise to set security configurations and manage a clustered database to hire service providers that can handle data management and have a strong security portfolio, and to always have a mitigation plan in place and rehearse it with their IT and security teams, so that when something does happen, they can execute a swift and intentional response.

How to build up cybersecurity for medical devices

Manufacturing medical devices with cybersecurity firmly in mind is an endeavor that, according to Christopher Gates, an increasing number of manufacturers are trying to get right.

Healthcare delivery organizations have started demanding better security from medical device manufacturers (MDMs), he says, and many have implemented secure procurement processes and contract language for MDMs that address the cybersecurity of the device itself, secure installation, cybersecurity support for the life of the product in the field, liability for breaches caused by a device not following current best practice, ongoing support for events in the field, and so on.

“For someone like myself who has been focused on cybersecurity at MDMs for over 12 years, this is excellent progress as it will force MDMs to take security seriously or be pushed out of the market by competitors who do take it seriously. Positive pressure from MDMs is driving cybersecurity forward more than any other activity,” he told Help Net Security.

Gates is a principal security architect at Velentium and one of the authors of the recently released Medical Device Cybersecurity for Engineers and Manufacturers, a comprehensive guide to medical device secure lifecycle management, aimed at engineers, managers, and regulatory specialists.

In this interview, he shares his knowledge regarding the cybersecurity mistakes most often made by manufacturers, on who is targeting medical devices (and why), his view on medical device cybersecurity standards and initiatives, and more.

[Answers have been edited for clarity.]

Are attackers targeting medical devices with a purpose other than to use them as a way into a healthcare organization’s network?

The easy answer to this is “yes,” since many MDMs in the medical device industry perform “competitive analysis” on their competitors’ products. It is much easier and cheaper for them to have a security researcher spend a few hours extracting an algorithm from a device for analysis than to spend months or even years of R&D work to pioneer a new algorithm from scratch.

Also, there is a large, hundreds-of-millions-of-dollars industry of companies who “re-enable” consumed medical disposables. This usually requires some fairly sophisticated reverse-engineering to return the device to its factory default condition.

Lastly, the medical device industry, when grouped together with the healthcare delivery organizations, constitutes part of critical national infrastructure. Other industries in that class (such as nuclear power plants) have experienced very directed and sophisticated attacks targeting safety backups in their facilities. These attacks seem to be initial testing of a cyber weapon that may be used later.

While these are clearly nation-state level attacks, you have to wonder if these same actors have been exploring medical devices as a way to inhibit our medical response in an emergency. I’m speculating: we have no evidence that this has happened. But then again, if it has happened there likely wouldn’t be any evidence, as we haven’t been designing medical devices and infrastructure with the ability to detect potential cybersecurity events until very recently.

What are the most often exploited vulnerabilities in medical devices?

It won’t come as a surprise to anyone in security when I say “the easiest vulnerabilities to exploit.” An attacker is going to start with the obvious ones, and then increasingly get more sophisticated. Mistakes made by developers include:

Unsecured firmware updating

I personally always start with software updates in the field, as they are so frequently implemented incorrectly. An attacker’s goal here is to gain access to the firmware with the intent of reverse-engineering it back into easily-readable source code that will yield more widely exploitable vulnerabilities (e.g., one impacting every device in the world). All firmware update methods have at least three very common potential design vulnerabilities. They are:

  • Exposure of the binary executable (i.e., it isn’t encrypted)
  • Corrupting the binary executable with added code (i.e., there isn’t an integrity check)
  • A rollback attack which downgrades the firmware to a version with known exploitable vulnerabilities (i.e., there is no metadata conveying version information).
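To make the three checks concrete, here is a minimal sketch of an update verifier for a hypothetical package layout of [4-byte version][32-byte tag][payload]. Everything here is illustrative: a real device would use asymmetric signatures and hardware-protected keys rather than the shared-key HMAC used to keep the example self-contained, and would decrypt the payload before flashing.

```python
import hashlib
import hmac
import struct

DEVICE_KEY = b"device-unique-secret"  # provisioned at manufacture (illustrative)

def build_update(version: int, payload: bytes) -> bytes:
    """Package firmware as [version][authentication tag][payload]."""
    header = struct.pack(">I", version)
    tag = hmac.new(DEVICE_KEY, header + payload, hashlib.sha256).digest()
    return header + tag + payload

def verify_update(package: bytes, installed_version: int) -> bytes:
    """Return the payload if all checks pass; raise ValueError otherwise."""
    if len(package) < 36:
        raise ValueError("truncated package")
    version = struct.unpack(">I", package[:4])[0]
    tag, payload = package[4:36], package[36:]

    # Integrity/authenticity: reject any modified or unsigned image.
    expected = hmac.new(DEVICE_KEY, package[:4] + payload,
                        hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("integrity check failed")

    # Anti-rollback: refuse downgrades to older, possibly vulnerable firmware.
    if version <= installed_version:
        raise ValueError("rollback rejected")

    return payload
```

Note how each of the three common vulnerabilities maps to one line of defense: keeping the image encrypted addresses exposure, the authenticated tag addresses corruption, and the signed version field addresses rollback.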

Overlooking physical attacks

Physical attack can be mounted:

  • Through an unsecured JTAG/SWD debugging port
  • Via side-channel (power monitoring, timing, etc.) exploits to expose the values of cryptographic keys
  • By sniffing internal busses, such as SPI and I2C
  • By exploiting flash memory external to the microcontroller (a $20 cable can get it to dump all of its contents)

Manufacturing support left enabled

Almost every medical device needs certain functions to be available during manufacturing. These are usually for testing and calibration, and none of them should be functional once the device is fully deployed. Manufacturing commands are frequently documented in PDF files used for maintenance, and often only have minor changes across product/model lines inside the same manufacturer, so a little experimentation goes a long way in letting an attacker get access to all kinds of unintended functionality.

No communication authentication

Just because a communications medium connects two devices doesn’t mean that the device being connected to is the device that the manufacturer or end-user expects it to be. No communications medium is inherently secure; it’s what you do at the application level that makes it secure.

Bluetooth Low Energy (BLE) is an excellent example of this. Immediately following a pairing (or re-pairing), a device should always, always perform a challenge-response process (which utilizes cryptographic primitives) to confirm it has paired with the correct device.
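A minimal sketch of such a post-pairing challenge-response, assuming a key already shared out-of-band, might look like the following. The HMAC construction and key handling here are illustrative only, not a BLE-specification mechanism; the point is that the peripheral proves knowledge of the key without ever transmitting it.

```python
import hashlib
import hmac
import secrets

SHARED_KEY = b"provisioned-link-key"  # established out-of-band (illustrative)

def make_challenge() -> bytes:
    """Central side: generate a fresh random nonce after every (re)pairing."""
    return secrets.token_bytes(16)

def respond(challenge: bytes, key: bytes = SHARED_KEY) -> bytes:
    """Peripheral side: prove knowledge of the key without revealing it."""
    return hmac.new(key, challenge, hashlib.sha256).digest()

def verify(challenge: bytes, response: bytes, key: bytes = SHARED_KEY) -> bool:
    """Central side: accept the connection only if the response checks out."""
    expected = hmac.new(key, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)
```

Because the challenge is random and single-use, a recorded response cannot be replayed, and a device that paired without knowing the key – like a smartphone in the audience – fails the check immediately.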

I remember attending an on-stage presentation of a new class II medical device with a BLE interface. From the audience, I immediately started to explore the device with my smartphone. This device had no authentication (or authorization), so I was able to perform all operations exposed on the BLE connection. I was engrossed in this interface when I suddenly realized there was some commotion on stage as they couldn’t get their demonstration to work: I had accidentally taken over the only connection the device supported. (I then quickly terminated the connection to let them continue with the presentation.)

What things must medical device manufacturers keep in mind if they want to produce secure products?

There are many aspects to incorporating security into your development culture. These can be broadly lumped into activities that promote security in your products, versus activities that convey a false sense of security and are actually a waste of time.

Probably the most important thing that a majority of MDMs need to understand and accept is that their developers have probably never been trained in cybersecurity. Most developers have limited knowledge of how to incorporate cybersecurity into the development lifecycle, where to invest time and effort into securing a device, what artifacts are needed for premarket submission, and how to properly utilize cryptography. Without knowing the details, many managers assume that security is being adequately included somewhere in their company’s development lifecycle; most are wrong.

To produce secure products, MDMs must follow a secure “total product life cycle,” which starts on the first day of development and ends years after the product’s end of life or end of support.

They need to:

  • Know the three areas where vulnerabilities are frequently introduced during development (design, implementation, and through third-party software components), and how to identify, prevent, or mitigate them
  • Know how to securely transfer a device to production and securely manage it once in production
  • Recognize an MDM’s place in the device’s supply chain: not at the end, but in the middle. An MDM’s cybersecurity responsibilities extend up and down the chain. They have to contractually enforce cybersecurity controls on their suppliers, and they have to provide postmarket support for their devices in the field, up through and after end-of-life
  • Create and maintain Software Bills of Materials (SBOMs) for all products, including legacy products. Doing this work now will help them stay ahead of regulation and save them money in the long run.
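
To illustrate the last point: a real SBOM follows a standard format such as CycloneDX or SPDX and is generated by build tooling, but even the toy dictionary below (product, component names, and versions all invented) shows why maintaining one pays off when a new CVE is published.

```python
# Toy SBOM; real ones follow CycloneDX or SPDX and come from build tooling.
# All names and versions here are invented for illustration.
sbom = {
    "product": "example-infusion-pump-controller",
    "version": "2.4.1",
    "components": [
        {"name": "rtos-kernel", "version": "10.4.3"},
        {"name": "tls-lib",     "version": "2.26.0"},
        {"name": "json-parser", "version": "1.7.14"},
    ],
}

def affected(sbom, name, bad_versions):
    """When a CVE lands, an SBOM turns impact analysis into a simple lookup."""
    return any(c["name"] == name and c["version"] in bad_versions
               for c in sbom["components"])

assert affected(sbom, "tls-lib", {"2.26.0"})       # vulnerable component
assert not affected(sbom, "tls-lib", {"2.27.0"})   # patched version is clean
```

Without this inventory, answering "are we affected?" means digging through build trees for every legacy product, every time.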

They must avoid mistakes like:

  • Not thinking that a medical device needs to be secured
  • Assuming their development team ‘can’ and ‘is’ securing their product
  • Not designing-in the ability to update the device in the field
  • Assuming that all vulnerabilities can be mitigated by a field update
  • Only considering the security of one aspect of the design (e.g., its wireless communication protocol). Security is a chain: for the device to be secure, all the links of the chain need to be secure. Attackers are not going to consider certain parts of the target device ‘out of bounds’ for exploiting.

Ultimately, security is about protecting the business model of an MDM. This includes the device’s safety and efficacy for the patient, which is what the regulations address, but it also includes public opinion, loss of business, counterfeit accessories, theft of intellectual property, and so forth. One mistake I see companies frequently make is doing the minimum on security to gain regulatory approval, but neglecting to protect their other business interests along the way – and those can be very expensive to overlook.

What about the developers? Any advice on skills they should acquire or brush up on?

First, I’d like to take some pressure off developers by saying that it’s unreasonable to expect that they have some intrinsic knowledge of how to implement cybersecurity in a product. Until very recently, cybersecurity was not part of traditional engineering or software development curricula. Most developers need additional training in cybersecurity.

And it’s not only the developers. More than likely, project management has done them a huge disservice by creating a system-level security requirement that says something like, “Prevent ransomware attacks.” What is the development team supposed to do with that requirement? How is it actionable?

At the same time, involving the company’s network or IT cybersecurity team is not going to be an automatic fix either. IT Cybersecurity diverges from Embedded Cybersecurity in many respects, from detection to implementation of mitigations. No MDM is going to be putting a firewall on a device that is powered by a CR2032 battery anytime soon; yet there are ways to secure such a low-resource device.

In addition to the how-to book we wrote, Velentium will soon offer training specifically for the embedded device domain, geared toward creating a culture of cybersecurity in development teams. My audacious goal is that within 5 years every medical device developer I talk to will be able to converse intelligently on all aspects of securing a medical device.

What cybersecurity legislation/regulation must companies manufacturing medical devices abide by?

It depends on the markets you intend to sell into. While the US has had the Food and Drug Administration (FDA) refining its medical device cybersecurity position since 2005, others are more recent entrants into this type of regulation, including Japan, China, Germany, Singapore, South Korea, Australia, Canada, France, Saudi Arabia, and the greater EU.

While all of these regulations share the goal of securing medical devices, how they get there is anything but harmonized. Even the level of abstraction varies, with some focused on processes and others on technical activities.

But there are some common concepts represented in all these regulations, such as:

  • Risk management
  • Software bill of materials (SBOM)
  • Monitoring
  • Communication
  • “Total Product Lifecycle”
  • Testing

But if you plan on marketing in the US, the two most important documents are FDA’s:

  • 2018 – Draft Guidance: Content of Premarket Submissions for Management of Cybersecurity in Medical Devices
  • 2016 – Final Guidance: Postmarket Management of Cybersecurity in Medical Devices (The 2014 version of the guidance on premarket submissions can be largely ignored, as it no longer represents the FDA’s current expectations for cybersecurity in new medical devices).

What are some good standards for manufacturers to follow if they want to get cybersecurity right?

The Association for the Advancement of Medical Instrumentation’s standards are excellent. I recommend AAMI TIR57: 2016 and AAMI TIR97: 2019.

Also very good is the Healthcare & Public Health Sector Coordinating Council’s (HPH SCC) Joint Security Plan. And, to a lesser extent, the NIST Cyber Security Framework.

The work being done at the US Department of Commerce / NTIA on SBOM definition for vulnerability management and postmarket surveillance is very good as well, and worth following.

What initiatives exist to promote medical device cybersecurity?

Notable initiatives I’m familiar with include, first, the aforementioned NTIA work on SBOMs, now in its second year. There are also several excellent working groups at HSCC, including the Legacy Medical Device group and the Security Contract Language for Healthcare Delivery Organizations group. I’d also point to numerous working groups in the H-ISAC Information Sharing and Analysis Organization (ISAO), including the Securing the Medical Device Lifecycle group.

And I have to include the FDA itself here, which is in the process of revising its 2018 premarket draft guidance; we hope to see the results of that effort in early 2021.

What changes do you expect to see in the medical device cybersecurity field in the next 3-5 years?

So much is happening at high and low levels. For instance, I hope to see the FDA get more of a direct mandate from Congress to enforce security in medical devices.

Also, many working groups of highly talented people are working on ways to improve the security posture of devices, such as the NTIA SBOM effort to improve the transparency of software “ingredients” in a medical device, allowing end-users to quickly assess their risk level when new vulnerabilities are discovered.

Semiconductor manufacturers continue to give us great mitigation tools in hardware, such as side-channel protections, cryptographic accelerators, and virtualized security cores. Arm TrustZone is a great example.

And at the application level, we’ll continue to see more and better packaged tools, such as cryptographic libraries and processes, to help developers avoid cryptography mistakes. Also, we’ll see more and better process tools to automate the application of security controls to a design.

HDOs and other medical device purchasers are better informed than ever before about embedded cybersecurity features and best practices. That trend will continue, and will further accelerate demand for better-secured products.

I hope to see some effort at harmonization between all the federal, state, and foreign regulations that have been recently released with those currently under consideration.

One thing is certain: legacy medical devices that can’t be secured will only go away when we can replace them with new medical devices that are secure by design. Bringing new devices to market takes a long time. There’s lots of great innovation underway, but really, we’re just getting started!

Review: Practical Vulnerability Management: A Strategic Approach to Managing Cyber Risk

Andrew Magnusson started his information security career 20 years ago, and in this book he offers the knowledge he has accumulated since, to help readers eliminate security weaknesses and threats within their systems.

As he points out in the introduction, bugs are everywhere, but there are actions and processes the reader can apply to eliminate or at least mitigate the associated risks.

The author starts off by explaining vulnerability management basics, the importance of knowing your network and the process of collecting and analyzing data.

He explains the importance of a vulnerability scanner and why it is essential to configure and deploy it correctly, since it provides the valuable information needed to successfully complete the vulnerability management process.

The next step is to automate these processes, which prioritizes vulnerabilities and frees up time to work on the most severe issues, consequently boosting an organization’s security posture.
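
As a toy illustration of this kind of automated triage (the scanner output, field names, and weighting below are all invented for the example, not taken from the book):

```python
# Invented scanner output; scores follow CVSS v3 conventions (0.0-10.0).
findings = [
    {"host": "10.0.0.5", "cve": "CVE-2020-1111", "cvss": 9.8, "exposed": True},
    {"host": "10.0.0.7", "cve": "CVE-2020-2222", "cvss": 5.3, "exposed": False},
    {"host": "10.0.0.5", "cve": "CVE-2019-3333", "cvss": 7.5, "exposed": True},
]

def priority(finding):
    """Simple risk score: base severity, weighted up for internet exposure."""
    return finding["cvss"] * (1.5 if finding["exposed"] else 1.0)

# Work the queue from the highest-risk finding down.
queue = sorted(findings, key=priority, reverse=True)
assert [f["cve"] for f in queue] == [
    "CVE-2020-1111", "CVE-2019-3333", "CVE-2020-2222"]
```

Even a crude scoring rule like this ensures analysts spend their limited time on the exposed, high-severity findings first.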

Finally, it is time to decide what to do with the vulnerabilities you have detected, which means choosing the appropriate security measures, whether it’s patching, mitigation or systemic measures. When the risk has a low impact, there’s also the option of accepting it, but this still needs to be documented and agreed upon.

The important part of this process, and perhaps also the hardest, is building relationships within the organization. The reader needs to respect office politics and make sure all the decisions and changes they make are approved by their superiors.

The second part of the book is practical: the author guides the reader through the process of building their own vulnerability management system, with a detailed analysis of the open source tools involved – Nmap, OpenVAS, and cve-search – all supported by code examples.

The reader will learn how to build an asset and vulnerability database and how to keep it accurate and up to date. This is especially important when generating reports, as those need to be based on recent vulnerability findings.
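
The core of such a database can be sketched in a few lines (this is not the book’s code; the schema and data below are invented to show the idea, and a real system would be populated from Nmap/OpenVAS output):

```python
import sqlite3

# Minimal asset-and-vulnerability schema, invented for illustration.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE assets (id INTEGER PRIMARY KEY, ip TEXT UNIQUE, os TEXT,
                     last_seen TEXT);
CREATE TABLE vulns  (asset_id INTEGER REFERENCES assets(id),
                     cve TEXT, cvss REAL, found TEXT);
""")
conn.execute("INSERT INTO assets (ip, os, last_seen) VALUES (?, ?, ?)",
             ("10.0.0.5", "Ubuntu 20.04", "2020-10-01"))
conn.execute("INSERT INTO vulns VALUES (1, 'CVE-2020-1234', 9.8, '2020-10-01')")

# Reports should only include recent findings so they reflect current state.
rows = conn.execute("""
    SELECT a.ip, v.cve, v.cvss
    FROM vulns v JOIN assets a ON a.id = v.asset_id
    WHERE v.found >= '2020-09-01'
    ORDER BY v.cvss DESC
""").fetchall()
assert rows == [("10.0.0.5", "CVE-2020-1234", 9.8)]
```

Filtering on the scan date, as in the query above, is what keeps reports from being generated off stale findings.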

Who is it for?

Practical Vulnerability Management is aimed at security practitioners who are responsible for protecting their organization and tasked with boosting its security posture. It is assumed they are familiar with Linux and Python.

Despite the technical content, the book is an easy read and offers comprehensive solutions to keeping an organization secure and always prepared for possible attacks.

How do I select a data storage solution for my business?

We live in the age of data. We are constantly producing it, analyzing it, figuring out how to store and protect it, and, hopefully, using it to refine business practices. Unfortunately, 58% of organizations make decisions based on outdated data.

While enterprises are rapidly deploying technologies for real-time analytics, machine learning and IoT, they are still utilizing legacy storage solutions that are not designed for such data-intensive workloads.

To select a suitable data storage for your business, you need to think about a variety of factors. We’ve talked to several industry leaders to get their insight on the topic.

Phil Bullinger, SVP and General Manager, Data Center Business Unit, Western Digital

Selecting the right data storage solution for your enterprise requires evaluating and balancing many factors. The most important is aligning the performance and capabilities of the storage system with your critical workloads and their specific bandwidth, application latency and data availability requirements. For example, if your business wants to gain greater insight and value from data through AI, your storage system should be designed to support the accelerated performance and scale requirements of analytics workloads.

Storage systems that maximize the performance potential of solid state drives (SSDs) and the efficiency and scalability of hard disk drives (HDDs) provide the flexibility and configurability to meet a wide range of application workloads.

Your applications should also drive the essential architecture of your storage system, whether directly connected or networked, whether required to store and deliver data as blocks, files, objects or all three, and whether the storage system must efficiently support a wide range of workloads while prioritizing the performance of the most demanding applications.

Consideration should also be given to your overall IT data management architecture, so that it supports the scalability, data protection, and business continuity assurance required for your enterprise – spanning from core data centers to those distributed at or near the edge and endpoints of your operations – and integrates with your cloud-resident applications, compute and data storage services and resources.

Ben Gitenstein, VP of Product Management, Qumulo

When searching for the right data storage solution to support your organizational needs today and in the future, it’s important to select a solution that is trusted, scalable to secure demanding workloads of any size, and ensures optimal performance of applications and workloads both on premises and in complex, multi-cloud environments.

With the recent pandemic, organizations are digitally transforming faster than ever before, and leveraging the cloud to conduct business. This makes it more important than ever that your storage solution has built-in tools for data management across this ecosystem.

When evaluating storage options, be sure to do your homework and ask the right questions. Is it a trusted provider? Would it integrate well within my existing technology infrastructure? Your storage solution should be easy to manage and meet the scale, performance and cloud requirements for any data environment and across multi-cloud environments.

Also, be sure the storage solution gives IT control over how they manage storage capacity needs and delivers real-time insight into analytics and usage patterns, so they can make smart storage allocation decisions and maximize an organization’s storage budget.

David Huskisson, Senior Solutions Manager, Pure Storage

Data backup and disaster recovery features are critically important when selecting a storage solution for your business, as no organization is immune to ransomware attacks. When systems go down, they need to be recovered as quickly and safely as possible.

Look for solutions that offer simplicity in management, can ensure backups are viable even when admin credentials are compromised, and can be restored quickly enough to greatly reduce major organizational or financial impact.

Storage solutions that are purpose-built to handle unstructured data are a strong place to start. By definition, unstructured data is unpredictable: it can take any form, size or shape, and can be accessed in any pattern. A purpose-built platform can accelerate small, large, random or sequential data access and consolidate a wide range of workloads on unified fast file and object storage. It should maintain its performance even as the amount of data grows.

If you have an existing backup product, you don’t need to rip and replace it. There are storage platforms with robust integrations that work seamlessly with existing solutions and offer a wide range of data-protection architectures so you can ensure business continuity amid changes.

Tunio Zafer, CEO, pCloud

Bear in mind: your security team needs to assist. Answer these questions to find the right solution: Do you need ‘cold’ storage or cloud storage? If you’re looking to only store files for backup, you need a cloud backup service. If you’re looking to store, edit and share, go for cloud storage. Where are their storage servers located? If your business is located in Europe, the safest choice is a storage service based in Europe.

Best case scenario – your company is going to grow. Look for a storage service that offers scalability. What is their data privacy policy? Research whether someone can legally access your data without your knowledge or consent. Switzerland has one of the strictest data privacy laws globally, so choosing a Swiss-based service is a safe bet. How is your data secured? Look for a service that offers robust encryption in-transit and at-rest.

Client-side encryption means that your data is secured on your device and is transferred already encrypted. What is their support package? At some point, you’re going to need help. A data storage service that includes a support package for free and answers within 24 hours is preferable.

Three immediate steps to take to protect your APIs from security risks

In one form or another, APIs have been around for years, bringing the benefits of ease of use, efficiency and flexibility to the developer community. The advantage of using APIs for mobile and web apps is that developers can build and deploy functionality and data integrations quickly.

But there is a huge downside to this approach. Undermining the power of an API-driven development methodology are shadow, deprecated and non-conforming APIs that, when exposed to the public, introduce the risk of data loss, compromise or automated fraud.

The stateless nature of APIs and their ubiquity makes protecting them increasingly difficult, largely because malicious actors can leverage the same developer benefits – ease of use and flexibility – to easily execute account takeovers, credential stuffing, fake account creation or content scraping. It’s no wonder that Gartner identified API security as a top concern for 50% of businesses.

Thankfully, it’s never too late to get your API footprint in order to better protect your organization’s critical data. Here are a few easy steps you can follow to mitigate API security risks immediately:

1. Start an organization-wide conversation

If your company is having conversations around API security at all, it’s likely that they are happening in a fractured manner. If there’s no larger, cohesive conversation, then various development and operational teams could be taking conflicting approaches to mitigating API security risks.

For this reason, teams should discuss how they can best work together to support API security initiatives. As a basis for these meetings, teams should refer to the NIST Cybersecurity Framework, as it’s a great way to develop a shared understanding of organization-wide cybersecurity risks. The NIST CSF will help the collective team to gain a baseline awareness about the APIs used across the organization to pinpoint the potential gaps in the operational processes that support them, so that companies can work towards improving their API strategy immediately.

2. Ask (& answer) any outstanding questions as a team

To improve an organization’s API security posture, it’s critical that outstanding questions are asked and answered immediately so that gaps in security are reduced and closed. When posing these questions to the group, consider the API assets you have overall, the business environment, governance, risk assessment, risk management strategy, access control, awareness and training, anomalies and events, continuous security monitoring, detection processes, etc. Leave no stone unturned. Here are a few suggested questions to use as a starting point as you work on the next step in this process towards getting your API security house in order:

  • How many APIs do we have?
  • How were they developed? Which are open-source, custom built or third-party?
  • Which APIs are subject to legal or regulatory compliance?
  • How do we monitor for vulnerabilities in our APIs?
  • How do we protect our APIs from malicious traffic?
  • Are there APIs with vulnerabilities?
  • What is the business impact if the APIs are compromised or abused?
  • Is API security a part of our on-going developer training and security evangelism?

Once any security holes have been identified through this shared understanding, the team can then work together to fill those gaps.
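
Several of the questions above reduce to keeping a single, queryable API inventory. A minimal sketch (the schema, paths, and fields are invented for illustration) might look like this:

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical inventory record tracking the answers to the questions above.
@dataclass
class ApiRecord:
    path: str
    origin: str                    # "custom", "open-source" or "third-party"
    regulated: bool                # subject to legal/regulatory compliance?
    last_vuln_scan: Optional[str]  # ISO date of last scan, or None

inventory = [
    ApiRecord("/v1/users",    "custom",      True,  "2020-09-15"),
    ApiRecord("/v1/payments", "third-party", True,  None),
    ApiRecord("/v1/status",   "open-source", False, "2020-08-01"),
]

# Gap analysis: regulated APIs that have never been scanned for vulnerabilities.
gaps = [a.path for a in inventory if a.regulated and a.last_vuln_scan is None]
assert gaps == ["/v1/payments"]
```

Even this toy model makes the gaps concrete: a compliance-relevant API with no scan history is exactly the kind of hole the team discussion should surface.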

3. Strive for complete and continuous API security and visibility

Visibility is critical to immediate and continuous API security. By going through steps one and two, organizations are working towards more secure APIs today – but what about tomorrow, and in the years to come as your API footprint expands exponentially?

Consider implementing a visibility and monitoring solution to help you oversee this security program on an ongoing basis, so that your organization can feel confident in having a strong and comprehensive API security posture that grows and adapts as your number of APIs expands and shifts. The key components to visibility and monitoring?

  • Centralized visibility and inventory of all APIs
  • A detailed view of API traffic patterns
  • Discovery of APIs transmitting sensitive data
  • Continuous API specification conformance assessment
  • Validated authentication and access control programs
  • Automatic risk analysis based on predefined criteria

Continuous, runtime visibility into how many APIs an organization has, who is accessing them, where they are located and how they are being used, is key to API security.

As organizations continue to expand their use of APIs to drive their business, it’s crucial that companies consider every path malicious actors might take to attack their organization’s critical data.

The lifecycle of a eureka moment in cybersecurity

It takes more than a single eureka moment to attract investor backing, especially in a notoriously high-stakes and competitive industry like cybersecurity.

While every seed-stage investor has their respective strategies for vetting raw ideas, my experience of the investment due diligence process involves a veritable wringer of rapid-fire, back-to-back meetings with cybersecurity specialists and potential customers, as well as rigorous market scoping by analysts and researchers.

As the CTO of a seed-stage venture capital firm entirely dedicated to cybersecurity, I spend a good portion of my time ideating alongside early-stage entrepreneurs and working through this process with them. To do this well, I’ve had to develop an internal seismometer for industry pain points and potential competitors, play matchmaker between tech geniuses and industry decision-makers, and peer down complex roadmaps to find the optimal point of convergence for good tech and good business.

Along the way, I’ve gained a unique perspective on the set of necessary qualities for a good idea to turn into a successful startup with significant market traction.

Just as a good idea doesn’t necessarily translate into a great product, the qualities of a great product don’t add up to a magic formula for guaranteed success. However, how well an idea performs in the categories I set out below can directly impact the confidence of investors and potential customers you’re pitching to. Therefore, it’s vital that entrepreneurs ask themselves the following before a pitch:

Do I have a strong core value proposition?

The cybersecurity industry is saturated with features passing themselves off as platforms. While the accumulated value of a solution’s features may be high, its core value must resonate with customers above all else. More pitches than I wish to count have left me scratching my head over a proposed solution’s ultimate purpose. Product pitches must lead with and focus on the solution’s core value proposition, and this proposition must be able to hold its own and sell itself.

Consider a browser security plugin with extensive features that include XSS mitigation, malicious website blocking, employee activity logging and download inspections. This product proposition may be built on many nice-to-have features, but, without a strong core feature, it doesn’t add up to a strong product that customers will be willing to buy. Add-on features, should they need to be discussed, ought to be mentioned as secondary or additional points of value.

What is my solution’s path to scalability?

Solutions must be scalable in order to reach as many customers as possible and avoid price hikes with reduced margins. Moreover, it’s critical to factor in the maintenance cost and “tech debt” of solutions that are environment-dependent on account of integrations with other tools or difficult deployments.

I’ve come across many pitches that fail to do this, and entrepreneurs who forget that such an omission can both limit their customer pool and eventually incur tremendous costs for integrations that are destined to lose value over time.

What is my product experience like for customers?

A solution’s viability and success lie in so much more than its outcome. Both investors and customers require complete transparency over the ease of use of a product in order for it to move forward in the pipeline. Frictionless and resource-light deployments are absolutely key and should always mind the realities of inter-departmental politics. Remember, the requirement of additional hires for a company to use your product is a hidden cost that will ultimately reduce your margins.

Moreover, it can be very difficult for companies to rope in the necessary stakeholders across their organization to help your solution succeed. Finally, requiring hard-to-come-by resources for a POC, such as sensitive data, may set up your solution for failure if customers are reluctant to relinquish the necessary assets.

What is my solution’s time-to-value?

Successfully discussing a core value must eventually give way to achieving it. Satisfaction with a solution will always ultimately boil down to deliverables. From the moment your idea raises funds, your solution will be running against the clock to provide its promised value, successfully interact with the market and adapt itself where necessary.

The ability to demonstrate strong initial performance will draw in sought-after design partners and allow you to begin selling earlier. Not only are these sales necessary bolsters to your follow-on rounds, they also pave the way for future upsells to customers.

It’s critical, where POCs are involved, that the beta content installed by early customers delivers well in order to drive conversions and complete the sales process. Create a roadmap for achieving this type of deliverability that can be clearly articulated to your stakeholders.

When will my solution deliver value?

It’s all too common for entrepreneurs to focus on “the ultimate solution”. This usually amounts to what they hope their solution will achieve some three years into development while neglecting the market value it can provide along the way. While investors are keen to embrace the big picture, this kind of entrepreneurial tunnel vision hurts product sales and future fundraising.

Early-stage startups must build their way up to solving big problems and reconcile with the fact that they are typically only equipped to resolve small ones until they reach maturity. This must be communicated transparently to avoid creating a false image of success in your market validation. Avoid asking “do you need a product that solves your [high-level problem]?” and ask instead “would you pay for a product that solves this key element of your [high-level problem]?”.

Unless an idea breaks completely new ground or looks to secure new tech, it’s likely to be an improvement to an already existing solution. In order to succeed at this, however, it’s critical to understand the failures and drawbacks of existing solutions before embarking on building your own.

Cybersecurity buyers are often open to switching over to a product that works as well as one they already use without its disadvantages. However, it’s incumbent on vendors to avoid making false promises and follow through on improving their output.

The cybersecurity industry is full of entrepreneurial genius poised to disrupt the current market. However, that potential can only manifest in products designed to address much more than mere security gaps.

The lifecycle of a good cybersecurity idea may start with tech, but it requires a powerful infusion of foresight and listening to make it through investor and customer pipelines. This requires an extraordinary amount of research in some very unexpected places, and one of the biggest obstacles ideating entrepreneurs face is determining precisely what questions to ask and gaining access to those they need to understand.

Working with well-connected investors who are dedicated to fostering those relationships and ironing out roadmap kinks in the ideation process is one of the surest ways to secure success. We must focus on building good ideas sustainably and remember that immediate partial value delivery is a small compromise towards building out the next great cybersecurity disruptor.

Hardware security: Emerging attacks and protection mechanisms

Maggie Jauregui’s introduction to hardware security is a fun story: she figured out how to spark, smoke, and permanently disable GFCI (Ground Fault Circuit Interrupter – the two button protections on plugs/sockets that prevent you from electrocuting yourself by accident with your hair dryer) wirelessly with a walkie talkie.

“I could also do this across walls with a directional antenna, and this also worked on AFCIs (Arc Fault Circuit Interrupters – part of the circuit breaker box in your garage), which meant you could drive by someone’s home and potentially turn off their lights,” she told Help Net Security.

This first foray into hardware security resulted in her first technical presentation ever at DEF CON and a follow up presentation at CanSecWest about the effects of radio waves on modern platforms.

Jauregui says she’s always been interested in hardware. She started out as an electrical engineering major but switched to computer science halfway through university, and ultimately applied to be an Intel intern in Mexico.

“After attending my first hackathon — where I actually met my husband — I’ve continued to explore my love for all things hardware, firmware, and security to this day, and have been a part of various research teams at Intel ever since,” she added. (She’s currently a member of the corporation’s Platform Armoring and Resilience team.)

What do we talk about when we talk about hardware security?

Computer systems – a category that these days includes everything from phones and laptops to wireless thermostats and other “smart” home appliances – are a combination of many hardware components (a processor, memory, I/O peripherals, etc.) that together with firmware and software are capable of delivering services and enabling the connected, data-centric world we live in.

Hardware-based security typically refers to the defenses that help protect against vulnerabilities targeting these devices, and its main focus is to make sure that the different hardware components working together are architected, implemented, and configured correctly.

“Hardware can sometimes be considered its own level of security because it often requires physical presence in order to access or modify specific fuses, jumpers, locks, etc.,” Jauregui explained. This is why hardware is also used as a root of trust.

Hardware security challenges

But every hardware device has firmware – a tempting attack vector for many hackers. And though the industry has been making advancements in firmware security solutions, many organizations are still challenged by it and don’t know how to adequately protect their systems and data, she says.

She advises IT security specialists to be aware of firmware’s importance as an asset to their organization’s threat model, to make sure that the firmware on company devices is consistently updated, and to set up automated security validation tools that can scan for configuration anomalies within their platform and evaluate security-sensitive bits within their firmware.
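
One simple form such firmware validation can take is measuring an image and comparing it against a vendor-published allow-list of known-good digests. The sketch below is illustrative only: the image bytes and allow-list are invented, and real tooling would also verify vendor signatures, not just hashes.

```python
import hashlib

def fw_digest(image: bytes) -> str:
    """Measure a firmware image (SHA-256 over the full binary)."""
    return hashlib.sha256(image).hexdigest()

# Hypothetical vendor-published allow-list of known-good measurements.
golden_image = b"\x7fFW-IMG v2.4.1 example payload"
KNOWN_GOOD = {fw_digest(golden_image)}

def validate(image: bytes) -> bool:
    """Reject any image whose measurement is not on the allow-list."""
    return fw_digest(image) in KNOWN_GOOD

assert validate(golden_image)
assert not validate(golden_image + b"\x00")   # a single changed byte fails
```

Running a check like this on a schedule is one way to catch configuration drift or tampered firmware before it becomes an incident.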

“Additionally, Confidential Computing has emerged as a key strategy for helping to secure data in use,” she noted. “It uses hardware memory protections to better isolate sensitive data payloads. This represents a fundamental shift in how computation is done at the hardware level and will change how vendors can structure their application programs.”

Finally, the COVID-19 pandemic has somewhat disrupted the hardware supply chain and has brought to the fore another challenge.

“Because a computing system is typically composed of multiple components from different manufacturers, each with its own level of scrutiny in relation to potential supply chain attacks, it’s challenging to verify the integrity across all stages of its lifecycle,” Jauregui explained.

“This is why it is critical for companies to work together on a validation and attestation solution for hardware and firmware that can be conducted prior to integration into a larger system. If the industry as a whole comes together, we can create more measures to help protect a product through its entire lifecycle.”

Achieving security in low-end systems on chips

The proliferation of Internet of Things devices and embedded systems and our reliance on them should make the security of these systems extremely important.

As they commonly rely on systems on chips (SoCs) – integrated circuits that consolidate the components of a computer or other electronic system on a single microchip – securing these devices is a different proposition than securing “classic” computer systems, especially if they rely on low-end SoCs.

Jauregui says that there is no single blanket solution approach to implement security of embedded systems, and that while some of the general hardware security recommendations apply, many do not.

“I highly recommend readers check out the book Demystifying Internet of Things Security written by Intel scientists and Principal Engineers. It’s an in-depth look at the threat model, secure boot, chain of trust, and the SW stack leading up to defense-in-depth for embedded systems. It also examines the different security building blocks available in Intel Architecture (IA) based IoT platforms and breaks down some of the misconceptions of the Internet of Things,” she added.

“This book explores the challenges to secure these devices and provides suggestions to make them more immune to different threats originating from within and outside the network.”

For those security professionals who are interested in specializing in hardware security, she advises being curious about how things work and doing research, following folks doing interesting things on Twitter and asking them things, and watching hardware security conference talks and trying to reproduce the issues.

“Learn by doing. And if you want someone to lead you through it, go take a class! I recommend hardware security classes by Joe FitzPatrick and Joe Grand, as they are brilliant hardware researchers and excellent teachers,” she concluded.

Cybersecurity lessons learned from data breaches and brand trust matters

Your brand is a valuable asset, but it’s also a great attack vector. Threat actors exploit the public’s trust of your brand when they phish under your name or when they counterfeit your products. The problem gets harder because you engage with the world across so many digital platforms – the web, social media, mobile apps. These engagements are obviously crucial to your business.

Something else should be obvious as well: guarding your digital trust – public confidence in your digital security – is make-or-break for your business, not just part of your compliance checklist.

COVID-19 has put a renewed spotlight on the importance of defending against cyberattacks and data breaches as more users are accessing data from remote or non-traditional locations. Crisis fuels cybercrime and we have seen that hacking has increased substantially as digital transformation initiatives have accelerated and many employees have been working from home without adequate firewalls and back-up protection.

The impact of cybersecurity breaches is no longer constrained to the IT department. The frequency and sophistication of ransomware, phishing schemes, and data breaches have the potential to destroy both brand health and financial viability. Organizations across industry verticals have seen their systems breached as cyber thieves have tried to take advantage of a crisis.

Good governance will be essential for handling the management of cyber issues. Strong cybersecurity will also be important to show customers that steps are being taken to avoid hackers and keep their data safe.

The COVID crisis has not changed the cybersecurity fundamentals. What will the new normal be like? While the COVID pandemic has turned business and society upside down, well-established cybersecurity practices – some known for decades – remain the best way to protect yourself.

1. Data must be governed

Data governance is the capability within an organization to provide for and protect high-quality data throughout the lifecycle of that data. This includes data integrity, data security, availability, and consistency. Data governance includes the people, processes, and technology that help enable appropriate handling of data across the organization. Data governance program policies include:

  • Delineating accountability for those responsible for data and data assets
  • Assigning responsibility to appropriate levels in the organization for managing and protecting the data
  • Determining who can take what actions, with what data, under what circumstances, using what methods
  • Identifying safeguards to protect data
  • Establishing integrity controls to ensure the quality and accuracy of data

2. Patch management and vulnerability management: Two sides of a coin

Address threats with vulnerability management. Bad actors look to take advantage of discovered vulnerabilities in an attempt to infect a workstation or server. Managing threats is a reactive process where the threat must be actively present, whereas vulnerability management is proactive, seeking to close the security gaps that exist before they are taken advantage of.

It’s more than just patching vulnerabilities. Formal vulnerability management doesn’t simply involve the act of patching and reconfiguring insecure settings. It is a disciplined practice that requires an organizational mindset within IT that recognizes new vulnerabilities are found daily, requiring continual discovery and remediation.
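One way to make that continual-remediation mindset concrete is to keep a living queue of findings ranked by risk. The sketch below is illustrative only: the scoring weights, field names, and sample records are assumptions, not a standard, and real data would come from a scanner export.

```python
# Hypothetical scanner findings; fields and weights are illustrative assumptions.
findings = [
    {"cve": "CVE-2020-0001", "cvss": 9.8, "internet_facing": True,  "days_open": 3},
    {"cve": "CVE-2019-1111", "cvss": 5.4, "internet_facing": False, "days_open": 90},
    {"cve": "CVE-2020-2222", "cvss": 7.5, "internet_facing": True,  "days_open": 30},
]

def priority(f) -> float:
    """Simple risk score: base severity, boosted for exposure and age."""
    score = f["cvss"]
    if f["internet_facing"]:
        score += 2.0                       # exposed systems get remediated first
    score += min(f["days_open"] / 30, 3.0)  # staleness penalty, capped
    return score

# Re-sorting this daily as new findings arrive keeps remediation continuous.
remediation_queue = sorted(findings, key=priority, reverse=True)
```

The exact weights matter less than the habit: rebuilding the queue whenever discovery runs, rather than treating patching as a one-off project.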

3. Not “if” but “when”: Assume you’re already hacked

If you build your operations and defense with this premise in mind, your chances of detecting these types of attacks and preventing breaches are much greater than those of most organizations today.

The importance of incident response steps

A data breach should be viewed as a “when” not “if” occurrence, so be prepared for it. Under the pressure of a critical-level incident is no time to be figuring out your game plan. Your future self will thank you for the time and effort you invest on the front end.

Incident response is stressful, especially when a critical asset is involved and you realize there’s an actual threat. Incident response steps help in these stressful, high-pressure situations to more quickly guide you to successful containment and recovery. Response time is critical to minimizing damage. With every second counting, having a plan to follow already in place is the key to success.

4. Your size does not mean security maturity

It does not matter how big you are or the resources your team can access. As defenders, we always think, “If I only had enough money or people, I could solve this problem.” We need to change our thinking. It’s not how much you spend but rather, is that spend an effective use? Does it allow your team to disrupt attacks or just wait to be alerted (maybe)? No matter where an organization is on its journey toward security maturity, a risk assessment can prove invaluable in deciding where and when it needs most improvement.


For more mature organizations, the risk assessment process will focus less on discovering major controls gaps and more on finding subtler opportunities for continuously improving the program. An assessment of a less mature program is likely to find misalignments with business goals, inefficiencies in processes or architecture, and places where protections could be taken to another level of effectiveness.

5. Do more with less

Limited budgets, limited staff, limited time. Any security professional will have dealt with all of these repeatedly while trying to launch new initiatives or when completing day-to-day tasks. They are possibly the most severe and dangerous adversaries that many cybersecurity professionals will face. They affect every organization regardless of industry, size, or location and pose an existential threat to even the most prepared company. There is no easy way to contain them either, since no company has unlimited funding or time, and the lack of cybersecurity professionals makes filling roles incredibly tricky.

How can organizations cope with these natural limitations? The answer is resource prioritization, along with a healthy dose of operational improvements. By identifying areas where processes can be streamlined and understanding what the most significant risks are, organizations can begin to help protect their systems while staying within their constraints.

6. Rome wasn’t built in a day

An edict out of the IT department won’t get the job done. Building a security culture takes time and effort. What’s more, cybersecurity awareness training ought to be a regular occurrence — once a quarter at a minimum — where it’s an ongoing conversation with employees. One-and-done won’t suffice.

People have short memories, so repetition is altogether appropriate when it comes to a topic that’s so strategic to the organization. This also needs to be part of a broader top-down effort starting with senior management. Awareness training should be incorporated across all organizations, not just limited to governance, threat detection, and incident response plans. The campaign should involve more than serving up a dry set of rules, separate from the broader business reality.

Measuring impact beyond a single incident

Determining the true impact of a cyber attack has always been, and will likely remain, one of the most challenging aspects of this technological age.


In an environment where very limited transparency on the root cause and the true impact is afforded, we are left with isolated examples to point to the direct cost of a security incident. For example, the 2010 attack on the Natanz nuclear facilities was, and in certain cases still is, used as the reference case study for why cybersecurity is imperative within an ICS environment (quite possibly now substituted with BlackEnergy).

For ransomware, the reference case has been the impact WannaCry had on healthcare, and it will likely be replaced by the awful story of a patient who sadly lost their life because of a ransomware attack.

What these cases clearly provide is a degree of insight into their impact. Albeit limited in certain scenarios, this approach sadly all but excludes the multitude of attacks that successfully occurred before them, whose impact was either unavailable or did not make the headline story.

While it can of course be argued that such case studies are a useful vehicle to influence change, there is equally the risk that they are such outliers that decision makers do not recognise their own vulnerabilities within the broader problem statement.

If we truly want to influence change, then a wider body of work is required to capture the broader economic and societal impact of the multitude of incidents. Whilst this is likely to be hugely subjective, it is imperative if we are to understand the true impact of cybersecurity incidents. I recall a conversation a friend of mine had with someone who claimed they “are not concerned with malware because all it does is slow down their computer”. This of course is the wider challenge: to articulate the impact in a manner which will resonate.

Ask anybody the impact of car theft and this will be understood, ask the same question about any number of digital incidents and the reply will likely be less clear.

It can be argued that studies which measure the macro cost of such incidents do indeed exist, but the problem statement of billions lost is so enormous that we each are unable to relate to this. A small business owner hearing about how another small business had their records locked with ransomware, and the impact to their business is likely to be more influential than an economic model explaining the financial cost of cybercrime (which is still imperative to policy makers for example).

If such case studies are so imperative and there exists a stigma with being open about such breaches what can be done? This of course is the largest challenge, with potential litigation governing every communication. To be entirely honest as I sit here and try and conclude with concrete proposals I am somewhat at a loss as to how to change the status quo.

The question is more an open one, what can be done? Can we leave fault at the door when we comment on security incidents? Perhaps encourage those that are victims to be more open? Of course this is only a start, and an area that deserves a wider discussion.

Using virtualization to isolate risky applications and other endpoint threats

More and more security professionals are realizing that it’s impossible to fully secure a Windows machine – with all its legacy components and millions of potentially vulnerable lines of code – from within the OS. With attacks becoming more sophisticated than ever, hypervisor-based security, from below the OS, becomes a necessity.

Unlike modern OS kernels, hypervisors are designed for a very specific task. Their code is usually very small, well-reviewed and tested, making them very hard to exploit. Because of that, the world trusts modern hypervisors to run servers, containers, and other workloads in the cloud, which sometimes run side-by-side on the same physical server with complete separation and isolation. For the same reason, companies are leveraging this trusted technology to bring hardware-enforced isolation to the endpoint.

Microsoft Defender Application Guard

Microsoft Defender Application Guard (previously known as Windows Defender Application Guard, or just WDAG), brings hypervisor-based isolation to Microsoft Edge and Microsoft Office applications.

It allows administrators to apply policies that force untrusted web sites and documents to be opened in isolated Hyper-V containers, completely separating potential malware from the host OS. Malware running in such containers won’t be able to access and exfiltrate sensitive files such as corporate documents or the users’ corporate credentials, cookies, or tokens.

With Application Guard for Edge, when a user opens a web site that has not been added to the allow-list, they are automatically redirected to a new isolated instance of Edge, continuing the session there. This isolated instance of Edge provides another, much stronger sandboxing layer to cope with web threats. If allowed by the administrator, files downloaded during that session can be accessed later from the host OS.
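At its core, the redirect decision is an allow-list check on the destination host. The sketch below illustrates the idea only; the domain names are hypothetical and real Application Guard trust boundaries are configured through network isolation policy (GPO/MDM), not application code.

```python
from urllib.parse import urlparse

# Hypothetical corporate allow-list; in practice this comes from policy.
TRUSTED_SITES = {"intranet.example.com", "portal.example.com"}

def open_in_container(url: str) -> bool:
    """Return True if the URL should open in the isolated container.

    Any host not explicitly trusted is treated as untrusted, which is
    the safe default: unknown sites get the stronger sandbox.
    """
    host = urlparse(url).hostname or ""
    return host not in TRUSTED_SITES
```

The deny-by-default direction of the check is the important design choice: forgetting to list a trusted site costs convenience, while the reverse mistake would cost security.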


With Application Guard for Office, when a user opens an unknown document, maybe downloaded from the internet or opened as an email attachment, the document is automatically opened in an isolated instance of Office.

Until now, such documents would be opened in “protected view”, a special mode that eliminates the threat from scripts and macros by disabling embedded code execution. Unfortunately, this mode sometimes breaks legitimate files, such as spreadsheets that contain harmless macros. It also prevents users from editing documents.

Many users blindly disable the “protected view” mode to enable editing, thereby allowing malware to execute on the device. With Application Guard for Office, users compromise neither security (the malware is trapped inside the isolated container) nor productivity (the document is fully functional and editable inside the container).

In both cases, the container is spawned instantly, with minimal CPU, memory, and disk footprints. Unlike traditional virtual machines, IT administrators don’t need to manage the underlying OS inside the container. Instead, it’s built out of existing Windows system binaries that remain patched as long as the host OS is up to date. Microsoft has also introduced new virtual GPU capabilities, allowing software running inside the container to be hardware-GPU accelerated. With all these optimizations, Edge and Office running inside the container feel fast and responsive, almost as if they were running without an additional virtualization layer.

The missing compatibility

While Application Guard works well with Edge and Office, it doesn’t support other applications. Edge will always be the browser running inside the container. That means, for example, no Google accounts synchronization, something that many users probably want.

What about downloaded applications? Applications are not allowed to run inside the container. (The container hardening contains some WDAC policies that allow only specific apps to execute.) That means that users can execute those potentially malicious applications on the host OS only.

Administrators who don’t allow unknown apps on the host OS might reduce users’ productivity and increase frustration. This is probably more prominent today, with so many people working from home and using a new wave of modern collaboration tools and video conferencing applications.

Users who are invited to external meetings sometimes need to download and run a client that may be blocked by the organization on the host OS. Unfortunately, it’s not possible to run the client inside the container either, and the users need to look for other solutions.

And what about non-Office documents? Though Office documents are protected, non-Office documents aren’t. Users sometimes use various other applications to create and edit documents, such as Adobe Acrobat and Photoshop, Autodesk AutoCAD, and many others. Application Guard won’t help to protect the host OS from such documents that are received over email or downloaded from the internet.

Even with Office alone, there might be problems. Many organizations use Office add-ons to customize and streamline the end-user experience. These add-ons may integrate with other local or online applications to provide additional functionality. As Application Guard runs a vanilla Office without any customizations, these add-ons won’t be able to run inside the container.

The missing manageability

Configuring Application Guard is not easy. First, while Application Guard for Edge technically works on both Windows Pro and Windows Enterprise, only on Windows Enterprise is it possible to configure it to kick in automatically for untrusted websites. For non-technical users, that makes Application Guard almost useless in the eyes of their IT administrators, as those users have to launch it manually every time they consider a website to be untrusted. That’s a lot of room for human error. Even if all the devices are running Windows Enterprise, it’s not a walk in the park for administrators.

For the networking isolation configuration, administrators have to provide a manual list of comma-separated IPs and domain names. It’s not possible to integrate with your already fully configured web-proxy. It’s also not possible to integrate with category-based filtering systems that you might also have. Aside from the additional system to manage, there is no convenient UI or advanced capabilities (such as automatic filtering based on categories) to use. To make it work with Chrome or Firefox, administrators also need to perform additional configurations, such as delivering browser extensions.

This is not a turnkey solution for administrators, and it requires messing with multiple configurations and GPOs until it works.

In addition, other management capabilities are very limited. For example, while admins can define whether clipboard operations (copy+paste) are allowed between the host and the container, it’s not possible to allow these operations in one direction only. It’s also not possible to allow certain content types, such as text and images, while blocking others, such as binary files.

OS customizations and additional software bundles such as Edge extensions and Office add-ins are not available either.

While Office files are opened automatically in Application Guard, other file types aren’t. Administrators that would like to use Edge as a secure and isolated PDF viewer, for example, can’t configure that.

The missing security

As stated before, Application Guard doesn’t protect against malicious files that the user mistakenly categorizes as safe. A user might securely download a malicious file in their isolated Edge but then choose to execute it on the host OS. They might also mistakenly categorize an untrusted document as a corporate one, so that it opens on the host OS. Malware could easily infect the host due to such user errors.

Another potential threat comes from the networking side. While malware getting into the container is isolated in some aspects such as memory (it can’t inject itself into processes running on the host) and filesystem (it can’t replace files on the host with infected copies), it’s not fully isolated on the networking side.

Application Guard containers leverage the Windows Internet Connection Sharing (ICS) feature, to fully share networking with the host. That means that malware running inside the container might be able to attack some sensitive corporate resources that are accessible by the host (e.g., databases and data centers) by exploiting network vulnerabilities.

While Application Guard tries to isolate web and document threats, it doesn’t provide isolation in other areas. As mentioned before, Application Guard can’t isolate non-Microsoft applications that the organization chooses to use but not trust. Video conferencing applications, for example, have been exploited in the past and usually don’t require access to corporate data – it’s much safer to execute these in an isolated container.

External device handling is another risky area. Think of CVE-2016-0133, which allowed attackers to execute malicious code in the Windows kernel simply by plugging a USB thumb drive into the victim’s laptop. Isolating unknown USB devices can stop such attacks.

The missing holistic solution

Wouldn’t it be great if users could easily open any risky document in an isolated environment, e.g., through a context menu? Or if administrators could configure any risky website, document, or application to be automatically transferred and opened in an isolated environment? And maybe also to have corporate websites to be automatically opened back on the host OS, to avoid mixing sensitive information and corporate credentials with non-corporate work?

How about automatically attaching risky USB devices to the container, e.g., personal thumb drives, to reduce chances of infecting the host OS? And what if all that could be easy for administrators to deploy and manage, as a turn-key solution in the cloud?

Credential stuffing is just the tip of the iceberg

Credential stuffing attacks are taking up a lot of the oxygen in cybersecurity rooms these days. A steady blitz of large-scale cybersecurity breaches in recent years has flooded the dark web with passwords and other credentials that are used in subsequent attacks such as those on Reddit and State Farm, as well as in widespread efforts to exploit the remote work and online get-togethers resulting from the COVID-19 pandemic.


But while enterprises are rightly worried about weathering a hurricane of credential-stuffing attacks, they also need to be concerned about more subtle, but equally dangerous, threats to APIs that can slip in under the radar.

Attacks that exploit APIs, beyond credential stuffing, can start small with targeted probing of unique API logic, and lead to exploits such as the theft of personal information, wholesale data exfiltration or full account takeovers.

Unlike automated flood-the-zone, volume-based credential attacks, other API attacks are conducted almost one-to-one and carried out in elusive ways, targeting the distinct vulnerabilities of each API, making them even harder to detect than attacks happening on a large scale. Yet, they’re capable of causing as much, if not more, damage. And they’re becoming more and more prevalent, with APIs being the foundation of modern applications.

Beyond credential stuffing

Credential stuffing attacks are a key concern for good reason. High profile breaches—such as those of Equifax and LinkedIn, to name two of many—have resulted in billions of compromised credentials floating around on the dark web, feeding an underground industry of malicious activity. For several years now, about 80% of breaches that have resulted from hacking have involved stolen and/or weak passwords, according to Verizon’s annual Data Breach Investigations Report.

Additionally, research by Akamai determined that three-quarters of credential abuse attacks against the financial services industry in 2019 were aimed at APIs. Many of those attacks are conducted on a large scale to overwhelm organizations with millions of automated login attempts.

Credential stuffing is only one of many threats to APIs, as defined in the 2019 OWASP API Security Top 10. Most of the others are not automated, are much more subtle, and come from authenticated users.

APIs, which are essential to an increasing number of applications, are specialized entities performing particular functions for specific organizations. Someone exploiting a vulnerability in an API used by a bank, retailer or other institution could, with a couple of subtle calls, dump the database, drain an account, cause an outage or do all kinds of other damage to impact revenue and brand reputation.

An attacker doesn’t even have to necessarily sneak in. For instance, they could sign on to Disney+ as a legitimate user and then poke around the API looking for opportunities to exploit. In one example of a front-door approach, a researcher came across an API vulnerability on the Steam developer site that would allow the theft of game license keys. (Luckily for the company, he reported it—and was rewarded with $20,000.)

Most API attacks are very difficult to detect and defend against since they’re carried out in such a clandestine manner. Because APIs are mostly unique, their vulnerabilities don’t conform to any pattern or signature that would allow common security controls to be enforced at scale. And the damage can be considerable, even coming from a single source. For example, an attacker exploiting a weakness in an API could launch a successful DoS attack with a single request.

API DoS

Rather than the more common DDoS attack, which floods a target with requests from many sources via a botnet, an API DoS can happen when the attacker manipulates the logic of the API, causing the application to overwork itself. If an API is designed to return, say, 10 items per request, an attacker could change that value to 10 million, using up all of an application’s resources and crashing it—with a single request.
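The server-side defense against this particular abuse is simple: never honor a client-supplied page size as-is. A minimal sketch of such a guard follows; the cap and default values are illustrative assumptions, not recommendations from any standard.

```python
MAX_PAGE_SIZE = 100  # assumed server-side cap; tune per endpoint
DEFAULT_PAGE_SIZE = 10

def clamp_page_size(requested) -> int:
    """Sanitize a client-supplied page-size parameter.

    A request asking for 10 million items gets capped, and garbage
    input falls back to a safe default instead of raising.
    """
    try:
        n = int(requested)
    except (TypeError, ValueError):
        return DEFAULT_PAGE_SIZE
    return max(1, min(n, MAX_PAGE_SIZE))
```

Applied at the API boundary, this turns the single-request DoS described above into an ordinary, bounded query.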

Credential stuffing attacks present security challenges of their own. With easy access to evasion tools—and with their own sophistication improving dramatically – it’s not difficult for attackers to disguise their activity behind a mesh of thousands of IP addresses and devices. But credential stuffing nevertheless is an established problem with established solutions.

How enterprises can improve

Enterprises can scale infrastructure to mitigate credential stuffing attacks or buy a solution capable of identifying and stopping the attacks. The trick is to evaluate large volumes of activity and block malicious login attempts without impacting legitimate users, and to do it quickly, identifying successful malicious logins and alerting users in time to protect them from fraud.
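The automated portion of the problem reduces to per-source throttling of login attempts. The toy sliding-window limiter below illustrates the mechanism only; production solutions also weigh device fingerprints, IP reputation, and behavioral signals, and the thresholds here are arbitrary assumptions.

```python
from collections import defaultdict, deque
import time

class LoginRateLimiter:
    """Per-source sliding-window limit on login attempts (sketch)."""

    def __init__(self, max_attempts: int = 5, window_seconds: float = 60.0):
        self.max_attempts = max_attempts
        self.window = window_seconds
        self._attempts = defaultdict(deque)  # source -> timestamps

    def allow(self, source: str, now: float = None) -> bool:
        """Record an attempt and return False once the window is full."""
        now = time.monotonic() if now is None else now
        q = self._attempts[source]
        while q and now - q[0] > self.window:  # drop expired attempts
            q.popleft()
        if len(q) >= self.max_attempts:
            return False
        q.append(now)
        return True
```

The hard part, as noted above, is doing this without punishing legitimate users, which is why real products layer risk signals on top of simple counting.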

Enterprises can improve API security first and foremost by identifying all of their APIs, including their data exposure and usage, and even APIs they didn’t know existed. When APIs fly under security operators’ radar, otherwise secure infrastructure has a hole in the fence. Once full visibility is attained, enterprises can more tightly control API access and use, and thus enable better security.

NIST guide to help orgs recover from ransomware, other data integrity attacks

The National Institute of Standards and Technology (NIST) has published a cybersecurity practice guide enterprises can use to recover from data integrity attacks, i.e., destructive malware and ransomware attacks, malicious insider activity or simply mistakes by employees that have resulted in the modification or destruction of company data (emails, employee records, financial records, and customer data).


About the guide

Ransomware is currently one of the most disruptive scourges affecting enterprises. While it would be ideal to detect the early warning signs of a ransomware attack to minimize its effects or prevent it altogether, there are still too many successful incursions that organizations must recover from.

Special Publication (SP) 1800-11, Data Integrity: Recovering from Ransomware and Other Destructive Events can help organizations to develop a strategy for recovering from an attack affecting data integrity (and to be able to trust that any recovered data is accurate, complete, and free of malware), recover from such an event while maintaining operations, and manage enterprise risk.

The goal is to monitor and detect data corruption in widely used as well as custom applications, and to identify what data was altered or corrupted, when, by whom, what the impact of the action was, and whether other events happened at the same time. Finally, organizations are advised on how to restore data to its last known good configuration and how to identify the correct backup version.

“Multiple systems need to work together to prevent, detect, notify, and recover from events that corrupt data. This project explores methods to effectively recover operating systems, databases, user files, applications, and software/system configurations. It also explores issues of auditing and reporting (user activity monitoring, file system monitoring, database monitoring, and rapid recovery solutions) to support recovery and investigations,” the authors added.
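One building block of that kind of detection is comparing files against a known-good baseline of cryptographic hashes. The sketch below illustrates the principle only; it is not the NCCoE reference implementation, which composes commercial and open-source integrity-monitoring products.

```python
import hashlib
from pathlib import Path

def hash_file(path: Path) -> str:
    """SHA-256 of a file, read in chunks so large files don't exhaust memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def find_altered(baseline: dict, root: Path) -> list:
    """Return files whose current hash differs from the known-good baseline.

    Missing files count as altered too: deletion is also an integrity event.
    """
    altered = []
    for rel, known in baseline.items():
        p = root / rel
        if not p.exists() or hash_file(p) != known:
            altered.append(rel)
    return sorted(altered)
```

A baseline captured after a trusted build, stored separately from the monitored systems, tells you both what to investigate and which backup version predates the corruption.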

The National Cybersecurity Center of Excellence (NCCoE) at NIST used specific commercially available and open-source components when creating a solution to address this cybersecurity challenge, but noted that each organization’s IT security experts should choose products that will best work for them by taking into consideration how they will integrate with the IT system infrastructure and tools already in use.


The NCCoE tested the setup against several test cases (a ransomware attack, a malware attack, a user modifying a configuration file, an administrator modifying a user’s file, and a database or database schema altered in error by an administrator or script). Additional materials can be found here.

Your best defense against ransomware: Find the early warning signs

As ransomware continues to prove how devastating it can be, one of the scariest things for security pros is how quickly it can paralyze an organization. Just look at Honda, which was forced to shut down all global operations in June, and Garmin, which had its services knocked offline for days in July.

Ransomware isn’t hard to detect, but identifying it when encryption and exfiltration are already rampant is too little, too late. However, there are several warning signs that organizations can catch before the real damage is done. In fact, FireEye found that there are usually three days of dwell time between these early warning signs and the detonation of ransomware.

So, how does a security team find these weak but important early warning signals? Somewhat surprisingly perhaps, the network provides a unique vantage point to spot the pre-encryption activity of ransomware actors such as those behind Maze.

Here’s a guide, broken down by MITRE category, of the many different warning signs organizations being attacked by Maze ransomware can see and act upon before it’s too late.

Initial access

With Maze actors, there are several initial access vectors, such as phishing attachments and links, external-facing remote access such as Microsoft’s Remote Desktop Protocol (RDP), and access via valid accounts. All of these can be discovered while network threat hunting across traffic. Furthermore, given this represents the actor’s earliest foray into the environment, detecting this initial access is the organization’s best bet to significantly mitigate impact.

ATT&CK techniques

Hunt for…

T1193 Spear-phishing attachment
T1192 Spear-phishing link

  • Previously unseen or newly registered domains, unique registrars
  • Doppelgängers of your organization’s or partners’ domains, or of Alexa top 500 domains
T1133 External Remote Services
  • Inbound RDP from external devices
T1078 Valid accounts
  • Exposed passwords across SMB, FTP, HTTP, and other clear text usage
T1190 Exploit public-facing application
  • Exposure and exploitation of known vulnerabilities

Execution

The execution phase is still early enough in an attack to shut it down and foil any attempts to detonate ransomware. Common early warning signs to watch for in execution include users being tricked into clicking a phishing link or attachment, or when certain tools such as PsExec have been used in the environment.

ATT&CK techniques

Hunt for…

T1204 User execution

  • Suspicious email behaviors from users and associated downloads
T1035 Service execution
  • File IO over SMB using PsExec, extracting contents on one system and then later on another system
T1028 Windows remote management
  • Remote management connections excluding known good devices
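A toy version of the PsExec hunt might look like the following; the record fields (filename, share, dst_host) and the ADMIN$/C$ heuristic are illustrative assumptions:

```python
def hunt_psexec(smb_writes):
    """Flag SMB file writes that resemble PsExec-style service execution."""
    hits = []
    for w in smb_writes:
        name = w["filename"].lower()
        # PsExec drops PSEXESVC.exe; executables landing on admin shares
        # are another classic signal
        to_admin_share = w["share"].upper() in ("ADMIN$", "C$")
        if "psexesvc" in name or (name.endswith(".exe") and to_admin_share):
            hits.append(w)
    return hits

# Illustrative SMB write records
writes = [
    {"filename": "PSEXESVC.exe", "share": "ADMIN$", "dst_host": "srv1"},
    {"filename": "report.docx",  "share": "shared", "dst_host": "srv2"},
    {"filename": "tool.exe",     "share": "C$",     "dst_host": "srv3"},
]
suspicious = hunt_psexec(writes)
```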

Persistence

Adversaries using Maze rely on several common techniques, such as a web shell on internet-facing systems and the use of valid accounts obtained within the environment. Once the adversary has secured a foothold, it starts to become increasingly difficult to mitigate impact.

ATT&CK techniques

Hunt for…

T1100 Web shell

  • Unique activity connections (e.g. atypical ports and user agents) from external connections
T1078 Valid accounts
  • Remote copy of KeePass file stores across SMB or HTTP

Privilege escalation

As an adversary gains higher levels of access it becomes significantly more difficult to pick up additional signs of activity in the environment. For the actors of Maze, the techniques used for persistence are similar to those for privileged activity.

ATT&CK techniques

Hunt for…

T1100 Web shell

  • Web shells on external facing web and gateway systems
T1078 Valid accounts
  • Remote copy of password files across SMB (e.g. files with “passw”)
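A simple filename hunt over SMB transfer records can surface both the KeePass and “passw” signals. The patterns and record fields below are illustrative assumptions:

```python
# Substrings that suggest a password store is being moved; tune per environment
PATTERNS = ("passw", "keepass", ".kdbx", "credentials")

def find_password_file_copies(smb_transfers):
    """Return transfers whose filenames suggest password or credential stores."""
    return [
        t for t in smb_transfers
        if any(p in t["filename"].lower() for p in PATTERNS)
    ]

# Illustrative transfer records
transfers = [
    {"filename": "Passwords2020.xlsx", "src": "10.0.0.5", "dst": "10.0.0.9"},
    {"filename": "vault.kdbx",         "src": "10.0.0.5", "dst": "10.0.0.9"},
    {"filename": "minutes.docx",       "src": "10.0.0.7", "dst": "10.0.0.9"},
]
matches = find_password_file_copies(transfers)
```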

Defense evasion

To hide files and their access to different systems, adversaries like the ones who use Maze will rename files, encode, archive, and use other mechanisms to hide their tracks. Attempts to hide their traces are in themselves indicators to hunt for.

ATT&CK techniques

Hunt for…

T1027 Obfuscated files or information

  • Adversary tools by port usage, certificate issuer name, or unknown protocol communications
T1078 Valid accounts
  • New account creation from workstations and other non-admin used devices

Credential access

There are several defensive controls that can be put in place to help limit or restrict access to credentials. Threat hunters can enable this process by providing situational awareness of network hygiene including specific attack tool usage, credential misuse attempts and weak or insecure passwords.

ATT&CK techniques

Hunt for…

T1110 Brute force

  • RDP brute force attempts against known username accounts
T1081 Credentials in files
  • Unencrypted passwords and password files in the environment
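A sketch of the brute-force hunt, assuming authentication events with illustrative src_ip/service/success/ts fields; the window and threshold are placeholders to be tuned per environment:

```python
from collections import defaultdict

WINDOW = 300     # seconds
THRESHOLD = 10   # failures per source within the window

def find_bruteforce(events):
    """Return source IPs with >= THRESHOLD failed RDP logons in any WINDOW."""
    fails = defaultdict(list)  # src_ip -> failure timestamps
    for e in events:
        if e["service"] == "rdp" and not e["success"]:
            fails[e["src_ip"]].append(e["ts"])
    flagged = set()
    for src, times in fails.items():
        times.sort()
        for i in range(len(times)):
            # count failures inside the sliding window starting at times[i]
            j = i
            while j < len(times) and times[j] - times[i] <= WINDOW:
                j += 1
            if j - i >= THRESHOLD:
                flagged.add(src)
                break
    return flagged

# Illustrative events: 12 failures from one source in two minutes
events = [{"src_ip": "203.0.113.9", "service": "rdp", "success": False, "ts": t}
          for t in range(0, 120, 10)]
events.append({"src_ip": "10.0.0.4", "service": "rdp", "success": False, "ts": 5})
flagged = find_bruteforce(events)
```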

Discovery

Maze adversaries use a number of different methods for internal reconnaissance and discovery. For example, enumeration and data collection tools and methods leave their own trail of evidence that can be identified before the exfiltration and encryption occurs.

ATT&CK techniques

Hunt for…

T1201 Password policy discovery

  • Traffic of devices copying the password policy off file shares
  • Enumeration of password policy
T1018 Remote system discovery

T1087 Account discovery

T1016 System network configuration discovery

T1135 Network share discovery

T1083 File and directory discovery

  • Enumeration for computer names, accounts, network connections, network configurations, or files
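Enumeration often shows up as fan-out: one workstation suddenly contacting many distinct internal hosts. A minimal sketch, with an illustrative threshold and connection-record fields:

```python
from collections import defaultdict

FANOUT_THRESHOLD = 50  # distinct internal peers; tune per environment

def find_enumerators(connections):
    """Return sources that contacted an unusually large number of peers."""
    peers = defaultdict(set)
    for c in connections:
        peers[c["src_ip"]].add(c["dst_ip"])
    return {src for src, dsts in peers.items() if len(dsts) >= FANOUT_THRESHOLD}

# Illustrative records: one host sweeping a /24, one host behaving normally
conns = [{"src_ip": "10.0.0.7", "dst_ip": f"10.0.1.{i}"} for i in range(60)]
conns += [{"src_ip": "10.0.0.8", "dst_ip": "10.0.1.1"}] * 5
enumerators = find_enumerators(conns)
```

A real hunt would baseline per-host fan-out over time rather than using a fixed cutoff, since backup servers and vulnerability scanners legitimately touch many hosts.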

Lateral movement

Ransomware actors use lateral movement to understand the environment, spread through the network, and then collect and prepare data for encryption and exfiltration.

ATT&CK techniques

Hunt for…

T1105 Remote file copy

T1077 Windows admin shares

  • Suspicious SMB file write activity
  • PsExec usage to copy attack tools or access other systems
  • Attack tools copied across SMB
T1076 Remote Desktop Protocol

T1028 Windows remote management

T1097 Pass the ticket

  • HTTP POST with the use of WinRM user agent
  • Enumeration of remote management capabilities
  • Non-admin devices with RDP activity

Collection

In this phase, Maze actors use tools and batch scripts to collect information and prepare for exfiltration. It is typical to find .bat files or archives using the .7z or .exe extension at this stage.

ATT&CK techniques

Hunt for…

T1039 Data from network share drive

  • Suspicious or uncommon remote system data collection activity

Command and control (C2)

Many adversaries will use common ports or remote access tools to try and obtain and maintain C2, and Maze actors are no different. In the research my team has done, we’ve also seen the use of ICMP tunnels to connect to the attacker infrastructure.

ATT&CK techniques

Hunt for…

T1043 Commonly used port

T1071 Standard application layer protocol

  • ICMP callouts to IP addresses
  • Non-browser originating HTTP traffic
  • Script-like HTTP requests from unique devices
T1105 Remote file copy
  • Downloads of remote access tools through string searches
T1219 Remote access tools
  • Cobalt Strike BEACON and FTP to directories with “cobalt” in the name
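Non-browser HTTP traffic can be surfaced with a simple User-Agent allowlist. The prefix list and request fields below are illustrative assumptions; a real fleet allowlist would be broader and maintained from inventory data:

```python
# Prefixes of User-Agent strings sanctioned in this (hypothetical) environment
ALLOWED_UA_PREFIXES = ("Mozilla/5.0",)

def non_browser_http(requests):
    """Return HTTP requests whose User-Agent matches no sanctioned browser."""
    return [
        r for r in requests
        if not r["user_agent"].startswith(ALLOWED_UA_PREFIXES)
    ]

# Illustrative request records: one browser, two tool-driven clients
reqs = [
    {"src": "10.0.0.5", "user_agent": "Mozilla/5.0 (Windows NT 10.0)"},
    {"src": "10.0.0.6", "user_agent": "python-requests/2.24.0"},
    {"src": "10.0.0.7", "user_agent": "WinRM"},
]
odd = non_browser_http(reqs)
```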

Exfiltration

At this stage, the risk of exposure of sensitive data in the public realm is dire, and it means an organization has missed many of the earlier warning signs; now it’s about minimizing impact.

ATT&CK techniques

Hunt for…

T1030 Data transfer size limits

  • External device traffic to uncommon destinations
T1048 Exfiltration over alternative protocol
  • Unknown FTP outbound
T1002 Data compressed
  • Archive file extraction
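A sketch of the volume-based exfiltration hunt, assuming per-flow byte counts and a baseline set of known destinations (both illustrative):

```python
from collections import defaultdict

BYTES_THRESHOLD = 500 * 1024 * 1024  # 500 MB to previously unseen destinations

def find_exfil(flows, known_destinations):
    """Return internal sources pushing large volumes to non-baseline hosts."""
    outbound = defaultdict(int)
    for f in flows:
        if f["dst_ip"] not in known_destinations:
            outbound[f["src_ip"]] += f["bytes_out"]
    return {src for src, b in outbound.items() if b >= BYTES_THRESHOLD}

# Illustrative baseline and flows
baseline = {"198.51.100.10"}  # destinations seen during normal operations
flows = [
    {"src_ip": "10.0.0.5", "dst_ip": "203.0.113.50",
     "bytes_out": 600 * 1024 * 1024},
    {"src_ip": "10.0.0.6", "dst_ip": "198.51.100.10",
     "bytes_out": 900 * 1024 * 1024},
]
exfiltrators = find_exfil(flows, baseline)
```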

Summary

Ransomware is never good news when it shows up at the doorstep. However, with disciplined network threat hunting and monitoring, it is possible to identify an attack early in the lifecycle. Many of the early warning signs are visible on the network and threat hunters would be well served to identify these and thus help mitigate impact.

DaaS, BYOD, leasing and buying: Which is better for cybersecurity?

In the digital age, staff expect employers to provide hardware, and companies need hardware that allows employees to work efficiently and securely. There are already a number of models to choose from to purchase and manage hardware, however, with remote work policies becoming more popular, enterprises have to prioritize cybersecurity when making their selection.


The COVID-19 pandemic and online shift has brought to light the need for robust cybersecurity strategies and technology that facilitates safe practices. Since the pandemic started, the FBI has reported a 300 percent increase in cybercrime. As more businesses are forced to operate at a distance, hackers are taking advantage of weak links in their networks. At the same time, the crisis has meant many enterprises have had to cut their budgets, and so risk compromising cybersecurity when opting for more cost-effective measures.

Currently, Device-as-a-Service (DaaS), Bring-Your-Own-Device (BYOD) and leasing/buying are some of the most popular hardware options. To determine which is most appropriate for your business cybersecurity needs, here are the pros and cons of each:

Device-as-a-Service (DaaS)

In a DaaS model, an organization distributes hardware like computers, tablets, and phones to employees with preconfigured, customized services and software. For many enterprises, DaaS is attractive because it allows them to acquire technology without having to buy, set up, and manage it outright – therefore saving time and money in the long run. Because of DaaS’s growing popularity, 65 percent of major PC manufacturers, including Apple and HP, now offer DaaS capabilities.

When it comes to cybersecurity, DaaS is favorable because providers are typically experts in the field. In the configuration phase, they are responsible for ensuring that all devices have the latest security protections installed as standard, and they are also responsible for maintaining such protections. Once the hardware is in use, DaaS models allow providers to monitor the company’s entire fleet – checking that all devices adhere to security policies, including protocols around passwords, approved apps, and accessing sensitive data.

Another bonus is that DaaS can offer analytical insights about hardware, such as device location and condition. With this information, enterprises can be alerted if tech is stolen, missing or outdated and a threat to overall cybersecurity. Not to mention, a smart way to boost the level of protection given by DaaS models is to integrate it with Unified Endpoint Management (UEM). UEM helps businesses organize and control internet-enabled devices from a single interface and uses mobile threat detection to identify and thwart vulnerabilities or attacks among devices.

Nonetheless, to use DaaS effectively, enterprises have to determine their own relevant security principles before adopting the model. They then need an in-depth understanding of how these principles are applied throughout the DaaS service and what level of assurance the provider offers that they are enforced. Assuming that DaaS completely removes enterprises from involvement in device cybersecurity would be unwise.

Bring-Your-Own-Device (BYOD)

BYOD is when employees use their own mobiles, laptops, PCs, and tablets for work. In this scenario, companies have greater flexibility and can make significant cost savings, but there are many more risks associated with personal devices than with corporate-issued devices. Although BYOD is favorable among employees – who can use devices they are more familiar with – enterprises essentially lose control and visibility of how data is transmitted, stored, and processed.

Personal devices are dangerous because hackers can create a sense of trust via personal apps on the hardware and more easily coerce users into sharing business details or downloading malicious content. Plus, with BYOD, companies depend on employees keeping all their personal devices updated with the most current protective services. One employee forgetting to do so could negate the cybersecurity of the overall network.

Similar to DaaS, UEM can also help companies that have adopted BYOD take a more centralized approach to manage the risk of exposing their data to malicious actors. For example, UEM can block websites or content from personal devices, as well as implement passcodes, and device and disk encryption. Alternatively, VPNs are common to enhance cybersecurity in companies that allow BYOD. In the COVID-19 pandemic, 68 percent of employees claim their company has expanded VPN usage as a direct result of the crisis. It’s worthwhile noting though, that VPNs only encrypt data accessed via the internet and cloud-based services.

When moving forward with BYOD models, enterprises must host regular training and education sessions around safe practices on devices, including recognizing threats, avoiding harmful websites, and the importance of upgrading. They also need to have documented and tested computer security incident response plans, so if any attacks do occur, they are contained as soon as possible.

Leasing / buying

Leasing hardware is when enterprises obtain equipment on a rental basis, in order to retain working capital that can be invested in other areas. In the past, as many as 80 percent of businesses chose to lease their hardware. The trend is less popular today, as SaaS products have proven to be more tailored and scalable.

Still, leasing is beneficial because rather than jeopardizing cybersecurity to purchase large volumes of hardware, enterprises can rent fully covered devices. Likewise, because the latest software typically requires the latest hardware, companies can rent the most recent tech at a fraction of the retail cost.

Comparable to DaaS providers, leasing companies are responsible for device maintenance and have to ensure that every laptop, phone, and tablet has the appropriate security software. Again, however, this does not absolve enterprises from taking an active role in cybersecurity implementation and surveillance.

Unlike leasing, where there can be uncertainty over who owns the cybersecurity strategy, buying is more straightforward. Purchasing hardware outright means companies have complete control over devices and can cherry-pick cybersecurity features to include. It also means they can be more flexible with cybersecurity partners, running trials with different solutions to evaluate which is the best fit.

That said, buying hardware has a noticeable downside where equipment becomes obsolete once new versions are released. 73 percent of senior leaders from enterprises actually agree that an abundance of outdated equipment leaves companies vulnerable to data security breaches. Considering that, on average, a product cycle takes only 12 to 24 months, and there are thousands of hardware manufacturers at work, devices can swiftly become outdated.

Additionally, because buying is a more permanent action, enterprises run the risk of being stuck with hardware that has been compromised. As opposed to software which can be relatively easily patched to fix, hardware often has to be sent off-site for repairs. This may result in enterprises with limited hardware continuing to use damaged or unprotected devices to avoid downtime in workflows.

If and when a company does decide to dispose of hardware, there are complications around guaranteeing that systems are totally blocked and databases or networks cannot be accessed afterwards. In contrast, DaaS providers and leasing companies expertly wipe devices at the end of contracts or when disposing of them, so enterprises don’t have to be concerned about unauthorized access.

Putting cybersecurity front-and-center

DaaS, BYOD, and leasing/buying all have their own unique benefits when it comes to cybersecurity. Despite all the perks, it has to be acknowledged that BYOD and leasing pose the biggest obstacles for enterprises because they take cybersecurity monitoring and control out of companies’ hands. Nevertheless, for all the options mentioned, UEM is a valuable way to bridge gaps and empower businesses to be in control of cybersecurity, while still being agile.

Ultimately, the most impactful cybersecurity measures are the ones that enterprises are firmly vested in, whatever hardware model they adopt. Businesses should never underestimate the power of a transparent, well-researched, and constantly evolving security framework – one which a hardware model complements, not solely creates.

Secure data sharing in a world concerned with privacy

The ongoing debate surrounding privacy protection in the global data economy reached a fever pitch with July’s “Schrems II” ruling at the European Court of Justice, which struck down the Privacy Shield – a legal mechanism enabling companies to transfer personal data from the EU to the US for processing – potentially disrupting the business of thousands of companies.

The plaintiff, Austrian privacy advocate Max Schrems, claimed that US privacy legislation was insufficiently robust to prevent national security and intelligence authorities from acquiring – and misusing – Europeans’ personal data. The EU’s top court agreed, abolishing the Privacy Shield and requiring American companies that exchange data with European partners to comply with the standards set out by the GDPR, the EU’s data privacy law.

Following this landmark ruling, ensuring the secure flow of data from one jurisdiction to another will be a significant challenge, given the lack of an international regulatory framework for data transfers and emerging conflicts between competing data privacy regulations.

This comes at a time when the COVID-19 crisis has further underscored the urgent need for collaborative international research involving the exchange of personal data – in this case, sensitive health data.

Will data protection regulations stand in the way of this and other vital data sharing?

The Privacy Shield was a stopgap measure to facilitate data-sharing between the US and the EU which ultimately did not withstand legal scrutiny. Robust, compliant-by-design tools beyond contractual frameworks will be required in order to protect individual privacy while allowing data-driven research on regulated data and business collaboration across jurisdictions.

Fortunately, innovative privacy-enhancing technologies (PETs) can be the stable bridge connecting differing – and sometimes conflicting – privacy frameworks. Here’s why policy alone will not suffice to resolve existing data privacy challenges – and how PETs can deliver the best of both worlds:

A new paradigm for ethical and secure data sharing

The abolition of the Privacy Shield poses major challenges for over 5,000 American and European companies which previously relied on its existence and must now confront a murky legal landscape. While big players like Google and Zoom have the resources to update their compliance protocols and negotiate legal contracts between transfer partners, smaller innovators lack these means and may see their activities slowed or even permanently halted. Privacy legislation has already impeded vital cross-border research collaborations – one prominent example is the joint American-Finnish study regarding the genomic causes of diabetes, which “slowed to a crawl” due to regulations, according to the head of the US National Institutes of Health (NIH).

One response to the Schrems II ruling might be expediting moves towards a federal data privacy law in the US. But this would take time: in Europe, over two years passed between the adoption of GDPR and its enforcement. Given that smaller companies are facing an immediate legal threat to their regular operations, a federal privacy law might not come quickly enough.

Even if such legislation were to be approved in Washington, it is unlikely to be fully compatible with GDPR – not to mention widening privacy regulations in other countries. The CCPA, the major statewide data protection initiative, is generally considered less stringent than GDPR, meaning that even CCPA-compliant businesses would still have to adapt to European standards.

In short, the existing legislative toolbox is insufficient to protect the operations of thousands of businesses in the US and around the world, which is why it’s time for a new paradigm for privacy-preserving data sharing based on Privacy-Enhancing Technologies.

The advantages of privacy-enhancing technologies

Compliance costs and legal risks are prompting companies to consider an innovative data sharing method based on PETs: a new genre of technologies which can help them bridge competing privacy frameworks. PETs are a category of technologies that protect data along its lifecycle while maintaining its utility, even for advanced AI and machine learning processes. PETs allow their users to harness the benefits of big data while protecting personally identifiable information (PII) and other sensitive information, thus maintaining stringent privacy standards.

One such PET playing a growing role in privacy-preserving information sharing is Homomorphic Encryption (HE), a technique regarded by many as the holy grail of data protection. HE enables multiple parties to securely collaborate on encrypted data by conducting analysis on data which remains encrypted throughout the process, never exposing personal or confidential information. Through HE, companies can derive the necessary insights from big data while protecting individuals’ personal details – and, crucially, while remaining compliant with privacy legislation because the data is never exposed.
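To make the additive property concrete, here is a toy Paillier cryptosystem, one well-known additively homomorphic scheme, with deliberately tiny parameters. Multiplying two ciphertexts yields an encryption of the sum of the plaintexts; real deployments use vetted HE libraries and far larger keys:

```python
import math
import random

# Toy primes: orders of magnitude too small for real security
p, q = 293, 433
n, n2 = p * q, (p * q) ** 2
g = n + 1
lam = (p - 1) * (q - 1) // math.gcd(p - 1, q - 1)  # lcm(p-1, q-1)

def L(u):
    return (u - 1) // n

# Precomputed decryption constant (modular inverse needs Python 3.8+)
mu = pow(L(pow(g, lam, n2)), -1, n)

def encrypt(m):
    r = random.randrange(2, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(2, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    return (L(pow(c, lam, n2)) * mu) % n

a, b = 42, 58
# Addition performed entirely on ciphertexts: E(a) * E(b) = E(a + b)
total = decrypt((encrypt(a) * encrypt(b)) % n2)
```

Fully homomorphic schemes extend this idea to arbitrary computation, which is what enables the third-party analytics on encrypted data described above.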

Jim Halpert, a data regulation lawyer who helped draft the CCPA and is Global Co-Chair of the Data Protection, Privacy and Security practice at DLA Piper, views certain solutions based on HE as effective compliance tools.

“Homomorphic Encryption encrypts data elements in such a way that they cannot identify, describe or in any way relate to a person or household. As a result, homomorphically encrypted data cannot be considered ‘personal information’ and is thus exempt from CCPA requirements,” Halpert says. “Companies which encrypt data through HE minimize the risk of legal threats, avoid CCPA obligations, and eliminate the possibility that a third party could mistakenly be exposed to personal data.”

The same principle applies to GDPR, which requires any personally identifiable information to be protected.

HE is applicable to any industry and activity which requires sensitive data to be analyzed by third parties; for example, research such as genomic investigations into individuals’ susceptibility to COVID-19 and other health conditions, and secure data analysis in the financial services industry, including financial crime investigations across borders and institutions. In these cases, HE enables users to legally collaborate across different jurisdictions and regulatory frameworks, maximizing data value while minimizing privacy and compliance risk.

PETs will be crucial in allowing data to flow securely even after the Privacy Shield has been lowered. The EU and the US have already entered negotiations aimed at replacing the Privacy Shield, but while a palliative solution might satisfy business interests in the short term, it won’t remedy the underlying problems inherent to competing privacy frameworks. Any replacement would face immediate legal challenges in a potential “Schrems III” case. Tech is in large part responsible for the growing data privacy quandary. The onus, then, is on tech itself to help facilitate the free flow of data without undermining data protection.

What are the traits of an effective CISO?

Only 12% of CISOs excel in all four categories of the Gartner CISO Effectiveness Index.


“Today’s CISOs must demonstrate a higher level of effectiveness than ever before,” said Sam Olyaei, research director at Gartner.

“As the push to digital deepens, CISOs are responsible for supporting a rapidly evolving set of information risk decisions, while also facing greater oversight from regulators, executive teams and boards of directors. These challenges are further compounded by the pressure that COVID-19 has put on the information security function to be more agile and flexible.”

The survey was conducted among 129 heads of information risk functions, across all industries, globally in January 2020. The measure of CISO effectiveness is determined by a CISO’s ability to execute against a set of outcomes in the four categories of functional leadership, information security service delivery, scaled governance and enterprise responsiveness.

Each respondent’s score in each category was added together to calculate their overall effectiveness score. “Effective CISOs” are those who scored in the top one-third of the CISO effectiveness measure.
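The scoring described above can be sketched as follows; the category names and score values are illustrative:

```python
# Per-category scores are summed into an overall effectiveness score, and
# respondents in the top one-third are labeled "effective"
CATEGORIES = ("functional_leadership", "service_delivery",
              "scaled_governance", "enterprise_responsiveness")

def overall_score(respondent):
    return sum(respondent[c] for c in CATEGORIES)

# Illustrative respondents with uniform per-category scores 1..6
respondents = [{c: s for c in CATEGORIES} for s in (1, 2, 3, 4, 5, 6)]

scores = sorted((overall_score(r) for r in respondents), reverse=True)
cutoff = scores[len(scores) // 3 - 1]   # boundary of the top one-third
effective = [r for r in respondents if overall_score(r) >= cutoff]
```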

Top-performing CISOs demonstrate five key behaviors

Of the factors that impact CISO effectiveness, five behaviors significantly differentiate top-performing CISOs from bottom performers. On average, each of these behaviors is twice as prevalent in top performers as in bottom performers.

“A clear trend among top-performing CISOs is demonstrating a high level of proactiveness, whether that’s staying abreast of evolving threats, communicating emerging risks with stakeholders or having a formal succession plan,” said Mr. Olyaei. “CISOs should prioritize these kinds of proactive activities to boost their effectiveness.”

The survey also found that top performing CISOs regularly meet with three times as many non-IT stakeholders as they do IT stakeholders. Two-thirds of these top performers meet at least once per month with business unit leaders, while 43% meet with the CEO, 45% meet with the head of marketing and 30% meet with the head of sales.

“CISOs have historically built fruitful relationships with IT executives, but digital transformation has further democratized information security decision making,” added Daria Krilenko, senior research director at Gartner.

“Effective CISOs keep a close eye on how risks are evolving across the enterprise and develop strong relationships with the owners of that risk – senior business leaders outside of IT.”

Effective CISOs are better at managing stress

The survey also found that highly effective CISOs better manage workplace stressors. Just 27% of top performing CISOs feel overloaded with security alerts, compared with 62% of bottom performers. Furthermore, less than a third of top performers feel that they face unrealistic expectations from stakeholders, compared with half of bottom performing CISOs.

“As the CISO role becomes increasingly demanding, the most effective security leaders are those who can manage the stressors that they face daily,” said Mr. Olyaei.

“Actions such as keeping a clear distinction between work and nonwork, setting explicit expectations with stakeholders, and delegating or automating tasks are essential for enabling CISOs to function at a high level.”

5 simple steps to bring cyber threat intelligence sharing to your organization

Cyber threat intelligence (CTI) sharing is a critical tool for security analysts. It takes the learnings from a single organization and shares them across the industry to strengthen the security practices of all.


By sharing CTI, security teams can alert each other to new findings across the threat landscape and flag active cybercrime campaigns and indicators of compromise (IOCs) that the cybersecurity community should be immediately aware of. As this intel spreads, organizations can work together to build upon each other’s defenses to combat the latest threat. This creates a herd-like immunity for networks as defensive capabilities are collectively raised.

Blue teams need to act more like red teams

A recent survey by Exabeam showed that 62 percent of blue teams have difficulty stopping red teams during adversary simulation exercises. A blue team is charged with defending one network. They have the benefit of knowing the ins and outs of their network better than any red team or cybercriminal, so they are well-equipped to spot abnormalities and IOCs and act fast to mitigate threats.

But blue teams have a bigger disadvantage: they mostly work in silos consisting only of members of their immediate team. They typically don’t share their threat intelligence with other security teams, vendors, or industry groups. This means they see cyber threats through a single lens, lacking the broader view of the real threat landscape outside their organization.

This disadvantage is where red teams and cybercriminals thrive. Not only do they choose the rules of the game – the when, where, and how the attack will be executed – they share their successes and failures with each other to constantly adapt and evolve tactics. They thrive in a communications-rich environment, sharing frameworks, toolkits, guidelines, exploits, and even offering each other customer support-like help.

For blue teams to move from defense to prevention, they need to take defense to the attacker’s front door. This proactive approach can only work by having timely, accurate, and contextual threat intelligence. And that requires a community, not a company. But many companies are hesitant to join the CTI community. The SANS 2020 Cyber Threat Intelligence Survey shows that more than 40% of respondents both produce and consume intelligence, leaving much room for improvement over the next few years.

Common challenges for beginning a cyber threat intelligence sharing program

One of the biggest challenges to intelligence sharing is that businesses don’t understand how sharing some of their network data can actually strengthen their own security over time. Much like the early days of open-source software, there’s a fear that if you have anything open to exposure it makes you inherently more vulnerable. But as open source eventually proved, more people collaborating in the open can lead to many positive outcomes, including better security.

Another major challenge is that blue teams don’t have the lawless luxury of sharing threat intelligence with reckless abandon: we have legal teams. And legal teams aren’t thrilled with the notion of admitting to IOCs on their network. And there is a lot of business-sensitive information that shouldn’t be shared, and the legal team is right to protect this.

The opportunity is in finding an appropriate line to walk, where you can share intelligence that contributes to bolstering cyber defense in the larger community without doing harm to your organization.

If you’re new to CTI sharing and want to get involved, here are a few pieces of advice.

Clear it with your manager

If you or your organization are new to CTI sharing, the first thing to do is get your manager’s blessing before you move forward. Being overconfident in your organization’s appetite to share its network data (especially if they don’t understand the benefits) can be a costly yet avoidable mistake.

Start sharing small

Don’t start by asking permission to share details on a data exfiltration event that currently has your company in crisis mode. Instead, ask if it’s ok to share a range of IPs that have been brute forcing logins on your site. Or perhaps you’ve seen a recent surge of phishing emails originating from a new domain and want to share that. Make continuous, small asks and report back any useful findings.
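One lightweight way to share such an IOC is as a STIX 2.1 Indicator. The sketch below hand-builds the JSON (real pipelines would typically use the official stix2 library and a TAXII server); the IP range and indicator name are illustrative:

```python
import json
import uuid
from datetime import datetime, timezone

now = datetime.now(timezone.utc).strftime("%Y-%m-%dT%H:%M:%S.000Z")

# Minimal STIX 2.1 Indicator for a brute-forcing IP range (illustrative values)
indicator = {
    "type": "indicator",
    "spec_version": "2.1",
    "id": f"indicator--{uuid.uuid4()}",
    "created": now,
    "modified": now,
    "name": "Brute-force logins against customer portal",
    "indicator_types": ["malicious-activity"],
    # STIX patterning: match any IPv4 address inside the offending range
    "pattern": "[ipv4-addr:value ISSUBSET '203.0.113.0/24']",
    "pattern_type": "stix",
    "valid_from": now,
}
payload = json.dumps(indicator, indent=2)
```

Sharing a structured object like this, rather than a raw list of IPs in an email, lets receiving teams ingest the IOC directly into their detection tooling.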

Share your experience when you can’t share intelligence

When you join a CTI group, you’re going to want to show that you’re an active, engaged member. But sometimes you just don’t have any useful intelligence to share. You can still add value to the group by lending your knowledge and experience. Your perspective might change someone’s mind on their process and make them a better practitioner, thus adding to the greater good.

Demonstrate value of sharing CTI

Tie your participation in CTI groups to any metrics that demonstrate your organization’s security posture has increased during that time. For example, show any time that participation in a CTI group has directly led to intelligence that helped decrease alerted events and helped your team get ahead of a new attack.

There’s a CTI group for everyone

From disinformation and dark web to medical devices and law enforcement, there’s a CTI segment for everything you ever wanted to be involved in. Some are invite-only, so the more active you are in public groups the more likely you’ll be asked to join groups that you’ve shown interest in or have provided useful intelligence about. These hyper-niche groups can provide big value to your organization as you can get expert consulting from top minds in the field.

The more data you have, the more points you can correlate, and the faster you can do so. Joining a CTI sharing group gives you access to data you would otherwise never see, informing better decisions about your defensive actions. More importantly, CTI sharing makes all organizations more secure and unites us under a common cause.

Justifying your 2021 cybersecurity budget

In the midst of an unstable economy and a continued public health emergency, and facing an uptick in successful cyber attacks, CISOs find themselves needing to enhance their cybersecurity posture while staying within increasingly scrutinized budgets.


Senior leadership recognizes the value of cybersecurity, but understanding how best to allocate financial resources poses a challenge for IT professionals and executive teams. As part of justifying a 2021 cybersecurity budget, CISOs need to focus on quick wins, cost-effective SaaS solutions, and effective ROI predictions.

Finding the “quick wins” for your 2021 cybersecurity budget

Cybersecurity, particularly with organizations suffering from technology debt, can be time-consuming. Legacy technologies, including internally designed tools, create security challenges for organizations of all sizes.

The first step to determining the “quick wins” for 2021 lies in reviewing the current IT stack for areas that have become too costly to support. For example, as workforce members moved off-premises during the current public health crisis, many organizations found that their technology debt made this shift difficult. With workers no longer accessing resources from inside the organization’s network, organizations with rigid technology stacks struggled to pivot their work models.

Going forward, remote work appears to be one way through the current health and economic crises. Even major technology leaders who traditionally relied on in-person workforces have moved to remote models through mid-2021, with Salesforce the most recent to announce this decision.

Looking for gaps in security, therefore, should be the first step in any budget analysis. As part of this gap analysis, CISOs can look in the following areas:

  • VPN and data encryption
  • Data and user access
  • Cloud infrastructure security

Each of these areas can offer quick wins if addressed correctly: as organizations accelerate their digital transformation strategies to match new workplace realities, they can leverage cloud-native security solutions.

Adopting SaaS security solutions for accelerating security and year-over-year value

The SaaS-delivered security solution market exploded over the last five to ten years. As organizations moved their mission-critical business operations to the cloud, cybercriminals focused their activities on these resources.

Interestingly, a CNBC article from July 14, 2020 noted that the number of reported data breaches dropped by 33% in the first half of 2020. Meanwhile, another CNBC article from July 29, 2020 notes that large-scale data breaches increased by 273% in the first quarter compared to the same period in 2019. Although the data appears conflicting, the Identity Theft Resource Center research that informed the July 14th article specifically notes, “This is not expected to be a long-term trend as threat actors are likely to return to more traditional attack patterns to replace and update identity information needed to commit future identity and financial crimes.” In short, a 2021 cybersecurity budget plan needs to close security gaps rapidly, and that means the fast wins that SaaS-delivered solutions provide.

SaaS security solutions offer two distinct budget wins for CISOs. First, they offer rapid integration into the organization’s IT stack. In some cases, CISOs can get a SaaS tool deployed within a few weeks; in others, within a few months. Deployment time depends on the complexity of the problem being solved, the type of integrations necessary, and the enterprise’s size. However, in the same way that agile organizations leverage cloud-based business applications, security teams can leverage rapid deployment of cloud-based security solutions.

The second win that SaaS security solutions offer is YoY savings. Subscription models offer budget-conscious organizations several distinct value propositions. First, the organization can reduce hardware maintenance costs, including operational costs, upgrade costs, software costs, and servicing costs. Second, SaaS solutions often enable companies to focus on their highest-risk assets first and then increase their usage over time. Third, they allow organizations to pivot more effectively because the reduced up-front capital outlay reduces the commitment to the project.

Applying a dollar value to these during the budget justification process might feel difficult, but the right key performance indicators (KPIs) can help establish baseline cost savings estimates.

Choosing the KPIs for effective ROI predictions

During an economic downturn, justifying cybersecurity budget requests can be increasingly difficult. Most cybersecurity ROI predictions rely on risk evaluations, applying the probability of a data breach to its projected cost. As organizations look to reduce costs to remain financially viable, a “what if” approach may not be as appealing.

However, as part of budgeting, CISOs can point to several value propositions to support their spending requests. Cybersecurity initiatives should use resources as efficiently as possible while maintaining a robust security program. Aligning purchase KPIs with specific reductions in operational costs can help gain buy-in for the solution.

A quick hypothetical illustrates the overarching value of SaaS-based security spending. Continuous monitoring for external-facing vulnerabilities is time-consuming and often inefficient. Hypothetical numbers based on research indicate:

  • A poll of C-level security executives found that 37% receive more than 10,000 alerts each month, with 52% of those alerts identified as false positives.

  • The average security analyst spends ten minutes responding to a single alert.
  • The average security analyst makes approximately $91,000 per year.

Bringing this data together shows the value of SaaS-based solutions that reduce the number of false positives:

  • Every month, enterprise security analysts spend 10 minutes on each of the 5,200 false positives.
  • This equates to approximately 866 hours.
  • 866 hours, assuming a 40-hour week, is 21.65 weeks.
  • Assuming 4 weeks per month, the enterprise needs at least 5 security analysts to manage false positive responses.
  • These 5 security analysts cost a total of $455,000 per year in salary, not including bonuses and other benefits.
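The arithmetic above can be sketched as a quick back-of-the-envelope calculation. This is a minimal illustration using only the hypothetical figures from the text, not a substitute for a proper risk-based ROI model:

```python
# Back-of-the-envelope cost of false-positive triage, using the
# hypothetical figures cited in the text above.
alerts_per_month = 10_000        # alerts received monthly
false_positive_rate = 0.52       # 52% of alerts are false positives
minutes_per_alert = 10           # average triage time per alert
analyst_salary = 91_000          # average annual analyst salary, USD
hours_per_month = 40 * 4         # 40-hour weeks, 4 weeks per month

false_positives = alerts_per_month * false_positive_rate    # 5,200 per month
triage_hours = false_positives * minutes_per_alert / 60     # ~866.7 hours
analysts_needed = triage_hours / hours_per_month            # ~5.4 analysts
annual_salary_cost = int(analysts_needed) * analyst_salary  # $455,000

print(f"{false_positives:,.0f} false positives per month")
print(f"{triage_hours:,.0f} hours of triage per month")
print(f"at least {int(analysts_needed)} analysts, ${annual_salary_cost:,} per year")
```

A SaaS solution that cuts the false-positive rate can be plugged into the same calculation to estimate the salary cost it avoids, which is exactly the kind of KPI-backed figure boards respond to.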

Although CISOs may not want to reduce headcount, they may not want to add to it either, or they may be seeking to optimize the team they have. Tracking KPIs such as the reduction in false positives per month can provide the kind of long-term cost value that other senior executives and the board of directors need to see.

Securing a 2021 cybersecurity budget

While the number of attacks may have stalled during 2020, cybercriminals have not stopped targeting enterprise data. Phishing and malware attacks have moved away from the enterprise network level and now look to infiltrate end-user devices. As organizations continue to pivot their operating models, they need to look for cost-effective ways to secure their sensitive resources and data. However, budget constraints arising from 2020’s economic instability may make it difficult for CISOs to secure the requisite dollars to continue applying best security practices.

As organizations start looking toward their 2021 roadmap, CISOs will increasingly need to be specific about not only the costs associated with purchases but also the cost savings those purchases provide, from both a data incident risk and an operational cost perspective.