Adapt cybersecurity programs to protect remote work environments

Earlier this year, businesses across the globe transitioned to a remote work environment almost overnight at unprecedented scale and speed. Security teams worked around the clock to empower and protect their newly distributed teams.


Protect and support a remote workforce

Cisco’s report found the majority of organizations around the world were at best only somewhat prepared to support their remote workforce. But the abrupt shift has accelerated the adoption of technologies that enable employees to work securely from anywhere and on any device – preparing businesses to be flexible for whatever comes next. The survey found that:

  • 85% of organizations said that cybersecurity is extremely important or more important than before COVID-19
  • Secure access is the top cybersecurity challenge faced by the largest proportion of organizations (62%) when supporting remote workers
  • One in two respondents said endpoints, including corporate laptops and personal devices, are a challenge to protect in a remote environment
  • 66% of respondents indicated that the COVID-19 situation will result in an increase in cybersecurity investments

“Security and privacy are among the most significant social and economic issues of our lifetime,” said Jeetu Patel, SVP and GM of Cisco’s Security & Applications business.

“Cybersecurity historically has been overly complex. With this new way of working here to stay and organizations looking to increase their investment in cybersecurity, there’s a unique opportunity to transform the way we approach security as an industry to better meet the needs of our customers and end-users.”

People worried about the privacy of their tools

People are worried about the privacy of remote work tools and are skeptical whether companies are doing what is needed to keep their data safe. Despite the pandemic, they want little or no change to privacy requirements, and they want to see companies be more transparent regarding how they use their customers’ data.

Organizations have the opportunity to build confidence and trust by embedding privacy into their products and communicating their practices clearly and simply to their customers. The survey found that:

  • 60% of respondents were concerned about the privacy of remote collaboration tools
  • 53% want little or no change to existing privacy laws
  • 48% feel they are unable to effectively protect their data today, and the main reason is that they can’t figure out what companies are doing with their data
  • 56% believe governments should play a primary role in protecting consumer data, and consumers are highly supportive of the privacy laws enacted in their country

“Privacy is much more than just a compliance obligation. It is a fundamental human right and business imperative that is critical to building and maintaining customer trust,” said Harvey Jang, VP, Chief Privacy Officer, Cisco. “The core privacy and ethical principles of transparency, fairness, and accountability will guide us in this new, digital-first world.”

Enterprises should strive for composability to be resilient during uncertainty

CIOs and IT leaders who use composability to deal with continuing business disruption due to the COVID-19 pandemic and other factors will make their enterprises more resilient and sustainable, and will make more meaningful contributions, according to Gartner.


Analysts said that composable business means architecting for resilience and accepting that disruptive change is the norm. It supports a business that exploits the disruptions digital technology brings by making things modular – mixing and matching business functions to orchestrate the proper outcomes.

It supports a business that senses – or discovers – when change needs to happen, and then uses autonomous business units to respond creatively.

For some enterprises digital strategies became real for the first time

According to the 2021 Gartner Board of Directors survey, 69% of corporate directors want to accelerate enterprise digital strategies and implementations to help deal with the ongoing disruption. For some enterprises that means that their digital strategies became real for the first time, and for others that means rapidly scaling digital investments.

“Composable business is a natural acceleration of the digital business that organizations live every day,” said Daryl Plummer, research VP, Chief of Research and Gartner Fellow. “It allows organizations to finally deliver the resilience and agility that these interesting times demand.”

Don Scheibenreif, research VP at Gartner, explained that composable business starts with three building blocks — composable thinking, which ensures creative thinking is never lost; composable business architecture, which ensures flexibility and resiliency; and composable technologies, which are the tools for today and tomorrow.

“The world today demands something different from us. Composing – flexible, fluid, continuous, even improvisational – is how we will move forward. That is why composable business is more important than ever,” said Mr. Scheibenreif.

“During the COVID-19 pandemic crisis, most CIOs leveraged their organizations’ existing digital investments, and some CIOs accelerated their digital strategies by investing in some of the three composable building blocks,” said Tina Nunno, research VP and Gartner Fellow.

“To ensure their organizations were resilient, many CIOs also applied at least one of the four critical principles of composability, gaining more speed through discovery, greater agility through modularity, better leadership through orchestration, and resilience through autonomy.”


Analysts said that these four principles can be viewed differently depending on which building block organizations are working with:

  • In composable thinking, these are design principles. They guide an organization’s approach to conceptualizing what to compose, and when.
  • In composable business architecture, they are structural capabilities, giving an organization the mechanisms to use in architecting its business.
  • In composable technologies, they are product design goals driving the features of technology that support the notions of composability.

“In the end, organizations need the principles and the building blocks to intentionally make composability real,” said Mr. Plummer.

The building blocks of composability can be used to pivot quickly to a new opportunity, industry, customer base or revenue stream. For example, a large Chinese retailer used composability when the pandemic hit to help re-architect their business. They used composable thinking and chose to pivot to live streaming sales activities.

They embraced social marketing technology and successfully retrained over 5,000 in-store sales and customer support staff to become live streaming hosts. The retailer suffered no layoffs and minimal revenue loss.

“Throughout 2020, CIOs and IT leaders maintained their composure and delivered tremendous value,” said Ms. Nunno. “The next step is to create a more composable business using the three building blocks and applying the four principles. With composability, organizations can achieve digital acceleration, greater resiliency and the ability to innovate through disruption.”

5 tips to reduce the risk of email impersonation attacks

Email attacks have moved past standard phishing and become more targeted over the years. In this article, I will focus on email impersonation attacks, outline why they are dangerous, and provide some tips to help individuals and organizations reduce their risk exposure to impersonation attacks.


What are email impersonation attacks?

Email impersonation attacks are malicious emails where scammers pretend to be a trusted entity to steal money and sensitive information from victims. The trusted entity being impersonated could be anyone – your boss, your colleague, a vendor, or a consumer brand you get automated emails from.

Email impersonation attacks are tough to catch and worryingly effective because we tend to take quick action on emails from known entities. Scammers use impersonation in concert with other techniques to defraud organizations and steal account credentials, sometimes without victims realizing they have been defrauded until days after the fraud.

Fortunately, we can all follow some security hygiene best practices to reduce the risk of email impersonation attacks.

Tip #1 – Look out for social engineering cues

Email impersonation attacks are often crafted with language that induces a sense of urgency or fear in victims, coercing them into taking the action the email wants them to take. Not every email that makes us feel these emotions will be an impersonation attack, of course, but it’s an important factor to keep an eye out for, nonetheless.

Here are some common phrases and situations you should look out for in impersonation emails:

  • Short deadlines given at short notice for processes involving the transfer of money or sensitive information.
  • Unusual purchase requests (e.g., iTunes gift cards).
  • Employees requesting sudden changes to direct deposit information.
  • Vendors sharing new bank account details right before an invoice is due.


This email impersonation attack exploits the COVID-19 pandemic to make an urgent request for gift card purchases.

Tip #2 – Always do a context check on emails

Targeted email attacks bank on victims being too busy and “doing before thinking” instead of stopping and engaging with the email rationally. While it may take a few extra seconds, always ask yourself if the email you’re reading – and what the email is asking for – make sense.

  • Why would your CEO really ask you to purchase iTunes gift cards at two hours’ notice? Have they done it before?
  • Why would Netflix emails come to your business email address?
  • Why would the IRS ask for your SSN and other sensitive personal information over email?

To sum up this tip, I’d say: be a little paranoid while reading emails, even if they’re from trusted entities.

Tip #3 – Check for email address and sender name deviations

To stop email impersonation, many organizations have deployed keyword-based protection that catches emails where the email addresses or sender names match those of key executives (or other related keywords). To get past these security controls, impersonation attacks use email addresses and sender names with slight deviations from those of the entity the attacks are impersonating. Some common deviations to look out for are:

  • Changes to the spelling, especially ones that are missed at first glance (e.g., “ei” instead of “ie” in a name).
  • Changes based on visual similarities to trick victims (e.g., replacing “rn” with “m” because they look alike).
  • Business emails sent from personal accounts like Gmail or Yahoo without advance notice. It’s advisable to validate the identity of the sender through secondary channels (text, Slack, or phone call) if they’re emailing you with requests from their personal account for the first time.
  • Descriptive changes to the name, even if the changes fit in context. For example, attackers impersonating a Chief Technology Officer named Ryan Fraser may send emails with the sender name as “Ryan Fraser, Chief Technology Officer”.
  • Changes to the components of the sender name (e.g., adding or removing a middle initial, abbreviating Mary Jane to MJ).

Tip #4 – Learn the “greatest hits” of impersonation phrases

Email impersonation has been around for long enough that there are well-known phrases and tactics we need to be aware of. The emails don’t always have to be directly related to money or data – the first email is sometimes a simple request, just to see who bites and buys into the email’s faux legitimacy. Be aware of the following phrases/context:

  • “Are you free now?”, “Are you at your desk?” and related questions are frequent opening lines in impersonation emails. Because they seem like harmless emails with simple requests, they get past email security controls and lay the bait.
  • “I need an urgent favor”, “Can you do something for me within the next 15 minutes?”, and other phrases implying the email is of a time-sensitive nature. If you get this email from your “CEO”, your instinct might be to respond quickly and be duped by the impersonation in the process.
  • “Can you share your personal cell phone number?”, “I need your personal email”, and other out-of-context requests for personal information. The objective of these requests is to harvest information and build out a profile of the victim; once adversaries have enough information, they have another entity to impersonate.
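
A simple watchlist of these openers can cheaply triage inbound mail for human review. A minimal sketch follows; the phrase list is illustrative, not exhaustive:

```python
# Minimal sketch: surface emails whose opening lines match well-known
# impersonation openers. The watchlist is illustrative, not exhaustive.
WATCHLIST = (
    "are you free now",
    "are you at your desk",
    "i need an urgent favor",
    "can you share your personal cell phone number",
)

def suspicious_opener(body: str) -> bool:
    # Normalize whitespace and inspect roughly the first couple of lines.
    opening = " ".join(body.lower().split())[:120]
    return any(phrase in opening for phrase in WATCHLIST)

print(suspicious_opener("Are you at your desk? I need a quick favor."))  # True
```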

Tip #5 – Use secondary channels of authentication

Enterprise adoption of two-factor authentication (2FA) has grown considerably over the years, helping safeguard employee accounts and reduce the impact of account compromise.

Individuals should try to replicate this best practice for any email that makes unusual requests related to money or data. For example:

  • Has a vendor emailed you with a sudden change in their bank account details, right when an invoice is due? Call or text the vendor and confirm that they sent the email.
  • Did your manager email you asking for gift card purchases? Send them a Slack message (or whatever productivity app you use) to confirm the request.
  • Did your HR representative email you a COVID resource document that needs email account credentials to be viewed? Check the veracity of the email with the HR rep.

Even if you’re reaching out to very busy people for this additional authentication, they will understand and appreciate your caution.

These tips are meant as starting points for individuals and organizations to better understand email impersonation and start addressing its risk factors. But effective protection against email impersonation can’t come down to eye tests alone. Enterprise security teams should conduct a thorough audit of their email security stack and explore augmenting native email security with tools that offer specific protection against impersonation.

With email more important to our digital lives than ever, it’s vital that we are able to believe people are who their email says they are. Email impersonation attacks exploit this sometimes-misplaced belief. Stopping email impersonation attacks will require a combination of security hygiene, email security solutions that provide specific impersonation protection, and some healthy paranoia while reading emails – even if they seem to be from people you trust.

Cybersecurity is failing due to ineffective technology

A failing cybersecurity market is contributing to ineffective performance of cybersecurity technology, research from Debate Security reveals.


Based on over 100 comprehensive interviews with business and cybersecurity leaders from large enterprises, together with vendors, assessment organizations, government agencies, industry associations and regulators, the research shines a light on why technology vendors are not incentivized to deliver products that are more effective at reducing cyber risk.

The report supports the view that efficacy problems in the cybersecurity market are primarily due to economic issues, not technological ones. The research addresses three key themes and ultimately arrives at a consensus for how to approach a new model.

Cybersecurity technology is not as effective as it should be

90% of participants reported that cybersecurity technology is not as effective as it should be when it comes to protecting organizations from cyber risk. Trust in technology to deliver on its promises is low, and yet when asked how organizations evaluate cybersecurity technology efficacy and performance, there was not a single common definition.

Pressure has been placed on improving people- and process-related issues, but ineffective technology has become accepted as normal and, shamefully, inevitable.

The underlying problem is one of economics, not technology

92% of participants reported that there is a breakdown in the market relationship between buyers and vendors, with many seeing deep-seated information asymmetries.

Outside government, few buyers today use detailed, independent cybersecurity efficacy assessment as part of their cybersecurity procurement process, and not even the largest organizations reported having the resources to conduct all the assessments themselves.

As a result, vendors are incentivized to focus on other product features, and on marketing, deprioritizing cybersecurity technology efficacy – one of several classic signs of a “market for lemons”.

Coordinated action between stakeholders only achieved through regulation

Unless buyers demand greater efficacy, regulation may be the only way to address the issue. Overcoming first-mover disadvantages will be critical to fixing the broken cybersecurity technology market.

Many research participants believe that coordinated action between all stakeholders can only be achieved through regulation – though some hold out hope that coordination could be achieved through sectoral associations.

In either case, 70% of respondents feel that independent, transparent assessment of technology would help solve the market breakdown. Setting standards on technology assessment, rather than on the technology itself, could avoid stifling innovation.

Defining cybersecurity technology efficacy

Participants in this research broadly agree that four characteristics are required to comprehensively define cybersecurity technology efficacy.

To be effective, cybersecurity solutions need:

  • The capability to deliver the stated security mission (be fit-for-purpose)
  • The practicality that enterprises need to implement, integrate, operate and maintain them (be fit-for-use)
  • The quality in design and build to avoid vulnerabilities and negative impact
  • The provenance in the vendor company, its people and supply chain, such that these do not introduce additional security risk

“In cybersecurity right now, trust doesn’t always sell, and good security doesn’t always sell and isn’t always easy to buy. That’s a real problem,” said Ciaran Martin, advisory board member, Garrison Technology.

“Why we’re in this position is a bit of a mystery. This report helps us understand it. Fixing the problem is harder. But our species has fixed harder problems and we badly need the debate this report calls for, and industry-led action to follow it up.”

“Company boards are well aware that cybersecurity poses potentially existential risk, but are generally not well equipped to provide oversight on matters of technical detail,” said John Cryan, Chairman of Man Group.

“Boards are much better equipped when it comes to the issues of incentives and market dynamics revealed by this research. Even if government regulation proves inevitable, I would encourage business leaders to consider these findings and to determine how, as buyers, corporates can best ensure that cybersecurity solutions offered by the market are fit for purpose.”

“As a technologist and developer of cybersecurity products, I really feel for cybersecurity professionals who are faced with significant challenges when trying to select effective technologies,” said Henry Harrison, CSO of Garrison Technology.

“We see two noticeable differences when selling to our two classes of prospects. For security-sensitive government customers, technology efficacy assessment is central to buying behavior – but we rarely see anything similar when dealing with even the most security-sensitive commercial customers. We take from this study that in many cases this has less to do with differing risk appetites and more to do with structural market issues.”

How tech trends and risks shape organizations’ data protection strategy

Trustwave released a report which depicts how technology trends, compromise risks and regulations are shaping how organizations’ data is stored and protected.


Data protection strategy

The report is based on a recent survey of 966 full-time IT professionals who are cybersecurity decision makers or security influencers within their organizations.

Over 75% of respondents work in organizations with over 500 employees in key geographic regions including the U.S., U.K., Australia and Singapore.

“Data drives the global economy yet protecting databases, where the most critical data resides, remains one of the least focused-on areas in cybersecurity,” said Arthur Wong, CEO at Trustwave.

“Our findings illustrate organizations are under enormous pressure to secure data as workloads migrate off-premises, attacks on cloud services increase and ransomware evolves. Gaining complete visibility of data either at rest or in motion and eliminating threats as they occur are top cybersecurity challenges all industries are facing.”

More sensitive data moving to the cloud

The types of data organizations are moving into the cloud have become increasingly sensitive, making a solid data protection strategy crucial. Ninety-six percent of total respondents stated they plan to move sensitive data to the cloud over the next two years, with 52% planning to include highly sensitive data; Australia, at 57%, led the regions surveyed.

Not surprisingly, when asked to rate the importance of securing data regarding digital transformation initiatives, an average score of 4.6 out of a possible high of five was tallied.

Hybrid cloud model driving digital transformation and data storage

Of those surveyed, most (55%) use both on-premises and public cloud to store data, with 17% using public cloud only. Singapore organizations use the hybrid cloud model most frequently at 73%, 18% higher than the average, while U.S. organizations employ it the least at 45%.

Government respondents were the most likely to store data on-premises only, at 39%, 11% higher than average. Additionally, 48% of respondents stored data using the hybrid cloud model during a recent digital transformation project, with only 29% relying solely on their own databases.

Most organizations use multiple cloud services

Seventy percent of organizations surveyed were found to use between two and four public cloud services and 12% use five or more. At 14%, the U.S. had the most instances of using five or more public cloud services followed by the U.K. at 13%, Australia at 9% and Singapore at 9%. Only 18% of organizations queried use zero or just one public cloud service.

Perceived threats do not match actual incidents

Thirty-eight percent of organizations are most concerned with malware and ransomware, followed by phishing and social engineering at 18%, application threats at 14%, insider threats at 9%, privilege escalation at 7% and misconfiguration attacks at 6%.

Interestingly, when asked about actual threats experienced, phishing and social engineering came in first at 27% followed by malware and ransomware at 25%. The U.K. and Singapore experienced the most phishing and social engineering incidents at 32% and 31% and the U.S. and Australia experienced the most malware and ransomware attacks at 30% and 25%.

Respondents in the government sector had the highest incidents of insider threats at 13% or 5% above the average.

Patching practices show room for improvement

A resounding 96% of respondents have patching policies in place; of those, 71% rely on automated patching and 29% employ manual patching. Overall, 61% of organizations patched within 24 hours and 28% patched between 24 and 48 hours.

The highest percentage patching within a 24-hour window came from Australia at 66% and the U.K. at 61%. Unfortunately, 4% of organizations took a week to over a month to patch.

Reliance on automation driving key security processes

In addition to a high percentage of organizations using automated patching processes, findings show 89% of respondents employ automation to check for overprivileged users or lock down access credentials once an individual has left their job or changed roles.

This finding correlates with low concern for insider threats and data compromise due to privilege escalation, according to the survey. Organizations should be cautious about assuming that removing a user’s access to applications also removes their access to databases, which is often not the case; a simple automated reconciliation is sketched below.
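
The sketch below illustrates that kind of reconciliation check. The rosters and account names are hypothetical; real inputs would come from an IAM system and the database’s grant catalog.

```python
# Minimal sketch: flag accounts whose application access was revoked but that
# still hold live database grants. All rosters below are hypothetical; real
# inputs would be pulled from an IAM system and database grant catalogs.
app_users = {"alice", "bob"}            # accounts with current application access
db_grants = {"alice", "bob", "carol"}   # accounts with live database grants

# Any account holding database grants without application access is stale.
for account in sorted(db_grants - app_users):
    print(f"Stale database grant, schedule revocation: {account}")
```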

Data regulations having minor impact on database security strategies

When asked if data regulations such as GDPR and CCPA impacted database security strategies, a surprising 60% of respondents said no.

These findings may suggest a lack of alignment between information technology and other departments, such as legal, responsible for helping ensure stipulations like ‘the right to be forgotten’ are properly enforced to avoid severe penalties.

Small teams with big responsibilities

Of those surveyed, 47% had a security team size of only six to 15 members. Respondents from Singapore had the smallest teams with 47% reporting between one and ten members and the U.S. had the largest teams with 22% reporting team size of 21 or more, 2% higher than the average.

Thirty-two percent of government respondents surprisingly run security operations with teams between just six and ten members.

Moving to the cloud with a security-first, zero trust approach

Many companies tend to jump into the cloud before thinking about security. They may think they’ve thought about security, but when moving to the cloud, the whole concept of security changes. The security model must transform as well.


Moving to the cloud and staying secure

Most companies maintain a “castle, moat, and drawbridge” attitude to security. They put everything inside the “castle” (datacenter); establish a moat around it, with sharks and alligators, guns on turrets; and control access by raising the drawbridge. The access protocol involves a request for access, vetting through firewall rules where the access is granted or denied. That’s perimeter security.

When moving to the cloud, perimeter security is still important, but identity-based security is available to strengthen the security posture. That’s where a cloud partner skilled at explaining and operating a different security model is needed.

Anybody can grab a virtual machine, build the machine in the cloud, and be done, but establishing a VM and transforming the machine to a service with identity-based security is a different prospect. When identity is added to security, the model looks very different, resulting in cost savings and an increased security posture.

Advanced technology, cost of security, and lack of cybersecurity professionals place a strain on organizations. Cloud providers invest heavily in infrastructure, best-in-class tools, and a workforce uniquely focused on security. As a result, organizations win operationally, financially, and from a security perspective, when moving to the cloud. To be clear, moving applications and servers, as is, to the cloud does not make them secure.

Movement to the cloud should be a standardized process, guided by a Cloud Center of Excellence (CCoE) or Cloud Business Office (CBO); when that process puts security first, organizations can reap the security benefits.

Shared responsibility

Although security is marketed as a shared responsibility in the cloud, ultimately, the owner of the data (customer) is responsible and the responsibility is non-transferrable. In short, the customer must understand the responsibility matrix (RACI) involved to accomplish their end goals. Every cloud provider has a shared responsibility matrix, but organizations often misunderstand the responsibilities or the lines fall into a grey area. Regardless of responsibility models, the data owner has a responsibility to protect the information and systems. As a result, the enterprise must own an understanding of all stakeholders, their responsibilities, and their status.

When choosing a partner, it’s vital for companies to identify their exact needs, their weaknesses, and even their culture. No cloud vendor will cover it all from the beginning, so it’s essential that organizations take control and ask the right questions (see Cloud Security Alliance’s CAIQ), in order to place trust in any cloud provider. If it’s to be a managed service, for example, it’s crucial to ask detailed questions about how the cloud provider intends to execute the offering.

It’s important to develop a standard security questionnaire and probe multiple layers deep into the service model until the provider is unable to meet the need. Looking through a multilayer deep lens allows the customer and service provider to understand the exact lines of responsibility and the details around task accomplishment.

Trust-as-a-Service

It might sound obvious, but it’s worth stressing: trust is a shared responsibility between the customer and cloud provider. Trust is also earned over time and is critical to the success of the customer-cloud provider relationship. That said, zero trust is a technical term that means, from a technology viewpoint, assume danger and breach. Organizations must trust their cloud provider but should avoid blind trust and validate. Trust as a Service (TaaS) is a newer acronym that refers to third-party endorsement of a provider’s security practices.

Key influencers of a customer’s trust in their cloud provider include:

  • Data location
  • Investigation status and location of data
  • Data segregation (keeping cloud customers’ data separated from others)
  • Availability
  • Privileged access
  • Backup and recovery
  • Regulatory compliance
  • Long-term viability

A TaaS example: Google Cloud

Google has taken great strides to earn customer trust, designing the Google Cloud Platform with a keen eye on zero trust through its implementation of the BeyondCorp model. For example, Google has implemented two core concepts:

  • Delivery of services and data: ensuring that people with the correct identity and the right purpose can access the required data every time
  • Prioritization and focus: access and innovation are placed ahead of threats and risks, meaning that as products are innovated, security is built into the environment

Transparency is very important to the trust relationship. Google has enabled transparency through strong visibility and control of data. When evaluating cloud providers, understanding their transparency related to access and service status is crucial. Google ensures transparency by using specific controls including:

  • Limited data center access from a physical standpoint, adhering to strict access controls
  • Disclosing how and why customer data is accessed
  • Incorporating a process of access approvals

Multi-layered security for a trusted infrastructure

Finally, cloud services must provide customers with an understanding of how each layer of infrastructure works and build rules into each. This includes operational and device security, encrypting data at rest, multiple layers of identity, and finally storage services that are multi-layered and secure by default.

Cloud native companies have a security-first approach and naturally have a higher security understanding and posture. That said, when choosing a cloud provider, enterprises should always understand, identify, and ensure that their cloud solution addresses each one of their security needs, and who’s responsible for what.

Essentially, every business must find a cloud partner that can answer all the key questions, provide transparency, and establish a trusted relationship in the zero trust world where we operate.

Preventing cybersecurity’s perfect storm

Zerologon might have been cybersecurity’s perfect storm: that moment when multiple conditions collide to create a devastating disaster. Thanks to Secura and Microsoft’s rapid response, it wasn’t.

Zerologon scored a perfect 10 CVSS score. Threats rating a perfect 10 are easy to execute and have deep-reaching impact. Fortunately, they aren’t frequent, especially in prominent software brands such as Windows. Still, organizations that perpetually lag when it comes to patching become prime targets for cybercriminals. Flaws like Zerologon are rare, but there’s no reason to assume that the next attack will not be using a perfect 10 CVSS vulnerability, this time a zero-day.

Zerologon: Unexpected squall

Zerologon escalates a domain user beyond their current role and permissions to a Windows Domain Administrator. This vulnerability is trivially easy to exploit. While it seems that the most obvious threat is a disgruntled insider, attackers may target any average user. The most significant risk comes from a user with an already compromised system.

In this scenario, a bad actor has already taken over an end user’s system but is constrained only to their current level of access. By executing this exploit, the bad actor can break out of their existing permissions box. This attack grants them the proverbial keys to the kingdom in a Windows domain to access whatever Windows-based devices they wish.

Part of why Zerologon is problematic is that many organizations rely on Windows as an authoritative identity for a domain. To save time, they promote their Windows Domain Administrators to an Administrator role throughout the organizational IT ecosystem and assign bulk permissions, rather than adding them individually. This method eases administration by removing the need to update the access permissions frequently as these users change jobs. This practice violates the principle of least privilege, leaving an opening for anyone with a Windows Domain Administrator role to exercise broad-reaching access rights beyond what they require to fulfill the role.

Beware of sharks

Advanced preparation for attacks like these requires a fundamental paradigm shift in organizational boundary definitions, away from a legacy mentality to a more modern cybersecurity mindset. The traditional castle model assumes all threats remain outside the firewall boundary and trusts everything that is either natively internal or connected via VPN to some degree.

Modern cybersecurity professionals understand the advantage of controls like zero standing privilege (ZSP), which authorizes no one by default and requires that each user request access and be evaluated before privileged access is granted. Think of it much like the security check at an airport. To get in, everyone – passenger, pilot, even store staff – needs to be inspected, prove they belong and have nothing questionable in their possession.

This continual re-certification prevents users from gaining access once they’ve experienced an event that alters their eligibility, such as leaving the organization or changing positions. Checking permissions before approving them ensures only those who currently require a resource can access it.

My hero zero (standing privilege)

Implementing the design concept of zero standing privilege is crucial to hardening against privilege escalation attacks, as it removes the administrator’s vast amounts of standing power and access. Users acquire these rights for a limited period and only on an as-needed basis. This Just-In-Time (JIT) method of provisioning creates a better access review process. Requests are either granted time-bound access or flagged for escalation to a human approver, ensuring automation oversight.
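
As a rough illustration of JIT provisioning under zero standing privilege, the sketch below grants time-bound access, escalates high-risk requests to a human approver, and re-checks expiry on every use. The resource names, TTL, and risk threshold are hypothetical policy choices, not any real PAM product’s API.

```python
# Minimal sketch of Just-In-Time provisioning under zero standing privilege:
# grants are time-bound, high-risk requests escalate to a human approver, and
# expiry is re-checked on every use.
import time

GRANT_TTL_SECONDS = 15 * 60                 # access expires after 15 minutes
HIGH_RISK_RESOURCES = {"domain-admin"}      # always require human approval

grants: dict[tuple[str, str], float] = {}   # (user, resource) -> expiry time

def request_access(user: str, resource: str, risk_score: float) -> str:
    if resource in HIGH_RISK_RESOURCES or risk_score > 0.7:
        return "escalated to human approver"     # flagged, never auto-granted
    grants[(user, resource)] = time.time() + GRANT_TTL_SECONDS
    return "granted (time-bound)"

def has_access(user: str, resource: str) -> bool:
    # Zero standing privilege: nothing persists past its expiry.
    expiry = grants.get((user, resource))
    return expiry is not None and time.time() < expiry

print(request_access("pat", "billing-db", risk_score=0.2))    # granted (time-bound)
print(has_access("pat", "billing-db"))                        # True, within the TTL
print(request_access("pat", "domain-admin", risk_score=0.2))  # escalated to human approver
```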

An essential component of zero standing privilege is avoiding super-user roles and access. Old school practitioners may find it odd and question the impact on daily administrative tasks that keep the ecosystem running. Users manage these tasks through heavily logged time-limited permission assignments. Reliable user behavior analytics, combined with risk-based privileged access management (PAM) and machine learning supported log analysis, offers organizations better contextual identity information. Understanding how their privileged access is leveraged and identifying access misuse before it takes root is vital to preventing a breach.

Peering into the depths

To even start with zero standing privilege, an organization must understand what assets they consider privileged. The categorization of digital assets begins the process. The next step is assigning ownership of these resources. Doing this allows organizations to configure the PAM software to accommodate the policies and access rules defined organizationally, ensuring access rules meet governance and compliance requirements.

The PAM solution requires in-depth visibility of each individual’s full access across all cloud and SaaS environments, as well as throughout the internal IT infrastructure. This information improves the identification of toxic combinations, where granted permissions create compliance issues such as segregation of duties (SoD) violations.

AI & UEBA to the rescue

Zero standing privilege generates a large number of user logs and behavioral information over time. Manual log review becomes unsustainable very quickly. Leveraging the power of AI and machine learning to derive intelligent analytics allows organizations to identify risky behaviors and locate potential breaches far faster than human users.

Integrating user and entity behavior analytics (UEBA) software establishes baselines of behavior and triggers alerts when deviations occur. UEBA systems detect insider threats and advanced persistent threats (APTs) while generating contextual identity information.

UEBA systems track all behavior linked back to an entity and identify anomalous behaviors such as spikes in access requests, requests for data that would typically not be allowed for that user’s roles, or systematic access to numerous items. Contextual information helps organizations identify situations that might indicate a breach or point to unauthorized exfiltration of data.
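
As a toy example of the baseline-and-deviation idea, the sketch below scores today’s access-request count against a user’s own history using a z-score. The counts and alert threshold are hypothetical; production UEBA models are far richer.

```python
# Toy example of baseline-and-deviation detection: score today's access-request
# count against the user's own history with a z-score.
from statistics import mean, stdev

baseline = [12, 9, 14, 11, 10, 13, 12]  # user's typical daily access requests
today = 45                              # today's observed count

mu, sigma = mean(baseline), stdev(baseline)
z = (today - mu) / sigma
if z > 3:                               # far outside this user's normal behavior
    print(f"Anomalous activity: {today} requests (z={z:.1f}), raise a UEBA alert")
```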

Your compass points to ZTA

Protecting against privilege escalation threats requires more than merely staying up to date on patches. Part of stopping attacks like Zerologon is to re-imagine how security is architected in an organization. Centering identity as the new security perimeter and implementing zero standing privilege are essential to the foundation of a security model known as zero trust architecture (ZTA).

Zero trust architecture has existed for a while in the corporate world. It has been gaining attention from the public sector since NIST’s recent approval of SP 800-207, which outlined ZTA and how government agencies can leverage it. NIST’s sanctification of ZTA opened the doors for government entities and civilian contractors to incorporate it into their security model. Taking this route helps to close the privilege escalation pathway, providing your organization a safe harbor in the event of another cybersecurity perfect storm.

Can we trust passwordless authentication?

We are beginning to shift away from what has long been our first and last line of defense: the password. It’s an exciting time. Since the beginning, passwords have aggravated people. Meanwhile, passwords have become the de facto first step in most attacks. Yet I can’t help but think, what will the consequences of our actions be?


Intended and unintended consequences

Back when overhead cameras came to the express toll routes in Ontario, Canada, it wasn’t long before the SQL injection to drop tables made its way onto bumper stickers. More recently in California, researcher Joe Tartaro purchased a license plate that said NULL. With the bumper stickers, the story goes, everyone sharing the road would get a few hours of toll-free driving. But with the NULL license plate? Tartaro ended up on the hook for every traffic ticket with no plate specified, to the tune of thousands of dollars.

One organization I advised recently completed an initiative to reduce the number of agents on the endpoint. In a year when many are extending the lifespan and performance of endpoints while eliminating location-dependent security controls, this shift makes strategic sense.

Another CISO I spoke with recently consolidated multi-factor authenticators onto a single platform. Standardizing the user experience and reducing costs is always a pragmatic move. Yet these moves limited future moves. In both cases, any initiative by the security team which changed authenticators or added agents ended up stuck in park, waiting for a greenlight.

Be careful not to limit future moves

To make moves that open up possibilities, security teams think along two lines: usability and defensibility. That is, how will the change impact the workforce, near term and long term? On the opposite angle, how will the change affect criminal behavior, near term and long term?

Whether decreasing the number of passwords required through single sign-on (SSO) or eliminating the password altogether in favor of a strong authentication factor (passwordless), the priority is on the workforce experience. The number one reason for tackling the password problem given by security leaders is improving the user experience. It is a rare security control that makes people’s lives easier and leadership wants to take full advantage.

There are two considerations when planning for usability. The first is ensuring the tactic addresses the common friction points. For example, with passwordless, does the approach provide access to the devices and applications people work with? Is it more convenient and faster than what they do today? The second consideration is evaluating what the tactic allows the security team to do next. Does the approach to passwordless or SSO block a future initiative due to lock-in? Or will the change enable us to take future steps to secure authentication?

Foiling attackers

The one thing we know for certain is, whatever steps we take, criminals will take steps to get around us. In the sixty years since the first password leak, we’ve done everything we can, using both machine and man. We’ve encrypted passwords. We’ve hashed them. We increased key length and algorithm strength. At the same time, we’ve asked users to create longer passwords, more complex passwords, unique passwords. We’ve provided security awareness training. None of these steps were taken in a vacuum. Criminals cracked files, created rainbow tables, brute-forced and phished credentials. Sixty years of experience suggests the advancement we make will be met with an advanced attack.

We must increase the trust in authentication while increasing usability, and we must take steps that open up future options. Security teams can increase trust by pairing user authentication with device authentication. Now the adversary must both compromise the authentication and gain access to the device.

To reduce the likelihood of device compromise, set policies to prevent unpatched, insecure, infected, or compromised devices from authenticating. The likelihood can be even further reduced by capturing telemetry, modeling activity, and comparing activity to the user’s baseline. Now the adversary must compromise authentication, gain access to the endpoint device, avoid endpoint detection, and avoid behavior analytics.
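
A minimal sketch of this layered policy follows, assuming hypothetical posture fields reported by device-management and EDR tooling: authentication succeeds only when both the user’s identity and the device’s health check out.

```python
# Minimal sketch: pair user authentication with a device-posture check before
# issuing a session. The posture fields are hypothetical attributes that would
# come from device-management and EDR telemetry.
from dataclasses import dataclass

@dataclass
class DevicePosture:
    patched: bool         # OS and applications up to date
    disk_encrypted: bool  # storage encryption enabled
    edr_healthy: bool     # endpoint detection agent present and reporting

def allow_authentication(user_verified: bool, device: DevicePosture) -> bool:
    # Both factors must hold: the user's identity AND the device's health.
    device_trusted = device.patched and device.disk_encrypted and device.edr_healthy
    return user_verified and device_trusted

print(allow_authentication(True, DevicePosture(True, True, True)))   # True
print(allow_authentication(True, DevicePosture(False, True, True)))  # False: unpatched device is blocked
```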

Conclusion

Technology is full of unintended consequences. Some lead to toll-free drives and others lead to unexpected fees. Some open new opportunities, others new vulnerabilities. Today, many are moving to improve user experience by reducing or removing passwords. The consequences won’t be known immediately. We must ensure our approach meets the use cases the workforce cares about while positioning us to address longer-term goals and challenges.

Additionally, we must get ahead of adversaries and criminals. With device trust and behavior analytics, we must increase trust in passwordless authentication. We can’t predict what is to come, but these are steps security teams can take today to better position and protect our organizations.

What is confidential computing? How can you use it?

What is confidential computing? Can it strengthen enterprise security? Sam Lugani, Lead Security PMM, Google Workspace & GCP, answers these and other questions in this Help Net Security interview.


How does confidential computing enhance the overall security of a complex enterprise architecture?

We’ve all heard about encryption in-transit and at-rest, but as organizations prepare to move their workloads to the cloud, one of the biggest challenges they face is how to process sensitive data while still keeping it private. However, when data is being processed, there hasn’t been an easy solution to keep it encrypted.

Confidential computing is a breakthrough technology which encrypts data in-use – while it is being processed. It creates a future where private and encrypted services become the cloud standard.

At Google Cloud, we believe this transformational technology will help instill confidence that customer data is not being exposed to cloud providers or susceptible to insider risks.

Confidential computing has moved from research projects into worldwide deployed solutions. What are the prerequisites for delivering confidential computing across both on-prem and cloud environments?

Running workloads confidentially will differ based on what services and tools you use, but one thing is a given: organizations don’t want to compromise on usability and performance for the sake of security.

Those running Google Cloud can seamlessly take advantage of the products in our portfolio, Confidential VMs and Confidential GKE Nodes.

All customer workloads that run in VMs or containers today can run as confidential workloads without significant performance impact. The best part is that we have worked hard to simplify the complexity: enabling it takes a single checkbox.


What type of investments does confidential computing require? What technologies and techniques are involved?

To deliver on the promise of confidential computing, customers need to take advantage of security technology offered by modern, high-performance CPUs, which is why Google Cloud’s Confidential VMs run on N2D series VMs powered by 2nd Gen AMD EPYC processors.

To support these environments, we also had to update our own hypervisor and low-level platform stack while also working closely with the open source Linux community and modern operating system distributors to ensure that they can support the technology.

Networking and storage drivers are also critical to the deployment of secure workloads and we had to ensure we were capable of handling confidential computing traffic.

How is confidential computing helping large organizations with a massive work-from-home movement?

As we entered the first few months of dealing with COVID-19, many organizations expected a slowdown in their digital strategy. Instead, we saw the opposite – most customers accelerated their use of cloud-based services. Today, enterprises have to manage a new normal which includes a distributed workforce and new digital strategies.

With workforces dispersed, confidential computing can help organizations collaborate on sensitive workloads in the cloud across geographies and competitors, all while preserving the privacy of confidential datasets. This can lead to the development of transformative technologies – imagine, for example, being able to more quickly build vaccines and cure diseases as a result of this secure collaboration.

How do you see the work of the Confidential Computing Consortium evolving in the near future?

Google was among the founding members of the Confidential Computing Consortium, operating under the umbrella of the Linux Foundation to facilitate adoption of confidential computing.

Cloud providers, hardware manufacturers, and software vendors all need to work together to define standards to advance confidential computing. As the technology garners more interest, sustained industry collaboration such as the Consortium will be key to helping realize the true potential of confidential computing.

Global adoption of data and privacy programs still maturing

Privacy and data protection is a critical issue for organizations, one that has moved beyond legal departments to the forefront of organizations’ strategic priorities.


FairWarning research, based on survey results from more than 550 global privacy and data protection, IT, and compliance professionals, outlines the characteristics and behaviors of advanced privacy and data protection teams.

By examining the trends of privacy adoption and maturity across industries, the research uncovers adjustments that security and privacy leaders need to make to better protect their organization’s data.

The prevalence of data and privacy attacks

Insights from the research reinforce the importance of privacy and data protection as 67% of responding organizations documented at least one privacy incident within the past three years, and over 24% of those experienced 30 or more.

Additionally, 50% of all respondents reported at least one data breach in the last three years, with 10% reporting 30 or more.

Overall immaturity of privacy programs

Despite increased regulations, breaches and privacy incidents, organizations have not rapidly accelerated the advancement of their privacy programs as 44% responded they are in the early stages of adoption and 28% are in middle stages.

Healthcare and software rise to the top

Despite an overall lack of maturity across industries, healthcare and software organizations reflect more maturity in their privacy programs, as compared to insurance, banking, government, consulting services, education institutions and academia.

Harnessing the power of data and privacy programs

Respondents understand the significant benefits of a mature privacy program as organizations experience greater gains across every area measured including: increased employee privacy awareness, mitigating data breaches, greater consumer trust, reduced privacy complaints, quality and innovation, competitive advantage, and operational efficiency.

Of note, more mature companies believe they experience the largest gain in reducing privacy complaints (30.3% higher than early stage respondents).

Attributes and habits of mature privacy and data protection programs

Companies with more mature privacy programs are more likely to have C-Suite privacy and security roles within their organization than those in the mid- to early-stages of privacy program development.

Additionally, 88.2% of advanced stage organizations know where most or all of their personally identifiable information/personal health information is located, compared to 69.5% of early stage respondents.

Importance of automated tools to monitor user activity

Insights reveal a clear distinction between the maturity levels of privacy programs and related benefits of automated tools as 54% of respondents with more mature programs have implemented this type of technology compared with only 28.1% in early stage development.

Automated tools enable organizations to monitor all user activity in applications and efficiently identify anomalous activity that signals a breach or privacy violation.

“This research revealed a major gap between mature and early stage privacy programs and the benefits they receive,” said Ed Holmes, CEO, FairWarning.

“It is exciting to see healthcare at the top when it comes to privacy maturity. However, as we dig deeper into the data, we find that 37% of respondents with 30 or more breaches are from healthcare, indicating that there is still more work to be done.

“This study highlights useful guidance on steps all organizations can take regardless of industry or size to advance their program and ensure they are at the forefront of privacy and data protection.”

“In today’s fast-paced and increasingly digitized world, organizations regardless of size or industry, need to prioritize data and privacy protection,” said IAPP President & CEO J. Trevor Hughes.

“As the research has demonstrated, it is imperative that security and privacy professionals recognize the importance of implementing privacy and data protection programs to not only reduce privacy complaints and data breaches, but increase operational efficiency.”

Most cybersecurity pros believe automation will make their jobs easier

Despite 88% of cybersecurity professionals believing automation will make their jobs easier, younger staffers are more concerned than their veteran counterparts that the technology will replace their roles, according to research by Exabeam.

cybersecurity automation jobs

Overall, satisfaction levels continued a 3-year positive trend, with 96% of respondents indicating they are happy with their roles and responsibilities and 87% reportedly pleased with their salaries and earnings. Additionally, there was improvement in gender diversity, with female respondents increasing from 9% in 2019 to 21% this year.

“The concern for automation among younger professionals in cybersecurity was surprising to us. In trying to understand this sentiment, we could partially attribute it to lack of on-the-job training using automation technology,” said Samantha Humphries, security strategist at Exabeam.

“As we noted earlier this year in our State of the SOC research, ambiguity around career path or lack of understanding about automation can have an impact on job security. It’s also possible that this is a symptom of the current economic climate or a general lack of experience navigating the workforce during a global recession.”

AI and ML: A threat to job security?

Of respondents under the age of 45, 53% agreed or strongly agreed that AI and ML are a threat to their job security. This contrasts with just 25% of respondents 45 and over who feel the same, possibly indicating that certain subsets of security professionals prefer to write rules and investigate manually.

Interestingly, when asked directly about automation software, 89% of respondents under 45 years old believed it would improve their jobs, yet 47% are still threatened by its use. This is again in contrast with the 45 and over demographic, where 80% believed automation would simplify their work, and only 22% felt threatened by its use.

Examining the sentiments around automation by region, 47% of US respondents were concerned about job security when automation software is in use, as were respondents in Singapore (54%), Germany (42%), Australia (40%) and the UK (33%).

In the survey, which drew insights from professionals throughout the US, the UK, AUS, Canada, India and the Netherlands, only 10% overall believed that AI and automation were a threat to their jobs.

On the flip side, there were noticeable increases in job approval across the board, with an upward trend in satisfaction around role and responsibilities (96%), salary (87%) and work/life balance (77%).

Diversity showing positive signs of improvement

When asked what else they enjoyed about their jobs, respondents listed working in an environment with professional growth (15%) as well as opportunities to challenge oneself (21%) as top motivators.

53% reported jobs that are either stressful or very stressful, which is down from last year (62%). Interestingly, despite being among those that are generally threatened by automation software, 100% of respondents aged 18-24 reported feeling secure in their roles and were happiest with their salaries (93%).

Though the number of female respondents increased this year, it remains to be seen whether this will emerge as a trend. This year’s male respondents (78%) are down 13% from last year (91%).

In 2019, nearly 41% had been in the profession for at least 10 years. This year, a larger percentage (83%) have 10 years of experience or less, and 34% have been in the cybersecurity industry for five years or less. Additionally, one-third do not have formal cybersecurity degrees.

“There is evidence that automation and AI/ML are being embraced, but this year’s survey exposed fascinating generational differences when it comes to professional openness and using all available tools to do their jobs,” said Phil Routley, senior product marketing manager, APJ, Exabeam.

“And while gender diversity is showing positive signs of improvement, it’s clear we still have a very long way to go in breaking down barriers for female professionals in the security industry.”

Cloud environment complexity has surpassed human ability to manage

IT leaders are increasingly concerned that accelerated digital transformation, combined with the complexity of modern multicloud environments, is putting already stretched digital teams under too much pressure, a Dynatrace survey of 700 CIOs reveals.


This leaves little time for innovation, and limits teams’ ability to prioritize tasks that drive greater value and better outcomes for the business and its customers.

Key findings

  • 89% of CIOs say digital transformation has accelerated in the last 12 months, and 58% predict it will continue to speed up.
  • 86% of organizations are using cloud-native technologies, including microservices, containers, and Kubernetes, to accelerate innovation and achieve more successful business outcomes.
  • 63% of CIOs say the complexity of their cloud environment has surpassed human ability to manage.
  • 44% of IT and cloud operations teams’ time is spent on manual, routine work just ‘keeping the lights on’, costing organizations an average of $4.8 million per year.
  • 56% of CIOs say they are almost never able to complete everything the business needs from IT.
  • 70% of CIOs say their team is forced to spend too much time doing manual tasks that could be automated if only they had the means.

“The benefits of IT and business automation extend far beyond cost savings. Organizations need this capability – to drive revenue, stay connected with customers, and keep employees productive – or they face extinction,” said Bernd Greifeneder, CTO at Dynatrace.

“Increased automation enables digital teams to take full advantage of the ever-growing volume and variety of observability data from their increasingly complex, multicloud, containerized environments. With the right observability platform, teams can turn this data into actionable answers, driving a cultural change across the organization and freeing up their scarce engineering resources to focus on what matters most – customers and the business.”

Cloud environment complexity

  • Organizations are using cloud-native technologies including microservices (70%), containers (70%) and Kubernetes (54%) to advance innovation and achieve more successful business outcomes.
  • However, 74% of CIOs say the growing use of cloud-native technologies will lead to more manual effort and time spent ‘keeping the lights on’.

Traditional tools and manual effort cannot keep up

  • On average, organizations are using 10 monitoring solutions across their technology stacks. However, digital teams only have full observability into 11% of their application and infrastructure environments.
  • 90% of CIOs say there are barriers preventing them from monitoring a greater proportion of their applications.
  • The dynamic nature of today’s hybrid, multicloud ecosystems amplifies complexity. 61% of CIOs say their IT environment changes every minute or less, while 32% say their environment changes at least once every second.

CIOs call for radical change

  • 74% of CIOs say their organization will lose its competitive edge if IT is unable to spend less time ‘keeping the lights on’.
  • 84% said the only effective way forward is to reduce the number of tools and the amount of manual effort IT teams invest in monitoring and managing the cloud and user experience.
  • 72% said they cannot keep plugging monitoring tools together to maintain observability. Instead, they need a single platform covering all use cases and offering a consistent source of truth.

Observability, automation, and AI are key

  • 93% of CIOs said AI-assistance will be critical to IT’s ability to cope with increasing workloads and deliver maximum value to the business.
  • CIOs expect automation in cloud and IT operations will reduce the amount of time spent ‘keeping the lights on’ by 38%, saving organizations $2 million per year, on average.
  • Despite this advantage, just 19% of all repeatable operations processes for digital experience management and observability have been automated.

“History has shown successful organizations use disruptive moments to their advantage,” added Greifeneder. “Now is the time to break silos, establish a true BizDevOps approach, and deliver agile processes across a consistent, continuous delivery stack.

“This is essential for effective and intelligent automation and, more importantly, to enable engineers to take more end-to-end responsibility for the outcomes and value they create for the business.”

Is the skills gap preventing you from executing your enterprise strategy?

As many business leaders look to close the skills gap and cultivate a sustainable workforce amid COVID-19, an IBM Institute for Business Value (IBV) study reveals that fewer than 4 in 10 human resources (HR) executives surveyed report they have the skills needed to achieve their enterprise strategy.

COVID-19 exacerbated the skills gap in the enterprise

Pre-pandemic research from 2018 found that as many as 120 million workers in the world’s 12 largest economies may need to be retrained or reskilled because of AI and automation in the next three years.

That challenge has only been exacerbated by the COVID-19 pandemic – as many C-suite leaders accelerate digital transformation, they report that inadequate skills are one of their biggest hurdles to progress.

Employers should shift to meet new employee expectations

Ongoing consumer research also shows that surveyed employees’ expectations of their employers have changed significantly during the COVID-19 pandemic, but leaders and employees disagree about how effective companies have been in addressing these gaps.

74% of executives surveyed believe their organizations have been helping employees learn the skills needed to work in a new way, compared to just 38% of employees surveyed who agreed. Similarly, 80% of executives said their company is supporting employees’ physical and emotional health, but only 46% of employees agreed.

“Today perhaps more than ever, organizations can either fail or thrive based on their ability to enable the agility and resiliency of their greatest competitive advantage – their people,” said Amy Wright, managing partner, IBM Talent & Transformation.

“Business leaders should shift to meet new employee expectations brought on by the COVID-19 pandemic, such as holistic support for their well-being, development of new skills, and a truly personalized employee experience, even while working remotely.

“It’s imperative to bring forward a new era of HR – and those companies that were already on the path are better positioned to succeed amid disruption today and in the future.”

The study includes insights from more than 1,500 global HR executives surveyed in 20 countries and 15 industries. Based on those insights, the study provides a roadmap for the journey to the next era of HR, with practical examples of how HR leaders at surveyed “high-performing companies” – meaning those that outpace all others in profitability, revenue growth and innovation – can reinvent their function to build a more sustainable workforce.

Additional highlights

  • Nearly six in 10 high-performing companies surveyed report using AI and analytics to make better decisions about their talent, such as skilling programs and compensation decisions. 41% are leveraging AI to identify skills they’ll need for the future, versus 8% of responding peers.
  • 65% of surveyed high-performing companies are looking to AI to identify behavioral skills like growth mindset and creativity for building diverse, adaptable teams, compared to 16% of peers.
  • More than two-thirds of all respondents said agile practices are essential to the future of HR. However, less than half of HR units in participating organizations have capabilities in design thinking and agile practices.
  • 71% of high-performing companies surveyed report they are widely deploying a consistent HR technology architecture, compared to only 11% of others.

“In order to gain long-term business alignment between leaders and employees, this moment requires HR to operate as a strategic advisor – a new role for many HR organizations,” said Josh Bersin, global independent analyst and dean of the Josh Bersin Academy.

“Many HR departments are looking to technology, such as the cloud and analytics, to support a more cohesive and self-service approach to traditional HR responsibilities. Offering employee empowerment through holistic support can drive larger strategic change to the greater business.”

Three core elements to promote lasting change

According to the report, surveyed HR executives from high-performing companies were eight times as likely as their surveyed peers to be driving disruption in their organizations. Among those companies, the following actions are a clear priority:

  • Accelerating the pace of continuous learning and feedback
  • Cultivating empathetic leadership to support employees’ holistic well-being
  • Reinventing their HR function and technology architecture to make more real-time data-driven decisions

Banks risk losing customers with anti-fraud practices

Many banks across the U.S. and Canada are failing to meet their customers’ online identity fraud and digital banking needs, according to a survey from FICO.

Despite COVID-19 quickly turning online banking into an essential service, the survey found that financial institutions across North America are struggling to establish practices that combat online identity fraud and money laundering, without negatively impacting customer experience.

For example, 51 percent of North American banks are still asking customers to prove their identities by visiting branches or posting documents when opening digital accounts. This also applies to 25 percent of mortgages or home loans and 15 percent of credit cards opened digitally.

“The pandemic has forced industries to fully embrace digital. We now are seeing North American banks that relied on face-to-face interactions to prove customers’ identities rethinking how to adapt to the digital first economy,” said Liz Lasher, vice president of portfolio marketing for Fraud at FICO.

“Today’s consumers expect a seamless and secure online experience, and banks need to be equipped to meet those expectations. Engaging valuable new customers only to have them abandon applications when identity proofing becomes difficult is expensive.”

Identity verification process issues

The study found that no more than 16 percent of U.S. and Canadian banks employ the type of fully integrated, real-time digital capture and validation tools required for consumers to securely open a financial account online.

Even when digital methods are used to verify identity, the experience can still raise barriers, with customers expected to use email or visit an “identity portal” to verify their identities.

Creating a frictionless process is key to meeting consumers’ current expectations. For example, according to a recent Consumer Digital Banking study, while 75 percent of consumers said they would open a financial account online, 23 percent of prospective customers would abandon the process due to an inconsistent identity verification process.

Lack of automation is a problem for banks too

The lack of automation when verifying customers’ identities isn’t just a pain point for customers – 53 percent of banks reported that it’s a problem for them too.

Regulation intended to prevent criminal activity such as money laundering typically requires banks to review customer identities in a consistent, robust manner – something that’s harder to achieve for institutions relying on inconsistent manual processes.

Fortunately, 75 percent of banks in the U.S. and Canada reported plans to invest in an identity management platform within the next three years.

By moving to a more integrated and strategic approach to identity proofing and identity authentication, banks will be able to meet customer expectations and deliver consistently positive digital banking experiences across online channels.

Three best practices for responsible open source usage in the COVID-19 era

COVID-19 has forced developer agility into overdrive, as the tech industry’s quick push to adapt to changing dynamics has accelerated digital transformation efforts and necessitated the rapid introduction of new software features, patches, and functionalities.

During this time, organizations across both the private and public sector have been turning to open source solutions as a means to tackle emerging challenges while retaining the rapidity and agility needed to respond to evolving needs and remain competitive.

Since well before the pandemic, software developers have leveraged open source code as a means to speed development cycles. The ability to use pre-made packages of code rather than build software from the ground up has enabled them to save valuable time. However, the rapid adoption of open source has not come without its own security challenges, which developers and organizations need to address.

Here are some best practices developers should follow when implementing open source code to promote security:

Know what and where open source code is in use

First and foremost, developers should create and maintain a record of where open source code is being used across the software they build. Applications today are usually designed using hundreds of unique open source components, which then reside in their software and workspaces for years.

As these open source packages age, there is an increasing likelihood of vulnerabilities being discovered in them and publicly disclosed. If the use of components is not closely tracked against the countless new vulnerabilities discovered every year, software leveraging these components becomes open to exploitation.

Attackers understand all too well how often teams fall short in this regard, and software intrusions via known open source vulnerabilities are a highly common source of breaches. Tracking open source code usage, along with vigilance around updates and vulnerabilities, will go a long way toward mitigating security risk.
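
As a concrete illustration of this practice, here is a minimal Python sketch of checking a component inventory against known vulnerabilities. The component names, versions, hosts and advisory entries are hypothetical examples; real inventories are generated from manifests and lockfiles, and real advisory data comes from feeds such as the NVD.

    # Hypothetical sketch: flag inventoried open source components that
    # have known vulnerabilities. All data shown is illustrative.
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Component:
        name: str
        version: str
        used_in: str  # the application that ships this component

    # Normally generated automatically from manifests/lockfiles.
    inventory = [
        Component("openssl", "1.0.1f", "payments-api"),
        Component("zlib", "1.2.11", "billing-service"),
    ]

    # Normally refreshed continuously from an advisory feed.
    known_vulnerable = {
        ("openssl", "1.0.1f"): "CVE-2014-0160 (Heartbleed)",
    }

    def audit(components):
        """Return (component, advisory) pairs that need remediation."""
        return [(c, known_vulnerable[(c.name, c.version)])
                for c in components
                if (c.name, c.version) in known_vulnerable]

    for component, advisory in audit(inventory):
        print(f"{component.used_in}: {component.name} "
              f"{component.version} -> {advisory}")

A check like this is only as useful as the freshness of its advisory data, which is why the continuous tracking described above matters.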

Understand the risks before adopting open source

Aside from tracking vulnerabilities in the code that’s already in use, developers must do their research on open source components before adopting them in the first place. While an obvious first step is ensuring that there are no known vulnerabilities in the component in question, other factors focused on the longevity of the software being built should also be considered.

Teams should carefully consider the level of support offered for a given component. It’s important to get satisfactory answers to questions such as:

  • How often is the component patched?
  • Are the patches of high quality and do they address the most pressing security issues when released?
  • Once implemented, are they communicated effectively and efficiently to the user base?
  • Is the group or individual who built the component a trustworthy source?

Leverage automation to mitigate risk

It’s no secret that COVID-19 has altered developers’ working conditions. In fact, 38% of developers are now releasing software monthly or faster, up from 27% in 2018. But this increased pace often comes paired with unwanted budget cuts and organizational changes. As a result, the imperative to “do more with less” has become a rallying cry for business leaders. In this context, it is indisputable that automation across the entire IT security portfolio has skyrocketed to the top of the list of initiatives designed to improve operational efficiency.

While already an important asset for achieving true DevSecOps agility, automated scanning technology has become near-essential for any organization attempting to stay secure while leveraging open source code. Manually tracking and updating open source vulnerabilities across an organization’s entire software suite is hard work that only increases in difficulty with the scale of an organization’s software deployments. And what was inefficient in normal times has become unfeasible in the current context.

Automated scanning technologies alleviate the burden of open source security by handling processes that would otherwise take up precious time and resources. These tools are able to detect and identify open source components within applications, provide detailed risk metrics regarding open source vulnerabilities, and flag outdated libraries for developers to address. Furthermore, they provide detailed insight into thousands of public open source vulnerabilities, security advisories and bugs, to ensure that when components are chosen they are secure and reputable.

Finally, these tools help developers prioritize and triage remediation efforts once vulnerabilities are identified. Equipped with the knowledge of which vulnerabilities present the greatest risk, developers are able to allocate resources most efficiently to ensure security does not get in the way of timely release cycles.

Confidence in a secure future

When it comes to open source security, vigilance is the name of the game. Organizations must be sure to reiterate the importance of basic best practices to developers as they push for greater speed in software delivery.

While speed has long been understood to come at the cost of software security, this type of outdated thinking cannot persist, especially when technological advancements in automation have made such large strides in eliminating this classically understood tradeoff. By following the above best practices, organizations can be more confident that their COVID-19 driven software rollouts will be secure against issues down the road.

As attackers evolve their tactics, continuous cybersecurity education is a must

As the Information Age slowly gives way to the Fourth Industrial Revolution, and as the rise of IoT and IIoT, the on-demand availability of computer system resources, big data and analytics, and cyber attacks aimed at business environments increasingly affect our everyday lives, there’s a growing need for knowledgeable cybersecurity professionals and, unfortunately, a widening cybersecurity workforce skills gap.

The cybersecurity skills gap is huge

A year ago, (ISC)² estimated that the global cybersecurity workforce numbered 2.8 million professionals, against an actual need of 4.07 million.

According to a recent global study of cybersecurity professionals by the Information Systems Security Association (ISSA) and analyst firm Enterprise Strategy Group (ESG), there has been no significant progress towards a solution to this problem in the last four years.

“What’s needed is a holistic approach of continuous cybersecurity education, where each stakeholder needs to play a role versus operating in silos,” ISSA and ESG stated.

Those starting their career in cybersecurity need many years to develop real cybersecurity proficiency, the respondents agreed. They need cybersecurity certifications and hands-on experience (i.e., jobs) and, ideally, a career plan and guidance.

Continuous cybersecurity training and education are key

Aside from the core cybersecurity talent pool, new recruits include recent university graduates, consultants/contractors, employees from other departments within an organization, security/hardware vendors and career changers.

One thing they all have in common is the need for constant additional training, as technology advances and changes and attackers evolve their tactics, techniques and procedures.

Though most IT and security professionals use their own free time to improve their cyber skills, they must learn on the job and get effective support from their employers for their continued career development.

Times are tough – there’s no doubt about that – but organizations must continue to invest in their employees’ career and skills development if they want to retain their current cybersecurity talent, develop it, and attract new, capable employees.

“The pandemic has shown us just how critical cybersecurity is to the successful operation of our respective economies and our individual lifestyles,” noted Deshini Newman, Managing Director EMEA, (ISC)².

Certifications show employers that cybersecurity professionals have the knowledge and skills required for the job, but also indicate that they are invested in keeping pace with a myriad of evolving issues.

“Maintaining a cybersecurity certification, combined with professional membership is evidence that professionals are constantly improving and developing new skills to add value to the profession and taking ownership for their careers. This new knowledge and understanding can be shared throughout an organisation to support security best practice, as well as ensuring cyber safety in our homes and communities,” she pointed out.

SMBs’ size doesn’t make them immune to cyberattacks

78% of SMBs indicated that having a privileged access management (PAM) solution in place is important to a cybersecurity program – yet 76% of respondents said that they do not have one that is fully deployed, a Devolutions survey reveals.

While it’s a positive trend that the majority of SMBs recognize the importance of having a PAM solution, the fact that most of the respondents don’t have a PAM solution in place reflects that there is inertia when it comes to deployment.

SMBs are not immune, company size doesn’t protect from cyberattacks

Global cybercrime revenues have reached $1.5 trillion per year. And according to IBM, the average price tag of a data breach is now $3.86 million per incident. Despite these staggering figures, there remains a common (and inaccurate) belief among many SMBs that the greatest security vulnerabilities exist in large companies.

However, there is mounting evidence that SMBs are more vulnerable than enterprises to cyberthreats – and the complacency regarding this reality can have disastrous consequences.

“SMBs must not assume that their relative smaller size will protect them from cyberattacks. On the contrary, hackers, rogue employees and others are increasingly targeting SMBs because they typically have weaker – and, in some cases, virtually non-existent – defense systems.

“SMBs cannot afford to take a reactive wait-and-see approach to cybersecurity because they may not survive a cyberattack. And even if they do, it could take several years to recover costs, reclaim customers and repair reputation damage,” said Devolutions CEO David Hervieux.

Key findings from the survey

To dig deeper into the mindset of SMBs about cybersecurity, Devolutions conducted a survey of 182 SMBs from a variety of industries – including IT, healthcare, education, and finance. Some notable findings include:

  • 62% of SMBs do not conduct a security audit at least once a year – and 14% never conduct an audit at all.
  • 57% of SMBs indicated they have experienced a phishing attack in the last three years.
  • 47% of SMBs allow end users to reuse passwords across personal and professional accounts.

These findings reinforce the need for better cybersecurity education for smaller companies.

“Conducting this survey reaffirmed to us that while progress is being made, there is still a lot of work to do for many SMBs to protect themselves from cybercrime. We plan to conduct a survey like this each year so that we can identify the most current trends and in turn help our customers address their most pressing needs,” added Hervieux.

Protect from cyberattacks: The role of MSPs

One way for SMBs to close the cybersecurity gap is to seek out a trusted managed service provider (MSP) for guidance and implementation of cybersecurity solutions, monitoring and training programs. Because SMBs do not typically have huge IT departments like their enterprise counterparts, they often look to outside resources.

MSPs have an opportunity to strengthen their relationship with existing customers and expand their client base by becoming cyber experts who can advise SMBs on various cybersecurity issues, trends and solutions – as well as offer the ability to promptly respond to any security incidents that may arise and take swift action.

“We expect more and more MSPs will be adding cybersecurity solutions and expertise to their portfolio of offerings to meet this demand,” Hervieux concluded.

Prevent privileged account abuse

Organizations must keep critical assets secure, control and monitor sensitive information and privileged access, and vault and manage business-user passwords – all while ensuring that employees are productive and efficient. This is not an easy task for SMBs without the right solution in place.

Many PAM and password management solutions on the market are prohibitively expensive or too complex for what SMBs need.

With database attacks on the rise, how can companies protect themselves?

Misconfigured or unsecured databases exposed on the open web are a fact of life. We hear about some of them because security researchers tell us how they discovered them, pinpointed their owners and alerted them, but many others are found by attackers first.

It used to take months to scan the Internet looking for open systems, but attackers now have access to free and easy-to-use scanning tools that can find them in less than an hour.

“As one honeypot experiment showed, open databases are targeted hundreds of times within a few hours,” Josh Bressers, product security lead at Elastic, told Help Net Security.

“There’s no way to leave unsecured data online without opening the data up to attack. This is why it’s crucial to always enable security and authentication features when setting up databases, so that your organization avoids this risk altogether.”
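
In that spirit, a security team can run a simple self-check against its own hosts to confirm that authentication is actually required. Below is a minimal Python sketch, assuming the requests library and Elasticsearch’s default REST port (9200); the host names are hypothetical, and probes like this should only ever be aimed at systems you own or are authorized to test.

    # Minimal self-check: does this database endpoint answer without
    # credentials? With security enabled, Elasticsearch returns 401 here.
    import requests

    def is_exposed(host: str, port: int = 9200) -> bool:
        try:
            resp = requests.get(f"http://{host}:{port}/", timeout=5)
        except requests.RequestException:
            return False  # unreachable or refused: not openly exposed
        return resp.status_code == 200  # 200 means anyone can read it

    if __name__ == "__main__":
        for host in ["db1.example.internal", "db2.example.internal"]:
            print(host, "EXPOSED" if is_exposed(host) else "ok")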

What do attackers do with exposed databases?

Bressers has been involved in the security of products and projects – especially open source ones – for a very long time. Over the past two decades, he created the product security division at Progeny Linux Systems, managed the Red Hat product security team, and headed security strategy in Red Hat’s Platform Business Unit.

He now manages bug bounties, penetration testing and security vulnerability programs for Elastic’s products, as well as the company’s efforts to improve application security and to add new security features and improve existing ones as needed or requested by customers.

The problem with exposed Elasticsearch (MariaDB, MongoDB, etc.) databases, he says, is that they are often left unsecured by developers by mistake and companies don’t discover the exposure quickly.

“The scanning tools do most of the work, so it’s up to the attacker to decide if the database has any data worth stealing,” he noted, and pointed out that this isn’t hacking, exactly – it’s mining of open services.

Attackers can quickly exfiltrate the accessible data, hold it for ransom, sell it to the highest bidder, modify it or simply delete it all.

“Sometimes there’s no clear advantage or motive. For example, this summer saw a string of cyberattacks called the Meow Bot attacks, which have affected at least 25,000 databases so far. The attacker replaced the contents of every afflicted database with the word ‘meow’, but has not been identified, nor has the purpose of the attack been revealed,” he explained.

Advice for organizations that use clustered databases

Open-source database platforms such as Elasticsearch have built-in security to prevent attacks of this nature, but developers often disable those features in haste or due to a lack of understanding that their actions can put customer data at risk, Bressers says.

“The most important thing to keep in mind when trying to secure data is having a clear understanding of what you are securing and what it means to your organization. How sensitive is the data? What level of security needs to be applied? Who should have access?” he explained.

“Sometimes working with a partner who is an expert at running a modern database is a more secure alternative than doing it yourself. Sometimes it’s not. Modern data management is a new problem for many organizations; make sure your people understand the opportunities and challenges. And most importantly, make sure they have the tools and training.”

Secondly, he says, companies should set up external scanning systems that continuously check for exposed databases.

“These may be the same tools used by attackers, but they immediately notify security teams when a developer has mistakenly left sensitive data unlocked. For example, a free scanner is available from Shadowserver.”

Elastic offers information and documentation on how to enable the security features of Elasticsearch databases and prevent exposure, he adds, pointing out that security is enabled by default in the company’s Elasticsearch Service on Elastic Cloud and cannot be disabled.

Defense in depth

No organization will ever be 100% safe, but steps can be taken to decrease a company’s attack surface. “Defense in depth” is the name of the game, Bressers says, and in this case, it should include the following security layers:

  • Discovery of data exposure (using the previously mentioned external scanning systems)
  • Strong authentication (SSO or usernames/passwords)
  • Prioritization of data access (e.g., HR may only need access to employee information and the accounting department may only need access to budget and tax data)
  • Deployment of monitoring infrastructures and automated solutions that can quickly identify potential problems before they become emergencies, isolate infected databases, and flag to support and IT teams for next steps

He also advises organizations that don’t have the internal expertise to set security configurations and manage a clustered database to hire service providers that can handle data management and have a strong security portfolio, and to always have a mitigation plan in place and rehearse it with their IT and security teams, so that when something does happen they can execute a swift and intentional response.

The brain of the SIEM and SOAR

SIEM and SOAR solutions are important tools in a cybersecurity stack. They gather a wealth of data about potential security incidents throughout your system and store that info for review. But just like nerve endings in the body sending signals, what good are these signals if there is no brain to process, categorize and correlate this information?

A vendor-agnostic XDR (Extended Detection and Response) solution is a necessary component for solving the data overload problem – a “brain” that examines all of the past and present data collected and assigns a collective meaning to the disparate pieces. Without this added layer, organizations are unable to take full advantage of their SIEM and SOAR solutions.

So, how do organizations implement XDR? Read on.

SIEM and SOAR act like nerves

It’s easy for solutions with acronyms to cause confusion. SOAR and SIEM are perfect examples, as they are two very different technologies that often get lumped together. They aren’t the same thing, and they do bring complementary capabilities to the security operations center, but they still don’t completely close the automation gap.

The SIEM is a decades-old solution that uses technology from that era to solve specific problems. At their core, SIEMs are data collection, workflow and rules engines that enable users to sift through alerts and group things together for investigation.

In the last several years, SOAR has been the favorite within the security industry’s marketing landscape. Just as the SIEM runs on rules, the SOAR runs on playbooks. These playbooks let an analyst automate steps in the event detection, enrichment, investigation and remediation process. And just like with SIEM rules, someone has to write and update them.

Because many organizations already have a SIEM, it seemed reasonable for the SOAR providers to start with automating the output from the SIEM tool or security platform console. So: security controls send alerts to the SIEM; the SIEM uses rules written by the security team to filter the alerts down to a much smaller number, often at a ratio of 1,000,000:1; the remaining SIEM events are sent to the SOAR, where playbooks written by the security team use workflow automation to investigate and respond to the alerts.
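
As a minimal illustration of the SIEM stage in that pipeline, consider a hand-written Boolean rule that filters a flood of raw alerts down to the few worth escalating. The alert fields and thresholds below are hypothetical:

    # A typical hand-written SIEM-style rule: escalate on high severity,
    # or on repeated activity against a privileged account.
    raw_alerts = [
        {"source": "edr", "severity": 9, "user": "svc-backup", "count": 40},
        {"source": "firewall", "severity": 2, "user": "jdoe", "count": 1},
        {"source": "auth", "severity": 6, "user": "admin", "count": 12},
    ]

    def rule_matches(alert) -> bool:
        return alert["severity"] >= 8 or (
            alert["user"] in {"admin", "root"} and alert["count"] >= 10
        )

    escalated = [a for a in raw_alerts if rule_matches(a)]
    print(f"{len(raw_alerts)} raw alerts -> {len(escalated)} escalated")

Every rule of this kind has to be written, tuned and maintained by the security team, which is exactly the gap described next.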

SOAR investigation playbooks attempt to contextualize the events with additional data – often the same data that the SIEM has filtered out. Writing these investigation playbooks can occupy your security team for months, and even then, they only cover a few scenarios and automate simple tasks like VirusTotal lookups.

The verdict: SOARs and SIEMs purport to perform all the actions necessary to automate the screening of alerts, but the technology by itself cannot do this. It requires trained staff to bring forth this capability by writing rules and playbooks.

Coming back to the analogy, this data can be compared to the nerves flowing through the human body. They fire off alerts that something has happened – alerts that mean nothing without a processing system that can gather context and explain what has happened.

Giving the nerves a brain

What the nerves need is a brain that can receive and interpret their signals. An XDR engine, powered by Bayesian reasoning, is a machine-powered brain that can investigate any output from the SIEM or SOAR at speed and scale. This replaces traditional Boolean logic (that is, searching for things that IT teams already know to be somewhat suspicious) with a much richer way to reason about the data.
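
A toy sketch makes the contrast with Boolean rules visible. Instead of a hard yes/no match, each observation shifts the probability that an entity is malicious; the prior and likelihood values below are illustrative numbers, not values from any real product:

    # Naive-Bayes-style scoring: combine several weak observations into
    # one probability instead of firing a separate alert for each.
    from math import prod

    # (P(observation | malicious), P(observation | benign)) - assumed values
    LIKELIHOODS = {
        "new_admin_account": (0.30, 0.02),
        "off_hours_login":   (0.40, 0.10),
        "bulk_file_access":  (0.50, 0.05),
    }

    def posterior_malicious(observations, prior=0.01):
        p_mal = prior * prod(LIKELIHOODS[o][0] for o in observations)
        p_ben = (1 - prior) * prod(LIKELIHOODS[o][1] for o in observations)
        return p_mal / (p_mal + p_ben)

    evidence = ["new_admin_account", "off_hours_login", "bulk_file_access"]
    print(f"P(malicious | evidence) = {posterior_malicious(evidence):.2f}")
    # ~0.86 with these numbers, though no single observation is damning

The point of the approach is that several individually weak signals, combined with context, can add up to a confident verdict that no single Boolean rule would have caught.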

This additional layer of understanding will work out of the box with the products an organization already has in place to provide key correlation and context. For instance, imagine that a malicious act occurs. That malicious act is going to be observed by multiple types of sensors. All of that information needs to be put together, along with the context of the internal systems, the external systems and all of the other things that integrate at that point. This gives the system the information needed to know the who, what, when, where, why and how of the event.

This is what the system’s brain does. It boils all of the data down to: “I see someone bad doing something bad. I have discovered them. And now I am going to manage them out.” What the XDR brain is going to give the IT security team is more accurate, consistent results, fewer false positives and faster investigation times.

How to apply an XDR brain

To get started with integrating XDR into your current system, take these three steps:

1. Deploy a solution that is vendor-agnostic and works out of the box. This XDR layer of security doesn’t need playbooks or rules. It changes the foundation of your security program and how your staff do their work. This reduces your commitment in time and budget for security engineering, or at least enables you to redirect it.

2. Turn your sensors all the way up. It has become much easier in the last several years to collect, store and – to some extent – analyze data. In particular, cloud architectures offer simple and cost-effective options for collecting and storing vast quantities of it, so there is no longer a need to let in just a small stream of data.

3. Decide which risk reduction projects are critical for the team. Automation should release security professionals from mundane tasks so they can focus on high-value actions that truly reduce risk, like incident response, hunting and tuning security controls. There may also be budget that is freed up for new technology or service purchases.

Reading the signals

To make the most of SOARs and SIEMs, you need XDR – a tool that will take the data collected and add the context needed to turn thousands of alerts into one complete situation that is worth investigating.

The XDR layer is an addition to a company’s cybersecurity strategy that will most effectively use SIEM and SOAR, giving all those nerve signals a genius brain that can sort them out and provide the context needed in today’s cyber threat landscape.

In the era of AI, standards are falling behind

According to a recent study, only a minority of software developers actually work for a software development company. This means that nowadays practically every company builds software in some form or another.

As a professional in the field of information security, it is your task to protect information, assets, and technologies. Obviously, the software built by or for your company that is collecting, transporting, storing, processing, and finally acting upon your company’s data, is of high interest. Secure development practices should be enforced early on and security must be tested during the software’s entire lifetime.

Within the (ISC)² common body of knowledge for CISSPs, software development security is listed as an individual domain. Several standards and practices covering security in the Software Development Lifecycle (SDLC) are available: ISO/IEC 27034:2011, ISO/IEC TR 15504, or NIST SP 800-64 Revision 2, to name a few.

All of the above ask for continuous assessment and control of artifacts on the source-code level, especially regarding coding standards and Common Weakness Enumerations (CWE), but only briefly mention static application security testing (SAST) as a possible way to address these issues. In the search for possible concrete tools, NIST provides SP 500-268 v1.1 “Source Code Security Analysis Tool Function Specification Version 1.1”.

In May 2019, NIST withdrew the aforementioned SP 800-64 Rev. 2, and NIST SP 500-268 was published over nine years ago. This seems symptomatic of an underlying issue: the standards cannot keep up with the rapid pace of development and change in the field.

A good example is the dawn of the development language Rust, which addresses a major source of security issues presented by the classically used language C++ – namely, memory management. Major players in the field such as Microsoft and Google saw great advantages and announced that they would steer future development toward Rust. Yet while the standards note that some development languages are superior to others, neither Rust itself nor the mechanisms it uses are mentioned.

In the field of Static Code Analysis, the information in NIST SP 500-268 is not wrong, but the paper simply does not mention advances in the field.

Let us briefly discuss two aspects. First, the wide use of open source software gives us insight into a vast quantity of source code changes and the reasoning behind them (security, performance, style). On top of that, we have seen increasing CPU capacity to process this data, accompanied by algorithmic improvements. Nowadays, we have a large lake of training data available. To use our company as an example: in order to train our underlying model for C++ alone, we are scanning changes in over 200,000 open source projects with millions of files containing rich history.

Secondly, in the past decade we’ve witnessed tremendous advances in machine learning. We see tools like GPT-3 and their applications to source code being discussed widely. Classically, static source code analysis was the domain of Symbolic AI – facts and rules applied to source code. The realm of source code is perfectly suited for this approach, since software source code has a well-defined syntax and grammar. The downside is that these rules were developed by engineers, which limits the pace at which rules can be generated. The idea is to automate rule construction by using machine learning.

Recently, we see research in the field of machine learning being applied to source code. Again, let us use our company as an example: By using the vast amount of changes in open source, our system looks out for patterns connected to security. It presents possible rules to an engineer together with found cases in the training set—both known and fixed, as well as unknown.

Also, the system supports parameters in the rules. Possible values for these parameters are collected by the system automatically. As a practical example, taint analysis follows incoming data to its use inside of the application to make sure the data is sanitized before usage. The system automatically learns possible sources, sanitization, and sink functions.
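
A toy version of that taint-analysis idea looks like the following; the function names are hypothetical, and a real analyzer reasons over the program’s source code rather than executing it:

    # Values from a "source" are tainted until passed through a
    # "sanitizer"; tainted data reaching a "sink" is flagged.
    SOURCES = {"read_request_param"}   # where untrusted data enters
    SANITIZERS = {"escape_sql"}        # calls that clean the data
    SINKS = {"run_query"}              # where tainted data is dangerous

    def check_flow(call_chain):
        tainted = False
        for fn in call_chain:
            if fn in SOURCES:
                tainted = True
            elif fn in SANITIZERS:
                tainted = False
            elif fn in SINKS and tainted:
                return f"finding: tainted data reaches sink '{fn}'"
        return "ok"

    print(check_flow(["read_request_param", "run_query"]))                # finding
    print(check_flow(["read_request_param", "escape_sql", "run_query"]))  # ok

The learning system described above automates the population of those three sets, rather than relying on engineers to enumerate every source, sanitizer and sink by hand.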

Back to the NIST Special Publications: with the withdrawal of SP 800-64 Rev. 2, users were pointed to NIST SP 800-160 Vol. 1 for the time being, until a new, updated white paper is published. This was at the end of May 2019. The nature of these papers is to describe only high-level best practices, list some examples, and stay rather vague on concrete implementation. Yet the documents are the basis for reviews and audits. Given the importance of the field, it seems as if a major component is missing. It is also time to think about processes that would help us keep up with the pace of technology.

How to build up cybersecurity for medical devices

Manufacturing medical devices with cybersecurity firmly in mind is an endeavor that, according to Christopher Gates, an increasing number of manufacturers are trying to get right.

Healthcare delivery organizations have started demanding better security from medical device manufacturers (MDMs), he says, and many have implemented secure procurement processes and contract language for MDMs that address the cybersecurity of the device itself, secure installation, cybersecurity support for the life of the product in the field, liability for breaches caused by a device not following current best practices, ongoing support for events in the field, and so on.

“For someone like myself who has been focused on cybersecurity at MDMs for over 12 years, this is excellent progress as it will force MDMs to take security seriously or be pushed out of the market by competitors who do take it seriously. Positive pressure from MDMs is driving cybersecurity forward more than any other activity,” he told Help Net Security.

Gates is a principal security architect at Velentium and one of the authors of the recently released Medical Device Cybersecurity for Engineers and Manufacturers, a comprehensive guide to medical device secure lifecycle management, aimed at engineers, managers, and regulatory specialists.

In this interview, he shares his knowledge regarding the cybersecurity mistakes most often made by manufacturers, on who is targeting medical devices (and why), his view on medical device cybersecurity standards and initiatives, and more.

[Answers have been edited for clarity.]

Are attackers targeting medical devices with a purpose other than to use them as a way into a healthcare organization’s network?

The easy answer to this is “yes,” since many MDMs in the medical device industry perform “competitive analysis” on their competitors’ products. It is much easier and cheaper for them to have a security researcher spend a few hours extracting an algorithm from a device for analysis than to spend months or even years of R&D work to pioneer a new algorithm from scratch.

Also, there is a large, hundreds-of-millions-of-dollars industry of companies who “re-enable” consumed medical disposables. This usually requires some fairly sophisticated reverse-engineering to return the device to its factory default condition.

Lastly, the medical device industry, when grouped together with the healthcare delivery organizations, constitutes part of critical national infrastructure. Other industries in that class (such as nuclear power plants) have experienced very directed and sophisticated attacks targeting safety backups in their facilities. These attacks seem to be initial testing of a cyber weapon that may be used later.

While these are clearly nation-state level attacks, you have to wonder if these same actors have been exploring medical devices as a way to inhibit our medical response in an emergency. I’m speculating: we have no evidence that this has happened. But then again, if it has happened there likely wouldn’t be any evidence, as we haven’t been designing medical devices and infrastructure with the ability to detect potential cybersecurity events until very recently.

What are the most often exploited vulnerabilities in medical devices?

It won’t come as a surprise to anyone in security when I say “the easiest vulnerabilities to exploit.” An attacker is going to start with the obvious ones, and then increasingly get more sophisticated. Mistakes made by developers include:

Unsecured firmware updating

I personally always start with software updates in the field, as they are so frequently implemented incorrectly. An attacker’s goal here is to gain access to the firmware with the intent of reverse-engineering it back into easily-readable source code that will yield more widely exploitable vulnerabilities (e.g., one impacting every device in the world). All firmware update methods have at least three very common potential design vulnerabilities (a minimal verification sketch follows the list below). They are:

  • Exposure of the binary executable (i.e., it isn’t encrypted)
  • Corrupting the binary executable with added code (i.e., there isn’t an integrity check)
  • A rollback attack which downgrades the version of firmware to a version with known exploitable vulnerabilities (there isn’t metadata conveying the version information).
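
Here is a minimal Python sketch of the corresponding checks – a generic illustration under simplified assumptions (a symmetric device key and a version counter), not any vendor’s actual update protocol; a real device would anchor these checks in a hardware root of trust:

    # Reject unauthenticated, tampered or downgraded firmware images.
    import hashlib
    import hmac

    SHARED_KEY = b"device-unique-key"  # hypothetical; provisioned at manufacture
    installed_version = 7              # version currently on the device

    def verify_update(blob: bytes, tag: bytes, version: int) -> bytes:
        # Integrity/authenticity: recompute the MAC over version + image,
        # so neither the code nor the version metadata can be altered.
        msg = version.to_bytes(4, "big") + blob
        expected = hmac.new(SHARED_KEY, msg, hashlib.sha256).digest()
        if not hmac.compare_digest(expected, tag):
            raise ValueError("integrity check failed: image rejected")
        # Rollback protection: never accept an older or equal version.
        if version <= installed_version:
            raise ValueError("rollback attempt: version is not newer")
        # Confidentiality: in a real design the image would also be
        # encrypted in transit; decryption is omitted here for brevity.
        return blob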

Overlooking physical attacks

Physical attacks can be mounted:

  • Through an unsecured JTAG/SWD debugging port
  • Via side-channel (power monitoring, timing, etc.) exploits to expose the values of cryptographic keys
  • By sniffing internal busses, such as SPI and I2C
  • By exploiting flash memory external to the microcontroller (a $20 cable can get it to dump all of its contents)

Manufacturing support left enabled

Almost every medical device needs certain functions to be available during manufacturing. These are usually for testing and calibration, and none of them should be functional once the device is fully deployed. Manufacturing commands are frequently documented in PDF files used for maintenance, and often only have minor changes across product/model lines inside the same manufacturer, so a little experimentation goes a long way in letting an attacker get access to all kinds of unintended functionality.

No communication authentication

Just because a communications medium connects two devices doesn’t mean that the device being connected to is the device that the manufacturer or end-user expects it to be. No communications medium is inherently secure; it’s what you do at the application level that makes it secure.

Bluetooth Low Energy (BLE) is an excellent example of this. Immediately following a pairing (or re-pairing), a device should always, always perform a challenge-response process (which utilizes cryptographic primitives) to confirm it has paired with the correct device.
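
A generic sketch of such a post-pairing challenge-response follows; it illustrates the idea only, not the Bluetooth specification’s own pairing procedure, and the pre-shared key is hypothetical:

    # HMAC-based challenge-response over a freshly generated nonce.
    import hashlib
    import hmac
    import secrets

    SHARED_KEY = b"provisioned-link-key"  # shared during secure provisioning

    # The verifying device sends a fresh random challenge...
    challenge = secrets.token_bytes(16)

    # ...the peer proves knowledge of the key by returning an HMAC...
    response = hmac.new(SHARED_KEY, challenge, hashlib.sha256).digest()

    # ...and the verifier recomputes and compares in constant time.
    expected = hmac.new(SHARED_KEY, challenge, hashlib.sha256).digest()
    assert hmac.compare_digest(response, expected), "peer failed authentication"

Because the challenge is random each time, a recorded response cannot simply be replayed after a later re-pairing.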

I remember attending an on-stage presentation of a new class II medical device with a BLE interface. From the audience, I immediately started to explore the device with my smartphone. This device had no authentication (or authorization), so I was able to perform all operations exposed on the BLE connection. I was engrossed in this interface when I suddenly realized there was some commotion on stage as they couldn’t get their demonstration to work: I had accidentally taken over the only connection the device supported. (I then quickly terminated the connection to let them continue with the presentation.)

What things must medical device manufacturers keep in mind if they want to produce secure products?

There are many aspects to incorporating security into your development culture. These can be broadly lumped into activities that promote security in your products, versus activities that convey a false sense of security and are actually a waste of time.

Probably the most important thing that a majority of MDMs need to understand and accept is that their developers have probably never been trained in cybersecurity. Most developers have limited knowledge of how to incorporate cybersecurity into the development lifecycle, where to invest time and effort in securing a device, what artifacts are needed for premarket submission, and how to properly utilize cryptography. Without knowing the details, many managers assume that security is being adequately included somewhere in their company’s development lifecycle; most are wrong.

To produce secure products, MDMs must follow a secure “total product life cycle,” which starts on the first day of development and ends years after the product’s end of life or end of support.

They need to:

  • Know the three areas where vulnerabilities are frequently introduced during development (design, implementation, and through third-party software components), and how to identify, prevent, or mitigate them
  • Know how to securely transfer a device to production and securely manage it once in production
  • Recognize an MDM’s place in the device’s supply chain: not at the end, but in the middle. An MDM’s cybersecurity responsibilities extend up and down the chain. They have to contractually enforce cybersecurity controls on their suppliers, and they have to provide postmarket support for their devices in the field, up through and after end-of-life
  • Create and maintain Software Bills of Materials (SBOMs) for all products, including legacy products. Doing this work now will help them stay ahead of regulation and save them money in the long run.

They must avoid mistakes like:

  • Not thinking that a medical device needs to be secured
  • Assuming their development team ‘can’ and ‘is’ securing their product
  • Not designing-in the ability to update the device in the field
  • Assuming that all vulnerabilities can be mitigated by a field update
  • Only considering the security of one aspect of your design (e.g., its wireless communication protocol). Security is a chain: for the device to be secure, all the links of the chain need to be secure. Attackers are not going to consider certain parts of the target device ‘out of bounds’ for exploiting.

Ultimately, security is about protecting the business model of an MDM. This includes the device’s safety and efficacy for the patient, which is what the regulations address, but it also includes public opinion, loss of business, counterfeit accessories, theft of intellectual property, and so forth. One mistake I see companies frequently make is doing the minimum on security to gain regulatory approval, but neglecting to protect their other business interests along the way – and those can be very expensive to overlook.

What about the developers? Any advice on skills they should acquire or brush up on?

First, I’d like to take some pressure off developers by saying that it’s unreasonable to expect that they have some intrinsic knowledge of how to implement cybersecurity in a product. Until very recently, cybersecurity was not part of traditional engineering or software development curriculum. Most developers need additional training in cybersecurity.

And it’s not only the developers. More than likely, project management has done them a huge disservice by creating a system-level security requirement that says something like, “Prevent ransomware attacks.” What is the development team supposed to do with that requirement? How is it actionable?

At the same time, involving the company’s network or IT cybersecurity team is not going to be an automatic fix either. IT Cybersecurity diverges from Embedded Cybersecurity in many respects, from detection to implementation of mitigations. No MDM is going to be putting a firewall on a device that is powered by a CR2032 battery anytime soon; yet there are ways to secure such a low-resource device.

In addition to the how-to book we wrote, Velentium will soon offer training available specifically for the embedded device domain, geared toward creating a culture of cybersecurity in development teams. My audacious goal is that within 5 years every medical device developer I talk to will be able to converse intelligently on all aspects of securing a medical device.

What cybersecurity legislation/regulation must companies manufacturing medical devices abide by?

It depends on the markets you intend to sell into. While the US has had the Food and Drug Administration (FDA) refining its medical device cybersecurity position since 2005, others are more recent entrants into this type of regulation, including Japan, China, Germany, Singapore, South Korea, Australia, Canada, France, Saudi Arabia, and the greater EU.

While all of these regulations have the same goal of securing medical devices, how they get there is anything but harmonized among them. Even the level of abstraction varies, with some focused on processes while others on technical activities.

But there are some common concepts represented in all these regulations, such as:

  • Risk management
  • Software bill of materials (SBOM)
  • Monitoring
  • Communication
  • “Total Product Lifecycle”
  • Testing

But if you plan on marketing in the US, the two most important documents are FDA’s:

  • 2018 – Draft Guidance: Content of Premarket Submissions for Management of Cybersecurity in Medical Devices
  • 2016 – Final Guidance: Postmarket Management of Cybersecurity in Medical Devices (The 2014 version of the guidance on premarket submissions can be largely ignored, as it no longer represents the FDA’s current expectations for cybersecurity in new medical devices).

What are some good standards for manufacturers to follow if they want to get cybersecurity right?

The Association for the Advancement of Medical Instrumentation’s standards are excellent. I recommend AAMI TIR57: 2016 and AAMI TIR97: 2019.

Also very good is the Healthcare & Public Health Sector Coordinating Council’s (HPH SCC) Joint Security Plan. And, to a lesser extent, the NIST Cyber Security Framework.

The work being done at the US Department of Commerce / NTIA on SBOM definition for vulnerability management and postmarket surveillance is very good as well, and worth following.

What initiatives exist to promote medical device cybersecurity?

Notable initiatives I’m familiar with include, first, the aforementioned NTIA work on SBOMs, now in its second year. There are also several excellent working groups at HSCC, including the Legacy Medical Device group and the Security Contract Language for Healthcare Delivery Organizations group. I’d also point to numerous working groups in the H-ISAC Information Sharing and Analysis Organization (ISAO), including the Securing the Medical Device Lifecycle group.

And I have to include the FDA itself here, which is in the process of revising its 2018 premarket draft guidance; we hope to see the results of that effort in early 2021.

What changes do you expect to see in the medical devices cybersecurity field in the next 3-5 years?

So much is happening at high and low levels. For instance, I hope to see the FDA get more of a direct mandate from Congress to enforce security in medical devices.

Also, many working groups of highly talented people are working on ways to improve the security posture of devices, such as the NTIA SBOM effort to improve the transparency of software “ingredients” in a medical device, allowing end-users to quickly assess their risk level when new vulnerabilities are discovered.

Semiconductor manufacturers continue to give us great mitigation tools in hardware, such as side-channel protections, cryptographic accelerators, and virtualized security cores. Arm TrustZone is a great example.

And at the application level, we’ll continue to see more and better packaged tools, such as cryptographic libraries and processes, to help developers avoid cryptography mistakes. Also, we’ll see more and better process tools to automate the application of security controls to a design.

HDOs and other medical device purchasers are better informed than ever before about embedded cybersecurity features and best practices. That trend will continue, and will further accelerate demand for better-secured products.

I hope to see some effort at harmonization between all the federal, state, and foreign regulations that have been recently released with those currently under consideration.

One thing is certain: legacy medical devices that can’t be secured will only go away when we can replace them with new medical devices that are secure by design. Bringing new devices to market takes a long time. There’s lots of great innovation underway, but really, we’re just getting started!