Moving to the cloud with a security-first, zero trust approach

Many companies tend to jump into the cloud before thinking about security. They may think they’ve thought about security, but when moving to the cloud, the whole concept of security changes. The security model must transform as well.

Moving to the cloud and staying secure

Most companies maintain a “castle, moat, and drawbridge” attitude to security. They put everything inside the “castle” (datacenter); establish a moat around it, with sharks and alligators and guns on turrets; and control access by raising the drawbridge. The access protocol involves a request for access, vetted against firewall rules, after which access is granted or denied. That’s perimeter security.

When moving to the cloud, perimeter security is still important, but identity-based security is available to strengthen the security posture. That’s where a cloud partner skilled at explaining and operating a different security model is needed.

Anybody can grab a virtual machine, build the machine in the cloud, and be done, but establishing a VM and transforming the machine to a service with identity-based security is a different prospect. When identity is added to security, the model looks very different, resulting in cost savings and an increased security posture.

Advanced technology, cost of security, and lack of cybersecurity professionals place a strain on organizations. Cloud providers invest heavily in infrastructure, best-in-class tools, and a workforce uniquely focused on security. As a result, organizations win operationally, financially, and from a security perspective, when moving to the cloud. To be clear, moving applications and servers, as is, to the cloud does not make them secure.

Movement to the cloud should be a standardized process, driven by a Cloud Center of Excellence (CCoE) or Cloud Business Office (CBO); when that process puts security first, organizations can reap the security benefits.

Shared responsibility

Although security is marketed as a shared responsibility in the cloud, ultimately, the owner of the data (customer) is responsible and the responsibility is non-transferrable. In short, the customer must understand the responsibility matrix (RACI) involved to accomplish their end goals. Every cloud provider has a shared responsibility matrix, but organizations often misunderstand the responsibilities or the lines fall into a grey area. Regardless of responsibility models, the data owner has a responsibility to protect the information and systems. As a result, the enterprise must own an understanding of all stakeholders, their responsibilities, and their status.

When choosing a partner, it’s vital for companies to identify their exact needs, their weaknesses, and even their culture. No cloud vendor will cover it all from the beginning, so it’s essential that organizations take control and ask the right questions (see Cloud Security Alliance’s CAIQ), in order to place trust in any cloud provider. If it’s to be a managed service, for example, it’s crucial to ask detailed questions about how the cloud provider intends to execute the offering.

It’s important to develop a standard security questionnaire and probe multiple layers deep into the service model until the provider is unable to meet the need. Looking through a multilayer deep lens allows the customer and service provider to understand the exact lines of responsibility and the details around task accomplishment.

Trust-as-a-Service

It might sound obvious, but it’s worth stressing: trust is a shared responsibility between the customer and cloud provider. Trust is also earned over time and is critical to the success of the customer-cloud provider relationship. That said, zero trust is a technical term that means, from a technology viewpoint, assume danger and breach. Organizations must trust their cloud provider but should avoid blind trust and validate. Trust as a Service (TaaS) is a newer acronym that refers to third-party endorsement of a provider’s security practices.

Key influencers of a customer’s trust in their cloud provider include:

  • Data location
  • Investigation status and location of data
  • Data segregation (keeping cloud customers’ data separated from others)
  • Availability
  • Privileged access
  • Backup and recovery
  • Regulatory compliance
  • Long-term viability

A TaaS example: Google Cloud

Google has taken great strides to earn customer trust, designing Google Cloud Platform with a keen eye on zero trust and implementing the model through BeyondCorp. For example, Google has implemented two core concepts:

  • Delivery of services and data: ensuring that people with the correct identity and the right purpose can access the required data every time
  • Prioritization and focus: access and innovation are placed ahead of threats and risks, meaning that as products are innovated, security is built into the environment

Transparency is very important to the trust relationship. Google has enabled transparency through strong visibility and control of data. When evaluating cloud providers, understanding their transparency related to access and service status is crucial. Google ensures transparency by using specific controls including:

  • Limited data center access from a physical standpoint, adhering to strict access controls
  • Disclosing how and why customer data is accessed
  • Incorporating a process of access approvals

Multi-layered security for a trusted infrastructure

Finally, cloud services must provide customers with an understanding of how each layer of infrastructure works and build rules into each. This includes operational and device security, encryption of data at rest, multiple layers of identity, and storage services, all multi-layered and secure by default.

Cloud native companies have a security-first approach and naturally have a higher security understanding and posture. That said, when choosing a cloud provider, enterprises should always understand, identify, and ensure that their cloud solution addresses each one of their security needs, and who’s responsible for what.

Essentially, every business must find a cloud partner that can answer all the key questions, provide transparency, and establish a trusted relationship in the zero trust world where we operate.

Preventing cybersecurity’s perfect storm

Zerologon might have been cybersecurity’s perfect storm: that moment when multiple conditions collide to create a devastating disaster. Thanks to Secura and Microsoft’s rapid response, it wasn’t.

Zerologon scored a perfect 10 CVSS score. Threats rating a perfect 10 are easy to execute and have deep-reaching impact. Fortunately, they aren’t frequent, especially in prominent software brands such as Windows. Still, organizations that perpetually lag when it comes to patching become prime targets for cybercriminals. Flaws like Zerologon are rare, but there’s no reason to assume that the next attack will not be using a perfect 10 CVSS vulnerability, this time a zero-day.

Zerologon: Unexpected squall

Zerologon escalates a domain user beyond their current role and permissions to a Windows Domain Administrator. This vulnerability is trivially easy to exploit. While it seems that the most obvious threat is a disgruntled insider, attackers may target any average user. The most significant risk comes from a user with an already compromised system.

In this scenario, a bad actor has already taken over an end user’s system but is constrained only to their current level of access. By executing this exploit, the bad actor can break out of their existing permissions box. This attack grants them the proverbial keys to the kingdom in a Windows domain to access whatever Windows-based devices they wish.

Part of why Zerologon is problematic is that many organizations rely on Windows as an authoritative identity for a domain. To save time, they promote their Windows Domain Administrators to an Administrator role throughout the organizational IT ecosystem and assign bulk permissions, rather than adding them individually. This method eases administration by removing the need to update the access permissions frequently as these users change jobs. This practice violates the principle of least privilege, leaving an opening for anyone with a Windows Domain Administrator role to exercise broad-reaching access rights beyond what they require to fulfill the role.

Beware of sharks

Advanced preparation for attacks like these requires a fundamental paradigm shift in organizational boundary definitions, away from a legacy mentality to a more modern cybersecurity mindset. The traditional castle model assumes all threats remain outside the firewall boundary and trusts, to some degree, everything that is either natively internal or connected via VPN.

Modern cybersecurity professionals understand the advantage of controls like zero standing privilege (ZSP), which authorizes no one by default and requires that each access request be evaluated before privileged access is granted. Think of it much like the security check at an airport. To get in, everyone (passenger, pilot, even store staff) needs to be inspected, prove they belong, and show they have nothing questionable in their possession.

This continual re-certification prevents users from gaining access once they’ve experienced an event that alters their eligibility, such as leaving the organization or changing positions. Checking permissions before approving them ensures only those who currently require a resource can access it.

My hero zero (standing privilege)

Implementing the design concept of zero standing privilege is crucial to hardening against privilege escalation attacks, as it removes the administrator’s vast amounts of standing power and access. Users acquire these rights for a limited period and only on an as-needed basis. This Just-In-Time (JIT) method of provisioning creates a better access review process: requests are either granted time-bound access or flagged for escalation to a human approver, ensuring human oversight of the automation.
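To make the JIT flow concrete, here is a minimal sketch in Python; the policy table, the Grant class, and the one-hour default TTL are invented for illustration and are not any particular PAM product’s API.

```python
import time
import uuid
from dataclasses import dataclass

# Hypothetical policy: which roles may request which resources (illustrative only).
POLICY = {"db-admin": {"prod-postgres"}, "net-admin": {"core-router"}}

@dataclass
class Grant:
    request_id: str
    user: str
    resource: str
    expires_at: float  # time-bound: nothing is granted permanently

    def is_valid(self) -> bool:
        return time.time() < self.expires_at

def request_access(user: str, role: str, resource: str, ttl_seconds: int = 3600):
    """Evaluate a just-in-time access request.

    Returns a time-bound Grant, or None to signal escalation to a human approver.
    """
    if resource in POLICY.get(role, set()):
        return Grant(str(uuid.uuid4()), user, resource, time.time() + ttl_seconds)
    return None  # no standing privilege: unmatched requests go to manual review

# Usage: grants expire on their own, so eligibility is re-checked on every request.
grant = request_access("alice", "db-admin", "prod-postgres")
if grant and grant.is_valid():
    print(f"temporary access granted until {grant.expires_at}")
else:
    print("request escalated to a human approver")
```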

An essential component of zero standing privilege is avoiding super-user roles and access. Old school practitioners may find it odd and question the impact on daily administrative tasks that keep the ecosystem running. Users manage these tasks through heavily logged time-limited permission assignments. Reliable user behavior analytics, combined with risk-based privileged access management (PAM) and machine learning supported log analysis, offers organizations better contextual identity information. Understanding how their privileged access is leveraged and identifying access misuse before it takes root is vital to preventing a breach.

Peering into the depths

To even start with zero standing privilege, an organization must understand what assets they consider privileged. The categorization of digital assets begins the process. The next step is assigning ownership of these resources. Doing this allows organizations to configure the PAM software to accommodate the policies and access rules defined organizationally, ensuring access rules meet governance and compliance requirements.

The PAM solution requires in-depth visibility of each individual’s full access across all cloud and SaaS environments, as well as throughout the internal IT infrastructure. This information improves the identification of toxic combinations, where granted permissions create compliance issues such as segregation of duties (SoD) violations.

AI & UEBA to the rescue

Zero standing privilege generates a large number of user logs and behavioral information over time. Manual log review becomes unsustainable very quickly. Leveraging the power of AI and machine learning to derive intelligent analytics allows organizations to identify risky behaviors and locate potential breaches far faster than human analysts could.

Integration of a user and entity behavior analytics (UEBA) software establishes baselines of behavior, triggering alerts when deviations occur. UEBA systems detect insider threats and advanced persistent threats (APTs) while generating contextual identity information.

UEBA systems track all behavior linked back to an entity and identify anomalous behaviors such as spikes in access requests, requests for data that would typically not be allowed for that user’s roles, or systematic access to numerous items. Contextual information helps organizations identify situations that might indicate a breach or point to unauthorized exfiltration of data.
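As an illustration of the kind of baselining a UEBA system performs, here is a minimal sketch; the per-user daily access counts and the three-sigma threshold are assumptions made for the example, not a description of any specific product.

```python
from statistics import mean, stdev

# Assumed input: daily access-request counts per user over a baseline window.
history = {
    "alice": [12, 15, 11, 14, 13, 12, 16],
    "bob": [3, 4, 2, 5, 3, 4, 3],
}

def is_anomalous(user: str, todays_count: int, sigmas: float = 3.0) -> bool:
    """Flag a spike in access requests relative to the user's own baseline."""
    baseline = history[user]
    mu, sd = mean(baseline), stdev(baseline)
    return todays_count > mu + sigmas * max(sd, 1.0)  # floor the deviation to avoid noise

print(is_anomalous("bob", 40))    # True: a large spike against Bob's baseline
print(is_anomalous("alice", 15))  # False: within Alice's normal range
```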

Your compass points to ZTA

Protecting against privilege escalation threats requires more than merely staying up to date on patches. Part of stopping attacks like Zerologon is to re-imagine how security is architected in an organization. Centering identity as the new security perimeter and implementing zero standing privilege are essential to the foundation of a security model known as zero trust architecture (ZTA).

Zero trust architecture has existed for a while in the corporate world. It is gaining attention from the public sector since NIST’s recent approval of SP 800-207, which outlined ZTA and how to leverage it for government agencies. NIST’s sanctification of ZTA opened the doors for government entities and civilian contractors to incorporate it into their security model. Taking this route helps to close the privilege escalation pathway, providing your organization a safe harbor in the event of another cybersecurity perfect storm.

What is confidential computing? How can you use it?

What is confidential computing? Can it strengthen enterprise security? Sam Lugani, Lead Security PMM, Google Workspace & GCP, answers these and other questions in this Help Net Security interview.

How does confidential computing enhance the overall security of a complex enterprise architecture?

We’ve all heard about encryption in transit and at rest, but as organizations prepare to move their workloads to the cloud, one of the biggest challenges they face is how to process sensitive data while still keeping it private. Until now, there hasn’t been an easy way to keep data encrypted while it is being processed.

Confidential computing is a breakthrough technology which encrypts data in-use – while it is being processed. It creates a future where private and encrypted services become the cloud standard.

At Google Cloud, we believe this transformational technology will help instill confidence that customer data is not being exposed to cloud providers or susceptible to insider risks.

Confidential computing has moved from research projects into worldwide deployed solutions. What are the prerequisites for delivering confidential computing across both on-prem and cloud environments?

Running workloads confidentially will differ based on what services and tools you use, but one thing is a given: organizations don’t want to have to compromise on usability and performance in exchange for security.

Those running Google Cloud can seamlessly take advantage of the products in our portfolio, Confidential VMs and Confidential GKE Nodes.

All customer workloads that run in VMs or containers today can run confidentially without significant performance impact. The best part is that we have worked hard to simplify the complexity: it’s as simple as selecting one checkbox.

What type of investments does confidential computing require? What technologies and techniques are involved?

To deliver on the promise of confidential computing, customers need to take advantage of security technology offered by modern, high-performance CPUs, which is why Google Cloud’s Confidential VMs run on N2D series VMs powered by 2nd Gen AMD EPYC processors.

To support these environments, we also had to update our own hypervisor and low-level platform stack while also working closely with the open source Linux community and modern operating system distributors to ensure that they can support the technology.

Networking and storage drivers are also critical to the deployment of secure workloads and we had to ensure we were capable of handling confidential computing traffic.

How is confidential computing helping large organizations with a massive work-from-home movement?

As we entered the first few months of dealing with COVID-19, many organizations expected a slowdown in their digital strategy. Instead, we saw the opposite – most customers accelerated their use of cloud-based services. Today, enterprises have to manage a new normal which includes a distributed workforce and new digital strategies.

With workforces dispersed, confidential computing can help organizations collaborate on sensitive workloads in the cloud across geographies and competitors, all while preserving the privacy of confidential datasets. This can lead to the development of transformational technologies: imagine, for example, being able to more quickly build vaccines and cure diseases as a result of this secure collaboration.

How do you see the work of the Confidential Computing Consortium evolving in the near future?

Google was among the founding members of the Confidential Computing Consortium, operating under the umbrella of the Linux Foundation to facilitate adoption of confidential computing.

Cloud providers, hardware manufacturers, and software vendors all need to work together to define standards to advance confidential computing. As the technology garners more interest, sustained industry collaboration such as the Consortium will be key to helping realize the true potential of confidential computing.

New research shows risk in healthcare supply chain

Exposures and cybersecurity challenges can turn out to be costly: according to statistics from the US Department of Health and Human Services (HHS), 861 breaches of protected health information have been reported over the last 24 months.

New research from RiskRecon and the Cyentia Institute pinpointed risk in the third-party healthcare supply chain and showed that healthcare’s high exposure rate indicates that managing even a comparatively small Internet footprint is a big challenge for many organizations in that sector.

But there is a silver lining: gaining the visibility needed to pinpoint and rectify exposures in the healthcare risk surface is feasible.

Key findings

The research and report are based on RiskRecon’s assessment of more than five million internet-facing systems across approximately 20,000 organizations, focusing exclusively on the healthcare sector.

Highest rate

Healthcare has one of the highest average rates of severe security findings relative to other industries. Furthermore, those rates vary hugely across institutions, meaning the worst exposure rates in healthcare are worse than the worst exposure rates in other sectors.

Size matters

Severe security findings decrease as the number of employees increases. For example, the rate of severe security findings in the smallest healthcare providers is 3x higher than that of the largest providers.

Sub sectors vary

Sub-sectors within healthcare reveal different risk trends. The research shows that hospitals have a much larger Internet surface area (hosts, providers, countries) but maintain relatively low rates of security findings. Additionally, the nursing and residential care sub-sector has the smallest Internet footprint yet the highest levels of exposure. Outpatient (ambulatory) and social services mostly fall in between hospitals and nursing facilities.

Cloud deployment impacts

As digital transformation ushers in a plethora of changes, critical areas of risk exposure are also changing and expanding. While most healthcare firms host a majority of their Internet-facing systems on-prem, they do also leverage the cloud. We found that healthcare’s severe finding rate for high-value assets in the cloud is 10 times that of on-prem. This is the largest on-prem versus cloud exposure imbalance of any sector.

It must also be noted that not all cloud environments are the same. A previous RiskRecon report on the cloud risk surface discovered an average 12x difference between the cloud providers with the highest and lowest exposure rates. This says more about the users and use cases of various cloud platforms than about intrinsic security inequalities. In addition, as healthcare organizations look to migrate to the cloud, they should assess their own capabilities for handling cloud security.

The healthcare supply chain is at risk

It’s important to realize that the broader healthcare ecosystem spans numerous industries, and these entities often have deep connections into healthcare providers’ facilities, operations, and information systems, meaning those organizations can have significant ramifications for third-party risk management.

When you dig into it, even though big pharma has the biggest footprint (hosts, third-party service providers, and countries of operation), they keep it relatively hygienic. Manufacturers of various types of healthcare apparatus and instruments show a similar profile of extensive assets yet fewer findings. Unfortunately, the information-heavy industries of medical insurance, EHR systems providers, and collection agencies occupy three of the top four slots for the highest rate of security findings.

“In 2020, Health Information Sharing and Analysis Center (H-ISAC) members across healthcare delivery, big pharma, payers and medical device manufacturers saw increased cyber risks across their evolving and sometimes unfamiliar supply chains,” said Errol Weiss, CSO at H-ISAC.

“Adjusting to the new operating environment presented by COVID-19 forced healthcare companies to rapidly innovate and adopt solutions like cloud technology that also added risk with an expanded digital footprint to new suppliers and partners with access to sensitive patient data.”

As attackers evolve their tactics, continuous cybersecurity education is a must

As the Information Age slowly gives way to the Fourth Industrial Revolution, with the rise of IoT and IIoT, on-demand availability of computer system resources, big data and analytics, and cyber attacks aimed at business environments all impacting our everyday lives, there’s an increasing need for knowledgeable cybersecurity professionals and, unfortunately, an increasing cybersecurity workforce skills gap.

The cybersecurity skills gap is huge

A year ago, (ISC)² estimated that the global cybersecurity workforce numbered 2.8 million professionals, when there’s an actual need for 4.07 million.

According to a recent global study of cybersecurity professionals by the Information Systems Security Association (ISSA) and analyst firm Enterprise Strategy Group (ESG), there has been no significant progress towards a solution to this problem in the last four years.

“What’s needed is a holistic approach of continuous cybersecurity education, where each stakeholder needs to play a role versus operating in silos,” ISSA and ESG stated.

Those starting their career in cybersecurity need many years to develop real cybersecurity proficiency, the respondents agreed. They need cybersecurity certifications and hands-on experience (i.e., jobs) and, ideally, a career plan and guidance.

Continuous cybersecurity training and education are key

Aside from the core cybersecurity talent pool, new recruits come from universities (recent graduates), consultants/contractors, other departments within an organization, security/hardware vendors, and the ranks of career changers.

One thing they all have in common is the need for constant additional training, as technology advances and changes and attackers evolve their tactics, techniques and procedures.

Though most IT and security professionals use their own free time to improve their cyber skills, they should also be able to learn on the job and get effective support from their employers for their continued career development.

Times are tough – there’s no doubt of that – but organizations must continue to invest in their employees’ career and skills development if they want to retain their current cybersecurity talent, develop it, and attract new, capable employees.

“The pandemic has shown us just how critical cybersecurity is to the successful operation of our respective economies and our individual lifestyles,” noted Deshini Newman, Managing Director EMEA, (ISC)².

Certifications show employers that cybersecurity professionals have the knowledge and skills required for the job, but also indicate that they are invested in keeping pace with a myriad of evolving issues.

“Maintaining a cybersecurity certification, combined with professional membership is evidence that professionals are constantly improving and developing new skills to add value to the profession and taking ownership for their careers. This new knowledge and understanding can be shared throughout an organisation to support security best practice, as well as ensuring cyber safety in our homes and communities,” she pointed out.

With database attacks on the rise, how can companies protect themselves?

Misconfigured or unsecured databases exposed on the open web are a fact of life. We hear about some of them because security researchers tell us how they discovered them, pinpointed their owners and alerted them, but many others are found by attackers first.

It used to take months to scan the Internet looking for open systems, but attackers now have access to free and easy-to-use scanning tools that can find them in less than an hour.

“As one honeypot experiment showed, open databases are targeted hundreds of times within a few hours,” Josh Bressers, product security lead at Elastic, told Help Net Security.

“There’s no way to leave unsecured data online without opening the data up to attack. This is why it’s crucial to always enable security and authentication features when setting up databases, so that your organization avoids this risk altogether.”

What do attackers do with exposed databases?

Bressers has been involved in the security of products and projects – especially open-source – for a very long time. In the past two decades, he created the product security division at Progeny Linux Systems and worked as a manager of the Red Hat product security team and headed the security strategy in Red Hat’s Platform Business Unit.

He now manages bug bounties, penetration testing and security vulnerability programs for Elastic’s products, as well as the company’s efforts to improve application security and to add new security features and improve existing ones as needed or requested by customers.

The problem with exposed Elasticsearch (MariaDB, MongoDB, etc.) databases, he says, is that they are often left unsecured by developers by mistake and companies don’t discover the exposure quickly.

“The scanning tools do most of the work, so it’s up to the attacker to decide if the database has any data worth stealing,” he noted, and pointed out that this isn’t hacking, exactly – it’s mining of open services.

Attackers can quickly exfiltrate the accessible data, hold it for ransom, sell it to the highest bidder, modify it or simply delete it all.

“Sometimes there’s no clear advantage or motive. For example, this summer saw a string of cyberattacks called the Meow Bot attacks that have affected at least 25,000 databases so far. The attacker replaced the contents of every afflicted database with the word ‘meow’ but has not been identified, nor has the purpose of the attack been revealed,” he explained.

Advice for organizations that use clustered databases

Open-source database platforms such as Elasticsearch have built-in security to prevent attacks of this nature, but developers often disable those features in haste or due to a lack of understanding that their actions can put customer data at risk, Bressers says.

“The most important thing to keep in mind when trying to secure data is having a clear understanding of what you are securing and what it means to your organization. How sensitive is the data? What level of security needs to be applied? Who should have access?” he explained.

“Sometimes working with a partner who is an expert at running a modern database is a more secure alternative than doing it yourself. Sometimes it’s not. Modern data management is a new problem for many organizations; make sure your people understand the opportunities and challenges. And most importantly, make sure they have the tools and training.”

Secondly, he says, companies should set up external scanning systems that continuously check for exposed databases.

“These may be the same tools used by attackers, but they immediately notify security teams when a developer has mistakenly left sensitive data unlocked. For example, a free scanner is available from Shadowserver.”
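As a rough illustration of what such an external check can look like, the sketch below probes a single host for an unauthenticated Elasticsearch endpoint. The port, timeout, and the assumption that a 401/403 response means security is enabled are simplifications; a real scanning service covers far more products and edge cases, and should only ever be pointed at systems you own.

```python
import json
import urllib.request
from urllib.error import HTTPError, URLError

def check_elasticsearch_exposure(host: str, port: int = 9200, timeout: float = 5.0) -> str:
    """Return a rough verdict on whether an Elasticsearch node answers without auth."""
    url = f"http://{host}:{port}/"
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            body = json.loads(resp.read().decode("utf-8", errors="replace"))
            # An unsecured node typically returns cluster metadata to anyone who asks.
            if "cluster_name" in body:
                return "EXPOSED: responds without authentication"
            return "reachable, response unrecognized"
    except HTTPError as err:
        # A 401/403 response suggests security and authentication are enabled.
        return "secured (authentication required)" if err.code in (401, 403) else f"HTTP {err.code}"
    except (URLError, OSError):
        return "unreachable"

# Usage: run only against your own external IP ranges.
print(check_elasticsearch_exposure("198.51.100.10"))
```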

Elastic offers information and documentation on how to enable the security features of Elasticsearch databases and prevent exposure, he adds and points out that security is enabled by default in their Elasticsearch Service on Elastic Cloud and cannot be disabled.

Defense in depth

No organization will ever be 100% safe, but steps can be taken to decrease a company’s attack surface. “Defense in depth” is the name of the game, Bressers says, and in this case, it should include the following security layers:

  • Discovery of data exposure (using the previously mentioned external scanning systems)
  • Strong authentication (SSO or usernames/passwords)
  • Prioritization of data access (e.g., HR may only need access to employee information and the accounting department may only need access to budget and tax data)
  • Deployment of monitoring infrastructures and automated solutions that can quickly identify potential problems before they become emergencies, isolate infected databases, and flag to support and IT teams for next steps

He also advises organizations that don’t have the internal expertise to set security configurations and manage a clustered database to hire service providers that can handle data management and have a strong security portfolio, and to always have a mitigation plan in place and rehearse it with their IT and security teams so that, when something does happen, they can execute a swift and intentional response.

The brain of the SIEM and SOAR

SIEM and SOAR solutions are important tools in a cybersecurity stack. They gather a wealth of data about potential security incidents throughout your system and store that info for review. But just like nerve endings in the body sending signals, what good are these signals if there is no brain to process, categorize and correlate this information?

A vendor-agnostic XDR (Extended Detection and Response) solution is a necessary component for solving the data overload problem – a “brain” that examines all of the past and present data collected and assigns a collective meaning to the disparate pieces. Without this added layer, organizations are unable to take full advantage of their SIEM and SOAR solutions.

So, how do organizations implement XDR? Read on.

SIEM and SOAR act like nerves

It’s easy for solutions with acronyms to cause confusion. SOAR and SIEM are perfect examples, as they are two very different technologies that often get lumped together. They aren’t the same thing, and they do bring complementary capabilities to the security operations center, but they still don’t completely close the automation gap.

The SIEM is a decades-old solution that uses technology from that era to solve specific problems. At their core, SIEMs are data collection, workflow and rules engines that enable users to sift through alerts and group things together for investigation.

In the last several years, SOAR has been the favorite within the security industry’s marketing landscape. Just as the SIEM runs on rules, the SOAR runs on playbooks. These playbooks let an analyst automate steps in the event detection, enrichment, investigation and remediation process. And just like with SIEM rules, someone has to write and update them.

Because many organizations already have a SIEM, it seemed reasonable for the SOAR providers to start with automating the output from the SIEM tool or security platform console. So: Security controls send alerts to a SIEM > the SIEM uses rules written by the security team to filter down the number of alerts to a much smaller number, usually 1,000,000:1 > SIEM events are sent to the SOAR, where playbooks written by the security team use workflow automation to investigate and respond to the alerts.
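A deliberately simplified sketch of that filter-then-playbook flow is shown below; the rule, the enrichment step, and the alert fields are hypothetical stand-ins, since real SIEM rules and SOAR playbooks are far richer.

```python
# Hypothetical raw alerts coming from security controls.
alerts = [
    {"source": "edr", "severity": 9, "host": "hr-laptop-12", "signature": "credential dumping"},
    {"source": "proxy", "severity": 2, "host": "dev-vm-03", "signature": "ad tracker"},
]

def siem_filter(alert: dict) -> bool:
    """Stand-in for analyst-written SIEM rules: keep only high-severity events."""
    return alert["severity"] >= 7

def lookup_owner(host: str) -> str:
    """Stand-in for an asset inventory lookup used during enrichment."""
    return {"hr-laptop-12": "alice@example.com"}.get(host, "unknown")

def soar_playbook(event: dict) -> dict:
    """Stand-in for a SOAR playbook: enrich the event, then decide a response."""
    event = dict(event, owner=lookup_owner(event["host"]))  # enrichment step
    event["action"] = "isolate host and open ticket"         # response step
    return event

for alert in alerts:
    if siem_filter(alert):            # SIEM: rules written by the security team
        print(soar_playbook(alert))   # SOAR: playbook automates investigation/response
```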

SOAR investigation playbooks attempt to contextualize the events with additional data – often the same data that the SIEM has filtered out. Writing these investigation playbooks can occupy your security team for months, and even then, they only cover a few scenarios and automate simple tasks like VirusTotal lookups.

The verdict is that SOARs and SIEMs purport to perform all the actions necessary to automate the screening of alerts, but the technology in itself cannot do this. It requires trained staff to bring forth this capability by writing rules and playbooks.

Coming back to the analogy, this data can be compared to the nerves flowing through the human body. They fire off alerts that something has happened – alerts that mean nothing without a processing system that can gather context and explain what has happened.

Giving the nerves a brain

What the nerves need is a brain that can receive and interpret their signals. An XDR engine, powered by Bayesian reasoning, is a machine-powered brain that can investigate any output from the SIEM or SOAR at speed and scale. This replaces traditional Boolean logic (that is, searching for things that IT teams already know to be somewhat suspicious) with a much richer way to reason about the data.
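As a toy example of what “reasoning about the data” can mean, the sketch below applies a naive Bayesian update across several sensor observations; the prior and the per-sensor likelihoods are invented for illustration, and a real XDR engine models far more evidence and context.

```python
# Likelihood of each observation given malicious vs. benign activity (invented numbers).
LIKELIHOODS = {
    "odd_login_hour":     (0.60, 0.10),
    "new_admin_tool_run": (0.70, 0.05),
    "large_data_upload":  (0.50, 0.08),
}

def posterior_malicious(observations, prior: float = 0.01) -> float:
    """Naive-Bayes style update of P(malicious | observations)."""
    p_mal, p_ben = prior, 1.0 - prior
    for obs in observations:
        l_mal, l_ben = LIKELIHOODS[obs]
        p_mal *= l_mal
        p_ben *= l_ben
    return p_mal / (p_mal + p_ben)

print(round(posterior_malicious(["odd_login_hour"]), 3))                                         # ~0.057
print(round(posterior_malicious(["odd_login_hour", "new_admin_tool_run", "large_data_upload"]), 3))  # ~0.841
```

Note how a single weak signal barely moves the verdict, while several correlated signals push the posterior toward “malicious”; that correlation across disparate pieces of evidence is exactly what a rules-only approach struggles to express.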

This additional layer of understanding will work out of the box with the products an organization already has in place to provide key correlation and context. For instance, imagine that a malicious act occurs. That malicious act is going to be observed by multiple types of sensors. All of that information needs to be put together, along with the context of the internal systems, the external systems and all of the other things that integrate at that point. This gives the system the information needed to know the who, what, when, where, why and how of the event.

This is what the system’s brain does. It boils all of the data down to: “I see someone bad doing something bad. I have discovered them. And now I am going to manage them out.” What the XDR brain is going to give the IT security team is more accurate, consistent results, fewer false positives and faster investigation times.

How to apply an XDR brain

To get started with integrating XDR into your current system, take these three steps:

1. Deploy a solution that is vendor-agnostic and works out of the box. This XDR layer of security doesn’t need playbooks or rules. It changes the foundation of your security program and how your staff do their work. This reduces your commitment in time and budget for security engineering, or at least enables you to redirect it.

2. It has become much easier in the last several years to collect, store and – to some extent – analyze data. In particular, cloud architectures offer simple and cost-effective options for collecting and storing vast quantities of data. For this reason, it’s now possible to turn your sensors all the way up rather than letting in just a small stream of data.

3. Decide which risk reduction projects are critical for the team. Automation should release security professionals from mundane tasks so they can focus on high-value actions that truly reduce risk, like incident response, hunting and tuning security controls. There may also be budget that is freed up for new technology or service purchases.

Reading the signals

To make the most of SOARs and SIEMs, you need XDR – a tool that will take the data collected and add the context needed to turn thousands of alerts into one complete situation that is worth investigating.

The XDR layer is an addition to a company’s cybersecurity strategy that will most effectively use SIEM and SOAR, giving all those nerve signals a genius brain that can sort them out and provide the context needed in today’s cyber threat landscape.

In the era of AI, standards are falling behind

According to a recent study, only a minority of software developers are actually working in a software development company. This means that nowadays literally every company builds software in some form or another.

As a professional in the field of information security, it is your task to protect information, assets, and technologies. Obviously, the software built by or for your company that is collecting, transporting, storing, processing, and finally acting upon your company’s data, is of high interest. Secure development practices should be enforced early on and security must be tested during the software’s entire lifetime.

Within the (ISC)² common body of knowledge for CISSPs, software development security is listed as an individual domain. Several standards and practices covering security in the Software Development Lifecycle (SDLC) are available: ISO/IEC 27024:2011, ISO/IEC TR 15504, or NIST SP800-64 Revision 2, to name some.

All of the above ask for continuous assessment and control of artifacts on the source-code level, especially regarding coding standards and Common Weakness Enumerations (CWE), but only briefly mention static application security testing (SAST) as a possible way to address these issues. In the search for possible concrete tools, NIST provides SP 500-268 v1.1 “Source Code Security Analysis Tool Function Specification Version 1.1”.

In May 2019, NIST withdrew the aforementioned SP800-64 Rev2. NIST SP 500-268 was published over nine years ago. This seems to be symptomatic for an underlying issue we see: the standards cannot keep up with the rapid pace of development and change in the field.

A good example is the dawn of the development language Rust, which addresses a major source of security issues presented by the classically used language C++ – namely, memory management. Major players in the field such as Microsoft and Google saw great advantages and announced that they would focus future developments towards Rust. While the standards note that some development languages are superior to others, neither the mechanisms used by Rust nor Rust itself is mentioned.

In the field of Static Code Analysis, the information in NIST SP 500-268 is not wrong, but the paper simply does not mention advances in the field.

Let us briefly discuss two aspects: First, the wide use of open source software gave us insight into a vast quantity of source code changes and the reasoning behind them (security, performance, style). On top of that, we have seen increasing capacities of CPU power to process this data, accompanied by algorithmic improvements. Nowadays, we have a large lake of training data available. To use our company as an example, in order to train our underlying model for C++ alone, we are scanning changes in over 200,000 open source projects with millions of files containing rich history.

Secondly, in the past decade, we’ve witnessed tremendous advances in machine learning. We see tools like GPT-3 and their applications in source code being discussed widely. Classically, static source code analysis was the domain of Symbolic AI—facts and rules applied to source code. The realm of source code is perfectly suited for this approach since software source code has a well-defined syntax and grammar. The downside is that these rules were developed by engineers, which limits the pace in which rules can be generated. The idea would be to automate the rule construction by using machine learning.

Recently, we see research in the field of machine learning being applied to source code. Again, let us use our company as an example: By using the vast amount of changes in open source, our system looks out for patterns connected to security. It presents possible rules to an engineer together with found cases in the training set—both known and fixed, as well as unknown.

Also, the system supports parameters in the rules. Possible values for these parameters are collected by the system automatically. As a practical example, taint analysis follows incoming data to its use inside of the application to make sure the data is sanitized before usage. The system automatically learns possible sources, sanitization, and sink functions.
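For readers unfamiliar with the terminology, here is a tiny illustration (in Python, with invented function names) of the source, sanitizer, and sink roles that such a system learns to recognize.

```python
import html

def read_request_param(name: str) -> str:
    """Source: attacker-controlled input enters the program here."""
    return '<script>alert("xss")</script>'  # stand-in for real request data

def sanitize(value: str) -> str:
    """Sanitizer: neutralizes dangerous characters before the data is used."""
    return html.escape(value)

def render_page(fragment: str) -> str:
    """Sink: data is used in a security-sensitive way (here, emitted as HTML)."""
    return f"<p>Hello {fragment}</p>"

tainted = read_request_param("user")
print(render_page(tainted))            # finding: tainted data reaches the sink unsanitized
print(render_page(sanitize(tainted)))  # safe: the taint is cleared by the sanitizer
```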

Back to the NIST Special Papers: With the withdrawal of SP 800-64 Rev 2, users were pointed to NIST SP 800-160 Vol 1 for the time being until a new, updated white paper is published. This was at the end of May 2019. The nature of these papers is to only describe high-level best practices, list some examples, and stay rather vague in concrete implementation. Yet, the documents are the basis for reviews and audits. Given the importance of the field, it seems as if a major component is missing. It is also time to think about processes that would help us to keep up with the pace of technology.

CPRA: More opportunity than threat for employers

Increasingly demanded by consumers, data privacy laws can create onerous burdens on even the most well-meaning businesses. California presents plenty of evidence to back up this statement, as more than half of organizations that do business in California still aren’t compliant with the California Consumer Privacy Act (CCPA), which went into effect earlier this year.

As companies struggle with their existing compliance requirements, many fear that a new privacy ballot initiative – the California Privacy Rights Act (CPRA) – could complicate matters further. While it’s true that if passed this November, the CPRA would fundamentally change the way businesses in California handle both customer and employee data, companies shouldn’t panic. In fact, this law presents an opportunity for organizations to change their relationship with employee data to their benefit.

CPRA, the Californian GDPR?

Set to appear on the November 2020 ballot, the CPRA, also known as CCPA 2.0 or Prop 24 (its name on the ballot), builds on what is already the most comprehensive data protection law in the US. In essence, the CPRA will bring data protection in California nearer to the current European legal standard, the General Data Protection Regulation (GDPR).

In the process of “getting closer to GDPR,” the CCPA would gain substantial new components. Besides enhancing consumer rights, the CPRA also creates new provisions for employee data as it relates to their employers, as well as data that businesses collect from B2B business partners.

Although controversial, the CPRA is likely to pass. August polling shows that more than 80% of voters support the measure. However, many businesses do not. This is because, at first glance, the CPRA appears to create all kinds of legal complexities in how employers can and cannot collect information from workers.

Fearful of having to meet the same demanding requirements as their European counterparts, many organizations’ natural reaction towards the prospect of CPRA becoming law is fear. However, this is unfounded. In reality, if the CPRA passes, it might not be as scary as some businesses think.

CPRA and employment data

The CPRA is actually a lot more lenient than the GDPR in regard to how it polices the relationship between employers and employees’ data. Unlike its EU equivalent, there are already lots of exceptions written into the proposed Californian law acknowledging that worker-employer relations are not like consumer-vendor relations.

Moreover, the CPRA extends the CCPA exemption for employers, set to end on January 1, 2021. This means that if the CPRA passes into law, employers would be released from both their existing and potential new employee data protection obligations for two more years, until January 1, 2023. This exemption would apply to most provisions under the CPRA, including the personal information collected from individuals acting as job applicants, staff members, employees, contractors, officers, directors, and owners.

However, employers would still need to provide notice of data collection and maintain safeguards for personal information. It’s highly likely that during this two-year window, additional reforms would be passed that might further ease employer-employee data privacy requirements.

Nonetheless, employers should act now

While the CPRA won’t change much overnight, impacted organizations shouldn’t wait to take action, but should take this time to consider what employee data they collect, why they do so, and how they store this information.

This is especially pertinent now that businesses are collecting more data than ever on their employees. With companies like the workplace monitoring company Prodoscore reporting that interest from prospective customers rose by 600% since the pandemic began, we are seeing rapid growth in companies looking to monitor how, where, and when their employees work.

This trend emphasizes the fact that the information flow between companies and their employees is mostly one-sided (i.e., from the worker to the employer). Currently, businesses have no legal requirement to be transparent about this information exchange. That will change for California-based companies if the CPRA comes into effect and they will have no choice but to disclose the type of data they’re collecting about their staff.

The only sustainable solution for impacted businesses is to be transparent about their data collection with employees and work towards creating a “culture of privacy” within their organization.

Creating a culture of privacy

Rather than viewing employee data privacy as some perfunctory obligation where the bare minimum is done for the sake of appeasing regulators, companies need to start thinking about worker privacy as a benefit. Presented as part of a benefits package, comprehensive privacy protection is a perk that companies can offer prospective and existing employees.

Privacy benefits can include access to privacy protection services that give employees privacy benefits beyond the workplace. Packaged alongside privacy awareness training and education, these can create “privacy-plus” benefits that can be offered to employees alongside standard perks like health or retirement plans. Doing so will build a culture of privacy which can help companies ensure they’re in regulatory compliance, while also making it easier to attract qualified talent and retain workers.

It’s also worth bearing in mind that creating a culture of privacy doesn’t necessarily mean that companies have to stop monitoring employee activity. In fact, employees are less worried about being watched than they are by the possibility of their employers misusing their data. Their fears are well-founded. Although over 60% of businesses today use workforce data, only 3 in 10 business leaders are confident that this data is treated responsibly.

For this reason, companies that want to keep employee trust and avoid bad PR need to prioritize transparency. This could mean drawing up a “bill of rights” that lets employees know what data is being collected and how it will be used.

Research into employee satisfaction backs up the value of transparency. Studies show that while only 30% of workers are comfortable with their employer monitoring their email, the number of employees open to the use of workforce data goes up to 50% when the employer explains the reasons for doing so. This number further jumps to 92% if employees believe that data collection will improve their performance or well-being or come with other personal benefits, like fairer pay.

On the other hand, most employees would leave an organization if its leaders did not use workplace data responsibly. Moreover, 55% of candidates would not even apply for a job with such an organization in the first place.

Final thoughts

With many exceptions for workplace data management already built-in and more likely to come down the line, most employers should be able to easily navigate the stipulations CPRA entails.

That being said, if it becomes law this November, employers shouldn’t misuse the two-year window they have to prepare for new compliance requirements. Rather than seeing this time as breathing space before a regulatory crackdown, organizations should instead use it to be proactive in their approach to how they manage their employees’ data. As well as just ensuring they comply with the law, businesses should look at how they can turn employee privacy into an asset.

As data privacy stays at the forefront of employees’ minds, businesses that can show they have a genuine privacy culture will be able to gain an edge when it comes to attracting and retaining talent and, ultimately, coming out on top.

How to build up cybersecurity for medical devices

Manufacturing medical devices with cybersecurity firmly in mind is an endeavor that, according to Christopher Gates, an increasing number of manufacturers are trying to get right.

Healthcare delivery organizations have started demanding better security from medical device manufacturers (MDMs), he says, and many have implemented secure procurement processes and contract language for MDMs that address the cybersecurity of the device itself, secure installation, cybersecurity support for the life of the product in the field, liability for breaches caused by a device not following current best practice, ongoing support for events in the field, and so on.

“For someone like myself who has been focused on cybersecurity at MDMs for over 12 years, this is excellent progress as it will force MDMs to take security seriously or be pushed out of the market by competitors who do take it seriously. Positive pressure from MDMs is driving cybersecurity forward more than any other activity,” he told Help Net Security.

Gates is a principal security architect at Velentium and one of the authors of the recently released Medical Device Cybersecurity for Engineers and Manufacturers, a comprehensive guide to medical device secure lifecycle management, aimed at engineers, managers, and regulatory specialists.

In this interview, he shares his knowledge regarding the cybersecurity mistakes most often made by manufacturers, on who is targeting medical devices (and why), his view on medical device cybersecurity standards and initiatives, and more.

[Answers have been edited for clarity.]

Are attackers targeting medical devices with a purpose other than to use them as a way into a healthcare organization’s network?

The easy answer to this is “yes,” since many MDMs in the medical device industry perform “competitive analysis” on their competitors’ products. It is much easier and cheaper for them to have a security researcher spend a few hours extracting an algorithm from a device for analysis than to spend months or even years of R&D work to pioneer a new algorithm from scratch.

Also, there is a large, hundreds-of-millions-of-dollars industry of companies who “re-enable” consumed medical disposables. This usually requires some fairly sophisticated reverse-engineering to return the device to its factory default condition.

Lastly, the medical device industry, when grouped together with the healthcare delivery organizations, constitutes part of critical national infrastructure. Other industries in that class (such as nuclear power plants) have experienced very directed and sophisticated attacks targeting safety backups in their facilities. These attacks seem to be initial testing of a cyber weapon that may be used later.

While these are clearly nation-state level attacks, you have to wonder if these same actors have been exploring medical devices as a way to inhibit our medical response in an emergency. I’m speculating: we have no evidence that this has happened. But then again, if it has happened there likely wouldn’t be any evidence, as we haven’t been designing medical devices and infrastructure with the ability to detect potential cybersecurity events until very recently.

What are the most often exploited vulnerabilities in medical devices?

It won’t come as a surprise to anyone in security when I say “the easiest vulnerabilities to exploit.” An attacker is going to start with the obvious ones, and then increasingly get more sophisticated. Mistakes made by developers include:

Unsecured firmware updating

I personally always start with software updates in the field, as they are so frequently implemented incorrectly. An attacker’s goal here is to gain access to the firmware with the intent of reverse-engineering it back into easily-readable source code that will yield more widely exploitable vulnerabilities (e.g., one impacting every device in the world). All firmware update methods have at least three very common potential design vulnerabilities (see the verification sketch after this list). They are:

  • Exposure of the binary executable (i.e., it isn’t encrypted)
  • Corrupting the binary executable with added code (i.e., there isn’t an integrity check)
  • A rollback attack which downgrades the firmware to a version with known exploitable vulnerabilities (i.e., there is no metadata conveying the version information).
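The sketch below illustrates all three checks in Python. The symmetric HMAC key, the four-byte version header, and the image layout are invented for the example; real devices typically rely on asymmetric signatures, encrypted payloads, and hardware-protected key storage.

```python
import hmac
import hashlib
import struct

DEVICE_KEY = b"device-unique-secret"  # illustrative; real designs use protected key storage
CURRENT_VERSION = 7                   # version currently installed on the device

def verify_firmware_image(image: bytes) -> bytes:
    """Check authenticity, integrity, and version metadata before accepting an update.

    Assumed (invented) layout: 4-byte big-endian version | payload | 32-byte HMAC-SHA256 tag.
    """
    header, payload, tag = image[:4], image[4:-32], image[-32:]

    # 1. Integrity/authenticity: reject images whose tag does not verify.
    expected = hmac.new(DEVICE_KEY, image[:-32], hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("integrity check failed: image corrupted or not from the manufacturer")

    # 2. Anti-rollback: reject versions older than what is already installed.
    (version,) = struct.unpack(">I", header)
    if version < CURRENT_VERSION:
        raise ValueError("rollback attempt: image is older than the installed firmware")

    # 3. Confidentiality: a real design would also decrypt the payload here, so the
    #    binary cannot be reverse-engineered in transit (omitted in this sketch).
    return payload

# Usage: build a well-formed version-8 image and verify it.
payload = b"\x00" * 64
image = struct.pack(">I", 8) + payload
image += hmac.new(DEVICE_KEY, image, hashlib.sha256).digest()
print(len(verify_firmware_image(image)))  # 64
```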

Overlooking physical attacks

Physical attacks can be mounted:

  • Through an unsecured JTAG/SWD debugging port
  • Via side-channel (power monitoring, timing, etc.) exploits to expose the values of cryptographic keys
  • By sniffing internal busses, such as SPI and I2C
  • By exploiting flash memory external to the microcontroller (a $20 cable can get it to dump all of its contents)

Manufacturing support left enabled

Almost every medical device needs certain functions to be available during manufacturing. These are usually for testing and calibration, and none of them should be functional once the device is fully deployed. Manufacturing commands are frequently documented in PDF files used for maintenance, and often only have minor changes across product/model lines inside the same manufacturer, so a little experimentation goes a long way in letting an attacker get access to all kinds of unintended functionality.

No communication authentication

Just because a communications medium connects two devices doesn’t mean that the device being connected to is the device that the manufacturer or end-user expects it to be. No communications medium is inherently secure; it’s what you do at the application level that makes it secure.

Bluetooth Low Energy (BLE) is an excellent example of this. Immediately following a pairing (or re-pairing), a device should always, always perform a challenge-response process (which utilizes cryptographic primitives) to confirm it has paired with the correct device.
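As a sketch of what such a post-pairing check might look like at the application level, the example below uses an HMAC-based challenge-response; the shared key, nonce size, and message flow are invented for illustration, and real designs often use asymmetric keys or keys provisioned securely during manufacturing.

```python
import hmac
import hashlib
import secrets

SHARED_KEY = b"provisioned-at-manufacture"  # illustrative only

# --- peripheral side (the medical device) ---
def device_respond(challenge: bytes) -> bytes:
    """Prove knowledge of the shared key without ever sending the key itself."""
    return hmac.new(SHARED_KEY, challenge, hashlib.sha256).digest()

# --- central side (the controller/app), run immediately after every pairing or re-pairing ---
def authenticate_peer(send_challenge) -> bool:
    challenge = secrets.token_bytes(16)      # fresh nonce defeats replayed responses
    response = send_challenge(challenge)     # in practice this travels over the BLE link
    expected = hmac.new(SHARED_KEY, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(response, expected)

# Usage: only keep the session if the peer proves it holds the key.
print(authenticate_peer(device_respond))          # True for the legitimate device
print(authenticate_peer(lambda c: b"\x00" * 32))  # False for an impostor
```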

I remember attending an on-stage presentation of a new class II medical device with a BLE interface. From the audience, I immediately started to explore the device with my smartphone. This device had no authentication (or authorization), so I was able to perform all operations exposed on the BLE connection. I was engrossed in this interface when I suddenly realized there was some commotion on stage as they couldn’t get their demonstration to work: I had accidentally taken over the only connection the device supported. (I then quickly terminated the connection to let them continue with the presentation.)

What things must medical device manufacturers keep in mind if they want to produce secure products?

There are many aspects to incorporating security into your development culture. These can be broadly lumped into activities that promote security in your products, versus activities that convey a false sense of security and are actually a waste of time.

Probably the most important thing that a majority of MDMs need to understand and accept is that their developers have probably never been trained in cybersecurity. Most developers have limited knowledge of how to incorporate cybersecurity into the development lifecycle, where to invest time and effort into securing a device, what artifacts are needed for premarket submission, and how to properly utilize cryptography. Without knowing the details, many managers assume that security is being adequately included somewhere in their company’s development lifecycle; most are wrong.

To produce secure products, MDMs must follow a secure “total product life cycle,” which starts on the first day of development and ends years after the product’s end of life or end of support.

They need to:

  • Know the three areas where vulnerabilities are frequently introduced during development (design, implementation, and through third-party software components), and how to identify, prevent, or mitigate them
  • Know how to securely transfer a device to production and securely manage it once in production
  • Recognize an MDM’s place in the device’s supply chain: not at the end, but in the middle. An MDM’s cybersecurity responsibilities extend up and down the chain. They have to contractually enforce cybersecurity controls on their suppliers, and they have to provide postmarket support for their devices in the field, up through and after end-of-life
  • Create and maintain Software Bills of Materials (SBOMs) for all products, including legacy products. Doing this work now will help them stay ahead of regulation and save them money in the long run.

They must avoid mistakes like:

  • Not thinking that a medical device needs to be secured
  • Assuming their development team can secure – and is securing – their product
  • Not designing-in the ability to update the device in the field
  • Assuming that all vulnerabilities can be mitigated by a field update
  • Only considering the security of one aspect of your design (e.g., its wireless communication protocol). Security is a chain: for the device to be secure, all the links of the chain need to be secure. Attackers are not going to consider certain parts of the target device ‘out of bounds’ for exploiting.

Ultimately, security is about protecting the business model of an MDM. This includes the device’s safety and efficacy for the patient, which is what the regulations address, but it also includes public opinion, loss of business, counterfeit accessories, theft of intellectual property, and so forth. One mistake I see companies frequently make is doing the minimum on security to gain regulatory approval, but neglecting to protect their other business interests along the way – and those can be very expensive to overlook.

What about the developers? Any advice on skills they should acquire or brush up on?

First, I’d like to take some pressure off developers by saying that it’s unreasonable to expect that they have some intrinsic knowledge of how to implement cybersecurity in a product. Until very recently, cybersecurity was not part of traditional engineering or software development curriculum. Most developers need additional training in cybersecurity.

And it’s not only the developers. More than likely, project management has done them a huge disservice by creating a system-level security requirement that says something like, “Prevent ransomware attacks.” What is the development team supposed to do with that requirement? How is it actionable?

At the same time, involving the company’s network or IT cybersecurity team is not going to be an automatic fix either. IT Cybersecurity diverges from Embedded Cybersecurity in many respects, from detection to implementation of mitigations. No MDM is going to be putting a firewall on a device that is powered by a CR2032 battery anytime soon; yet there are ways to secure such a low-resource device.

In addition to the how-to book we wrote, Velentium will soon offer training available specifically for the embedded device domain, geared toward creating a culture of cybersecurity in development teams. My audacious goal is that within 5 years every medical device developer I talk to will be able to converse intelligently on all aspects of securing a medical device.

What cybersecurity legislation/regulation must companies manufacturing medical devices abide by?

It depends on the markets you intend to sell into. While the US has had the Food and Drug Administration (FDA) refining its medical device cybersecurity position since 2005, others are more recent entrants into this type of regulation, including Japan, China, Germany, Singapore, South Korea, Australia, Canada, France, Saudi Arabia, and the greater EU.

While all of these regulations have the same goal of securing medical devices, how they get there is anything but harmonized. Even the level of abstraction varies, with some focused on processes and others on technical activities.

But there are some common concepts represented in all these regulations, such as:

  • Risk management
  • Software bill of materials (SBOM)
  • Monitoring
  • Communication
  • “Total Product Lifecycle”
  • Testing

But if you plan on marketing in the US, the two most important documents are the FDA’s:

  • 2018 – Draft Guidance: Content of Premarket Submissions for Management of Cybersecurity in Medical Devices
  • 2016 – Final Guidance: Postmarket Management of Cybersecurity in Medical Devices (The 2014 version of the guidance on premarket submissions can be largely ignored, as it no longer represents the FDA’s current expectations for cybersecurity in new medical devices).

What are some good standards for manufacturers to follow if they want to get cybersecurity right?

The Association for the Advancement of Medical Instrumentation’s standards are excellent. I recommend AAMI TIR57: 2016 and AAMI TIR97: 2019.

Also very good is the Healthcare & Public Health Sector Coordinating Council’s (HPH SCC) Joint Security Plan. And, to a lesser extent, the NIST Cyber Security Framework.

The work being done at the US Department of Commerce / NTIA on SBOM definition for vulnerability management and postmarket surveillance is very good as well, and worth following.

What initiatives exist to promote medical device cybersecurity?

Notable initiatives I’m familiar with include, first, the aforementioned NTIA work on SBOMs, now in its second year. There are also several excellent working groups at HSCC, including the Legacy Medical Device group and the Security Contract Language for Healthcare Delivery Organizations group. I’d also point to numerous working groups in the H-ISAC Information Sharing and Analysis Organization (ISAO), including the Securing the Medical Device Lifecycle group.

And I have to include the FDA itself here, which is in the process of revising its 2018 premarket draft guidance; we hope to see the results of that effort in early 2021.

What changes do you expect to see in the medical devices cybersecurity field in the next 3-5 years?

So much is happening at high and low levels. For instance, I hope to see the FDA get more of a direct mandate from Congress to enforce security in medical devices.

Also, many working groups of highly talented people are working on ways to improve the security posture of devices, such as the NTIA SBOM effort to improve the transparency of software “ingredients” in a medical device, allowing end-users to quickly assess their risk level when new vulnerabilities are discovered.

Semiconductor manufacturers continue to give us great mitigation tools in hardware, such as side-channel protections, cryptographic accelerators, and virtualized security cores; Arm TrustZone is a great example.

And at the application level, we’ll continue to see more and better packaged tools, such as cryptographic libraries and processes, to help developers avoid cryptography mistakes. Also, we’ll see more and better process tools to automate the application of security controls to a design.

HDOs and other medical device purchasers are better informed than ever before about embedded cybersecurity features and best practices. That trend will continue, and will further accelerate demand for better-secured products.

I hope to see some effort at harmonization between all the federal, state, and foreign regulations that have been recently released with those currently under consideration.

One thing is certain: legacy medical devices that can’t be secured will only go away when we can replace them with new medical devices that are secure by design. Bringing new devices to market takes a long time. There’s lots of great innovation underway, but really, we’re just getting started!

The anatomy of an endpoint attack

Cyberattacks are becoming increasingly sophisticated as tools and services on the dark web – and even the surface web – enable low-skill threat actors to create highly evasive threats. Unfortunately, most of today’s modern malware evades traditional signature-based anti-malware services, arriving to endpoints with ease. As a result, organizations lacking a layered security approach often find themselves in a precarious situation. Furthermore, threat actors have also become extremely successful at phishing users out of their credentials or simply brute forcing credentials thanks to the widespread reuse of passwords.

A lot has changed across the cybersecurity threat landscape in the last decade, but one thing has remained the same: the endpoint is under siege. What has changed is how attackers compromise endpoints. Threat actors have learned to be more patient after gaining an initial foothold within a system (and essentially scope out their victim).

Take the massive Norsk Hydro ransomware attack as an example: the initial infection occurred three months prior to the attacker executing the ransomware and locking down much of the manufacturer’s computer systems. That was more than enough time for Norsk Hydro to detect the breach before the damage could be done, but the reality is that most organizations simply don’t have a sophisticated, layered security strategy in place.

In fact, the most recent IBM Cost of a Data Breach Report found it took organizations an average of 280 days to identify and contain a breach. That’s more than 9 months that an attacker could be sitting on your network planning their coup de grâce.

So, what exactly are attackers doing with that time? How do they make their way onto the endpoint undetected?

It usually starts with a phish. No matter what report you choose to reference, most point out that around 90% of cyberattacks start with a phish. There are several different outcomes associated with a successful phish, ranging from compromised credentials to a remote access trojan running on the computer. For credential phishes, threat actors have most recently been leveraging customizable subdomains of well-known cloud services to host legitimate-looking authentication forms.

anatomy endpoint attack

The above screenshot is from a recent phish WatchGuard Threat Lab encountered. The link within the email was customized to the individual recipient, allowing the attacker to populate the victim’s email address into the fake form to increase credibility. The phish was even hosted on a Microsoft-owned domain, albeit on a subdomain (servicemanager00) under the attacker’s control, so you can see how an untrained user might fall for something like this.

In the case of malware phishes, attackers (or at least the successful ones) have largely stopped attaching malware executables to emails. Most people these days recognize that launching an executable email attachment is a bad idea, and most email services and clients have technical protections in place to stop the few that still click. Instead, attackers leverage dropper files, usually in the form of a macro-laced Office document or a JavaScript file.

The document method works best when recipients have not updated their Microsoft Office installations or haven’t been trained to avoid macro-enabled documents. The JavaScript method is a more recently popular method that leverages Windows’ built-in scripting engine to initiate the attack. In either case, the dropper file’s only job is to identify the operating system and then call home and grab a secondary payload.

That secondary payload is usually a remote-access trojan or botnet of some form that includes a suite of tools like keyloggers, shell script-injectors, and the ability to download additional modules. The infection isn’t usually limited to the single endpoint for long after this. Attackers can use their foothold to identify other targets on the victim’s network and rope them in as well.

It’s even easier if the attackers manage to get hold of a valid set of credentials and the organization hasn’t deployed multi-factor authentication. It allows the threat actor to essentially walk right in through the digital front door. They can then use the victim’s own services – like built-in Windows scripting engines and software deployment services – in a living-off-the-land attack to carry out malicious actions. We commonly see threat actors leverage PowerShell to deploy fileless malware in preparation to encrypt and/or exfiltrate critical data.

The WatchGuard Threat Lab recently identified an ongoing infection while onboarding a new customer. By the time we arrived, the threat actor had already been on the victim’s network for some time thanks to compromising at least one local account and one domain account with administrative permissions. Our team was not able to identify how exactly the threat actor obtained the credentials, or how long they had been present on the network, but as soon as our threat hunting services were turned on, indicators immediately lit up identifying the breach.

In this attack, the threat actors used a combination of Visual Basic Scripts and two popular PowerShell toolkits – PowerSploit and Cobalt Strike – to map out the victim’s network and launch malware. One behavior we saw, which came from Cobalt Strike’s shellcode decoder, enabled the threat actors to download malicious commands, load them into memory, and execute them directly from there, without the code ever touching the victim’s hard drive. These fileless malware attacks can range from difficult to impossible to detect with traditional endpoint anti-malware engines that rely on scanning files to identify threats.

anatomy endpoint attack

Elsewhere on the network, our team saw the threat actors using PsExec, a legitimate Windows administration tool from Microsoft’s Sysinternals suite, to launch a remote access trojan with SYSTEM-level privileges thanks to the compromised domain admin credentials. The team also identified the threat actors’ attempts to exfiltrate sensitive data to a Dropbox account using a command-line cloud storage management tool.

Fortunately, they were able to identify and clean up the malware quickly. However, without the victim changing the stolen credentials, the attacker could likely have re-initiated their attack at will. Had the victim deployed an advanced Endpoint Detection and Response (EDR) engine as part of their layered security strategy, they could have stopped or slowed the damage created by those stolen credentials.

Attackers are targeting businesses indiscriminately, even small organizations. Relying on a single layer of protection simply no longer works to keep a business secure. No matter the size of an organization, it’s important to adopt a layered security approach that can detect and stop modern endpoint attacks. This means protections from the perimeter down to the endpoint, including user training in the middle. And don’t forget about the role of multi-factor authentication (MFA) – it could be the difference between stopping an attack and becoming another breach statistic.

How to avoid the most common mistakes of an identity governance program

It’s a story I have seen play out many times over two decades in the Identity and Access Management (IAM) field: An organization determines that it needs a more robust Identity Governance and Administration (IGA) program, they kick off a project to realize this goal, but after a promising start, the whole effort falls apart within six to twelve months.

IGA program

What an IGA program does

People become frustrated about wasted time and money. The audit and compliance teams who need IGA grow disappointed, perhaps even anxious. The regulatory risks they are trying to mitigate continue to loom large. Finger pointing ensues, arguing and discord follow.

Don’t get me wrong, a fine-tuned and efficient IGA program is well worth it. An IGA program helps ensure an organization’s data security, assists in completing audits, and supports significant boosts in operational agility.

The three common IGA project mistakes

The specific things that can go wrong vary by company, but they follow a sadly familiar pattern. Three common mistakes stand out in particular:

1. Underestimating the costs

An IGA project is an IT project, but it’s so much more. Viewing IGA simply as a matter of buying and installing software is an avoidable error. To work, IGA usually needs advisory services on top of in-house resources. Application integration costs may get under-counted as well, as project stakeholders fail to grasp the interconnected nature of the IGA process. For example, the IGA solution usually has to link with HR management systems and so forth. Training costs can be higher than people predict. Finding people with IGA skills also tends to take longer and cost more than anyone might guess at the outset.

2. Not building for user experience (UX)

IGA end users need to feel comfortable and confident on the system, or the whole project finds itself in jeopardy. People want to get their jobs done. They generally don’t have the time or interest in learning a new system and lexicon. If using the solution isn’t a basically effortless part of their day-to-day work lives, users will seek ways around it. They’ll call the help desk or contact a colleague, claiming they cannot complete IGA tasks. This sort of slow-building mutiny can wreck an IGA program.

3. Failing to secure or sustain C-suite sponsorship

IGA projects can be challenging. They require collaboration across departments. Strong executive sponsorship is critical for success for overcoming potential points of friction. In my experience, one can predict that trouble is on the horizon as soon as the executive sponsor stops coming to status meetings. This usually isn’t the executive’s fault. He or she is simply quite busy and has not been properly briefed on the importance of his or her role in ensuring a good outcome for the investment in IGA.

How to avoid IGA project problems

These pitfalls need not sink an IGA program. Being conscious of the potential problems and addressing them in the project planning stage helps a great deal. Budgeting accurately, thinking through UX, and making expectations clear with executive sponsors provide the basis for IGA success.

There’s also a new approach to IGA implementation that can make a huge difference. It involves integrating the IGA toolset with the existing application platform, a system that everyone is already using for IT-related workloads. These platforms exist in most organizations; ServiceNow is a popular example.

Building IGA on top of an existing platform delivers a number of distinct advantages for the program:

  • It maximizes the current investment in the platform
  • It’s less expensive than purchasing an IGA solution that is its own stack—a savings that applies to both the build and manage phases of its life cycle
  • No new skillsets are required, either, which avoids the costly recruit/train/retain struggles that can arise with standalone IGA solutions
  • Changes to the IGA system are more economical as well when it runs atop a familiar incumbent platform in the organization.

Employees are already using the platform interfaces, so there are few training issues or UX problems inherent in launching an IGA program that is seamlessly integrated into existing processes. Knowledge workers know the interfaces and workflows to request and approve identity governance services. They won’t have to bookmark a new URL or learn a new way of doing things, speeding overall acceptance.

Application platforms are also increasingly becoming one of the main vehicles for digital transformation (DX) projects. This makes sense, given the importance of IT agility and smooth IT operations in the DX vision. By linking IGA with DX, it becomes easier to attract sustainable executive interest in the IGA program.

C-level executives sponsor DX projects; bonuses may hinge on them. They know DX projects are ambitious and potential generators of strong return on investment. With IGA built into DX, the identity governance program will not be neglected.

Avoiding the common pitfalls inherent in launching an IGA program will take some focus and work, but the resulting benefits are well worth the effort. As you look to refresh or improve your current IGA program, consider leveraging what platforms you already have in place to achieve the most successful outcome.

Three common mistakes in ransomware security planning

As the frequency and intensity of ransomware attacks increase, one thing is becoming abundantly clear: organizations can do more to protect themselves. Unfortunately, most organizations are dropping the ball. Most victims receive adequate warning of potential vulnerabilities yet are woefully unprepared to recover when they are hit. Here are just a few recent examples of both prevention and incident response failures:

  • Two months before the city of Atlanta was hit by ransomware in 2018, an audit identified over 1,500 severe security vulnerabilities.
  • Before the city of Baltimore suffered multiple weeks of downtime due to a ransomware attack in 2019, a risk assessment identified a severe vulnerability due to servers running an outdated operating system (and therefore lacking the latest security patches) and insufficient backups to restore those servers, if necessary.
  • Honda was attacked this past June, and public access to Remote Desktop Protocol (RDP) for some machines may have been the attack vector leveraged by hackers. Complicating matters further, there was a lack of adequate network segmentation.

Other notable recent victims include Travelex, Blackbaud, and Garmin. In all these examples, these are large organizations that should have very mature security profiles. So, what’s the problem?

Three common mistakes lead to inadequate prevention and ineffective response, and they are committed by organizations of all sizes:

1. Failing to present risk in business terms, which is crucial to persuading business leaders to provide appropriate funding and policy buy-in.

2. Not going deep enough in how you test your ransomware readiness.

3. Insufficient DR planning that fails to account for a ransomware threat that could infect your backups.

Common mistake #1 – Failing to present security risk in business terms to get funding and policies

I was reminded of just how easy it is to bypass basic security (e.g., firewalls, email security gateways, and anti-malware solutions) when I recently watched a ransomware attack simulation: the initial pre-ransomware payload got a foot in the door, followed by techniques such as reverse DNS lookups to identify an Active Directory service that has the necessary privileges to really do some damage, and finally Kerberoasting to take control.

No organization is bulletproof, but attacks take time – which means early detection with more advanced intrusion detection and a series of roadblocks for hackers to overcome (e.g., greater network segmentation, appropriate end-user restrictions, etc.) are crucial for prevention.

This is not news to security practitioners. But convincing business leaders to make additional security investments is another challenge altogether.

How to fix mistake #1: Quantify business impact to enable an informed cost-based business decision

Tighter security controls (via technology and policy) are friction for the business. And while no business leader wants to fall victim to ransomware, budgets are not unlimited.

To justify the extra cost and tighter controls, you need to make a business case that presents not just risk, but also quantifiable business impact. Help senior leadership compare apples to apples – in this case, the cost of improved security (capex and potential productivity friction) vs. the cost of a security breach (the direct cost of extended downtime and long-term cost of reputational damage).

Assessing business impact need not be onerous. Fairly accurate impact assessments of critical systems can easily be completed in one day. Focus on a few key applications and datasets to get a representative sample, and get a rough estimate of cost, goodwill, compliance, and/or health & safety impacts, depending on your organization. You don’t have to be exact at this stage to recognize the potential for a critical impact.
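
As a rough, back-of-the-envelope illustration of such an assessment, the sketch below compares an assumed annual security investment against the expected annual loss from a breach. Every figure is a placeholder to be replaced with your organization's own estimates.

```python
# Illustrative breach-impact estimate; all figures are placeholder assumptions.
downtime_hours = 72          # assumed outage for critical systems
revenue_per_hour = 25_000    # assumed revenue dependent on those systems
recovery_cost = 150_000      # assumed incident response and rebuild cost
reputational_cost = 200_000  # assumed long-term goodwill and churn impact

breach_impact = downtime_hours * revenue_per_hour + recovery_cost + reputational_cost

annual_likelihood = 0.2        # assumed probability of a successful attack in a given year
security_investment = 250_000  # assumed yearly cost of the proposed controls

expected_annual_loss = breach_impact * annual_likelihood

print(f"Estimated single-breach impact: ${breach_impact:,.0f}")
print(f"Expected annual loss:           ${expected_annual_loss:,.0f}")
print(f"Proposed annual investment:     ${security_investment:,.0f}")
```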

Below is an example of the resulting key talking points for a presentation to senior leadership:

ransomware security planning

Source: Info-Tech Research Group, Business Impact Analysis Summary Example, 2020

Common mistake #2 – Not going deep enough in testing ransomware readiness

If you aren’t already engaged in penetration testing to validate security technology and configuration, start now. If you can’t secure funding, re-read How to fix mistake #1.

Where organizations go wrong is stopping at penetration testing and not validating the end-to-end incident response. This is especially critical for larger organizations that need to quickly coordinate assessment, containment, and recovery with multiple teams.

How to fix mistake #2: Run in-depth tabletop planning exercises to cover a range of what-if scenarios

The problem with most tabletop planning exercises is that they are designed to simply validate your existing incident response plan (IRP).

Instead, dive into how an attack might take place and what actions you would take to detect, contain, and recover from the attack. For example, where your IRP says to check for other infected systems, get specific about how that would happen. What tools would you use? What data would you review? What patterns would you look for?

It is always an eye-opener when we run tabletop planning with our clients. I’ve yet to come across a client that had a perfect incident response plan. Even organizations with a mature security profile and documented response plans often find gaps such as:

  • Poor coordination between security and infrastructure staff, with unclear handoffs during the assessment phase.
  • Existing tools aren’t fully leveraged (e.g., configuring auto-contain features).
  • Limited visibility into some systems (e.g., IoT devices and legacy systems).

Common mistake #3 – Backup strategies and DR plans don’t account for ransomware scenarios

Snapshots and standard daily backups on rewritable media, as in the example below, just aren’t good enough:

ransomware security planning

Source: Info-Tech Research Group, Outdated Backup Strategy Example, 2020

The ultimate safety net if hackers get in is your ability to restore from backup or failover to a clean standby site/system.

However, a key goal of a ransomware attack is to disable your ability to recover – that means targeting backups and standby systems, not just the primary data. If you’re not explicitly guarding against ransomware all the time, the money you’ve invested to minimize data loss due to traditional IT outages – from drive failures to hurricanes – becomes meaningless.

How to fix mistake #3: Apply defense-in-depth to your backup and DR strategy

The philosophy of defense-in-depth is best practice for security, and the same philosophy needs to be applied to your backups and recovery capabilities.

Defense-in-depth is not just about trying to catch what slips through the cracks of your perimeter security. It’s also about buying time to detect, contain, and recover from the attack.

Achieve defense-in-depth with the following approach:

1. Create multiple restore points so that you can be more granular with your rollback and not lose as much data.

2. Reduce the risk of infected backups by using different storage media (e.g., disk, NAS, and/or tape) and backup locations (i.e., offsite backups). If you can make the attackers jump through more hoops, it gives you a greater chance of detecting the attack before it infects all of your backups. Start with your most critical data and design a solution that considers business need, cost, and risk tolerance.

3. Invest in solutions that generate immutable backups. Most leading backup solutions offer options to ensure backups are immutable (i.e., can’t be altered after they’re written). Expect the cost to be higher, of course, so again consider business impact when deciding what needs higher levels of protection.
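
To make the immutability point concrete, the sketch below uses Amazon S3 Object Lock in compliance mode via boto3 as one example of write-once backup storage. The bucket name, retention period, and file name are assumptions, and other backup platforms offer equivalent options.

```python
# Illustrative immutable-backup setup using S3 Object Lock (compliance mode).
import boto3

s3 = boto3.client("s3")
BUCKET = "example-backup-bucket"  # hypothetical bucket name

# Object Lock must be enabled at bucket creation time.
s3.create_bucket(Bucket=BUCKET, ObjectLockEnabledForBucket=True)

# Default retention: objects cannot be deleted or overwritten for 30 days, even by admins.
s3.put_object_lock_configuration(
    Bucket=BUCKET,
    ObjectLockConfiguration={
        "ObjectLockEnabled": "Enabled",
        "Rule": {"DefaultRetention": {"Mode": "COMPLIANCE", "Days": 30}},
    },
)

# Every backup written to the bucket now inherits the retention rule.
with open("backup-2020-10-01.tar.gz", "rb") as f:  # hypothetical backup artifact
    s3.put_object(Bucket=BUCKET, Key="backup-2020-10-01.tar.gz", Body=f)
```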

Summary: Ransomware security planning

Ransomware attacks can be sophisticated, and basic security practices just aren’t good enough. Get buy-in from senior leadership on what it takes to be ransomware-ready by presenting not just the risk of attack, but the potential extensive business impact. Assume you’ll get hit, be ready to respond quickly, test realistically, and update your DR strategy to encompass this fast-evolving threat.

How do I select a data storage solution for my business?

We live in the age of data. We are constantly producing it, analyzing it, figuring out how to store and protect it, and, hopefully, using it to refine business practices. Unfortunately, 58% of organizations make decisions based on outdated data.

While enterprises are rapidly deploying technologies for real-time analytics, machine learning and IoT, they are still utilizing legacy storage solutions that are not designed for such data-intensive workloads.

To select a suitable data storage solution for your business, you need to think about a variety of factors. We’ve talked to several industry leaders to get their insight on the topic.

Phil Bullinger, SVP and General Manager, Data Center Business Unit, Western Digital

Selecting the right data storage solution for your enterprise requires evaluating and balancing many factors. The most important is aligning the performance and capabilities of the storage system with your critical workloads and their specific bandwidth, application latency and data availability requirements. For example, if your business wants to gain greater insight and value from data through AI, your storage system should be designed to support the accelerated performance and scale requirements of analytics workloads.

Storage systems that maximize the performance potential of solid state drives (SSDs) and the efficiency and scalability of hard disk drives (HDDs) provide the flexibility and configurability to meet a wide range of application workloads.

Your applications should also drive the essential architecture of your storage system, whether directly connected or networked, whether required to store and deliver data as blocks, files, objects or all three, and whether the storage system must efficiently support a wide range of workloads while prioritizing the performance of the most demanding applications.

Consideration should be given to your overall IT data management architecture to support the scalability, data protection, and business continuity assurance required for your enterprise, spanning from core data centers to those distributed at or near the edge and endpoints of your enterprise operations, and integration with your cloud-resident applications, compute and data storage services and resources.

Ben Gitenstein, VP of Product Management, Qumulo

When searching for the right data storage solution to support your organizational needs today and in the future, it’s important to select a solution that is trusted, scalable to secure demanding workloads of any size, and ensures optimal performance of applications and workloads both on premises and in complex, multi-cloud environments.

With the recent pandemic, organizations are digitally transforming faster than ever before, and leveraging the cloud to conduct business. This makes it more important than ever that your storage solution has built in tools for data management across this ecosystem.

When evaluating storage options, be sure to do your homework and ask the right questions. Is it a trusted provider? Would it integrate well within my existing technology infrastructure? Your storage solution should be easy to manage and meet the scale, performance and cloud requirements for any data environment and across multi-cloud environments.

Also, be sure the storage solution gives IT control over how they manage storage capacity needs and delivers real-time insight into analytics and usage patterns, so they can make smart storage allocation decisions and maximize the organization’s storage budget.

David Huskisson, Senior Solutions Manager, Pure Storage

Data backup and disaster recovery features are critically important when selecting a storage solution for your business, as no organization is immune to ransomware attacks. When systems go down, they need to be recovered as quickly and safely as possible.

Look for solutions that offer simplicity in management, can ensure backups are viable even when admin credentials are compromised, and can be restored quickly enough to greatly reduce major organizational or financial impact.

Storage solutions that are purpose-built to handle unstructured data are a strong place to start. By definition, unstructured data is unpredictable data that can take any form, size or shape, and can be accessed in any pattern. A purpose-built platform can accelerate small, large, random or sequential workloads, consolidate a wide range of workloads on a unified fast file and object storage platform, and maintain its performance even as the amount of data grows.

If you have an existing backup product, you don’t need to rip and replace it. There are storage platforms with robust integrations that work seamlessly with existing solutions and offer a wide range of data-protection architectures so you can ensure business continuity amid changes.

Tunio Zafer, CEO, pCloud

Bear in mind: your security team needs to assist. Answer these questions to find the right solution: Do you need ‘cold’ storage or cloud storage? If you’re looking to only store files for backup, you need a cloud backup service. If you’re looking to store, edit and share, go for cloud storage. Where are their storage servers located? If your business is located in Europe, the safest choice is a storage service based in Europe.

Best case scenario – your company is going to grow. Look for a storage service that offers scalability. What is their data privacy policy? Research whether someone can legally access your data without your knowledge or consent. Switzerland has one of the strictest data privacy laws globally, so choosing a Swiss-based service is a safe bet. How is your data secured? Look for a service that offers robust encryption in-transit and at-rest.

Client-side encryption means that your data is secured on your device and is transferred already encrypted. What is their support package? At some point, you’re going to need help. A data storage service with a support package that’s included for free and answers within 24 hours is preferred.

Working together to secure our expanding connected health future

Securing medical devices is not a new challenge. Former Vice President Cheney, for example, had the wireless capabilities of a defibrillator disabled when implanted near his heart in 2007, and hospital IT departments and health providers have for years secured medical devices to protect patient data and meet HIPAA requirements.

connected health

With the expansion of security perimeters, the surge in telehealth usage (particularly during COVID-19), and proliferation in the number and types of connected technologies, healthcare cybersecurity has evolved into a more complex and urgent effort.

Today, larger hospital systems have more than 350,000 medical devices running simultaneously. On top of this, millions of additional connected devices are maintained by the patients themselves. Over the next 10 years, it’s estimated the number of connected medical devices could increase to roughly 50 billion, driven by innovations such as 5G, edge computing, and more. This rise in connectivity has increased the threat of cyberattacks not just to patient data, but also to patient safety. Vulnerabilities in healthcare technology (e.g., an MRI machine or pacemaker) can lead to patient harm if diagnoses are delayed or the right treatments don’t get to the right people.

What can the healthcare industry do to strengthen their defenses today? How can they lay the groundwork for more secure devices and networks tomorrow?

The challenges are interconnected. The solutions cannot be siloed, and collaboration between manufacturers, doctors, healthcare delivery organizations and regulators is more critical now than ever before.

Device manufacturers: Integrating security into product design

Many organizations view medical device cybersecurity as protecting technology while it is deployed as part of a local network. Yet medical devices also need to be designed and developed with mobile and cloud security in mind, with thoughtful consideration about the patient experience. It is especially important we take this step as medical technology moves beyond the four walls of the hospital and into the homes of patients. The connected device itself needs to be secure, not just the network surrounding it.

We also need greater visibility and transparency across the medical device supply chain—a “software bill of materials.” The multicomponent nature of many medical products, such as insulin pumps or pacemakers, makes the final product feel like a black box: hospitals and users know what it’s intended to do, but they don’t have much understanding of the individual components that make everything work. That makes it difficult to solve cybersecurity problems as they arise.

According to the 2019 HIMSS Cybersecurity Survey, just over 15% of significant security incidents originated with medical device problems in hospitals or with vendor medical devices. Some of these incidents led to ransomware attacks that exposed vulnerabilities, with healthcare providers and device makers scrambling to figure out which of their products were at risk while their systems were under threat. A software bill of materials would have helped them respond quickly to security, license, and operational risks.

Healthcare delivery organizations: Prioritizing preparedness and patient education

Healthcare providers, for their part, need to strengthen their threat awareness and preparedness, thinking about security from device procurement all the way to the sunsetting of legacy devices, which can extend over years and decades.

It’s currently not uncommon for healthcare facilities to use legacy technology that is 15 to 20 years old. Many of these devices are no longer supported and their security doesn’t meet the baseline of today’s evolving threats. However, as there is no replacement technology that serves the same functions, we need to provide heightened monitoring of these devices.

Threat modeling can help hospitals and providers understand their risks and increase resilience. Training and preparedness exercises are imperative in another critical area of cybersecurity: the humans operating the devices. Such exercises can put doctors, for instance, in an emergency treatment scenario with a malfunctioning device, and the discussions that follow provide valuable opportunities to educate, build awareness of, and proactively prepare for cyber threats.

Providers might consider “cybersecurity informed consent” to educate patients. When a patient signs a form before a procedure that acknowledges potential risks like infection or side effects, cyber-informed consent could include risks related to data breaches, denial of service attacks, ransomware, and more. It’s an opportunity to both manage risk and engage patients in conversations about cybersecurity, increasing trust in the technology that is essential for their health.

Regulators: Connecting a complex marketplace

The healthcare industry in the US is tremendously complex, comprised of hundreds of large healthcare systems, thousands of groups of physician practices, public and private payers, medical device manufacturers, software companies, and so on.

This expanding healthcare ecosystem can make it difficult to coordinate. Groups like the Food & Drug Administration (FDA) and the Healthcare Sector Coordinating Council have been rising to the challenge.

They’ve assembled subgroups and task forces in areas like device development and the treatment of legacy technologies. They’ve been reaching out to hospitals, patients, medical device manufacturers, and others to strengthen information-sharing and preparedness, to move toward a more open, collaborative cybersecurity environment.

Last year, the FDA issued a safety communication to alert health care providers and patients about cybersecurity vulnerabilities identified in a wireless telemetry technology used for communication that impacted more than 20 types of implantable cardiac devices, programmers, and home monitors. Later in 2019, the same device maker recalled thousands of insulin pumps due to unpatchable cyber vulnerabilities.

These are but two examples of many that demonstrate not only the impact of cybersecurity to patient health but to device makers and the healthcare system at large. Connected health should give patients access to approved technologies that can save lives without introducing risks to patient safety.

As the world continues to realize the promise of connected technologies, we must monitor threats, manage risks, and increase our network resilience. Working together to incorporate cybersecurity into device design, industry regulations, provider resilience, and patient education are where we should start.

Contributing author: Shannon Lantzy, Chief Scientist, Booz Allen Hamilton.

Why developing cybersecurity education is key for a more secure future

Cybersecurity threats are growing every day, whether they are aimed at consumers, businesses or governments. The pandemic has shown us just how critical cybersecurity is to the successful operation of our respective economies and our individual lifestyles.

developing cybersecurity education

The rapid digital transformation it has forced upon us has seen us rely almost totally on the internet, ecommerce and digital communications to do everything from shopping to working and learning. It has brought into stark focus the threats we all face and the importance of cybersecurity skills at every level of society.

European Cybersecurity Month is a timely reminder that we must not become complacent and must redouble our efforts to stay safe online and bolster the cybersecurity skills base in society. This is imperative not only to manage the challenges we face today, but to ensure we can rise to the next wave of unknown, sophisticated cybersecurity threats that await us tomorrow.

Developing cybersecurity education at all levels, encouraging more of our students to embrace STEM subjects at an early age, and educating consumers and the elderly on how to spot and avoid scams are all critical to managing the challenge we face. The urgency and need to build our professional cybersecurity workforce is paramount to a safe and secure cyber world.

With a global skills gap of over four million, the cybersecurity professional base must grow substantially now in the UK and across mainland Europe to meet the challenge facing organisations, at the same time as we lay the groundwork to welcome the next generation into cybersecurity careers. That means a stronger focus on adult education, professional workplace training and industry-recognised certification.

At this key moment in the evolution of digital business and the changes in the way society functions day-to-day, certification plays an essential role in providing trust and confidence in knowledge and skills. Employers, government, law enforcement – whatever the function, these organisations need assurance that cybersecurity professionals have the skills, expertise and situational fluency needed to deal with current and future needs.

Certifications provide cybersecurity professionals with this important verification and validation of their training and education, ensuring organisations can be confident that current and future employees holding a given certification have an assured and consistent skillset wherever in the world they are.

The digital skills focus of European Cybersecurity Month is a reminder that there is a myriad of evolving issues that cybersecurity professionals need to be proficient in, including data protection, privacy and cyber hygiene, to name just a few.

However, certifications are much more than a recognised and trusted mark of achievement. They are a gateway to ensuring continuous learning and development. Maintaining a cybersecurity certification, combined with professional membership, is evidence that professionals are constantly improving and developing new skills to add value to the profession and taking ownership of their careers. This new knowledge and understanding can be shared throughout an organisation to support security best practice, as well as ensuring cyber safety in our homes and communities.

Ultimately, we must remember that cybersecurity skills, education and best practice are not just a European issue, and neither are they a political issue. Rather, they are a global challenge that impacts every corner of society. Cybersecurity mindfulness needs to be woven into the DNA of everything we do, and it starts with everything we learn.

Why CIOs need to focus on password exposure, not expiration

The cybersecurity market is growing even in the midst of the pandemic-driven economic downturn, with spending predicted to reach $123 billion by the end of the year. While disruptive technologies are undoubtedly behind much of this market growth, companies cannot afford to overlook security basics.

focus on password exposure

Biometrics may be a media darling, but the truth is that passwords will remain the primary authentication mechanism for the foreseeable future. But while passwords may not be a cutting-edge security innovation, that’s not to suggest that CIOs don’t need to modernize their approach to password management.

Mandatory password resets

Employees’ poor password management practices are well-documented, with Google finding that 65% of people use the same password for multiple, if not all, online accounts. To circumvent the security risks associated with this behavior, companies have historically focused on periodic password resets. Seventy-seven percent of IT departments surveyed by Forrester in 2016 were expiring passwords for all staff on a quarterly basis.

This approach made sense in the early days of the digital age, when employees typically only had a handful of passwords to remember. I’d argue that times had already changed by 2016, but we are certainly in an entirely different landscape today. As digital transformation accelerates and employees are faced with managing multiple passwords for all of their accounts, it’s simply no longer realistic or wise to force frequent password resets.

It’s time to retire password expiration

Both NIST and Microsoft have recently come out against forced periodic password resets for a variety of reasons, including:

  • Password expiration eats up significant resources and budget. According to Forrester, a single password reset costs $70 of help desk labor. When you multiply this by the average number of employees in a typical organization, it’s easy to see how password expiration can become an unwieldy expense and add significant pressure on overburdened IT teams.
  • It encourages poor cybersecurity practices. When users are frequently asked to change passwords they typically create weaker ones—for example, slight variants of the original password or the same root word or phrase with different special characters for each account.
  • The practice impedes efficiency and introduces friction. Forced resets have a negative impact on productivity as employees often struggle to remember their passwords. One recent study found that 78% of people had to reset a password they forgot in the past 90 days, eating up valuable time that could have better been deployed elsewhere. In addition, the frustration associated with frequent changes can cause employees to seek a workaround or engage in poor security practices like sharing passwords among colleagues or reusing personal passwords for corporate accounts.

Exposure, not expiration

The fundamental purpose of passwords is to ensure that no one but the authorized user has access to the account or system in question. As such, it follows that password security has evolved from a focus on expiration to a focus on exposure. If credentials are secure, there is no reason for companies to incur the cost and other issues associated with forcing a reset. It’s critical that CIOs adopt this mindset and evaluate how they can continuously screen passwords to ensure their integrity.

Putting NIST’s recommendations into practice

According to NIST, companies should compare passwords “ …against a list that contains values known to be commonly-used, expected or compromised… The list MAY include, but is not limited to:

  • Passwords obtained from previous breach corpuses
  • Dictionary words
  • Repetitive or sequential characters
  • Context-specific words, such as the name of the service, the username, and derivatives thereof.”

Given that multiple data breaches occur in virtually every sector on a daily basis, companies need a dynamic, automated solution that can cross-reference proposed passwords against known breach data. In this environment, it’s highly likely that a password could be secure at its creation but become compromised down the road. As such, CIOs also need to monitor password security on a daily basis and take steps to protect sensitive information if a compromise is detected.

Depending on the nature of the account and the employee’s privilege, this could take a variety of forms, including:

  • Stepping up MFA or additional authentication mechanisms
  • Forcing a password reset
  • Temporarily suspending access to the account

Because these actions occur only if a compromise has been detected, this modern approach to credential screening eliminates the unnecessary cost and friction associated with password expiration.
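
As an illustration of the exposure-screening step described above, the sketch below checks a candidate password against known breach corpuses using the Have I Been Pwned “Pwned Passwords” range API, which preserves k-anonymity by sending only the first five characters of the password’s SHA-1 hash. The example password is, of course, a placeholder.

```python
# Illustrative password exposure check against the Pwned Passwords range API.
import hashlib
import requests


def breach_count(password: str) -> int:
    """Return how many times the password appears in known breach data (0 if never)."""
    sha1 = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    prefix, suffix = sha1[:5], sha1[5:]
    resp = requests.get(f"https://api.pwnedpasswords.com/range/{prefix}", timeout=10)
    resp.raise_for_status()
    for line in resp.text.splitlines():
        candidate, count = line.split(":")
        if candidate == suffix:
            return int(count)
    return 0


if breach_count("Summer2020!") > 0:  # placeholder candidate password
    print("Password appears in breach data – reject it or trigger a reset.")
```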

Protecting the password layer in the new normal

Replacing password expiration with password exposure will be particularly critical as CIOs manage an increasingly hybrid workforce. With Gartner finding that 74% of organizations plan to shift some employees to permanent remote work positions, it’s likely that users will be creating new digital accounts and accessing different services online.

A modern password management approach that continuously screens for any credential compromise is the best way that organizations can secure this complex environment while simultaneously encouraging productivity and reducing help desk costs.

Three immediate steps to take to protect your APIs from security risks

In one form or another, APIs have been around for years, bringing the benefits of ease of use, efficiency and flexibility to the developer community. The advantage of using APIs for mobile and web apps is that developers can build and deploy functionality and data integrations quickly.

API security posture

But there is a huge downside to this approach. Undermining the power of an API-driven development methodology are shadow, deprecated and non-conforming APIs that, when exposed to the public, introduce the risk of data loss, compromise or automated fraud.

The stateless nature of APIs and their ubiquity make protecting them increasingly difficult, largely because malicious actors can leverage the same developer benefits – ease of use and flexibility – to easily execute account takeovers, credential stuffing, fake account creation or content scraping. It’s no wonder that Gartner identified API security as a top concern for 50% of businesses.

Thankfully, it’s never too late to get your API footprint in order to better protect your organization’s critical data. Here are a few easy steps you can follow to mitigate API security risks immediately:

1. Start an organization-wide conversation

If your company is having conversations around API security at all, it’s likely that they are happening in a fractured manner. If there’s no larger, cohesive conversation, then various development and operational teams could be taking conflicting approaches to mitigating API security risks.

For this reason, teams should discuss how they can best work together to support API security initiatives. As a basis for these meetings, teams should refer to the NIST Cybersecurity Framework, as it’s a great way to develop a shared understanding of organization-wide cybersecurity risks. The NIST CSF will help the collective team gain a baseline awareness of the APIs used across the organization and pinpoint potential gaps in the operational processes that support them, so that companies can work towards improving their API strategy immediately.

2. Ask (& answer) any outstanding questions as a team

To improve an organization’s API security posture, it’s critical that outstanding questions are asked and answered immediately so that gaps in security are reduced and closed. When posing these questions to the group, consider the API assets you have overall, the business environment, governance, risk assessment, risk management strategy, access control, awareness and training, anomalies and events, continuous security monitoring, detection processes, etc. Leave no stone unturned. Here are a few suggested questions to use as a starting point as you work on the next step in this process towards getting your API security house in order:

  • How many APIs do we have?
  • How were they developed? Which are open-source, custom built or third-party?
  • Which APIs are subject to legal or regulatory compliance?
  • How do we monitor for vulnerabilities in our APIs?
  • How do we protect our APIs from malicious traffic?
  • Are there APIs with vulnerabilities?
  • What is the business impact if the APIs are compromised or abused?
  • Is API security a part of our on-going developer training and security evangelism?

Once any security holes have been identified through a shared understanding, the team then can collectively work together to fill those gaps.

3. Strive for complete and continuous API security and visibility

Visibility is critical to immediate and continuous API security. By going through step one and two, organizations are working towards more secure APIs today – but what about tomorrow and in the years to come as your API footprint expands exponentially?

Consider implementing a visibility and monitoring solution to help you oversee this security program on an ongoing basis, so that your organization can feel confident in having a strong and comprehensive API security posture that grows and adapts as your number of APIs expand and shift. The key components to visibility and monitoring?

They include centralized visibility and inventory of all APIs; a detailed view of API traffic patterns; discovery of APIs transmitting sensitive data; continuous API specification conformance assessment; validated authentication and access control programs; and automatic risk analysis based on predefined criteria. Continuous, runtime visibility into how many APIs an organization has, who is accessing them, where they are located and how they are being used is key to API security.
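
As a small first step toward that centralized inventory, the sketch below enumerates the paths and methods declared in OpenAPI specifications. The spec location is an assumption, and this only surfaces documented APIs – shadow and deprecated APIs still require traffic-level discovery.

```python
# Illustrative API inventory built from OpenAPI/Swagger specs (documented APIs only).
import glob
import yaml  # pip install pyyaml

HTTP_METHODS = {"get", "put", "post", "delete", "patch", "options", "head"}

inventory = []
for spec_path in glob.glob("specs/**/*.yaml", recursive=True):  # assumed spec location
    with open(spec_path) as f:
        spec = yaml.safe_load(f)
    service = spec.get("info", {}).get("title", spec_path)
    for path, operations in (spec.get("paths") or {}).items():
        for method in operations:
            if method.lower() in HTTP_METHODS:
                inventory.append((service, method.upper(), path))

for service, method, path in sorted(inventory):
    print(f"{service:30} {method:7} {path}")
```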

As organizations continue to expand their use of APIs to drive their business, it’s crucial that companies consider every path malicious actors might take to attack their organization’s critical data.

MITRE Shield shows why deception is security’s next big thing

Seasoned cybersecurity pros will be familiar with MITRE. Known for its MITRE ATT&CK framework, MITRE helps develop threat models and defensive methodologies for both the private and public sector cybersecurity communities.

MITRE Shield

MITRE recently added to their portfolio and released MITRE Shield, an active defense knowledge base that captures and organizes security techniques in a way that is complementary to the mitigations featured in MITRE ATT&CK.

The MITRE Shield framework focuses on active defense and adversary engagement, which takes the passivity out of network defense. MITRE defines active defense as ranging from “basic cyber defensive capabilities to cyber deception and adversary engagement operations,” which “allow an organization to not only counter current attacks, but also learn more about that adversary and better prepare for new attacks in the future.”

This is the first time that deception has been proactively referenced in a framework from MITRE, and yes, it’s a big deal.

As the saying goes, the best defense is a good offense. Cybercriminals continue to evolve their tactics, and as a result, traditional security and endpoint protections are proving insufficient to defend against today’s sophisticated attackers. Companies can no longer sit back and hope that firewalls or mandatory security training will be enough to protect critical systems and information. Instead, they should consider the “active defense” tactics called for in MITRE Shield to help level the playing field.

Why deception?

The key to deception technology – and why it’s so relevant now – is that it goes beyond simple detection to identify and prevent lateral movement, notoriously one of the most difficult aspects of network defense. The last several months have been especially challenging for security teams, with the pandemic and the sudden shift to remote work leaving many organizations more vulnerable than before. Cybercriminals are acutely aware of this and have been capitalizing on the disruption to launch more attacks.

In fact, the number of data breaches in 2020 has almost doubled compared to the year before, with more than 3,950 incidents as of August. But what this number doesn’t account for are the breaches that may still be undetected, in which attackers have gained access to a company’s network and are performing reconnaissance for weeks, or potentially months, before actually launching an attack.

As they move through a network laterally, cybercriminals stealthily gather information about a company and its assets, allowing them to develop a plan for a more sophisticated and damaging attack down the line. This is where deception and active defense converge – hiding real assets (servers, applications, routers, printers, controllers and more) in a crowd of imposters that look and feel exactly like the real thing. In a deceptive environment, the attacker must be 100% right, otherwise they will waste time and effort collecting bad data in exchange for revealing their tradecraft to the defender.

Deception exists in a shadow network. Traps don’t touch real assets, making it a highly valued solution for even the most diverse environments, including IT, OT and Internet of Things devices. And because traps are not visible to legitimate users or systems and serve only to deceive attackers, they deliver high fidelity alerts and virtually no false positives.

How can companies embrace MITRE Shield using deception?

MITRE Shield currently contains 34 deception-based tactics, all mapped to one of MITRE’s eight active defense categories: Channel, Collect, Contain, Detect, Disrupt, Facilitate, Legitimize and Test. Approximately one third of suggested tactics in the framework are related to deception, which not only shows the power of deception as an active defense strategy, but also provides a roadmap for companies to develop a successful deception posture of their own.

There are three tiers of deceptive assets that companies should consider, depending on the level of forensics desired:

1. Low interaction, which consists of simple fake assets designed to divert cybercriminals away from the real thing, using up their time and resources.

2. Medium interaction, which offers greater insights into the techniques used by cybercriminals, allowing security teams to identify attackers and respond to the attack.

3. High interaction, which provides the most insight into attacker activity, leveraging extended interaction to collect information.

While a company doesn’t have to use all of the deception-based tactics outlined in MITRE Shield to prevent attacks, low interaction decoys are a good place to start, and can be deployed in a matter of minutes. Going forward, CISOs should consider whether it’s time to rethink their security strategy to include more active defense tactics, including deception.
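To make the low-interaction tier concrete, here is a minimal sketch of a decoy listener, assuming Python, an otherwise unused port and a plausible-looking banner. Because nothing legitimate ever connects to it, every hit it logs is effectively a high-fidelity alert. The port number, banner and log destination are illustrative assumptions, not a prescription from MITRE Shield or any vendor.

```python
# Minimal sketch of a low-interaction decoy: log every connection attempt.
# Port, banner and log file are illustrative assumptions.
import datetime
import logging
import socket

logging.basicConfig(filename="decoy_hits.log", level=logging.INFO)
DECOY_PORT = 2222  # hypothetical fake "SSH" service; no legitimate traffic expected

with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("0.0.0.0", DECOY_PORT))
    srv.listen()
    while True:
        conn, addr = srv.accept()
        with conn:
            # Any hit is suspicious by definition: nothing real lives here.
            timestamp = datetime.datetime.now(datetime.timezone.utc).isoformat()
            logging.info("decoy hit from %s:%s at %s", addr[0], addr[1], timestamp)
            conn.sendall(b"SSH-2.0-OpenSSH_7.4\r\n")  # plausible banner, nothing more
```

Commercial deception platforms go much further – distributing decoys across IT, OT and IoT segments and feeding alerts into the SOC – but even a toy like this shows why false positives are rare: by design, no one has a reason to be there.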

The lifecycle of a eureka moment in cybersecurity

It takes more than a single eureka moment to attract investor backing, especially in a notoriously high-stakes and competitive industry like cybersecurity.


While every seed-stage investor has their respective strategies for vetting raw ideas, my experience of the investment due diligence process involves a veritable wringer of rapid-fire, back-to-back meetings with cybersecurity specialists and potential customers, as well as rigorous market scoping by analysts and researchers.

As the CTO of a seed-stage venture capital firm entirely dedicated to cybersecurity, I spend a good portion of my time ideating alongside early-stage entrepreneurs and working through this process with them. To do this well, I’ve had to develop an internal seismometer for industry pain points and potential competitors, play matchmaker between tech geniuses and industry decision-makers, and peer down complex roadmaps to find the optimal point of convergence for good tech and good business.

Along the way, I’ve gained a unique perspective on the set of necessary qualities for a good idea to turn into a successful startup with significant market traction.

Just as a good idea doesn’t necessarily translate into a great product, the qualities of a great product don’t add up to a magic formula for guaranteed success. However, how well an idea performs in the categories I set out below can directly impact the confidence of investors and potential customers you’re pitching to. Therefore, it’s vital that entrepreneurs ask themselves the following before a pitch:

Do I have a strong core value proposition?

The cybersecurity industry is saturated with features passing themselves off as platforms. While the accumulated value of a solution’s features may be high, its core value must resonate with customers above all else. More pitches than I wish to count have left me scratching my head over a proposed solution’s ultimate purpose. Product pitches must lead with and focus on the solution’s core value proposition, and this proposition must be able to hold its own and sell itself.

Consider a browser security plugin with extensive features that include XSS mitigation, malicious website blocking, employee activity logging and download inspections. This product proposition may be built on many nice-to-have features, but, without a strong core feature, it doesn’t add up to a strong product that customers will be willing to buy. Add-on features, should they need to be discussed, ought to be mentioned as secondary or additional points of value.

What is my solution’s path to scalability?

Solutions must be scalable in order to reach as many customers as possible and avoid price hikes with reduced margins. Moreover, it’s critical to factor in the maintenance cost and “tech debt” of solutions that are environment-dependent on account of integrations with other tools or difficult deployments.

I’ve come across many pitches that fail to do this, and entrepreneurs who forget that such an omission can both limit their customer pool and eventually incur tremendous costs for integrations that are destined to lose value over time.

What is my product experience like for customers?

A solution’s viability and success lie in so much more than its outcome. Both investors and customers require complete transparency over the ease of use of a product in order for it to move forward in the pipeline. Frictionless and resource-light deployments are absolutely key, and should always account for the realities of inter-departmental politics. Remember, the requirement of additional hires for a company to use your product is a hidden cost that will ultimately reduce your margins.

Moreover, it can be very difficult for companies to rope in the necessary stakeholders across their organization to help your solution succeed. Finally, requiring hard-to-come-by resources for a POC, such as sensitive data, may set up your solution for failure if customers are reluctant to relinquish the necessary assets.

What is my solution’s time-to-value?

Successfully discussing a core value must eventually give way to achieving it. Satisfaction with a solution will always ultimately boil down to deliverables. From the moment your idea raises funds, your solution will be running against the clock to provide its promised value, successfully interact with the market and adapt itself where necessary.

The ability to demonstrate strong initial performance will draw in sought-after design partners and allow you to begin selling earlier. Not only are these sales necessary bolsters to your follow-on rounds, they also pave the way for future upsells to customers.

Where POCs are involved, it’s critical that the beta content installed by early customers delivers well, in order to drive conversions and complete the sales process. It’s equally important to create a roadmap for achieving this type of deliverability that can be clearly articulated to your stakeholders.

When will my solution deliver value?

It’s all too common for entrepreneurs to focus on “the ultimate solution”. This usually amounts to what they hope their solution will achieve some three years into development while neglecting the market value it can provide along the way. While investors are keen to embrace the big picture, this kind of entrepreneurial tunnel vision hurts product sales and future fundraising.

Early-stage startups must build their way up to solving big problems and reconcile with the fact that they are typically only equipped to resolve small ones until they reach maturity. This must be communicated transparently to avoid creating a false image of success in your market validation. Avoid asking “do you need a product that solves your [high-level problem]?” and ask instead “would you pay for a product that solves this key element of your [high-level problem]?”.

Unless an idea breaks completely new ground or looks to secure new tech, it’s likely to be an improvement to an already existing solution. In order to succeed at this, however, it’s critical to understand the failures and drawbacks of existing solutions before embarking on building your own.

Cybersecurity buyers are often open to switching over to a product that works as well as one they already use without its disadvantages. However, it’s incumbent on vendors to avoid making false promises and follow through on improving their output.

The cybersecurity industry is full of entrepreneurial genius poised to disrupt the current market. However, that potential can only be realized by designing products that address much more than mere security gaps.

The lifecycle of a good cybersecurity idea may start with tech, but it requires a powerful infusion of foresight and listening to make it through investor and customer pipelines. This requires an extraordinary amount of research in some very unexpected places, and one of the biggest obstacles ideating entrepreneurs face is determining precisely what questions to ask and gaining access to those they need to understand.

Working with well-connected investors who are dedicated to fostering those relationships and ironing out roadmap kinks during ideation is one of the surest ways to secure success. We must focus on building good ideas sustainably and remember that immediate partial value delivery is a small compromise towards building out the next great cybersecurity disruptor.

Hardware security: Emerging attacks and protection mechanisms

Maggie Jauregui’s introduction to hardware security is a fun story: she figured out how to spark, smoke, and permanently disable GFCIs (Ground Fault Circuit Interrupters – the two-button protections on plugs/sockets that prevent you from accidentally electrocuting yourself with your hair dryer) wirelessly with a walkie-talkie.


“I could also do this across walls with a directional antenna, and this also worked on AFCIs (Arc Fault Circuit Interrupters – part of the circuit breaker box in your garage), which meant you could drive by someone’s home and potentially turn off their lights,” she told Help Net Security.

This first foray into hardware security resulted in her first technical presentation ever at DEF CON and a follow-up presentation at CanSecWest about the effects of radio waves on modern platforms.

Jauregui says she’s always been interested in hardware. She started out as an electrical engineering major but switched to computer science halfway through university, and ultimately applied to be an Intel intern in Mexico.

“After attending my first hackathon — where I actually met my husband — I’ve continued to explore my love for all things hardware, firmware, and security to this day, and have been a part of various research teams at Intel ever since,” she added. (She’s currently a member of the corporation’s Platform Armoring and Resilience team.)

What do we talk about when we talk about hardware security?

Computer systems – a category that these days includes everything from phones and laptops to wireless thermostats and other “smart” home appliances – are a combination of many hardware components (a processor, memory, I/O peripherals, etc.) that, together with firmware and software, are capable of delivering services and enabling the connected, data-centric world we live in.

Hardware-based security typically refers to the defenses that help protect against vulnerabilities targeting these devices, and its main focus is to make sure that the different hardware components working together are architected, implemented, and configured correctly.

“Hardware can sometimes be considered its own level of security because it often requires physical presence in order to access or modify specific fuses, jumpers, locks, etc,” Jauregui explained. This is why hardware is also used as a root of trust.
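As a simplified illustration of why a root of trust matters, the sketch below shows the kind of check each boot stage performs on the next: verifying the signature on the next-stage image against a key that is already trusted before handing over control. It uses the Python cryptography library purely to demonstrate the idea, assumes an RSA signing key, and uses made-up file names; real roots of trust perform this check in ROM or dedicated silicon, not in Python.

```python
# Simplified chain-of-trust sketch: verify the next boot stage before running it.
# File names are illustrative; assumes an RSA signing key in PEM format.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import padding

# Public key that the (immutable) root of trust already trusts
with open("rot_public_key.pem", "rb") as f:
    trusted_key = serialization.load_pem_public_key(f.read())

with open("next_stage_firmware.bin", "rb") as f:
    image = f.read()
with open("next_stage_firmware.sig", "rb") as f:
    signature = f.read()

try:
    # Raises InvalidSignature if the image was not signed by the trusted key
    trusted_key.verify(signature, image, padding.PKCS1v15(), hashes.SHA256())
    print("Signature valid: transferring control to next stage")
except InvalidSignature:
    print("Signature check failed: halting boot")
```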

Hardware security challenges

But every hardware device has firmware – a tempting attack vector for many hackers. And though the industry has been making advancements in firmware security solutions, many organizations are still challenged by it and don’t know how to adequately protect their systems and data, she says.

She advises IT security specialists to be aware of firmware’s importance as an asset to their organization’s threat model, to make sure that the firmware on company devices is consistently updated, and to set up automated security validation tools that can scan for configuration anomalies within their platform and evaluate security-sensitive bits within their firmware.
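One lightweight way to act on that advice – sketched below under the assumption that firmware images can be captured from devices and that a known-good manifest of SHA-256 hashes exists – is to hash each captured image and compare it against the baseline, flagging anything that is missing or has drifted. This is an illustrative fragment, not a replacement for vendor update tooling or the configuration-scanning products Jauregui refers to.

```python
# Sketch: compare captured firmware images against a known-good SHA-256 baseline.
# File layout and manifest format are assumptions for illustration.
import hashlib
import json
import pathlib

# e.g. {"bmc.bin": "9f2c...", "uefi.rom": "41ab..."}
BASELINE = json.loads(pathlib.Path("firmware_baseline.json").read_text())

def sha256(path: pathlib.Path) -> str:
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()

for name, expected in BASELINE.items():
    image = pathlib.Path("captured_firmware") / name
    if not image.exists():
        print(f"{name}: MISSING")
    elif sha256(image) != expected:
        print(f"{name}: ANOMALY (hash mismatch)")
    else:
        print(f"{name}: OK")
```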

“Additionally, Confidential Computing has emerged as a key strategy for helping to secure data in use,” she noted. “It uses hardware memory protections to better isolate sensitive data payloads. This represents a fundamental shift in how computation is done at the hardware level and will change how vendors can structure their application programs.”

Finally, the COVID-19 pandemic has somewhat disrupted the hardware supply chain and has brought to the fore another challenge.

“Because a computing system is typically composed of multiple components from different manufacturers, each with its own level of scrutiny in relation to potential supply chain attacks, it’s challenging to verify the integrity across all stages of its lifecycle,” Jauregui explained.

“This is why it is critical for companies to work together on a validation and attestation solution for hardware and firmware that can be conducted prior to integration into a larger system. If the industry as a whole comes together, we can create more measures to help protect a product through its entire lifecycle.”

Achieving security in low-end systems on chips

The proliferation of Internet of Things devices and embedded systems – and our reliance on them – makes the security of these systems extremely important.

As they commonly rely on systems on chips (SoCs) – integrated circuits that consolidate the components of a computer or other electronic system on a single microchip – securing these devices is a different proposition than securing “classic” computer systems, especially if they rely on low-end SoCs.

Jauregui says that there is no single blanket solution approach to implement security of embedded systems, and that while some of the general hardware security recommendations apply, many do not.

“I highly recommend readers check out the book Demystifying Internet of Things Security, written by Intel scientists and Principal Engineers. It’s an in-depth look at the threat model, secure boot, chain of trust, and the SW stack leading up to defense-in-depth for embedded systems. It also examines the different security building blocks available in Intel Architecture (IA) based IoT platforms and breaks down some of the misconceptions of the Internet of Things,” she added.

“This book explores the challenges to secure these devices and provides suggestions to make them more immune to different threats originating from within and outside the network.”

For those security professionals who are interested in specializing in hardware security, she advises being curious about how things work, doing research, following folks doing interesting things on Twitter and asking them questions, and watching hardware security conference talks and trying to reproduce the issues.

“Learn by doing. And if you want someone to lead you through it, go take a class! I recommend hardware security classes by Joe FitzPatrick and Joe Grand, as they are brilliant hardware researchers and excellent teachers,” she concluded.