Three immediate steps to take to protect your APIs from security risks

In one form or another, APIs have been around for years, bringing the benefits of ease of use, efficiency and flexibility to the developer community. The advantage of using APIs for mobile and web apps is that developers can build and deploy functionality and data integrations quickly.

But there is a huge downside to this approach. Undermining the power of an API-driven development methodology are shadow, deprecated and non-conforming APIs that, when exposed to the public, introduce the risk of data loss, compromise or automated fraud.

The stateless nature of APIs and their ubiquity make protecting them increasingly difficult, largely because malicious actors can leverage the same developer benefits – ease of use and flexibility – to easily execute account takeovers, credential stuffing, fake account creation or content scraping. It’s no wonder that Gartner identified API security as a top concern for 50% of businesses.

Thankfully, it’s never too late to get your API footprint in order to better protect your organization’s critical data. Here are a few easy steps you can follow to mitigate API security risks immediately:

1. Start an organization-wide conversation

If your company is having conversations around API security at all, it’s likely that they are happening in a fractured manner. If there’s no larger, cohesive conversation, then various development and operational teams could be taking conflicting approaches to mitigating API security risks.

For this reason, teams should discuss how they can best work together to support API security initiatives. As a basis for these meetings, teams should refer to the NIST Cybersecurity Framework, a great way to develop a shared understanding of organization-wide cybersecurity risks. The NIST CSF will help the collective team gain a baseline awareness of the APIs used across the organization and pinpoint potential gaps in the operational processes that support them, so that companies can start improving their API strategy immediately.

2. Ask (& answer) any outstanding questions as a team

To improve an organization’s API security posture, it’s critical that outstanding questions are asked and answered immediately so that gaps in security are reduced and closed. When posing these questions to the group, consider the API assets you have overall, the business environment, governance, risk assessment, risk management strategy, access control, awareness and training, anomalies and events, continuous security monitoring, detection processes, etc. Leave no stone unturned. Here are a few suggested questions to use as a starting point as you work on the next step in this process towards getting your API security house in order:

  • How many APIs do we have?
  • How were they developed? Which are open-source, custom built or third-party?
  • Which APIs are subject to legal or regulatory compliance?
  • How do we monitor for vulnerabilities in our APIs?
  • How do we protect our APIs from malicious traffic?
  • Are there APIs with vulnerabilities?
  • What is the business impact if the APIs are compromised or abused?
  • Is API security a part of our ongoing developer training and security evangelism?

Once any security holes have been identified through this shared understanding, the team can then work together to close those gaps.

3. Strive for complete and continuous API security and visibility

Visibility is critical to immediate and continuous API security. By going through steps one and two, organizations are working towards more secure APIs today – but what about tomorrow, and in the years to come, as your API footprint expands exponentially?

Consider implementing a visibility and monitoring solution to help you oversee this security program on an ongoing basis, so that your organization can feel confident in having a strong and comprehensive API security posture that grows and adapts as your APIs expand and shift. The key components of visibility and monitoring?

  • Centralized visibility and an inventory of all APIs
  • A detailed view of API traffic patterns
  • Discovery of APIs transmitting sensitive data
  • Continuous assessment of API specification conformance
  • Validated authentication and access control programs
  • Automatic risk analysis based on predefined criteria

Continuous, runtime visibility into how many APIs an organization has, who is accessing them, where they are located and how they are being used is key to API security.
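
To make the last component concrete, here is a minimal sketch in Python of automatic risk analysis over an API inventory based on predefined criteria. The inventory fields and weights are hypothetical; substitute whatever your discovery tooling actually records:

```python
# Minimal sketch: score each API in an inventory against predefined risk
# criteria. All field names and weights below are illustrative only.

INVENTORY = [
    {"name": "/v1/accounts", "public": True, "auth_scheme": "oauth2",
     "handles_pii": True, "conforms_to_spec": True},
    {"name": "/internal/debug", "public": True, "auth_scheme": None,
     "handles_pii": False, "conforms_to_spec": False},
]

def risk_score(api: dict) -> int:
    """Higher score = higher priority for review."""
    score = 0
    if api["public"]:
        score += 2                 # internet-exposed
    if api["auth_scheme"] is None:
        score += 3                 # unauthenticated
    if api["handles_pii"]:
        score += 3                 # sensitive data in transit
    if not api["conforms_to_spec"]:
        score += 2                 # drift from the published specification
    return score

for api in sorted(INVENTORY, key=risk_score, reverse=True):
    print(f"{api['name']}: risk {risk_score(api)}")
```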

As organizations continue to expand their use of APIs to drive their business, it’s crucial that companies consider every path malicious actors might take to attack their organization’s critical data.

The lifecycle of a eureka moment in cybersecurity

It takes more than a single eureka moment to attract investor backing, especially in a notoriously high-stakes and competitive industry like cybersecurity.

While every seed-stage investor has their respective strategies for vetting raw ideas, my experience of the investment due diligence process involves a veritable ringer of rapid-fire, back-to-back meetings with cybersecurity specialists and potential customers, as well as rigorous market scoping by analysts and researchers.

As the CTO of a seed-stage venture capital firm entirely dedicated to cybersecurity, I spend a good portion of my time ideating alongside early-stage entrepreneurs and working through this process with them. To do this well, I’ve had to develop an internal seismometer for industry pain points and potential competitors, play matchmaker between tech geniuses and industry decision-makers, and peer down complex roadmaps to find the optimal point of convergence for good tech and good business.

Along the way, I’ve gained a unique perspective on the set of necessary qualities for a good idea to turn into a successful startup with significant market traction.

Just as a good idea doesn’t necessarily translate into a great product, the qualities of a great product don’t add up to a magic formula for guaranteed success. However, how well an idea performs in the categories I set out below can directly impact the confidence of investors and potential customers you’re pitching to. Therefore, it’s vital that entrepreneurs ask themselves the following before a pitch:

Do I have a strong core value proposition?

The cybersecurity industry is saturated with features passing themselves off as platforms. While the accumulated value of a solution’s features may be high, its core value must resonate with customers above all else. More pitches than I wish to count have left me scratching my head over a proposed solution’s ultimate purpose. Product pitches must lead with and focus on the solution’s core value proposition, and this proposition must be able to hold its own and sell itself.

Consider a browser security plugin with extensive features that include XSS mitigation, malicious website blocking, employee activity logging and download inspections. This product proposition may be built on many nice-to-have features, but, without a strong core feature, it doesn’t add up to a strong product that customers will be willing to buy. Add-on features, should they need to be discussed, ought to be mentioned as secondary or additional points of value.

What is my solution’s path to scalability?

Solutions must be scalable in order to reach as many customers as possible and avoid price hikes with reduced margins. Moreover, it’s critical to factor in the maintenance cost and “tech debt” of solutions that are environment-dependent on account of integrations with other tools or difficult deployments.

I’ve come across many pitches that fail to do this, and entrepreneurs who forget that such an omission can both limit their customer pool and eventually incur tremendous costs for integrations that are destined to lose value over time.

What is my product experience like for customers?

A solution’s viability and success lie in so much more than its outcome. Both investors and customers require complete transparency over the ease of use of a product in order for it to move forward in the pipeline. Frictionless and resource-light deployments are absolutely key and should always mind the realities of inter-departmental politics. Remember, requiring additional hires for a company to use your product is a hidden cost that will ultimately reduce your margins.

Moreover, it can be very difficult for companies to rope in the necessary stakeholders across their organization to help your solution succeed. Finally, requiring hard-to-come-by resources for a POC, such as sensitive data, may set up your solution for failure if customers are reluctant to relinquish the necessary assets.

What is my solution’s time-to-value?

Successfully discussing a core value must eventually give way to achieving it. Satisfaction with a solution will always ultimately boil down to deliverables. From the moment your idea raises funds, your solution will be running against the clock to provide its promised value, successfully interact with the market and adapt itself where necessary.

The ability to demonstrate strong initial performance will draw in sought-after design partners and allow you to begin selling earlier. Not only are these sales necessary bolsters to your follow-on rounds, they also pave the way for future upsells to customers.

Where POCs are involved, the beta content installed by early customers must deliver well in order to drive conversions and complete the sales process. It’s critical to create a roadmap for achieving this type of deliverability that can be clearly articulated to your stakeholders.

When will my solution deliver value?

It’s all too common for entrepreneurs to focus on “the ultimate solution”. This usually amounts to what they hope their solution will achieve some three years into development while neglecting the market value it can provide along the way. While investors are keen to embrace the big picture, this kind of entrepreneurial tunnel vision hurts product sales and future fundraising.

Early-stage startups must build their way up to solving big problems and reconcile with the fact that they are typically only equipped to resolve small ones until they reach maturity. This must be communicated transparently to avoid creating a false image of success in your market validation. Avoid asking “do you need a product that solves your [high-level problem]?” and ask instead “would you pay for a product that solves this key element of your [high-level problem]?”.

Unless an idea breaks completely new ground or looks to secure new tech, it’s likely to be an improvement to an already existing solution. In order to succeed at this, however, it’s critical to understand the failures and drawbacks of existing solutions before embarking on building your own.

Cybersecurity buyers are often open to switching over to a product that works as well as one they already use without its disadvantages. However, it’s incumbent on vendors to avoid making false promises and follow through on improving their output.

The cybersecurity industry is full of entrepreneurial genius poised to disrupt the current market. However, that potential can only manifest when solutions are designed to address much more than mere security gaps.

The lifecycle of a good cybersecurity idea may start with tech, but it requires a powerful infusion of foresight and listening to make it through investor and customer pipelines. This requires an extraordinary amount of research in some very unexpected places, and one of the biggest obstacles ideating entrepreneurs face is determining precisely what questions to ask and gaining access to those they need to understand.

Working with well-connected investors who are dedicated to fostering those relationships and to ironing out roadmap kinks during ideation is one of the surest ways to secure success. We must focus on building good ideas sustainably and remember that immediate partial value delivery is a small compromise towards building out the next great cybersecurity disruptor.

Hardware security: Emerging attacks and protection mechanisms

Maggie Jauregui’s introduction to hardware security is a fun story: she figured out how to spark, smoke, and permanently disable GFCIs (Ground Fault Circuit Interrupters – the two-button protections on plugs/sockets that prevent you from accidentally electrocuting yourself with your hair dryer) wirelessly with a walkie-talkie.

“I could also do this across walls with a directional antenna, and this also worked on AFCIs (Arc Fault Circuit Interrupters – part of the circuit breaker box in your garage), which meant you could drive by someone’s home and potentially turn off their lights,” she told Help Net Security.

This first foray into hardware security resulted in her first technical presentation ever at DEF CON and a follow-up presentation at CanSecWest about the effects of radio waves on modern platforms.

Jauregui says she’s always been interested in hardware. She started out as an electrical engineering major but switched to computer science halfway through university, and ultimately applied to be an Intel intern in Mexico.

“After attending my first hackathon — where I actually met my husband — I’ve continued to explore my love for all things hardware, firmware, and security to this day, and have been a part of various research teams at Intel ever since,” she added. (She’s currently a member of the corporation’s Platform Armoring and Resilience team.)

What do we talk about when we talk about hardware security?

Computer systems – a category that these days includes everything from phones and laptops to wireless thermostats and other “smart” home appliances – are a combination of many hardware components (a processor, memory, I/O peripherals, etc.) that, together with firmware and software, are capable of delivering services and enabling the connected, data-centric world we live in.

Hardware-based security typically refers to the defenses that help protect against vulnerabilities targeting these devices, and its main focus is to make sure that the different hardware components working together are architected, implemented, and configured correctly.

“Hardware can sometimes be considered its own level of security because it often requires physical presence in order to access or modify specific fuses, jumpers, locks, etc.,” Jauregui explained. This is why hardware is also used as a root of trust.

Hardware security challenges

But every hardware device has firmware – a tempting attack vector for many hackers. And though the industry has been making advancements in firmware security solutions, many organizations are still challenged by it and don’t know how to adequately protect their systems and data, she says.

She advises IT security specialists to be aware of firmware’s importance as an asset to their organization’s threat model, to make sure that the firmware on company devices is consistently updated, and to set up automated security validation tools that can scan for configuration anomalies within their platform and evaluate security-sensitive bits within their firmware.
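
One way to approximate the automated-validation part of that advice is a fleet-wide baseline check. The Python sketch below compares reported firmware versions against an approved baseline; the asset export and field names are purely hypothetical, and this shows the shape of the check rather than any particular vendor’s tool:

```python
# Sketch: flag devices whose reported firmware lags an approved baseline.
# Assumes an exported asset list; all names and versions are illustrative.

BASELINE = {
    "laptop-model-a": "1.12.0",
    "server-model-b": "3.4.1",
}

FLEET = [
    {"host": "wks-014", "model": "laptop-model-a", "firmware": "1.12.0"},
    {"host": "srv-002", "model": "server-model-b", "firmware": "3.2.0"},
]

def as_tuple(version: str) -> tuple:
    """'1.12.0' -> (1, 12, 0) so versions compare numerically."""
    return tuple(int(part) for part in version.split("."))

for device in FLEET:
    expected = BASELINE.get(device["model"])
    if expected and as_tuple(device["firmware"]) < as_tuple(expected):
        print(f"{device['host']}: firmware {device['firmware']} "
              f"is below baseline {expected} - schedule an update")
```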

“Additionally, Confidential Computing has emerged as a key strategy for helping to secure data in use,” she noted. “It uses hardware memory protections to better isolate sensitive data payloads. This represents a fundamental shift in how computation is done at the hardware level and will change how vendors can structure their application programs.”

Finally, the COVID-19 pandemic has somewhat disrupted the hardware supply chain and has brought to the fore another challenge.

“Because a computing system is typically composed of multiple components from different manufacturers, each with its own level of scrutiny in relation to potential supply chain attacks, it’s challenging to verify the integrity across all stages of its lifecycle,” Jauregui explained.

“This is why it is critical for companies to work together on a validation and attestation solution for hardware and firmware that can be conducted prior to integration into a larger system. If the industry as a whole comes together, we can create more measures to help protect a product through its entire lifecycle.”

Achieving security in low-end systems on chips

The proliferation of Internet of Things devices and embedded systems and our reliance on them should make the security of these systems extremely important.

As they commonly rely on systems on chips (SoCs) – integrated circuits that consolidate the components of a computer or other electronic system on a single microchip – securing these devices is a different proposition than securing “classic” computer systems, especially if they rely on low-end SoCs.

Jauregui says that there is no single blanket approach to implementing embedded system security, and that while some of the general hardware security recommendations apply, many do not.

“I highly recommend readers check out the book Demystifying Internet of Things Security, written by Intel scientists and Principal Engineers. It’s an in-depth look at the threat model, secure boot, chain of trust, and the SW stack leading up to defense-in-depth for embedded systems. It also examines the different security building blocks available in Intel Architecture (IA) based IoT platforms and breaks down some of the misconceptions of the Internet of Things,” she added.

“This book explores the challenges to secure these devices and provides suggestions to make them more immune to different threats originating from within and outside the network.”

For those security professionals who are interested in specializing in hardware security, she advises being curious about how things work and doing research, following folks doing interesting things on Twitter and asking them things, and watching hardware security conference talks and trying to reproduce the issues.

“Learn by doing. And if you want someone to lead you through it, go take a class! I recommend hardware security classes by Joe FitzPatrick and Joe Grand, as they are brilliant hardware researchers and excellent teachers,” she concluded.

Cybersecurity lessons learned from data breaches and brand trust matters

Your brand is a valuable asset, but it’s also a great attack vector. Threat actors exploit the public’s trust of your brand when they phish under your name or when they counterfeit your products. The problem gets harder because you engage with the world across so many digital platforms – the web, social media, mobile apps. These engagements are obviously crucial to your business.

Something else should be obvious as well: guarding your digital trust – public confidence in your digital security – is make-or-break for your business, not just part of your compliance checklist.

COVID-19 has put a renewed spotlight on the importance of defending against cyberattacks and data breaches as more users are accessing data from remote or non-traditional locations. Crisis fuels cybercrime and we have seen that hacking has increased substantially as digital transformation initiatives have accelerated and many employees have been working from home without adequate firewalls and back-up protection.

The impact of cybersecurity breaches is no longer constrained to the IT department. The frequency and sophistication of ransomware, phishing schemes, and data breaches have the potential to destroy both brand health and financial viability. Organizations across industry verticals have seen their systems breached as cyber thieves have tried to take advantage of a crisis.

Good governance will be essential for handling the management of cyber issues. Strong cybersecurity will also be important to show customers that steps are being taken to avoid hackers and keep their data safe.

The COVID crisis has not changed the cybersecurity fundamentals. What will the new normal be like? While the COVID pandemic has turned business and society upside down, well-established cybersecurity practices – some known for decades – remain the best way to protect yourself.

1. Data must be governed

Data governance is the capability within an organization to provide for and protect high-quality data throughout the lifecycle of that data. This includes data integrity, data security, availability, and consistency. Data governance includes the people, processes, and technology that enable appropriate handling of data across the organization. Data governance program policies include:

  • Delineating accountability for those responsible for data and data assets
  • Assigning responsibility to appropriate levels in the organization for managing and protecting the data
  • Determining who can take what actions, with what data, under what circumstances, using what methods (sketched in code after this list)
  • Identifying safeguards to protect data
  • Establishing integrity controls to ensure the quality and accuracy of data
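
To make the access-control policy concrete, here is a small Python sketch of an attribute-based decision that weighs who is acting, what action they take, what data is involved, and under what circumstances. The roles, classifications, and rule set are illustrative only:

```python
# Sketch of an attribute-based access decision: subject role, action,
# data classification, and context (circumstance) all factor in.

RULES = [
    # (role, action, classification, requires_corporate_network)
    ("analyst", "read",  "internal",     False),
    ("analyst", "read",  "confidential", True),
    ("steward", "write", "confidential", True),
]

def is_allowed(role, action, classification, on_corporate_network):
    """Return True only if an explicit rule permits the request."""
    for r_role, r_action, r_class, needs_corp in RULES:
        if (role, action, classification) == (r_role, r_action, r_class):
            return on_corporate_network or not needs_corp
    return False  # default deny

print(is_allowed("analyst", "read", "confidential", on_corporate_network=False))  # False
print(is_allowed("steward", "write", "confidential", on_corporate_network=True))  # True
```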

2. Patch management and vulnerability management: Two sides of a coin

Address threats with vulnerability management. Bad actors look to take advantage of discovered vulnerabilities in an attempt to infect a workstation or server. Managing threats is a reactive process where the threat must be actively present, whereas vulnerability management is proactive, seeking to close the security gaps that exist before they are taken advantage of.

It’s more than just patching vulnerabilities. Formal vulnerability management doesn’t simply involve the act of patching and reconfiguring insecure settings. It is a disciplined practice, built on the organizational mindset within IT that new vulnerabilities are found daily, requiring continual discovery and remediation.
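
As a small illustration of that continual prioritization, the Python sketch below ranks scanner findings by CVSS score weighted by asset criticality. The field names are hypothetical; feed in whatever your scanner exports:

```python
# Sketch: rank open findings by CVSS score weighted by asset criticality,
# so remediation effort goes to the riskiest gaps first.

FINDINGS = [
    {"cve": "CVE-2020-0601", "cvss": 8.1, "asset": "dc01",  "criticality": 3},
    {"cve": "CVE-2019-0708", "cvss": 9.8, "asset": "kiosk", "criticality": 1},
]

def priority(finding: dict) -> float:
    """Simple product of severity and how much the asset matters."""
    return finding["cvss"] * finding["criticality"]

for f in sorted(FINDINGS, key=priority, reverse=True):
    print(f"{f['cve']} on {f['asset']}: priority {priority(f):.1f}")
```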

3. Not “if” but “when”: Assume you’re already hacked

If you build your operations and defense with this premise in mind, your chances of detecting these types of attacks and preventing the resulting breaches are much greater than those of most organizations today.

The importance of incident response steps

A data breach should be viewed as a “when” not “if” occurrence, so be prepared for it. Under the pressure of a critical-level incident is no time to be figuring out your game plan. Your future self will thank you for the time and effort you invest on the front end.

Incident response is stressful, especially when a critical asset is involved and you realize there’s an actual threat. Defined incident response steps guide you through these high-pressure situations toward successful containment and recovery. Response time is critical to minimizing damage; with every second counting, having a plan already in place is the key to success.

4. Your size does not mean security maturity

It does not matter how big you are or the resources your team can access. As defenders, we always think, “If I only had enough money or people, I could solve this problem.” We need to change our thinking. It’s not how much you spend but rather, is that spend an effective use? Does it allow your team to disrupt attacks or just wait to be alerted (maybe)? No matter where an organization is on its journey toward security maturity, a risk assessment can prove invaluable in deciding where and when it needs most improvement.

For more mature organizations, the risk assessment process will focus less on discovering major controls gaps and more on finding subtler opportunities for continuously improving the program. An assessment of a less mature program is likely to find misalignments with business goals, inefficiencies in processes or architecture, and places where protections could be taken to another level of effectiveness.

5. Do more with less

Limited budgets, limited staff, limited time. Any security professional will have dealt with all of these repeatedly while trying to launch new initiatives or when completing day-to-day tasks. They are possibly the most severe and dangerous adversaries that many cybersecurity professionals will face. They affect every organization regardless of industry, size, or location and pose an existential threat to even the most prepared company. There is no easy way to contain them either, since no company has unlimited funding or time, and the lack of cybersecurity professionals makes filling roles incredibly tricky.

How can organizations cope with these natural limitations? The answer is resource prioritization, along with a healthy dose of operational improvements. By identifying areas where processes can be streamlined and understanding what the most significant risks are, organizations can begin to help protect their systems while staying within their constraints.

6. Rome wasn’t built in a day

An edict out of the IT department won’t get the job done. Building a security culture takes time and effort. What’s more, cybersecurity awareness training ought to be a regular occurrence — once a quarter at a minimum — where it’s an ongoing conversation with employees. One-and-done won’t suffice.

People have short memories, so repetition is altogether appropriate when it comes to a topic that’s so strategic to the organization. This also needs to be part of a broader top-down effort that starts with senior management. Awareness training should be incorporated across the entire organization, not just limited to governance, threat detection, and incident response plans. The campaign should involve more than serving up a dry set of rules divorced from the broader business reality.

Measuring impact beyond a single incident

Determining the true impact of a cyber attack has always been, and will likely remain, one of the most challenging aspects of this technological age.

In an environment that affords very limited transparency into root causes and true impact, we are left with isolated examples to point to the direct cost of a security incident. For example, the 2010 attack on the Natanz nuclear facilities was, and in certain cases still is, used as the reference case study for why cybersecurity is imperative within an ICS environment (quite possibly substituted with BlackEnergy).

For ransomware, the reference case has been the impact WannaCry had on healthcare, and it will likely be replaced by the awful story of a patient sadly losing their life because of a ransomware attack.

What these cases clearly provide is a degree of insight into their impact. Albeit limited in certain scenarios, this approach sadly all but excludes the multitude of attacks that successfully occurred before them, where the impact was either unavailable or did not make the headline story.

While it can of course be argued that such case studies are a useful vehicle to influence change, there is equally the risk that they are such outliers that decision makers do not recognise their own vulnerabilities within the broader problem statement.

If we truly want to influence change, then a wider body of work is required to develop the broader economic and societal impact of the multitude of incidents. Whilst this is likely to be hugely subjective, it is imperative to understanding the true impact of cybersecurity. I recall a conversation a friend of mine had with someone who claimed they “are not concerned with malware because all it does is slow down their computer”. This, of course, is the wider challenge: to articulate the impact in a manner that will resonate.

Ask anybody the impact of car theft and this will be understood, ask the same question about any number of digital incidents and the reply will likely be less clear.

It can be argued that studies which measure the macro cost of such incidents do indeed exist, but the problem statement of billions lost is so enormous that we each are unable to relate to this. A small business owner hearing about how another small business had their records locked with ransomware, and the impact to their business is likely to be more influential than an economic model explaining the financial cost of cybercrime (which is still imperative to policy makers for example).

If such case studies are so imperative, and there exists a stigma around being open about such breaches, what can be done? This, of course, is the largest challenge, with potential litigation governing every communication. To be entirely honest, as I sit here and try to conclude with concrete proposals, I am somewhat at a loss as to how to change the status quo.

The question is more an open one, what can be done? Can we leave fault at the door when we comment on security incidents? Perhaps encourage those that are victims to be more open? Of course this is only a start, and an area that deserves a wider discussion.

Using virtualization to isolate risky applications and other endpoint threats

More and more security professionals are realizing that it’s impossible to fully secure a Windows machine – with all its legacy components and millions of potentially vulnerable lines of code – from within the OS. With attacks becoming more sophisticated than ever, hypervisor-based security, from below the OS, becomes a necessity.

Unlike modern OS kernels, hypervisors are designed for a very specific task. Their code is usually small, well-reviewed and tested, making them very hard to exploit. That is why the world trusts modern hypervisors to run servers, containers, and other workloads in the cloud, which sometimes run side-by-side on the same physical server with complete separation and isolation, and why companies are now leveraging the same trusted technology to bring hardware-enforced isolation to the endpoint.

Microsoft Defender Application Guard

Microsoft Defender Application Guard (previously known as Windows Defender Application Guard, or just WDAG), brings hypervisor-based isolation to Microsoft Edge and Microsoft Office applications.

It allows administrators to apply policies that force untrusted web sites and documents to be opened in isolated Hyper-V containers, completely separating potential malware from the host OS. Malware running in such containers won’t be able to access and exfiltrate sensitive files such as corporate documents or the users’ corporate credentials, cookies, or tokens.

With Application Guard for Edge, when a user opens a web site that was not added to the allow-list, they are automatically redirected to a new isolated instance of Edge, continuing the session there. This isolated instance provides another, much stronger sandboxing layer to cope with web threats. If allowed by the administrator, files downloaded during that session can be accessed later from the host OS.

With Application Guard for Office, when a user opens an unknown document, maybe downloaded from the internet or opened as an email attachment, the document is automatically opened in an isolated instance of Office.

Until now, such documents would be opened in “protected view”, a special mode that eliminates the threat from scripts and macros by disabling embedded code execution. Unfortunately, this mode sometimes breaks legitimate files, such as spreadsheets that contain harmless macros. It also prevents users from editing documents.

Many users blindly disable the “protected view” mode to enable editing, thereby allowing malware to execute on the device. With Application Guard for Office, users compromise neither security (the malware is trapped inside the isolated container) nor productivity (the document is fully functional and editable inside the container).

In both cases, the container is spawned instantly, with minimal CPU, memory, and disk footprints. Unlike traditional virtual machines, IT administrators don’t need to manage the underlying OS inside the container. Instead, it’s built out of existing Windows system binaries that remain patched as long as the host OS is up to date. Microsoft has also introduced new virtual GPU capabilities, allowing software running inside the container to be hardware-GPU accelerated. With all these optimizations, Edge and Office running inside the container feel fast and responsive, almost as if they were running without an additional virtualization layer.

The missing compatibility

While Application Guard works well with Edge and Office, it doesn’t support other applications. Edge will always be the browser running inside the container. That means, for example, no Google accounts synchronization, something that many users probably want.

What about downloaded applications? Applications are not allowed to run inside the container. (The container hardening contains some WDAC policies that allow only specific apps to execute.) That means that users can execute those potentially malicious applications on the host OS only.

Administrators who don’t allow unknown apps on the host OS might reduce users’ productivity and increase frustration. This is probably more prominent today, with so many people working from home and using a new wave of modern collaboration tools and video conferencing applications.

Users who are invited to external meetings sometimes need to download and run a client that may be blocked by the organization on the host OS. Unfortunately, it’s not possible to run the client inside the container either, and the users need to look for other solutions.

And what about non-Office documents? Though Office documents are protected, non-Office documents aren’t. Users sometimes use various other applications to create and edit documents, such as Adobe Acrobat and Photoshop, Autodesk AutoCAD, and many others. Application Guard won’t help to protect the host OS from such documents that are received over email or downloaded from the internet.

Even with Office alone, there might be problems. Many organizations use Office add-ons to customize and streamline the end-user experience. These add-ons may integrate with other local or online applications to provide additional functionality. As Application Guard runs a vanilla Office without any customizations, these add-ons won’t be able to run inside the container.

The missing manageability

Configuring Application Guard is not easy. First, while Application Guard for Edge technically works on both Windows Pro and Windows Enterprise, only on Windows Enterprise is it possible to configure it to kick in automatically for untrusted websites. For non-technical users, that makes Application Guard almost useless in the eyes of their IT administrators, as those users have to launch it manually every time they consider a website to be untrusted. That’s a lot of room for human error. Even if all the devices are running Windows Enterprise, it’s not a walk in the park for administrators.

For the networking isolation configuration, administrators have to provide a manual list of comma-separated IPs and domain names. It’s not possible to integrate with your already fully configured web-proxy. It’s also not possible to integrate with category-based filtering systems that you might also have. Aside from the additional system to manage, there is no convenient UI or advanced capabilities (such as automatic filtering based on categories) to use. To make it work with Chrome or Firefox, administrators also need to perform additional configurations, such as delivering browser extensions.
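
As a sketch of what administrators end up maintaining by hand, the following Python snippet validates such a list of IPs and domain names and emits the comma-separated string the network isolation policy expects. The entries and the validation rules are illustrative, not Microsoft’s specification:

```python
# Sketch: build the comma-separated network-isolation string from a
# reviewable, line-per-entry source. Entries and rules are illustrative.

import ipaddress

def is_valid(entry: str) -> bool:
    """Accept an IP/CIDR or a plausible domain name."""
    try:
        ipaddress.ip_network(entry, strict=False)
        return True
    except ValueError:
        parts = entry.lstrip(".").split(".")
        return all(part.isalnum() or "-" in part for part in parts)

entries = ["10.0.0.0/8", "intranet.example.com", ".example.org"]
valid = [e for e in entries if is_valid(e)]
print(",".join(valid))   # paste the result into the isolation policy setting
```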

This is not a turnkey solution for administrators; it requires messing with multiple configurations and GPOs until it works.

In addition, other management capabilities are very limited. For example, while admins can define whether clipboard operations (copy+paste) are allowed between the host and the container, it’s not possible to allow these operations in only one direction. It’s also not possible to allow certain content types, such as text and images, while blocking others, such as binary files.

OS customizations and additional software bundles, such as Edge extensions and Office add-ins, are not available either.

While Office files are opened automatically in Application Guard, other file types aren’t. Administrators that would like to use Edge as a secure and isolated PDF viewer, for example, can’t configure that.

The missing security

As stated before, Application Guard doesn’t protect against malicious files that were mistakenly categorized as safe by the user. A user might securely download a malicious file in their isolated Edge but then choose to execute it on the host OS. They might also mistakenly categorize an untrusted document as a corporate one, to have it opened on the host OS. Malware could easily infect the host due to user errors.

Another potential threat comes from the networking side. While malware getting into the container is isolated in some aspects such as memory (it can’t inject itself into processes running on the host) and filesystem (it can’t replace files on the host with infected copies), it’s not fully isolated on the networking side.

Application Guard containers leverage the Windows Internet Connection Sharing (ICS) feature, to fully share networking with the host. That means that malware running inside the container might be able to attack some sensitive corporate resources that are accessible by the host (e.g., databases and data centers) by exploiting network vulnerabilities.

While Application Guard tries to isolate web and document threats, it doesn’t provide isolation in other areas. As mentioned before, Application Guard can’t isolate non-Microsoft applications that the organization chooses to use but not trust. Video conferencing applications, for example, have been exploited in the past and usually don’t require access to corporate data – it’s much safer to execute these in an isolated container.

External device handling is another risky area. Think of CVE-2016-0133, which allowed attackers to execute malicious code in the Windows kernel simply by plugging a USB thumb drive into the victim’s laptop. Isolating unknown USB devices can stop such attacks.

The missing holistic solution

Wouldn’t it be great if users could easily open any risky document in an isolated environment, e.g., through a context menu? Or if administrators could configure any risky website, document, or application to be automatically transferred and opened in an isolated environment? And maybe also to have corporate websites to be automatically opened back on the host OS, to avoid mixing sensitive information and corporate credentials with non-corporate work?

How about automatically attaching risky USB devices to the container, e.g., personal thumb drives, to reduce chances of infecting the host OS? And what if all that could be easy for administrators to deploy and manage, as a turn-key solution in the cloud?

Credential stuffing is just the tip of the iceberg

Credential stuffing attacks are taking up a lot of the oxygen in cybersecurity rooms these days. A steady blitz of large-scale cybersecurity breaches in recent years has flooded the dark web with passwords and other credentials that are used in subsequent attacks, such as those on Reddit and State Farm, as well as in widespread efforts to exploit the remote work and online get-togethers resulting from the COVID-19 pandemic.

But while enterprises are rightly worried about weathering a hurricane of credential-stuffing attacks, they also need to be concerned about more subtle, but equally dangerous, threats to APIs that can slip in under the radar.

Attacks that exploit APIs, beyond credential stuffing, can start small with targeted probing of unique API logic, and lead to exploits such as the theft of personal information, wholesale data exfiltration or full account takeovers.

Unlike automated flood-the-zone, volume-based credential attacks, other API attacks are conducted almost one-to-one and carried out in elusive ways, targeting the distinct vulnerabilities of each API, making them even harder to detect than attacks happening on a large scale. Yet, they’re capable of causing as much, if not more, damage. And they’re becoming more and more prevalent, with APIs being the foundation of modern applications.

Beyond credential stuffing

Credential stuffing attacks are a key concern for good reason. High-profile breaches – such as those of Equifax and LinkedIn, to name two of many – have resulted in billions of compromised credentials floating around on the dark web, feeding an underground industry of malicious activity. For several years now, about 80% of hacking-related breaches have involved stolen and/or weak passwords, according to Verizon’s annual Data Breach Investigations Report.

Additionally, research by Akamai determined that three-quarters of credential abuse attacks against the financial services industry in 2019 were aimed at APIs. Many of those attacks are conducted on a large scale to overwhelm organizations with millions of automated login attempts.

Most threats to APIs move beyond credential stuffing, which is only one of the many API threats defined in the 2019 OWASP API Security Top 10. In many instances they are not automated, are much more subtle, and come from authenticated users.

APIs, which are essential to an increasing number of applications, are specialized entities performing particular functions for specific organizations. Someone exploiting a vulnerability in an API used by a bank, retailer or other institution could, with a couple of subtle calls, dump the database, drain an account, cause an outage or do all kinds of other damage to impact revenue and brand reputation.

An attacker doesn’t even have to necessarily sneak in. For instance, they could sign on to Disney+ as a legitimate user and then poke around the API looking for opportunities to exploit. In one example of a front-door approach, a researcher came across an API vulnerability on the Steam developer site that would allow the theft of game license keys. (Luckily for the company, he reported it—and was rewarded with $20,000.)

Most API attacks are very difficult to detect and defend against since they’re carried out in such a clandestine manner. Because APIs are mostly unique, their vulnerabilities don’t conform to any pattern or signature that would allow common security controls to be enforced at scale. And the damage can be considerable, even coming from a single source. For example, an attacker exploiting a weakness in an API could launch a successful DoS attack with a single request.

API DoS

Rather than the more common DDoS attack, which floods a target with requests from many sources via a botnet, an API DoS can happen when the attacker manipulates the logic of the API, causing the application to overwork itself. If an API is designed to return, say, 10 items per request, an attacker could change that value to 10 million, using up all of an application’s resources and crashing it—with a single request.
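
The corresponding server-side defense is easy to express in code. Here is a minimal, framework-agnostic Python sketch that clamps a client-supplied page size so a single crafted request can’t demand ten million records:

```python
# Sketch: never trust a client-supplied page size. Clamp it server-side so
# a single crafted request cannot force the application to overwork itself.

MAX_PAGE_SIZE = 100    # hard ceiling enforced by the server, not the client
DEFAULT_PAGE_SIZE = 10

def clamp_page_size(requested) -> int:
    """Parse the client's value and force it into a sane range."""
    try:
        n = int(requested)
    except (TypeError, ValueError):
        return DEFAULT_PAGE_SIZE       # fall back on garbage input
    return max(1, min(n, MAX_PAGE_SIZE))

print(clamp_page_size("10"))        # 10
print(clamp_page_size("10000000"))  # 100 - the oversized request is neutralized
```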

Credential stuffing attacks present security challenges of their own. With easy access to evasion tools—and with their own sophistication improving dramatically – it’s not difficult for attackers to disguise their activity behind a mesh of thousands of IP addresses and devices. But credential stuffing nevertheless is an established problem with established solutions.

How enterprises can improve

Enterprises can scale infrastructure to mitigate credential stuffing attacks or buy a solution capable of identifying and stopping the attacks. The trick is to evaluate large volumes of activity and block malicious login attempts without impacting legitimate users, and to do it quickly, identifying successful malicious logins and alerting users in time to protect them from fraud.
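
As a toy illustration of that evaluation, the Python sketch below flags source IPs whose failure counts and username spread look like credential stuffing rather than a user mistyping a password. The log format and thresholds are invented for the example:

```python
# Sketch: flag sources that fail logins across many distinct usernames -
# the classic credential-stuffing signature. Thresholds are illustrative.

from collections import defaultdict

ATTEMPTS = [  # (source_ip, username, success)
    ("203.0.113.7", "alice", False), ("203.0.113.7", "bob", False),
    ("203.0.113.7", "carol", False), ("198.51.100.2", "dave", True),
]

FAIL_THRESHOLD = 3   # minimum failed attempts from one source
USER_THRESHOLD = 3   # minimum distinct usernames tried by that source

fails, users = defaultdict(int), defaultdict(set)
for ip, user, ok in ATTEMPTS:
    if not ok:
        fails[ip] += 1
        users[ip].add(user)

for ip in fails:
    if fails[ip] >= FAIL_THRESHOLD and len(users[ip]) >= USER_THRESHOLD:
        print(f"{ip}: {fails[ip]} failures across "
              f"{len(users[ip])} accounts - investigate")
```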

Enterprises can improve API security first and foremost by identifying all of their APIs – their data exposure, their usage, and even the ones they didn’t know existed. When APIs fly under security operators’ radar, otherwise secure infrastructure has a hole in the fence. Once full visibility is attained, enterprises can more tightly control API access and use, and thus enable better security.

NIST guide to help orgs recover from ransomware, other data integrity attacks

The National Institute of Standards and Technology (NIST) has published a cybersecurity practice guide enterprises can use to recover from data integrity attacks, i.e., destructive malware and ransomware attacks, malicious insider activity or simply mistakes by employees that have resulted in the modification or destruction of company data (emails, employee records, financial records, and customer data).

About the guide

Ransomware is currently one of the most disruptive scourges affecting enterprises. While it would be ideal to detect the early warning signs of a ransomware attack to minimize its effects or prevent it altogether, there are still too many successful incursions that organizations must recover from.

Special Publication (SP) 1800-11, Data Integrity: Recovering from Ransomware and Other Destructive Events can help organizations to develop a strategy for recovering from an attack affecting data integrity (and to be able to trust that any recovered data is accurate, complete, and free of malware), recover from such an event while maintaining operations, and manage enterprise risk.

The goal is to monitor and detect data corruption in widely used as well as custom applications, and to identify what data was altered or corrupted, when, by whom, what the impact of the action was, and whether other events happened at the same time. Finally, organizations are advised on how to restore data to its last known good configuration and how to identify the correct backup version.

“Multiple systems need to work together to prevent, detect, notify, and recover from events that corrupt data. This project explores methods to effectively recover operating systems, databases, user files, applications, and software/system configurations. It also explores issues of auditing and reporting (user activity monitoring, file system monitoring, database monitoring, and rapid recovery solutions) to support recovery and investigations,” the authors added.
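
File integrity baselining is one simple building block of the monitoring the guide describes. Below is a minimal Python sketch, with illustrative paths, that records known-good hashes and reports anything that later deviates; it is not the NCCoE reference build:

```python
# Sketch: detect unexpected modification by comparing file hashes against a
# previously recorded known-good baseline. Paths are illustrative.

import hashlib
import json
import pathlib

def snapshot(root: str) -> dict:
    """Map each file under root to its SHA-256 digest."""
    return {str(p): hashlib.sha256(p.read_bytes()).hexdigest()
            for p in pathlib.Path(root).rglob("*") if p.is_file()}

def diff(baseline: dict, current: dict) -> None:
    """Report new, modified, and missing files relative to the baseline."""
    for path, digest in current.items():
        if path not in baseline:
            print(f"NEW      {path}")
        elif baseline[path] != digest:
            print(f"MODIFIED {path}")
    for path in baseline.keys() - current.keys():
        print(f"MISSING  {path}")

# First run:  json.dump(snapshot("/data/records"), open("baseline.json", "w"))
# Later runs: diff(json.load(open("baseline.json")), snapshot("/data/records"))
```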

The National Cybersecurity Center of Excellence (NCCoE) at NIST used specific commercially available and open-source components when creating a solution to address this cybersecurity challenge, but noted that each organization’s IT security experts should choose products that will best work for them by taking into consideration how they will integrate with the IT system infrastructure and tools already in use.

The NCCoE tested the setup against several test cases (a ransomware attack, a malware attack, a user modifying a configuration file, an administrator modifying a user’s file, and a database or database schema altered in error by an administrator or script). Additional materials can be found here.

Your best defense against ransomware: Find the early warning signs

As ransomware continues to prove how devastating it can be, one of the scariest things for security pros is how quickly it can paralyze an organization. Just look at Honda, which was forced to shut down all global operations in June, and Garmin, which had its services knocked offline for days in July.

Ransomware isn’t hard to detect, but identifying it when the encryption and exfiltration are rampant is too little too late. However, there are several warning signs that organizations can catch before the real damage is done. In fact, FireEye found that there are usually three days of dwell time between these early warning signs and the detonation of ransomware.

So, how does a security team find these weak but important early warning signals? Somewhat surprisingly perhaps, the network provides a unique vantage point to spot the pre-encryption activity of ransomware actors such as those behind Maze.

Here’s a guide, broken down by MITRE category, of the many different warning signs organizations being attacked by Maze ransomware can see and act upon before it’s too late.

Initial access

With Maze actors, there are several initial access vectors, such as phishing attachments and links, external-facing remote access such as Microsoft’s Remote Desktop Protocol (RDP), and access via valid accounts. All of these can be discovered while network threat hunting across traffic. Furthermore, given this represents the actor’s earliest foray into the environment, detecting this initial access is the organization’s best bet to significantly mitigate impact.

ATT&CK techniques

Hunt for…

T1193 Spear-phishing attachment
T1192 Spear-phishing link

  • Previously unseen or newly registered domains, unique registrars
  • Doppelgangers of your organization’s or partners’ domains, or of the Alexa top 500 (see the sketch after this list)
T1133 External Remote Services
  • Inbound RDP from external devices
T1078 Valid accounts
  • Exposed passwords across SMB, FTP, HTTP, and other clear text usage
T1190 Exploit public-facing application
  • Exposure and exploit to known vulnerabilities
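
As an example of hunting for domain doppelgangers, the Python sketch below ranks observed domains by string similarity to your own. The domain names are placeholders:

```python
# Sketch: surface lookalike (doppelganger) domains seen in traffic by string
# similarity to your own domain. All domains here are placeholders.

from difflib import SequenceMatcher

OUR_DOMAIN = "example.com"
OBSERVED = ["examp1e.com", "exarnple.com", "helpnetsecurity.com"]

def similarity(a: str, b: str) -> float:
    """Ratio in [0, 1]; 1.0 means identical strings."""
    return SequenceMatcher(None, a, b).ratio()

for domain in sorted(OBSERVED, key=lambda d: similarity(OUR_DOMAIN, d),
                     reverse=True):
    score = similarity(OUR_DOMAIN, domain)
    if domain != OUR_DOMAIN and score > 0.8:
        print(f"possible doppelganger: {domain} (similarity {score:.2f})")
```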

Execution

The execution phase is still early enough in an attack to shut it down and foil any attempts to detonate ransomware. Common early warning signs to watch for in execution include users being tricked into clicking a phishing link or attachment, or when certain tools such as PsExec have been used in the environment.

ATT&CK techniques

Hunt for…

T1204 User execution

  • Suspicious email behaviors from users and associated downloads
T1035 Service execution
  • File IO over SMB using PsExec, extracting contents on one system and then later on another system
T1028 Windows remote management
  • Remote management connections excluding known good devices

Persistence

Adversaries using Maze rely on several common techniques, such as a web shell on internet-facing systems and the use of valid accounts obtained within the environment. Once the adversary has secured a foothold, it starts to become increasingly difficult to mitigate impact.

ATT&CK techniques

Hunt for…

T1100 Web shell

  • Unique activity connections (e.g. atypical ports and user agents) from external connections
T1078 Valid accounts
  • Remote copy of KeePass file stores across SMB or HTTP

Privilege escalation

As an adversary gains higher levels of access it becomes significantly more difficult to pick up additional signs of activity in the environment. For the actors of Maze, the techniques used for persistence are similar to those for privileged activity.

ATT&CK techniques

Hunt for…

T1100 Web shell

  • Web shells on external facing web and gateway systems
T1078 Valid accounts
  • Remote copy of password files across SMB (e.g. files with “passw”)

Defense evasion

To hide files and their access to different systems, adversaries like the ones who use Maze will rename files, encode, archive, and use other mechanisms to hide their tracks. Attempts to hide their traces are in themselves indicators to hunt for.

ATT&CK techniques

Hunt for…

T1027 Obfuscated files or information

  • Adversary tools by port usage, certificate issuer name, or unknown protocol communications
T1078 Valid accounts
  • New account creation from workstations and other non-admin used devices

Credential access

There are several defensive controls that can be put in place to help limit or restrict access to credentials. Threat hunters can enable this process by providing situational awareness of network hygiene, including specific attack tool usage, credential misuse attempts, and weak or insecure passwords.

ATT&CK techniques

Hunt for…

T1110 Brute force

  • RDP brute force attempts against known username accounts
T1081 Credentials in files
  • Unencrypted passwords and password files in the environment
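
For the second hunt, even a crude sweep helps. The Python sketch below walks a file share looking for password-like file names; the mount point and the pattern are placeholders to tune for your environment:

```python
# Sketch: hunt for password files sitting unencrypted on a share. The mount
# point and the name pattern below are placeholders.

import pathlib
import re

SHARE = pathlib.Path("/mnt/corp-share")             # hypothetical mount point
NAME_PATTERN = re.compile(r"passw", re.IGNORECASE)  # mirrors the "passw" hunt

def suspicious_files(root: pathlib.Path):
    """Yield files whose names suggest stored credentials."""
    for path in root.rglob("*"):
        if path.is_file() and NAME_PATTERN.search(path.name):
            yield path

for path in suspicious_files(SHARE):
    print(f"review: {path}")
```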

Discovery

Maze adversaries use a number of different methods for internal reconnaissance and discovery. For example, enumeration and data collection tools and methods leave their own trail of evidence that can be identified before the exfiltration and encryption occurs.

ATT&CK techniques

Hunt for…

T1201 Password policy discovery

  • Traffic of devices copying the password policy off file shares
  • Enumeration of password policy
T1018 Remote system discovery

T1087 Account discovery

T1016 System network configuration discovery

T1135 Network share discovery

T1083 File and directory discovery

  • Enumeration for computer names, accounts, network connections, network configurations, or files

Lateral movement

Ransomware actors use lateral movement to understand the environment, spread through the network and then to collect and prepare data for encryption / exfiltration.

ATT&CK techniques

Hunt for…

T1105 Remote file copy

T1077 Windows admin shares

  • Suspicious SMB file write activity
  • PsExec usage to copy attack tools or access other systems
  • Attack tools copied across SMB
T1076 Remote Desktop Protocol

T1028 Windows remote management

T1097 Pass the ticket

  • HTTP POST with the use of WinRM user agent
  • Enumeration of remote management capabilities
  • Non-admin devices with RDP activity

Collection

In this phase, Maze actors use tools and batch scripts to collect information and prepare for exfiltration. It is typical to find .bat files or archives using the .7z or .exe extension at this stage.

ATT&CK techniques

Hunt for…

T1039 Data from network share drive

  • Suspicious or uncommon remote system data collection activity

Command and control (C2)

Many adversaries will use common ports or remote access tools to try and obtain and maintain C2, and Maze actors are no different. In the research my team has done, we’ve also seen the use of ICMP tunnels to connect to the attacker infrastructure.

ATT&CK techniques

Hunt for…

T1043 Commonly used port

T1071 Standard application layer protocol

  • ICMP callouts to IP addresses
  • Non-browser originating HTTP traffic
  • Unique device HTTP script like requests
T1105 Remote file copy
  • Downloads of remote access tools through string searches
T1219 Remote access tools
  • Cobalt Strike BEACON, and FTP to directories with “cobalt” in the name

Exfiltration

At this stage, the risk of exposure of sensitive data in the public realm is dire and it means an organization has missed many of the earlier warning signs—now it’s about minimizing impact.

ATT&CK techniques

Hunt for…

T1030 Data transfer size limits

  • External device traffic to uncommon destinations
T1048 Exfiltration over alternative protocol
  • Unknown FTP outbound
T1002: Data compressed
  • Archive file extraction
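
A coarse network-side proxy for these hunts can be expressed in a few lines. The Python sketch below totals outbound bytes per host and destination from flow records and flags large transfers to destinations outside a known-good list; the records, threshold, and allow-list are all illustrative:

```python
# Sketch: flag hosts sending unusually large volumes to destinations not on
# a known-good list - a coarse proxy for the exfiltration hunts above.

from collections import defaultdict

KNOWN_GOOD = {"backup.example.com"}        # sanctioned bulk destinations
BYTES_THRESHOLD = 1_000_000_000            # 1 GB per destination per day

FLOWS = [  # (src_host, dst, bytes_out)
    ("wks-014", "backup.example.com", 5_000_000_000),
    ("wks-022", "203.0.113.50",       2_500_000_000),
]

totals = defaultdict(int)
for src, dst, nbytes in FLOWS:
    if dst not in KNOWN_GOOD:
        totals[(src, dst)] += nbytes

for (src, dst), nbytes in totals.items():
    if nbytes > BYTES_THRESHOLD:
        print(f"{src} -> {dst}: {nbytes / 1e9:.1f} GB to uncommon destination")
```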

Summary

Ransomware is never good news when it shows up at the doorstep. However, with disciplined network threat hunting and monitoring, it is possible to identify an attack early in the lifecycle. Many of the early warning signs are visible on the network and threat hunters would be well served to identify these and thus help mitigate impact.

DaaS, BYOD, leasing and buying: Which is better for cybersecurity?

In the digital age, staff expect employers to provide hardware, and companies need hardware that allows employees to work efficiently and securely. There are already a number of models to choose from to purchase and manage hardware, however, with remote work policies becoming more popular, enterprises have to prioritize cybersecurity when making their selection.

The COVID-19 pandemic and the shift online have brought to light the need for robust cybersecurity strategies and technology that facilitates safe practices. Since the pandemic started, the FBI has reported a 300 percent increase in cybercrime. As more businesses are forced to operate at a distance, hackers are taking advantage of weak links in their networks. At the same time, the crisis has meant many enterprises have had to cut their budgets, and so they risk compromising cybersecurity when opting for more cost-effective measures.

Currently, Device-as-a-Service (DaaS), Bring-Your-Own-Device (BYOD) and leasing/buying are some of the most popular hardware options. To determine which is most appropriate for your business’s cybersecurity needs, here are the pros and cons of each:

Device-as-a-Service (DaaS)

DaaS models are when an organization distributes hardware like computers, tablets, and phones to employees with preconfigured and customized services and software. For many enterprises, DaaS is attractive because it allows them to acquire technology without having to outright buy, set up, and manage it – therefore saving time and money in the long run. Because of DaaS’s growing popularity, 65 percent of major PC manufacturers now offer DaaS capabilities, including Apple and HP.

When it comes to cybersecurity, DaaS is favorable because providers are typically experts in the field. In the configuration phase, they are responsible for ensuring that all devices have the latest security protections installed as standard, and they are also responsible for maintaining such protections. Once the hardware is in use, DaaS models allow providers to monitor the company’s entire fleet – checking that all devices adhere to security policies, including protocols around passwords, approved apps, and accessing sensitive data.

Another bonus is that DaaS can offer analytical insights about hardware, such as device location and condition. With this information, enterprises can be alerted if tech is stolen, missing or outdated and a threat to overall cybersecurity. Not to mention, a smart way to boost the level of protection given by DaaS models is to integrate it with Unified Endpoint Management (UEM). UEM helps businesses organize and control internet-enabled devices from a single interface and uses mobile threat detection to identify and thwart vulnerabilities or attacks among devices.

Nonetheless, to effectively utilize DaaS, enterprises have to determine their own relevant security principles before adopting the model. They then need an in-depth understanding of how these principles are applied throughout DaaS services and what level of assurance the provider offers that they are enforced. Assuming that DaaS completely removes enterprises from involvement in device cybersecurity would be unwise.

Bring-Your-Own-Device (BYOD)

BYOD is when employees use their own mobile devices, laptops, PCs, and tablets for work. In this scenario, companies have greater flexibility and can make significant cost savings, but there are many more risks associated with personal devices than with corporate-issued devices. Although BYOD is favorable among employees – who can use devices they are more familiar with – enterprises essentially lose control and visibility of how data is transmitted, stored, and processed.

Personal devices are dangerous because hackers can create a sense of trust via personal apps on the hardware and more easily coerce users into sharing business details or downloading malicious content. Plus, with BYOD, companies are dependent on employees keeping all their personal devices updated with the most current protective services. One employee forgetting to do so could negate the cybersecurity of the overall network.

Similar to DaaS, UEM can also help companies that have adopted BYOD take a more centralized approach to managing the risk of exposing their data to malicious actors. For example, UEM can block websites or content from personal devices, as well as implement passcodes and device and disk encryption. Alternatively, VPNs are commonly used to enhance cybersecurity in companies that allow BYOD: during the COVID-19 pandemic, 68 percent of employees say their company has expanded VPN usage as a direct result of the crisis. It’s worth noting, though, that VPNs only encrypt data accessed via the internet and cloud-based services.

When moving forward with BYOD models, enterprises must host regular training and education sessions around safe practices on devices, including recognizing threats, avoiding harmful websites, and the importance of upgrading. They also need to have documented and tested computer security incident response plans, so if any attacks do occur, they are contained as soon as possible.

Leasing / buying

Leasing hardware is when enterprises obtain equipment on a rental basis, in order to retain working capital that can be invested in other areas. In the past, as many as 80 percent of businesses chose to lease their hardware. The trend is less popular today, as SaaS products have proven to be more tailored and scalable.

Still, leasing is beneficial because rather than jeopardizing cybersecurity to purchase large volumes of hardware, enterprises can rent fully covered devices. Likewise, because the latest software typically requires the latest hardware, companies can rent the most recent tech at a fraction of the retail cost.

Comparable to DaaS providers, leasing companies are responsible for device maintenance and have to ensure that every laptop, phone, and tablet has the appropriate security software. Again, however, this does not absolve enterprises from taking an active role in cybersecurity implementation and surveillance.

Unlike leasing, where there can be uncertainty over who owns the cybersecurity strategy, buying is more straightforward. Purchasing hardware outright means companies have complete control over devices and can cherry-pick cybersecurity features to include. It also means they can be more flexible with cybersecurity partners, running trials with different solutions to evaluate which is the best fit.

That said, buying hardware has a noticeable downside: equipment becomes obsolete once new versions are released. In fact, 73 percent of senior enterprise leaders agree that an abundance of outdated equipment leaves companies vulnerable to data security breaches. Considering that the average product cycle takes only 12 to 24 months and that thousands of hardware manufacturers are at work, devices can swiftly become outdated.

Additionally, because buying is a more permanent commitment, enterprises run the risk of being stuck with hardware that has been compromised. Unlike software, which can be patched relatively easily, hardware often has to be sent off-site for repairs. This may result in enterprises with limited hardware continuing to use damaged or unprotected devices to avoid downtime in workflows.

If and when a company does decide to dispose of hardware, there are complications around guaranteeing that systems are completely wiped and that databases or networks cannot be accessed afterwards. In contrast, providers in DaaS and leasing models expertly wipe devices at the end of contracts or before disposal, so enterprises don’t have to be concerned about unauthorized access.

Putting cybersecurity front-and-center

DaaS, BYOD, and leasing/buying all have their own unique benefits when it comes to cybersecurity. Despite all the perks, it has to be acknowledged that BYOD and leasing pose the biggest obstacles for enterprises because they take cybersecurity monitoring and control out of companies’ hands. Nevertheless, for all the options mentioned, UEM is a valuable way to bridge gaps and empower businesses to be in control of cybersecurity, while still being agile.

Ultimately, the most impactful cybersecurity measures are the ones that enterprises are firmly vested in, whatever hardware model they adopt. Businesses should never underestimate the power of a transparent, well-researched, and constantly evolving security framework – one which a hardware model complements, not solely creates.

Secure data sharing in a world concerned with privacy

The ongoing debate surrounding privacy protection in the global data economy reached a fever pitch with July’s “Schrems II” ruling at the European Court of Justice, which struck down the Privacy Shield – a legal mechanism enabling companies to transfer personal data from the EU to the US for processing – potentially disrupting the business of thousands of companies.

The plaintiff, Austrian privacy advocate Max Schrems, claimed that US privacy legislation was insufficiently robust to prevent national security and intelligence authorities from acquiring – and misusing – Europeans’ personal data. The EU’s top court agreed, abolishing the Privacy Shield and requiring American companies that exchange data with European partners to comply with the standards set out by the GDPR, the EU’s data privacy law.

Following this landmark ruling, ensuring the secure flow of data from one jurisdiction to another will be a significant challenge, given the lack of an international regulatory framework for data transfers and emerging conflicts between competing data privacy regulations.

This comes at a time when the COVID-19 crisis has further underscored the urgent need for collaborative international research involving the exchange of personal data – in this case, sensitive health data.

Will data protection regulations stand in the way of this and other vital data sharing?

The Privacy Shield was a stopgap measure to facilitate data-sharing between the US and the EU which ultimately did not withstand legal scrutiny. Robust, compliant-by-design tools beyond contractual frameworks will be required in order to protect individual privacy while allowing data-driven research on regulated data and business collaboration across jurisdictions.

Fortunately, innovative privacy-enhancing technologies (PETs) can be the stable bridge connecting differing – and sometimes conflicting – privacy frameworks. Here’s why policy alone will not suffice to resolve existing data privacy challenges – and how PETs can deliver the best of both worlds:

A new paradigm for ethical and secure data sharing

The abolition of the Privacy Shield poses major challenges for over 5,000 American and European companies which previously relied on its existence and must now confront a murky legal landscape. While big players like Google and Zoom have the resources to update their compliance protocols and negotiate legal contracts between transfer partners, smaller innovators lack these means and may see their activities slowed or even permanently halted. Privacy legislation has already impeded vital cross-border research collaborations – one prominent example is the joint American-Finnish study regarding the genomic causes of diabetes, which “slowed to a crawl” due to regulations, according to the head of the US National Institutes of Health (NIH).

One response to the Schrems II ruling might be expediting moves towards a federal data privacy law in the US. But this would take time: in Europe, over two years passed between the adoption of GDPR and its enforcement. Given that smaller companies are facing an immediate legal threat to their regular operations, a federal privacy law might not come quickly enough.

Even if such legislation were to be approved in Washington, it is unlikely to be fully compatible with GDPR – not to mention widening privacy regulations in other countries. The CCPA, the major statewide data protection initiative, is generally considered less stringent than GDPR, meaning that even CCPA-compliant businesses would still have to adapt to European standards.

In short, the existing legislative toolbox is insufficient to protect the operations of thousands of businesses in the US and around the world, which is why it’s time for a new paradigm for privacy-preserving data sharing based on Privacy-Enhancing Technologies.

The advantages of privacy-enhancing technologies

Compliance costs and legal risks are prompting companies to consider an innovative data sharing method based on PETs: a class of technologies that protect data along its lifecycle while maintaining its utility, even for advanced AI and machine learning processes, and that can help bridge competing privacy frameworks. PETs allow their users to harness the benefits of big data while protecting personally identifiable information (PII) and other sensitive information, thus maintaining stringent privacy standards.

One such PET playing a growing role in privacy-preserving information sharing is Homomorphic Encryption (HE), a technique regarded by many as the holy grail of data protection. HE enables multiple parties to securely collaborate on encrypted data by conducting analysis on data which remains encrypted throughout the process, never exposing personal or confidential information. Through HE, companies can derive the necessary insights from big data while protecting individuals’ personal details – and, crucially, while remaining compliant with privacy legislation because the data is never exposed.
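
To illustrate the principle (and only the principle), the sketch below uses the Paillier cryptosystem, which is additively homomorphic: multiplying two ciphertexts yields an encryption of the sum of their plaintexts, so a third party can total values it can never read. Production HE deployments use lattice-based schemes such as BFV or CKKS with far larger parameters; the primes here are deliberately tiny demo values.

    # Toy Paillier demo: compute on encrypted data without decrypting it.
    # NOT production crypto; real keys use ~2048-bit primes and vetted libraries.
    import math, random

    p, q = 1789, 1867                 # tiny demo primes
    n, n2, g = p * q, (p * q) ** 2, p * q + 1
    lam = math.lcm(p - 1, q - 1)
    mu = pow(lam, -1, n)              # valid because g = n + 1

    def encrypt(m):
        r = random.randrange(2, n)
        while math.gcd(r, n) != 1:    # randomness must be coprime to n
            r = random.randrange(2, n)
        return pow(g, m, n2) * pow(r, n, n2) % n2

    def decrypt(c):
        return (pow(c, lam, n2) - 1) // n * mu % n

    a, b = encrypt(123), encrypt(456)
    total = a * b % n2                # ciphertext multiply == plaintext add
    assert decrypt(total) == 579      # 123 + 456, computed while encrypted
    print("sum recovered from ciphertexts:", decrypt(total))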

Jim Halpert, a data regulation lawyer who helped draft the CCPA and is Global Co-Chair of the Data Protection, Privacy and Security practice at DLA Piper, views certain solutions based on HE as effective compliance tools.

“Homomorphic Encryption encrypts data elements in such a way that they cannot identify, describe or in any way relate to a person or household. As a result, homomorphically encrypted data cannot be considered ‘personal information’ and is thus exempt from CCPA requirements,” Halpert says. “Companies which encrypt data through HE minimize the risk of legal threats, avoid CCPA obligations, and eliminate the possibility that a third party could mistakenly be exposed to personal data.”

The same principle applies to GDPR, which requires any personally identifiable information to be protected.

HE is applicable to any industry and activity which requires sensitive data to be analyzed by third parties; for example, research such as genomic investigations into individuals’ susceptibility to COVID-19 and other health conditions, and secure data analysis in the financial services industry, including financial crime investigations across borders and institutions. In these cases, HE enables users to legally collaborate across different jurisdictions and regulatory frameworks, maximizing data value while minimizing privacy and compliance risk.

PETs will be crucial in allowing data to flow securely even after the Privacy Shield has been lowered. The EU and the US have already entered negotiations aimed at replacing the Privacy Shield, but while a palliative solution might satisfy business interests in the short term, it won’t remedy the underlying problems inherent to competing privacy frameworks. Any replacement would face immediate legal challenges in a potential “Schrems III” case. Tech is in large part responsible for the growing data privacy quandary. The onus, then, is on tech itself to help facilitate the free flow of data without undermining data protection.

What are the traits of an effective CISO?

Only 12% of CISOs excel in all four categories of the Gartner CISO Effectiveness Index.


“Today’s CISOs must demonstrate a higher level of effectiveness than ever before,” said Sam Olyaei, research director at Gartner.

“As the push to digital deepens, CISOs are responsible for supporting a rapidly evolving set of information risk decisions, while also facing greater oversight from regulators, executive teams and boards of directors. These challenges are further compounded by the pressure that COVID-19 has put on the information security function to be more agile and flexible.”

The survey was conducted among 129 heads of information risk functions, across all industries, globally in January 2020. The measure of CISO effectiveness is determined by a CISO’s ability to execute against a set of outcomes in the four categories of functional leadership, information security service delivery, scaled governance and enterprise responsiveness.

Each respondent’s score in each category was added together to calculate their overall effectiveness score. “Effective CISOs” are those who scored in the top one-third of the CISO effectiveness measure.

Top-performing CISOs demonstrate five key behaviors

Of the factors that impact CISO effectiveness, five behaviors significantly differentiate top-performing CISOs from bottom performers. On average, each of these behaviors is twice as prevalent in top performers as in bottom performers.

“A clear trend among top-performing CISOs is demonstrating a high level of proactiveness, whether that’s staying abreast of evolving threats, communicating emerging risks with stakeholders or having a formal succession plan,” said Mr. Olyaei. “CISOs should prioritize these kinds of proactive activities to boost their effectiveness.”

The survey also found that top-performing CISOs regularly meet with three times as many non-IT stakeholders as IT stakeholders. Two-thirds of these top performers meet at least once per month with business unit leaders, while 43% meet with the CEO, 45% with the head of marketing and 30% with the head of sales.

“CISOs have historically built fruitful relationships with IT executives, but digital transformation has further democratized information security decision making,” added Daria Krilenko, senior research director at Gartner.

“Effective CISOs keep a close eye on how risks are evolving across the enterprise and develop strong relationships with the owners of that risk – senior business leaders outside of IT.”

Effective CISOs are better at managing stress

The survey also found that highly effective CISOs better manage workplace stressors. Just 27% of top-performing CISOs feel overloaded with security alerts, compared with 62% of bottom performers. Furthermore, less than a third of top performers feel that they face unrealistic expectations from stakeholders, compared with half of bottom-performing CISOs.

“As the CISO role becomes increasingly demanding, the most effective security leaders are those who can manage the stressors that they face daily,” said Mr. Olyaei.

“Actions such as keeping a clear distinction between work and nonwork, setting explicit expectations with stakeholders, and delegating or automating tasks are essential for enabling CISOs to function at a high level.”

5 simple steps to bring cyber threat intelligence sharing to your organization

Cyber threat intelligence (CTI) sharing is a critical tool for security analysts. It takes the learnings from a single organization and shares them across the industry to strengthen the security practices of all.


By sharing CTI, security teams can alert each other to new findings across the threat landscape and flag active cybercrime campaigns and indicators of compromise (IOCs) that the cybersecurity community should be immediately aware of. As this intel spreads, organizations can work together to build upon each other’s defenses to combat the latest threat. This creates a herd-like immunity for networks as defensive capabilities are collectively raised.

Blue teams need to act more like red teams

A recent survey by Exabeam showed that 62 percent of blue teams have difficulty stopping red teams during adversary simulation exercises. A blue team is charged with defending one network. They have the benefit of knowing the ins and outs of their network better than any red team or cybercriminal, so they are well-equipped to spot abnormalities and IOCs and act fast to mitigate threats.

But blue teams operate at a serious disadvantage: they mostly work in silos consisting only of members of their immediate team. They typically don’t share their threat intelligence with other security teams, vendors, or industry groups. This means they see cyber threats through a single lens. They lack the broader view of the real threat landscape external to their organization.

This disadvantage is where red teams and cybercriminals thrive. Not only do they choose the rules of the game – the when, where, and how the attack will be executed – they share their successes and failures with each other to constantly adapt and evolve tactics. They thrive in a communications-rich environment, sharing frameworks, toolkits, guidelines, exploits, and even offering each other customer support-like help.

For blue teams to move from defense to prevention, they need to take defense to the attacker’s front door. This proactive approach can only work by having timely, accurate, and contextual threat intelligence. And that requires a community, not a company. But many companies are hesitant to join the CTI community. The SANS 2020 Cyber Threat Intelligence Survey shows that more than 40% of respondents both produce and consume intelligence, leaving much room for improvement over the next few years.

Common challenges for beginning a cyber threat intelligence sharing program

One of the biggest challenges to intelligence sharing is that businesses don’t understand how sharing some of their network data can actually strengthen their own security over time. Much like the early days of open-source software, there’s a fear that if you have anything open to exposure it makes you inherently more vulnerable. But as open source eventually proved, more people collaborating in the open can lead to many positive outcomes, including better security.

Another major challenge is that blue teams don’t have the lawless luxury of sharing threat intelligence with reckless abandon: we have legal teams, and legal teams aren’t thrilled with the notion of admitting to IOCs on their network. There is also a lot of business-sensitive information that shouldn’t be shared, and the legal team is right to protect it.

The opportunity is in finding an appropriate line to walk, where you can share intelligence that contributes to bolstering cyber defense in the larger community without doing harm to your organization.

If you’re new to CTI sharing and want to get involved, here are a few pieces of advice.

Clear it with your manager

If you or your organization are new to CTI sharing, the first thing to do is get your manager’s blessing before you move forward. Being overconfident in your organization’s appetite to share its network data (especially if they don’t understand the benefits) can be a costly, yet avoidable, mistake.

Start sharing small

Don’t start by asking permission to share details on a data exfiltration event that currently has your company in crisis mode. Instead, ask if it’s OK to share a range of IPs that have been brute-forcing logins on your site. Or perhaps you’ve seen a recent surge of phishing emails originating from a new domain and want to share that. Make continuous, small asks and report back any useful findings.
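
For example, a brute-forcing IP range like the one above is commonly shared as a STIX indicator, the format most CTI exchanges and TAXII feeds understand. Here is a minimal sketch using the OASIS stix2 Python package; the address range, name, and description are placeholders.

    # Package a shareable IOC as a STIX 2.1 Indicator (pip install stix2).
    # The CIDR range and wording below are illustrative placeholders.
    from datetime import datetime, timezone
    from stix2 import Indicator

    indicator = Indicator(
        name="Brute-force logins against customer portal",
        description="Repeated failed logins from this range over 48 hours.",
        pattern="[ipv4-addr:value ISSUBSET '203.0.113.0/24']",
        pattern_type="stix",
        valid_from=datetime.now(timezone.utc),
        indicator_types=["malicious-activity"],
    )
    print(indicator.serialize(pretty=True))  # JSON, ready for a TAXII feed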

Share your experience when you can’t share intelligence

When you join a CTI group, you’re going to want to show that you’re an active, engaged member. But sometimes you just don’t have any useful intelligence to share. You can still add value to the group by lending your knowledge and experience. Your perspective might change someone’s mind on their process and make them a better practitioner, thus adding to the greater good.

Demonstrate value of sharing CTI

Tie your participation in CTI groups to any metrics that show your organization’s security posture has improved during that time. For example, show any time that participation in a CTI group directly led to intelligence that helped decrease alerted events or helped your team get ahead of a new attack.

There’s a CTI group for everyone

From disinformation and dark web to medical devices and law enforcement, there’s a CTI segment for everything you ever wanted to be involved in. Some are invite-only, so the more active you are in public groups, the more likely you’ll be asked to join groups that you’ve shown interest in or have provided useful intelligence on. These hyper-niche groups can provide big value to your organization, as you can get expert consulting from top minds in the field.

The more data you have, the more points you can correlate, and the faster you can do it. Joining a CTI sharing group gives you access to data you’d never otherwise know about, informing better decision making around your defensive actions. More importantly, CTI sharing makes all organizations more secure and unites us under a common cause.

Justifying your 2021 cybersecurity budget

Sitting in the midst of an unstable economy, a continued public health emergency, and facing an uptick in successful cyber attacks, CISOs find themselves needing to enhance their cybersecurity posture while remaining within increasingly scrutinized budgets.


Senior leadership recognizes the value of cybersecurity but understanding how to best allocate financial resources poses an issue for IT professionals and executive teams. As part of justifying a 2021 cybersecurity budget, CISOs need to focus on quick wins, cost-effective SaaS solutions, and effective ROI predictions.

Finding the “quick wins” for your 2021 cybersecurity budget

Cybersecurity, particularly with organizations suffering from technology debt, can be time-consuming. Legacy technologies, including internally designed tools, create security challenges for organizations of all sizes.

The first step to determining the “quick wins” for 2021 lies in reviewing the current IT stack for areas that have become too costly to support. For example, as workforce members moved off-premises during the current public health crisis, many organizations found that their technology debt made this shift difficult. With workers no longer accessing resources from inside the organization’s network, organizations with rigid technology stacks struggled to pivot their work models.

Going forward, remote work appears to be one way through the current health and economic crises. Even major technology leaders who traditionally relied on in-person workforces have moved to remote models through mid-2021, with Salesforce the most recent to announce this decision.

Looking for gaps in security, therefore, should be the first step in any budget analysis. As part of this gap analysis, CISOs can look in the following areas:

  • VPN and data encryption
  • Data and user access
  • Cloud infrastructure security

Each of these areas can provide quick wins if addressed correctly: as organizations accelerate their digital transformation strategies to match these new workplace situations, they can now leverage cloud-native security solutions.

Adopting SaaS security solutions for accelerating security and year-over-year value

The SaaS-delivered security solution market exploded over the last five to ten years. As organizations moved their mission-critical business operations to the cloud, cybercriminals focused their activities on these resources.

Interestingly, a CNBC article from July 14, 2020 noted that for the first half of 2020, the number of reported data breaches dropped by 33%. Meanwhile, another CNBC article from July 29, 2020 notes that during the first quarter, large scale data breaches increased by 273% compared to the same time period in 2019. Although the data appears conflicting, the Identity Theft Resource Center research that informed the July 14th article specifically notes, “This is not expected to be a long-term trend as threat actors are likely to return to more traditional attack patterns to replace and update identity information needed to commit future identity and financial crimes.” In short, rapidly closing security gaps as part of a 2021 cybersecurity budget plan needs to include the fast wins that SaaS-delivered solutions provide.

SaaS security solutions offer two distinct budget wins for CISOs. First, they offer rapid integration into the organization’s IT stack. In some cases, CISOs can get a SaaS tool deployed within a few weeks, in other cases within a few months. Deployment time depends on the complexity of the problem being solved, the type of integrations necessary, and the enterprise’s size. However, in the same way that agile organizations leverage cloud-based business applications, security teams can leverage rapid deployment of cloud-based security solutions.

The second value that SaaS security solutions offer is YoY savings. Subscription models offer budget-conscious organizations several distinct value propositions. First, the organization can reduce hardware maintenance costs, including operational costs, upgrade costs, software costs, and servicing costs. Second, SaaS solutions often enable companies to focus on their highest-risk assets first and then increase their usage in the future. Third, they allow organizations to pivot more effectively because the reduced up-front capital outlay reduces the commitment to the project.

Applying a dollar value to these during the budget justification process might feel difficult, but the right key performance indicators (KPIs) can help establish baseline cost savings estimates.

Choosing the KPIs for effective ROI predictions

During an economic downturn, justifying cybersecurity budget requests might be increasingly difficult. Most cybersecurity ROI predictions rely on risk evaluations, applying the probability of a data breach to the projected cost of a data breach. As organizations look to reduce costs to remain financially viable, a “what if” approach may not be as appealing.

However, as part of budgeting, CISOs can look to several value propositions to bolster their spending. Cybersecurity initiatives focus on leveraging resources effectively so that they can ensure the most streamlined process possible while maintaining a robust security program. Aligning purchase KPIs with specific reduced operational costs can help gain buy-in for the solution.

A quick hypothetical can walk through the overarching value of SaaS-based security spending. Continuous monitoring for external-facing vulnerabilities is time-consuming and often incorporates inefficiency. Hypothetical numbers based on research indicate:

  • A poll of C-level security executives noted that 37% said they received more than 10,000 alerts each month, with 52% of those alerts identified as false positives.

  • The average security analyst spends ten minutes responding to a single alert.
  • The average security analyst makes approximately $91,000 per year.

Bringing this data together shows the value of SaaS-based solutions that reduce the number of false positives (a quick calculation reproducing the math follows the list):

  • Every month, enterprise security analysts spend 10 minutes on each of the 5,200 false positives.
  • This equates to approximately 866 hours.
  • 866 hours, assuming a 40-hour week, is 21.65 weeks.
  • Assuming 4 weeks per month, the enterprise needs at least 5 security analysts to manage false positive responses.
  • These 5 security analysts cost a total of $455,000 per year in salary, not including bonuses and other benefits.
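
Re-running the same figures in a few lines makes the case reproducible with your own alert volume and salary inputs:

    # Back-of-the-envelope analyst cost of false positives, using the
    # survey figures quoted above; swap in your own inputs.
    alerts_per_month = 10_000
    false_positive_rate = 0.52
    minutes_per_alert = 10
    salary = 91_000                  # average analyst salary, USD/year

    fps = alerts_per_month * false_positive_rate        # 5,200
    hours = fps * minutes_per_alert / 60                # ~866.7
    weeks = hours / 40                                  # ~21.7
    analysts = round(weeks / 4)                         # ~5 FTEs
    print(f"{fps:.0f} false positives -> {hours:.0f} hours -> "
          f"{analysts} analysts (~${analysts * salary:,}/year)")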

Although CISOs may not want to reduce the number of team members, they may not want to add additional ones, or they may be seeking to optimize the team they have. Tracking KPIs such as the reduction in false positives per month can provide the kind of long-term cost value that resonates with other senior executives and the board of directors.

Securing a 2021 cybersecurity budget

While the number of attacks may have stalled during 2020, cybercriminals have not stopped targeting enterprise data. Phishing and malware attacks have moved away from the enterprise network level and now look to infiltrate end-user devices. As organizations continue to pivot their operating models, they need to look for cost-effective ways to secure their sensitive resources and data. However, budget constraints arising from 2020’s economic instability may make it difficult for CISOs to gain the requisite dollars to continue to apply security best practices.

As organizations start looking toward their 2021 roadmap, CISOs will increasingly need to be specific about not only the costs associated with purchases but also the cost savings that those purchases provide from both data incident risk and operational cost perspective.

How security theater misses critical gaps in attack surface and what to do about it

Bruce Schneier coined the phrase security theater to describe “security measures that make people feel more secure without doing anything to actually improve their security.” That’s the situation we still face today when it comes to defending against cybersecurity risks.

The insurance industry employs actuaries to help quantify and manage the risks insurance underwriters take. The organizations and individuals that in turn purchase insurance policies also look at their own biggest risks and the likelihood they will occur, and opt accordingly for various deductibles and riders.

Things do not work the same way when it comes to cybersecurity. For example, Gartner observed that most breaches are the result of a vulnerability being exploited. Furthermore, it estimates that 99% of vulnerabilities exploited are already known by the industry and are not net-new zero-day vulnerabilities.

How is it possible that well-known vulnerabilities are a significant conduit for attackers when organizations collectively spend at least $1B on vulnerability scanning annually? Among other things, it’s because organizations are practicing a form of security theater: they are focusing those vulnerability scanners on what they know and what is familiar; sometimes they are simply attempting to fulfill a compliance requirement.

While there has been a strong industry movement towards security effectiveness and productivity, with approaches favoring prioritizing alerts, investigations and activities, there are still a good number of security theatrics carried out in many organizations. Many simply continue conducting various security processes and maintaining security solutions that may have been valuable at one time, but now don’t address the right concerns.

Broaching a concern such as security theater with security professionals can provoke defensiveness or ire at disturbing a well-established process or, worse, an assumption that some level of foolishness or ineptitude is being implied. Rather than lambasting security theater practices outright, a better approach is to systematically consider what gaps may exist in your organization’s security posture. Part of this exercise requires asking yourself what you don’t know. That might seem like a paradox: how does one know what one does not know?

The idea of not knowing what you don’t know is a topic that frequently turns up on CISOs’ list of reasons that “keep them up at night.” The challenge with this type of security issue is less about swiftly applying software patches or assessing vulnerabilities of identified infrastructure. Here the main concern is to identify what might be completely unaddressed: is there some aspect of the IT ecosystem that is unprotected or could serve as an effective conduit to other resources? The question is basically, “What have we overlooked?” or “What asset or business system might be completely unknown, forgotten or not under our control?” The issue is not about the weakness of the known attack surface. It’s about the unknown attack surface that is not protected.

Sophisticated attackers are adept at developing a complete picture of an organization’s entire attack surface. There are numerous tools, techniques and even hacking services that can help attackers with this task. Most attackers are pragmatic and even business-oriented, and their goal is to find the path of least resistance that will provide the greatest payoff. Often this means focusing on the least monitored and least protected part of an organization’s attack surface.

Attackers are adept at finding internet-exposed, unprotected assets or systems. Often these are forgotten or unknown assets that are both an easy entrance to a company’s network as well as valuable in their own right. The irony is that attackers therefore often have a truer picture of an attack surface than the security team charged with defending it.

Interestingly, a security organization’s effectiveness is often diminished by its own constraints, because the team will focus on what they know they need to protect, along with the established processes for doing that. Attackers have no such constraints. Rather than following prescribed rules or managing by tradition, attackers will first perform reconnaissance and pursue intelligence to find the places of greatest weakness. Attackers look for these unprotected spots and favor them over resources that are actively monitored and defended.

Security organizations, on the other hand, typically start and end their assessments with their known assets. Security theater has them devoting too much focus to the known and not enough on the unknown.

Even well-established practices such as penetration testing, vulnerability assessment and security ratings can amount to security theater because they revolve around what is known. To move beyond theatrics into real effectiveness, security teams need to develop new processes to uncover the unknowns that are part of their IT ecosystem, because that is exactly what attackers target. Few organizations are able to do this type of discovery and detection today, whether because of the existing workload or the level of expertise needed to do a complete assessment. In addition, bias based on pre-existing perceptions of the organization’s security posture commonly influences the search for the previously unknown.

The process of discovering previously unknown, exposed assets should be done on a regular basis. Automating this process—particularly due to the range of cloud, partner and subsidiary IT that must be considered—makes it more viable. While automation is necessary, it is still important for fully trained researchers to be involved to tune the process, interpret results and ensure its proper scope.

Adding a continuous process of identifying unknown, uncontrolled or abandoned assets and systems not only helps close gaps, but it expands the purview of security professionals to focus on not just what they know, but to also start considering what they do not know.

Attacked by ransomware? Five steps to recovery

Ransomware has been noted by many as the most threatening cybersecurity risk for organizations, and it’s easy to see why: in 2019, more than 50 percent of all businesses were hit by a ransomware attack – costing an estimated $11.5 billion. In the last month alone, major consumer corporations, including Canon, Garmin, Konica Minolta and Carnival, have fallen victim to major ransomware attacks, resulting in the payment of millions of dollars in exchange for file access.


While there is a lot of discussion about preventing ransomware from affecting your business, the best practices for recovering from an attack are a little harder to pin down.

While the monetary amounts may be smaller for your organization, the importance of regaining access to the information is just as high. What steps should you take for effective ransomware recovery? A few of our best tips are below.

1. Infection detection

Arguably the most challenging step for recovering from a ransomware attack is the initial awareness that something is wrong. It’s also one of the most crucial. The sooner you can detect the ransomware attack, the less data may be affected. This directly impacts how much time it will take to recover your environment.

Ransomware is designed to be very hard to detect. When you see the ransom note, it may have already inflicted damage across the entire environment. Having a cybersecurity solution that can identify unusual behavior, such as abnormal file sharing, can help quickly isolate a ransomware infection and stop it before it spreads further.

Abnormal file behavior detection is one of the most effective means of detecting a ransomware attack, and it presents the fewest false positives compared to signature-based or network traffic-based detection.

One additional method of detecting a ransomware attack is to use a signature-based approach. The issue with this method is that it requires the ransomware to be known: if the code is available, software can be trained to look for it. This is not recommended, however, because sophisticated attacks use new, previously unknown forms of ransomware. Thus, an AI/ML-based approach is recommended, one that looks for behaviors such as rapid, successive encryption of files and determines that an attack is happening.
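
A bare-bones sketch of behavior-based detection follows: it polls a file share for bursts of rewritten files whose new contents look encrypted (near-random bytes have Shannon entropy close to 8 bits per byte). The watched path, interval, and thresholds are illustrative assumptions; a real tool would hook filesystem events instead of polling, and would trigger containment rather than just printing.

    # Poll a share for bursts of high-entropy rewrites, a crude stand-in
    # for the ML-driven behavioral detection described above.
    import math, os, time

    WATCH_DIR = "/srv/shares/finance"   # hypothetical monitored share
    BURST_THRESHOLD = 50                # suspicious rewrites per scan
    ENTROPY_THRESHOLD = 7.5             # bits/byte; encrypted data nears 8.0

    def entropy(data: bytes) -> float:
        if not data:
            return 0.0
        freqs = [data.count(b) for b in set(data)]
        return -sum(c / len(data) * math.log2(c / len(data)) for c in freqs)

    mtimes = {}
    while True:
        suspicious = 0
        for root, _, files in os.walk(WATCH_DIR):
            for name in files:
                path = os.path.join(root, name)
                try:
                    mtime = os.path.getmtime(path)
                    changed = path in mtimes and mtimes[path] != mtime
                    mtimes[path] = mtime
                    if changed:
                        with open(path, "rb") as f:
                            sample = f.read(4096)
                        if entropy(sample) > ENTROPY_THRESHOLD:
                            suspicious += 1   # rewritten AND looks encrypted
                except OSError:
                    continue                  # file vanished or unreadable
        if suspicious >= BURST_THRESHOLD:
            print(f"[!] {suspicious} high-entropy rewrites in one interval")
        time.sleep(30)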

Effective cybersecurity also includes good defensive mechanisms that protect business-critical systems like email. Often ransomware affects organizations by means of a phishing email attack or an email that has a dangerous file attached or hyperlinked.

If organizations are ill-equipped to handle dangerous emails, this can be an easy way for ransomware to make its way inside the walls of your organization’s on-premises environment or within the cloud SaaS environment. With cloud SaaS environments in particular, controlling third-party applications that have access to your cloud environment is extremely important.

2. Contain the damage

After you have detected an active infection, the ransomware process can be isolated and stopped from spreading further. If this is a cloud environment, these attacks often stem from a remote file sync or other process driven by a third-party application or browser plug-in running the ransomware encryption process. Digging in and isolating the source of the ransomware attack can contain the infection so that the damage to data is mitigated. To be effective, this process must be automated.

Many attacks happen after-hours, when admins are not monitoring the environment, so the reaction must be rapid to stop the spread of the infection. Security policy rules and scripts must be put in place as part of proactive protection. Then, when an infection is identified, the automation kicks in to stop the attack, removing the executable file or extension and isolating the infected files from the rest of the environment.
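
At its simplest, that automation can be a quarantine routine the detector calls as soon as it flags files, pulling them out of the share and stripping write access. The paths and the calling detector here are placeholders; a real playbook would also disable the offending account, token, or application.

    # Minimal containment hook: isolate flagged files in one move.
    import os, shutil, stat

    QUARANTINE_DIR = "/var/quarantine"       # hypothetical isolated location

    def contain(paths, source_desc):
        os.makedirs(QUARANTINE_DIR, exist_ok=True)
        for path in paths:
            dest = os.path.join(QUARANTINE_DIR, os.path.basename(path))
            shutil.move(path, dest)          # remove the file from the share
            os.chmod(dest, stat.S_IRUSR)     # owner read-only, nothing else
        print(f"[containment] isolated {len(paths)} file(s): {source_desc}")

    # Example: contain(["/srv/shares/q3.xlsx.locked"], "suspicious sync plug-in")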

Another way organizations can help protect themselves and contain the damage should an attack occur is by purchasing cyber liability insurance. Cyber liability insurance is a specialty insurance line intended to protect businesses (and the individuals providing services from those businesses) from internet-based risks (like ransomware attacks) and risks related to information technology infrastructure, information privacy, information governance liability, and other related activities. In this type of attack situation, cyber liability insurance can help relieve some of the financial burden of restoring your data.

3. Restore affected data

In most cases, even if the ransomware attack is detected and contained quickly, there will still be a subset of data that needs to be restored. This requires having good backups of your data to pull back to production. Following the 3-2-1 backup best practice, it’s imperative to have your backup data in a separate environment from production.

The 3-2-1 backup rule consists of the following guidelines:

  • Keep 3 copies of any important file, one primary and two backups
  • Keep the file on 2 different media types
  • Maintain 1 copy offsite

If your backups are of cloud SaaS environments, storing these “offsite” using a cloud-to-cloud backup vendor aligns with this best practice. This will significantly minimize the chance that your backup data is affected along with your production data.
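
The rule is concrete enough to audit in a few lines. This sketch checks a hypothetical backup inventory against all three conditions:

    # Quick 3-2-1 self-check; the inventory entries are invented examples.
    copies = [
        {"location": "onsite",  "medium": "disk"},    # primary
        {"location": "onsite",  "medium": "tape"},    # backup 1
        {"location": "offsite", "medium": "cloud"},   # backup 2 (cloud-to-cloud)
    ]

    three = len(copies) >= 3                                # 3 copies
    two = len({c["medium"] for c in copies}) >= 2           # 2 media types
    one = any(c["location"] == "offsite" for c in copies)   # 1 offsite
    print("3-2-1 compliant:", three and two and one)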

The tried and true way to recover from a ransomware attack involves having good backups of your business-critical data. The importance of backups cannot be stressed enough when it comes to ransomware. Recovering from backup allows you to be in control of getting your business data back and not the attacker.

All too often, businesses may assume incorrectly that the cloud service provider has “magically protected” their data. While there are a few mechanisms in place on the cloud service provider side, ultimately the data is your responsibility under the shared responsibility model of most CSPs. Microsoft, for example, documents its stance on shared responsibility publicly.

4. Notify the authorities

Many of the major compliance regulations that most organizations fall under today, such as PCI-DSS, HIPAA, GDPR, and others, require that organizations notify regulatory agencies of the breach. Notification of the breach should be immediate and the FBI’s Internet Crime Complaint Center should be the first organization alerted. Local law enforcement should be informed next. If your organization is in a governed industry, there may be strict guidelines regarding who to inform and when.

5. Test your access

Once data has been restored, test access to the data and any affected business-critical systems to ensure the recovery of the data and services have been successful. This will allow any remaining issues to be remedied before turning the entire system back over to production.

If you’re experiencing slower than usual response times in the IT environment or larger-than-normal file sizes, it may be a sign that something sinister is still looming in the database or storage.

Ransomware prevention v. recovery

Sometimes the best offense is a good defense. When it comes to ransomware and regaining access to critical files, there are only two options. You either restore your data from backup if you were forward-thinking enough to have such a system in place, or you have to pay the ransom. Beyond the obvious financial implications of acquiescing to the hacker’s demands, paying is risky because there is no way to ensure they will actually provide access to your files after the money is transferred.

There is no code of conduct or contract when negotiating with a criminal. A recent report found that some 42 percent of organizations that paid a ransom did not get their files decrypted.

Given the rising number of ransomware attacks targeting businesses, the consequences of not having a secure backup and detection system in place could be catastrophic to your business. Investing in a solution now helps ensure you won’t make a large donation to a nefarious organization later. Learning from the mistakes of other organizations can help protect yours from a similar fate.

Ensuring cyber awareness in the healthcare sector

As a result of the COVID-19 pandemic, healthcare professionals have increased their reliance on the internet to carry out their jobs. From connectivity with patients to the interconnectivity of different medical devices passing patient data, the attack surface has expanded dramatically, so cyber awareness has become crucial.


Healthcare under attack: What about cyber awareness?

This increased reliance has made the sector an attractive target for cybercriminals, given the plethora of research, personal, and confidential data available. Recent research surveying healthcare professionals found that 41% are seeing cyberattacks against their organization take place on a weekly basis.

Healthcare organizations have seen a significant rise in prominence over the last few months owing to their key roles in fighting the pandemic. Nations have celebrated the heroes on the frontline in many ways, so why, despite the humanitarian capacity of their roles, are they being targeted by nefarious actors?

Critical national infrastructure

Healthcare plays a fundamental role in supporting a nation and is considered a core part of its critical national infrastructure. With its heightened importance during the current global pandemic, it has rapidly become a very attractive target for nefarious actors intent on causing chaos and disruption by exploiting a time of confusion and uncertainty. Cybercriminals know that denying the services of the healthcare sector at this time would have massive ramifications for the well-being of the nation.

By denying the services or degrading the efficiency of the healthcare sector, a hostile state actor can subvert the credibility of both the government and NHS Trusts. There is also a possibility that, in attacking a healthcare organization that is part of a wider network of infrastructure, it may be possible to pivot to other critical facilities.

This could start with something as simple as an email with a malicious link or document that a healthcare professional clicks on or opens, providing the cybercriminal access to the wider infrastructure. This is a very real possibility, as our recent research found that 25% of healthcare professionals believe their colleagues click on links in emails from unknown sources.

Since the WannaCry attack on the NHS in 2017, the healthcare, pharmaceutical, and biotechnology sectors have been conscious of the possibilities of a ransomware attack. In addition to the loss of sensitive data, ransomware attacks can put the lives of patients at risk.

The race for a vaccine

In addition to the healthcare sector, pharmaceutical and biotechnology organizations are also in a global race to develop cures and vaccines for COVID-19, with an increased reliance on AI within the industry. This can have many benefits, including the acceleration of drug development and the production of medicine. This speed is obviously extremely important now. Despite this, there are also risks with the increased use of AI.

Health technology tools and organizations are more powerful and impactful than ever before, and individuals and organizations within this sector potentially hold the keys to ending the pandemic. As a result, they present more attack surfaces and options for adversaries. One example of this technology is the increased use of mobile devices by healthcare professionals. This can provide great benefits, such as increased availability and efficiency, but also increases opportunities for cybercriminals if not used properly.

Our research found that 81% of healthcare professionals are using corporate devices for personal purposes, which could pose a large cybersecurity risk. This means professionals could be checking emails from compromised inboxes, sending personal emails that may contain bad links, or using online shopping websites that are not secure.

Both biotechnology and pharmaceutical companies have seen an increase in attacks compared to previous years. Reports have found the pharmaceutical industry is now the number one target for cybercriminals globally, especially for intellectual property theft. As these specialized companies move towards increased digitization and a reliance on IT and OT for development, storage, and understanding of more valuable data online, this threat only becomes more real.

Stolen data can either be sold on the dark web or ransomed back to desperate organizations which rely on access to critical documents, such as trial results, patient information, and intellectual property to continue operations.

With the medical sector’s increased reliance on AI comes an increased number of devices and objects dependent on internet connectivity. This single factor leads to an increased number of potentially vulnerable, exploitable access points for malicious actors. Unlike the many “entertainment” devices that aggregate to form our understanding of the IoT, there are multiple connected medical devices that are often unseen, but vital.

Connected medical devices have obvious benefits for clinicians, medical staff, and patients. These devices can instantly exchange data, or instructions on treatment. But this aspect is where some of the greatest dangers lie as the devices are often involved in critical procedures or treatments. Consequently, interference with the signals to a robotic surgical tool, for example, would potentially have devastating consequences.

Maintaining security through education

It is well-documented that healthcare budgets aren’t keeping up with demand, and this may prevent many organizations from maintaining an appropriate and resilient cybersecurity posture. This often results in security policies not being able to keep up, or simply not being considered during the application, maintenance, and through-life support of digital systems.

Because of this, it is even more important that healthcare professionals are as vigilant to cyber-threats as possible. One small example of cyber negligence can lead to a cybersecurity attack – which happens every week for 41% of healthcare IT managers. These can result in service disruption, potentially postponing treatment for patients; or they can lead to huge amounts of data being leaked to hackers with nefarious intent.

How does XDR improve enterprise security in the face of evolving threats?

Cybercriminals will never run out of ways to breach the security protocols enterprises put in place. As security systems upgrade their defenses, attackers also level up their attacks. They develop stealthier ways to compromise networks to avoid detection and enhance the chances of penetration.


Adversarial machine learning, for example, emerges as one of the stealthy cyber threats the security community should watch out for. This attack is barely detectable as it targets the machine learning algorithm itself to weaken its ability to detect intrusions or, worse, to manipulate the system to allow attacks to proceed instead of blocking them.
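
A toy example makes the threat tangible. The sketch below fits a crude linear “detector” to synthetic data, then applies a small FGSM-style perturbation (a step against the sign of the model’s weights) that walks a clearly malicious sample across the decision boundary. Everything here is synthetic and illustrative; real attacks apply the same idea to far richer models.

    # Adversarial-ML toy: a small targeted perturbation flips a linear
    # detector's verdict on synthetic data. Illustrative only.
    import numpy as np

    rng = np.random.default_rng(0)
    d = 100
    # Benign samples cluster around -0.3 per feature, malicious around +0.3.
    X = np.vstack([rng.normal(-0.3, 1.0, (1000, d)),
                   rng.normal(+0.3, 1.0, (1000, d))])
    t = np.array([-1.0] * 1000 + [1.0] * 1000)

    # Least-squares linear classifier: score > 0 means "malicious".
    coef, *_ = np.linalg.lstsq(np.c_[X, np.ones(len(X))], t, rcond=None)
    w, b = coef[:-1], coef[-1]
    score = lambda x: x @ w + b

    x = np.full(d, 0.3)                  # a textbook malicious sample
    x_adv = x - 0.5 * np.sign(w)         # FGSM-style step, half the noise std

    print(f"clean score:     {score(x):+.2f} (flagged: {score(x) > 0})")
    print(f"perturbed score: {score(x_adv):+.2f} (flagged: {score(x_adv) > 0})")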

To counter sophisticated threats similar to adversarial ML, enterprises need to adopt more advanced solutions. Standard Intrusion Detection Systems (IDS) will not cut it. Reactive approaches such as Endpoint Detection and Response (EDR) and Network Traffic Analysis (NTA) do not have the ability to see through concealed attacks. They are good at providing layered visibility into threats, but they are of little use against stealthy cyberattacks.

What organizations need is something more proactive. It has to be a solution capable of identifying hidden, complex threats quickly. It needs to create visibility not only into specific threat instances but into data across networks, endpoints, and cloud infrastructure. It has to be a cross-layered detection and response (XDR) solution.

XDR: An overview

XDR is a method of gathering and automatically correlating information across several security layers to enable rapid threat detection. It monitors threats across different sources or locations within an organization.

Attacks can hide in the security silos created by disconnected solution alerts and by gaps in security triaging and attack investigations. They successfully remain hidden because most security analysts have disconnected and limited viewpoints on an attack.

XDR eliminates security silos through a comprehensive and holistic detection and response strategy. It collects and correlates deep activity data across many security layers, including those configured for endpoints, servers, emails, cloud, and workloads. Automated analysis of this varied data is undertaken so threats are detected faster and security analysts have enough time to conduct thorough investigations.
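
A toy version of that correlation logic makes the idea concrete: alerts from different layers are grouped by host, and several individually ignorable signals inside one time window surface as a single incident. The alert records below are invented for illustration.

    # Cross-layer correlation in miniature: group alerts by host and flag
    # hosts where multiple layers fire within one window. Data is invented.
    from collections import defaultdict
    from datetime import datetime, timedelta

    alerts = [
        {"ts": "2020-09-01T10:02:00", "layer": "email",    "host": "wks-042", "event": "phishing link clicked"},
        {"ts": "2020-09-01T10:05:30", "layer": "endpoint", "host": "wks-042", "event": "macro spawned powershell"},
        {"ts": "2020-09-01T10:07:10", "layer": "network",  "host": "wks-042", "event": "beacon to rare domain"},
        {"ts": "2020-09-01T13:40:00", "layer": "endpoint", "host": "wks-017", "event": "usb device mounted"},
    ]

    WINDOW = timedelta(minutes=15)
    by_host = defaultdict(list)
    for alert in alerts:
        by_host[alert["host"]].append(alert)

    for host, items in by_host.items():
        items.sort(key=lambda a: a["ts"])            # ISO timestamps sort as text
        span = (datetime.fromisoformat(items[-1]["ts"])
                - datetime.fromisoformat(items[0]["ts"]))
        layers = {a["layer"] for a in items}
        if len(layers) >= 2 and span <= WINDOW:      # weak signals, one incident
            print(f"[incident] {host}: " + "; ".join(a["event"] for a in items))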

The banes of traditional reactive approaches

EDR, NTA, and security information and event management (SIEM) are by no means weak security solutions. However, the way they work creates opportunities for unrelenting attackers to exploit.

One of the biggest problems with traditional security systems is alert overload. EDR and other strategies are known to generate high volumes of alerts that lack a meaningful context. These security notifications are often incomplete or don’t have enough information to make sense to security professionals.

According to data from an IDC InfoBrief, only 21 percent of organizations collect information that can be considered adequate to take decisive action. Most organizations (56 percent) say that the information they gather through their security systems only allows them to form a broad grasp of what the problem is about. They cannot specifically pinpoint the issue and implement appropriate solutions.

A whitepaper by SolarWinds helps illustrate the alert overload problem. According to it, a company with around a thousand employees can have up to 22,000 events per second registered in its SIEM systems. This number translates to nearly 2 billion events in a day. Even the best security operations center (SOC) analysts would struggle to handle the overwhelming number of alerts produced by this volume of events.

Other issues that plague traditional security systems are the need for specialized expertise and time-consuming investigations, which can take several months. With EDR, for example, breach identification reportedly takes up to 197 days, while containment can take up to 69 days.

Also, the technology-centric nature of the tools used in traditional systems shifts focus away from operational needs toward plugging technology gaps. As presented in the IDC InfoBrief, 23 percent of companies say that their security teams spend more time maintaining and managing security tools than conducting actual security investigations. Meanwhile, 19 percent of companies report fragmentation or a lack of integration in their security product portfolio.

XDR addresses these drawbacks with its comprehensive approach in collecting deep activity data and cross-layer sweeping, hunting, and investigation routines. With the aid of artificial intelligence and advanced analytics, XDR spots actual threats in the midst of security alert overload.

Threat evolution outpacing solution improvements

Again, this article does not intend to invalidate or downplay the value of EDR. Many companies continue relying on it for good reasons. However, its capability is constrained by its inherent design, which is to focus on managed endpoints. Likewise, it is restricted in the scope of threats it can identify and block, in identifying the entities affected, and in determining the best responses to an attack.

Similarly, it would be incorrect to say that NTA has become useless. Network traffic analysis remains important, but it requires a method to break away from its network and monitored network segment limitations. NTA systems generate enormous amounts of logs that make it difficult to detect correlations between network alerts and other relevant data that contextualize security events.

There have been attempts to update EDR and NTA, but these improvements have been delivered as individual solutions or bolted-on security layers, so the data-siloing problem remains. XDR offers the preferable, holistic way to upgrade detection and response. It augments the SIEM by cutting down the time SOCs need to examine alerts and decide which ones merit attention and action. XDR does not replace the SIEM; it enhances it, helping teams make sense of the abundance of logs and notifications the SIEM produces.

In other words, XDR represents the evolution traditional security systems need in order to match the perpetually evolving attacks of cybercriminals.

Extended detection and response

XDR makes it possible to pinpoint hidden threats and track them regardless of their source or location. This advanced system results in increased productivity for the organization’s IT team and improves the speed of security investigations. It provides multiple security layers that go beyond endpoints to broaden detection and response scope. Moreover, it creates an integrated and automated platform that enables complete visibility across security layers.

Cisco refers to it as extended detection and response, which makes complete sense considering how it goes beyond merely identifying and handling a threat. XDR also makes it possible to determine how a user got infected, what the first point of entry was, how the attack spread, and how many other users have been exposed. Additionally, its integration with SIEM and Security Orchestration, Automation, and Response (SOAR) systems allows analysts to use XDR within a broader security ecosystem.

Internet Impact Assessment Toolkit: Protect the core that underpins the Internet

The Internet Society has launched the first-ever regulatory assessment toolkit that defines the critical properties needed to protect and enhance the future of the Internet.

The Internet Impact Assessment Toolkit is a guide to help ensure regulation, technology trends and decisions don’t harm the infrastructure of the Internet. It describes the Internet at its optimal state – a network of networks that is universally accessible, decentralized and open; facilitating the free and efficient flow of knowledge, ideas and information.

Critical properties of the Internet Impact Assessment Toolkit

The five critical properties identified by the Internet Way of Networking (IWN) are:

  • An accessible infrastructure with a common protocol – A ‘common language’ enabling global connectivity and unrestricted access to the Internet.
  • An open architecture of interoperable and reusable building blocks – Open infrastructure with a set of standards enabling permission-free innovation.
  • Decentralized management and a single distributed routing system – Distributed routing enabling local networks to grow, while maintaining worldwide connectivity.
  • Common global identifiers – A single common identifier allowing computers and devices around the world to communicate with each other.
  • A technology neutral, general-purpose network – A simple and adaptable dynamic environment cultivating infinite opportunities for innovation.

When combined, these properties form the unique foundation that underpins the Internet’s success and are essential for its healthy evolution. The closer the Internet aligns with the IWN, the more open and agile it is for future innovation and the broader benefits of collaboration, resiliency, global reach and economic growth.

“The Internet’s ability to support the world through a global pandemic is an example of the Internet Way of Networking at its finest,” explains Joseph Lorenzo Hall, Senior VP for a Strong Internet, Internet Society. “Governments didn’t need to do anything to facilitate this massive global pivot in how humanity works, learns and socializes. The Internet just works – and it works thanks to the principles that underpin its success.”

A resource for policymakers and technologists

The Internet Impact Assessment Toolkit will serve as an important resource to help policymakers and technologists ensure trends in regulatory and technical proposals don’t harm the unique architecture of the Internet. The toolkit explains why each property of the IWN is crucial to the Internet and the social and economic consequences that can arise when any of these properties are damaged.

For instance, the Toolkit shows how China’s restrictive networking model severely impacts its global reach and hinders collaboration with networks beyond its borders. It also highlights how the US administration’s Clean Network proposal challenges the Internet’s architecture by dictating how networks interconnect according to political considerations rather than technical considerations.

“We’re seeing a trend of governments encroaching on parts of the Internet’s infrastructure to try and solve social and political problems through technical means. Ill-informed regulation can drastically alter the Internet’s fundamental architecture and harm the ecosystem that supports it,” continues Hall. “We’re giving both policymakers and Internet users the information and tools to make sure they don’t break this resource that brings connectivity, innovation, and empowerment to everyone.”

Plan for change but don’t leave security behind

COVID-19 has upended the way we do all things. In this interview, Mike Bursell, Chief Security Architect at Red Hat, shares his view of which IT security changes are ongoing and which changes enterprises should prepare for in the coming months and years.

How has the pandemic affected enterprise edge computing strategies? Has the massive shift to remote work created problems when it comes to scaling hybrid cloud environments?

The pandemic has caused major shifts in the ways we live and work, from video calls to increased use of streaming services, forcing businesses to embrace new ways to be flexible, scalable, efficient and cost-saving. It has also exposed weaknesses in the network architectures that underpin many companies, as they struggle to cope with remote working and increased traffic. We’re therefore seeing both an accelerated shift to edge computing, which takes place at or near the physical location of either the end-user or the data source, and further interest in hybrid cloud strategies which don’t require as much on-site staff time.

Changing your processes to make the most of this without damaging your security posture requires thought and, frankly, new policies and procedures. Get your legal and risk teams involved – but don’t forget your HR department. HR has a definite role to play in allowing your key employees to continue to do the job you need them to do, but in ways that are consonant with the new world we’re living in.

However, don’t assume that these will be – or should be! – short-term changes. If you can find more efficient or effective ways of managing your infrastructure, without compromising your risk profile while also satisfying new staff expectations, then everyone wins.

What would you say are the most significant challenges for enterprises that want to build secure and future-proof application infrastructures?

One challenge is that although some of the technology is now quite mature, the processes for managing it aren’t, yet. And by that I don’t just mean technical processes, but how you arrange your teams and culture to suit new ways of managing, deploying, and (critically) automating your infrastructure. Add to this new technologies such as confidential computing (using Trusted Execution Environments to protect data in use), and there is still a lot of change.

The best advice is to plan for change – technical, process and culture – but do not, whatever you do, leave security till last. It has to be front and centre of any plans you make. One concrete change that you can make immediately is taking your security people off just “fire-fighting duty”, where they have to react to crises as they come in: businesses can consider how to use them in a more proactive way.

People don’t scale, and there’s a global shortage of security experts. So, you need to use the ones that you have as effectively as you can, and, crucially, give them interesting work to do, if you plan to retain them. It’s almost guaranteed that there are ways to extend their security expertise into processes and automation which will benefit your broader teams. At the same time, you can allow those experts to start preparing for new issues that will arise, and investigating new technologies and methodologies which they can then reapply to business processes as they mature.

How has cloud-native management evolved in the last few years and what are the current security stumbling blocks?

One of the areas of both maturity and immaturity is in terms of workload isolation. We can think of three types: workload from workload isolation (preventing workloads from interfering with each other – type 1); host from workload isolation (preventing workloads from interfering with the host – type 2); workload from host isolation (preventing hosts from interfering with workloads – type 3).

The technologies for types 1 and 2 are really quite mature now, with containers and virtual machines combining a variety of hardware and software techniques such as virtualization, cgroups and SELinux. On the other hand, protecting workloads from malicious or compromised hosts is much more difficult, meaning that regulators – and sensible enterprises! – are unwilling to let some workloads run in the public cloud.
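To make types 1 and 2 concrete, here is a hedged sketch that launches a container with cgroup resource limits and reduced privileges through the Docker CLI. The image name and the specific limits are placeholders; the flags shown are standard Docker options, not anything specific to the projects discussed here.

```python
import subprocess

# Illustrative only: constrain a workload so it cannot starve its
# neighbours (type 1) or tamper with the host (type 2).
cmd = [
    "docker", "run", "--rm",
    "--memory=256m",                         # cgroup memory limit
    "--cpus=0.5",                            # cgroup CPU quota
    "--read-only",                           # immutable root filesystem
    "--cap-drop=ALL",                        # drop all Linux capabilities
    "--security-opt", "no-new-privileges",   # block privilege escalation
    "alpine:3",                              # placeholder image
    "echo", "constrained workload ran",
]
subprocess.run(cmd, check=True)
```

Note that none of these flags help with type 3: a privileged user on the host can still inspect the workload's memory, which is exactly the gap the technologies below target.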

Technologies like secure and measured boot, combined with TPM capabilities through projects such as Keylime (which is fully open source), are beginning to address this, and we can expect major improvements as confidential computing – and open source projects like Enarx, which uses TEEs – matures.

In the past few years, we’ve seen a huge interest in Kubernetes deployments. What common mistakes are organizations making along the way? How can they be addressed?

One of the main mistakes we see businesses make is attempting to deploy Kubernetes without the appropriate level of in-house expertise. Kubernetes is an ecosystem rather than a one-off executable; it relies on other services provided by open source projects, and it requires IT teams to fully understand an architecture made up of application and network layers.

Once implemented, businesses must also maintain the ecosystem in parallel to any software running on top. When it comes to implementation, businesses are advised to follow open standards – those decided upon by the open source Kubernetes community as a whole, rather than a specific vendor. This will prevent teams from running into unexpected roadblocks, and helps to ensure a smooth learning curve for new team members.

Another mistake organizations make is ignoring small but important details, such as Kubernetes’ backwards compatibility with older versions. It’s easy to overlook that older versions may lack important security updates, so IT teams must be mindful when merging code across versions and check regularly for available updates.
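As one concrete example of such a detail, the sketch below (assuming kubectl is installed and configured for the cluster) compares client and server minor versions, since kubectl is only supported within one minor version of the API server and larger skew usually means overdue upgrades.

```python
import json
import subprocess

# 'kubectl version -o json' reports both the local client version and
# the API server version of the current cluster context.
out = subprocess.run(
    ["kubectl", "version", "-o", "json"],
    capture_output=True, text=True, check=True,
).stdout
info = json.loads(out)

# Minor versions are strings and may carry a '+' suffix on managed clusters.
client_minor = int(info["clientVersion"]["minor"].rstrip("+"))
server_minor = int(info["serverVersion"]["minor"].rstrip("+"))

if abs(client_minor - server_minor) > 1:
    print(f"Version skew too large: client 1.{client_minor}, server 1.{server_minor}")
else:
    print("Client and server are within the supported version skew.")
```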

Open source remains one of the building blocks of enterprise IT. What’s your take on the future of open source code in large business networks?

Open source is here to stay, and that’s a good thing, not least for security. The more security experts there are to look at the code, the more likely it is that bugs will be found and fixed. Of course, security experts are thin on the ground, and busy, so it’s important that large enterprises commit to getting involved with open source and devoting resources to it.

Another point of confusion is the assumption that just because a project is open source, it’s ready to use. There’s a difference between an open source project and an enterprise product based on that project. With the latter, you get all the benefits of testing, patching, upgrading, vulnerability processes, version management and support. With the former, you need to manage everything yourself – including ensuring that you have sufficient in-house expertise to cope with any issues that come up.

Developing a plan for remote work security? Here are 6 key considerations

With so many organizations switching to a work-from-home model, many are finding security increasingly difficult to administer and maintain. Vulnerable points are now distributed across more locations than ever before as remote workers strive to maintain their productivity. The result? Security teams everywhere are being stretched.

The Third Global Threat Report from VMware Carbon Black also found little confidence among respondents that the rollout of remote working had been done securely. The study took a deep dive into the effects of COVID-19 on the security of remote working, with 91% of executives stating that working from home has led to a rise in attacks.

Are you making sure your security professionals are up to the task of remote working while security threats are on the rise?

1. Maintain consistency

One way to mitigate risk is to train your developers and security professionals to a consistent level, so they are all on the same page. Knowing what security architecture is in place in your organization, and understanding how to stress test parts of that structure, will make it easier to prepare for and block attacks.

2. Don’t overlook the details

Training needs to address all aspects of your structure, specifically: information security, data security, cybersecurity, computer security, physical security, IoT security, cloud security, and individual security. Each area of the architecture needs to be tested and hardened regularly for your organization to be truly shielded from breaches. Be specific about your program: train your staff to defend the information in your HR records (SSNs, PII, etc.) and data that could be exposed (shopping carts, customer card numbers), and train them in cyber defense so they have tools against nefarious actors, breaches and threats.

3. Think about the individual

Staff must be trained to lock down computers so that individual machines and network servers stay safe. This training should also cover physical security, to protect your storage and physical assets – which matters more as the IoT connects more of our devices and BYOD policies allow more connections between personal and corporate assets. Finally, there is individual security: each employee is entitled to be secure in their work for a company, and that includes privacy concerns and compliance issues.

4. Keep your head in the cloud

Today, most companies have some sort of cloud presence, and security professionals need to be trained to continuously check the interfaces to the cloud and to any hybrid on-prem and off-prem instances you run.
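As a small example of that kind of continuous checking, the hedged sketch below verifies that the TLS certificates on a list of cloud-facing endpoints are not close to expiry. The hostnames are placeholders; a real program would cover whichever interfaces your environment actually exposes.

```python
import socket
import ssl
from datetime import datetime, timezone

# Placeholder endpoints; substitute the cloud and hybrid interfaces you run.
ENDPOINTS = ["api.example.com", "portal.example.com"]

def cert_days_left(host: str, port: int = 443) -> int:
    """Return the number of days until the host's TLS certificate expires."""
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=5) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            cert = tls.getpeercert()
    expires = datetime.strptime(cert["notAfter"], "%b %d %H:%M:%S %Y %Z")
    return (expires.replace(tzinfo=timezone.utc) - datetime.now(timezone.utc)).days

for host in ENDPOINTS:
    try:
        days = cert_days_left(host)
        status = "OK" if days > 30 else "RENEW SOON"
        print(f"{host}: certificate expires in {days} days [{status}]")
    except OSError as err:
        print(f"{host}: check failed ({err})")
```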

5. Invest in learning

With constantly changing layers of architecture and amplified room for breaches as a result of remote working, it’s hard to imagine how security professionals stay ahead of all the changes. One thing that keeps teams on top of their game is professional online learning.

During the COVID-19 shelter-in-place mandates, leading eLearning companies witnessed a massive increase in hours of security content consumed. For some, security is among the fastest-growing topic areas, which suggests that security matters more than ever this year. This is likely due to the number of workers who have gone remote and the challenges that brings to an organization, particularly its security department.

6. Consider role-based training

While it’s important to equip teams with skills that apply across functions, there is a case to be made for investing in experts. Cybersecurity is not a field with a single linear growth path; individuals can take different journeys, for example moving from vulnerability analyst to security architect. By looking within the organization for people to upskill into new roles and responsibilities, you gain the unique benefit of helping them shape roles that fit the needs of the organization.

It’s not often that a business has a dedicated Remote Team Security Lead, because until recently there was rarely a need for one. Considering the quick transition to remote work, and the possibility that this is the new normal, organizations can benefit from investing in training curated to meet the security needs of remote teams. If this role is cultivated within the organization, there is the added benefit of knowing that the lessons being taught are directly relevant to specific needs, which makes investing time and effort in skills training more attractive.

Training can be the key to preparing security professionals for the unexpected. But there is no one-size-fits-all lesson, and no evergreen degree can keep up with an industry that changes every day. Training needs to stay on the agenda permanently, and it needs to be developed in a way that offers different modalities of learning.

Regardless of how the individual best learns, criterion-based assessments can measure knowledge/skills and act as a guide to true, lasting learning. Developing a culture committed to agility and learning is the key to embracing change.