How a move to the cloud can improve disaster recovery plans

COVID-19 and the subsequent global recession have thrown a wrench into IT spending. Many enterprises have placed new purchases on hold. Gartner recently projected that global IT spending would drop 8% overall this year, yet spending on cloud-based services is still expected to rise by approximately 19%, bucking that downward trend.


Underscoring the relative health of the cloud market, IDC reported that all growth in traditional tech spending will be driven by four platforms over the next five years: cloud, mobile, social and big data/analytics. IDC's 2020-2023 forecast states that traditional software continues to represent a major contribution to productivity, while investments in mobile and cloud hardware have created new platforms that will enable the rapid deployment of new software tools and applications.

With entire workforces suddenly going remote all over the world, there certainly are a number of specific business problems that need to be addressed, and many of the big issues involve VPNs.

Assault on VPNs

Millions of employees are working from home, and they all have to securely access their corporate networks. The vast majority of enterprises still rely on on-premises servers to some degree (estimates range from 60% to 98%), so VPNs play a vital role in connecting employees to the network. This comes at a cost, though: bandwidth is gobbled up, slowing network performance, sometimes to a crippling degree, and that has repercussions.

Maintenance of the thousands of machines and devices connected to the network suffers. The deployment of software, updates and patches simply doesn't happen with the same regularity as when everyone works on-site. One reason is that content distribution (patches, applications and other updates) can eat up much-needed bandwidth, so system hygiene gets sacrificed for the sake of keeping employees productive.

Putting off endpoint management, however, exposes corporate networks to enormous risks. Bad actors are well aware that endpoints are not being maintained at pre-pandemic levels, and they are more than willing to take advantage. Recent statistics show that the volume of cyberattacks today is staggering, far higher than before COVID-19.

Get thee to the cloud: Acceleration of modern device management

Because of bandwidth concerns, the pressure to trim costs, and the need to maintain machines in new ways, many enterprises are accelerating their move to the cloud. The cloud offers a lot of advantages for distributed workforces while also reducing costs. But digital transformation and the move to modern device management can’t happen overnight.

Enterprises have invested too much time, money, physical space and human resources to just walk away. Not to mention, on-premises environments have been highly reliable. Physical servers are one of the few things IT teams can count on to just work as intended these days.

Hybrid environments offer a happy medium. With the latest technology, enterprises can begin migrating to the cloud and adapt to changing conditions, meeting the needs of distributed teams. They can also save some money in the process. At the same time, they don’t have to completely abandon their tried-and-true servers.

Solving specific business problems: Content distribution to keep systems running

But what about those “specific business problems,” such as endpoint management and content distribution? Prior to COVID-19, this had been one of the biggest hurdles to digital transformation. It was not possible to distribute software and updates at scale without negatively impacting business processes and without excessive cost.

The issue escalated with the shift to remote work. Fortunately, technology providers have responded, developing solutions that leverage secure and efficient delivery mechanisms, such as peer-to-peer content distribution, that can work in the cloud. Even in legacy environments, vast improvements have been made to reduce bandwidth consumption.

These solutions allow enterprises to transition from a traditional on-premises infrastructure to the cloud and modern device management at their own speed, making their company more agile and resilient to the numerous risks they encounter today. Breakthrough technologies also support multiple system management platforms and help guarantee endpoints stay secure and updated even if corporate networks go down – something that, given the world we live in today, is a very real possibility.

Disaster averted

Companies like Garmin and organizations such as the University of California San Francisco became unwitting victims of ransomware attacks in recent months. Their systems were seized and released only after ransoms of millions of dollars were paid.

While there is the obvious hard cost involved, there are severe operational costs as well: employees can't get on the network to do their jobs, and systems must be scanned, updated and remediated to ensure the network isn't further compromised. A lot has to happen within a short period of time in the wake of a cyberattack to get people back to work as quickly and safely as possible.

Fortunately, with modern cloud-based content distribution solutions, all that is needed for systems to stay up is electricity and an internet connection. Massive redundancy is being built into the design of products to provide extreme resilience and help ensure business continuity in case part or all of the corporate network goes down.

The newest highly scalable, cloud-enabled content distribution options enable integration with products like Azure CDN and Azure Storage and also provide a single agent for migration to modern device management. With features like cloud integration, internet P2P, and predictive bandwidth harvesting, enterprises can leverage a massive amount of bandwidth from the internet to manage endpoints and ensure they always stay updated and secure.
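To make that pattern concrete, here is a minimal sketch of the peer-first, cloud-fallback approach such products describe. It is illustrative only: the peer address, storage URL, package name and hash are placeholder assumptions, not any vendor's actual configuration.

```python
# Minimal sketch of peer-first content distribution with an Azure Storage fallback.
# The peer endpoint, blob URL, package name and hash are illustrative placeholders.
import hashlib
import requests  # pip install requests
from azure.storage.blob import BlobClient  # pip install azure-storage-blob

PACKAGE = "2020-08_cumulative_update.cab"                                       # hypothetical package
PEER_URL = f"http://10.0.0.42:8080/cache/{PACKAGE}"                             # hypothetical nearby peer
BLOB_URL = f"https://contosopatches.blob.core.windows.net/updates/{PACKAGE}"    # placeholder storage account
EXPECTED_SHA256 = "..."  # in a real deployment, published alongside the package

def fetch_package() -> bytes:
    """Try a peer cache first to keep bulk content off the VPN, then fall back to cloud storage."""
    try:
        resp = requests.get(PEER_URL, timeout=5)
        resp.raise_for_status()
        return resp.content
    except requests.RequestException:
        # No healthy peer nearby: pull straight from Azure Blob Storage (CDN-fronted in practice).
        return BlobClient.from_blob_url(BLOB_URL).download_blob().readall()

def verify(data: bytes) -> bool:
    """Integrity check before handing the package to the installer."""
    return hashlib.sha256(data).hexdigest() == EXPECTED_SHA256

if __name__ == "__main__":
    pkg = fetch_package()
    print(f"downloaded {len(pkg)} bytes, integrity ok: {verify(pkg)}")
```

The point of the peer-first lookup is simply that a patch already cached on a nearby machine never has to traverse the VPN again.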

Given these new developments precipitated and accelerated by COVID-19, as well as the clear, essential business problems these solutions address, expect to see movement and growth in the cloud sector. Expect to see an acceleration of modern device management and, despite IT spending cuts, a more secure, reliable, cost-efficient and operationally efficient enterprise in the days to come.

How tech trends and risks shape organizations’ data protection strategy

Trustwave released a report which depicts how technology trends, compromise risks and regulations are shaping how organizations’ data is stored and protected.


The report is based on a recent survey of 966 full-time IT professionals who are cybersecurity decision makers or security influencers within their organizations.

Over 75% of respondents work in organizations with over 500 employees in key geographic regions including the U.S., U.K., Australia and Singapore.

“Data drives the global economy yet protecting databases, where the most critical data resides, remains one of the least focused-on areas in cybersecurity,” said Arthur Wong, CEO at Trustwave.

“Our findings illustrate organizations are under enormous pressure to secure data as workloads migrate off-premises, attacks on cloud services increase and ransomware evolves. Gaining complete visibility of data either at rest or in motion and eliminating threats as they occur are top cybersecurity challenges all industries are facing.”

More sensitive data moving to the cloud

The types of data organizations are moving into the cloud have become increasingly sensitive, so a solid data protection strategy is crucial. Ninety-six percent of total respondents stated they plan to move sensitive data to the cloud over the next two years, with 52% planning to include highly sensitive data and Australia (57%) leading the regions surveyed.

Not surprisingly, when asked to rate the importance of securing data regarding digital transformation initiatives, an average score of 4.6 out of a possible high of five was tallied.

Hybrid cloud model driving digital transformation and data storage

Of those surveyed, most (55%) use both on-premises and public cloud to store data, with 17% using public cloud only. Singapore organizations use the hybrid cloud model most frequently, at 73%, 18 percentage points above the average, while U.S. organizations employ it the least, at 45%.

Government respondents are the most likely to store data on-premises only, at 39%, 11 percentage points above the average. Additionally, 48% of respondents stored data using the hybrid cloud model during a recent digital transformation project, with only 29% relying solely on their own databases.

Most organizations use multiple cloud services

Seventy percent of organizations surveyed were found to use between two and four public cloud services and 12% use five or more. At 14%, the U.S. had the most instances of using five or more public cloud services followed by the U.K. at 13%, Australia at 9% and Singapore at 9%. Only 18% of organizations queried use zero or just one public cloud service.

Perceived threats do not match actual incidents

Thirty-eight percent of organizations are most concerned with malware and ransomware, followed by phishing and social engineering at 18%, application threats at 14%, insider threats at 9%, privilege escalation at 7% and misconfiguration attacks at 6%.

Interestingly, when asked about threats actually experienced, phishing and social engineering came in first at 27%, followed by malware and ransomware at 25%. The U.K. and Singapore experienced the most phishing and social engineering incidents (32% and 31%, respectively), while the U.S. and Australia experienced the most malware and ransomware attacks (30% and 25%).

Respondents in the government sector had the highest incidence of insider threats, at 13%, 5 percentage points above the average.

Patching practices show room for improvement

A resounding 96% of respondents have patching policies in place; of those, 71% rely on automated patching and 29% patch manually. Overall, 61% of organizations patched within 24 hours and 28% patched within 24 to 48 hours.

The highest percentages patching within a 24-hour window came from Australia at 66% and the U.K. at 61%. Unfortunately, 4% of organizations took from a week to more than a month to patch.
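For readers who track similar patch-latency metrics internally, a rough sketch of how deployments can be bucketed into these windows is shown below; the sample records are invented for illustration.

```python
# Rough sketch: bucket patch deployments into the latency windows used above.
# The sample records are invented for illustration.
from datetime import datetime

deployments = [
    # (patch released,             patch applied)
    (datetime(2020, 8, 11, 18, 0), datetime(2020, 8, 12, 9, 0)),   # ~15 hours
    (datetime(2020, 8, 11, 18, 0), datetime(2020, 8, 13, 11, 0)),  # ~41 hours
    (datetime(2020, 8, 11, 18, 0), datetime(2020, 9, 20, 11, 0)),  # over a month
]

def bucket(hours: float) -> str:
    if hours <= 24:
        return "within 24 hours"
    if hours <= 48:
        return "24-48 hours"
    if hours <= 24 * 7:
        return "within a week"
    return "a week to over a month"

counts: dict[str, int] = {}
for released, applied in deployments:
    label = bucket((applied - released).total_seconds() / 3600)
    counts[label] = counts.get(label, 0) + 1

for label, n in counts.items():
    print(f"{label}: {n / len(deployments):.0%}")
```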

Reliance on automation driving key security processes

In addition to a high percentage of organizations using automated patching processes, findings show 89% of respondents employ automation to check for overprivileged users or lock down access credentials once an individual has left their job or changed roles.

According to the survey, this finding correlates with the low level of concern for insider threats and data compromise via privilege escalation. Organizations should be cautious, however, about assuming that removing a user's access to applications also removes their access to databases, which is often not the case.

Data regulations having minor impact on database security strategies

When asked if data regulations such as GDPR and CCPA impacted database security strategies, a surprising 60% of respondents said no.

These findings may suggest a lack of alignment between information technology and other departments, such as legal, responsible for helping ensure stipulations like ‘the right to be forgotten’ are properly enforced to avoid severe penalties.

Small teams with big responsibilities

Of those surveyed, 47% had a security team of only six to 15 members. Respondents from Singapore had the smallest teams, with 47% reporting between one and ten members, while the U.S. had the largest teams, with 22% reporting a team size of 21 or more, 2 percentage points above the average.

Surprisingly, 32 percent of government respondents run security operations with teams of just six to ten members.

Major gaps in virtual appliance security plague organizations

As evolution to the cloud is accelerated by digital transformation across industries, virtual appliance security has fallen behind, Orca Security reveals.


The report illuminated major gaps in virtual appliance security, finding many are being distributed with known, exploitable and fixable vulnerabilities and on outdated or unsupported operating systems.

To help move the cloud security industry towards a safer future and reduce risks for customers, 2,218 virtual appliance images from 540 software vendors were analyzed for known vulnerabilities and other risks to provide an objective assessment score and ranking.

Virtual appliances are an inexpensive and relatively easy way for software vendors to distribute their wares for customers to deploy in public and private cloud environments.

“Customers assume virtual appliances are free from security risks, but we found a troubling combination of rampant vulnerabilities and unmaintained operating systems,” said Avi Shua, CEO, Orca Security.

“The Orca Security 2020 State of Virtual Appliance Security Report shows how organizations must be vigilant to test and close any vulnerability gaps, and that the software industry still has a long way to go in protecting its customers.”

Known vulnerabilities run rampant

Most software vendors are distributing virtual appliances with known vulnerabilities and exploitable and fixable security flaws.

  • The research found that less than 8 percent of virtual appliances (177) were free of known vulnerabilities. In total, 401,571 vulnerabilities were discovered across the 2,218 virtual appliances from 540 software vendors.
  • For this research, 17 critical vulnerabilities were identified, deemed to have serious implications if found unaddressed in a virtual appliance. Some of these well-known and
    easily exploitable vulnerabilities included: EternalBlue, DejaBlue, BlueKeep, DirtyCOW, and Heartbleed.
  • Meanwhile, 15 percent of virtual appliances received an F rating, deemed to have failed the research test.
  • More than half of tested virtual appliances were below an average grade, with 56 percent obtaining a C rating or below (15.1 percent F; 16.1 percent D; 25 percent C).
  • However, after retesting the 287 updates that software vendors made in response to the findings, the average grade of these rescanned virtual appliances increased from a B to an A.

Outdated appliances increase risk

Multiple virtual appliances were at security risk from age and lack of updates. The research found that most vendors are not updating or discontinuing their outdated or end-of-life (EOL) products.

  • The research found that only 14 percent (312) of the virtual appliance images had been updated within the last three months.
  • Meanwhile, 47 percent (1,049) had not been updated within the last year; 5 percent (110) had been neglected for at least three years, and 11 percent (243) were running on out of date or EOL operating systems.
  • Some outdated virtual appliances have, however, been updated since the initial testing. For example, a Redis Labs product that scored an F due to an out-of-date operating system and many vulnerabilities now scores an A+ after updates.

The silver lining

Under the principle of Coordinated Vulnerability Disclosure, researchers emailed each vendor directly, giving them the opportunity to fix their security issues. Fortunately, the tests have started to move the cloud security industry forward.

As a direct result of this research, vendors reported that 36,259 out of 401,571 vulnerabilities have been removed by patching or discontinuing their virtual appliances from distribution. Some of these key corrections or updates included:

  • Dell EMC issued a critical security advisory for its CloudBoost Virtual Edition
  • Cisco published fixes to 15 security issues found in one of its virtual appliances scanned in the research
  • IBM updated or removed three of its virtual appliances within a week
  • Symantec removed three poorly scoring products
  • Splunk, Oracle, IBM, Kaspersky Labs and Cloudflare also removed products
  • Zoho updated half of its most vulnerable products
  • Qualys updated a 26-month-old virtual appliance that included a user enumeration vulnerability that Qualys itself had discovered and reported in 2018

Maintaining virtual appliances

For customers and software vendors concerned about the issues illuminated in the report, there are corrective and preventive actions that can be taken. Software suppliers should ensure their virtual appliances are well maintained and that new patches are provided as vulnerabilities are identified.

When vulnerabilities are discovered, the product should be patched or discontinued for use. Meanwhile, vulnerability management tools can also discover virtual appliances and scan them for known issues. Finally, companies should use these tools to scan any virtual appliance supplied by a software vendor for vulnerabilities before putting it into use.

New Bluetooth Vulnerability


There’s a new unpatched Bluetooth vulnerability:

The issue is with a protocol called Cross-Transport Key Derivation (or CTKD, for short). When, say, an iPhone is getting ready to pair up with a Bluetooth-powered device, CTKD’s role is to set up two separate authentication keys for that phone: one for a “Bluetooth Low Energy” device, and one for a device using what’s known as the “Basic Rate/Enhanced Data Rate” standard. Different devices require different amounts of data — and battery power — from a phone. Being able to toggle between the standards needed for Bluetooth devices that take a ton of data (like a Chromecast), and those that require a bit less (like a smartwatch) is more efficient. Incidentally, it might also be less secure.

According to the researchers, if a phone supports both of those standards but doesn’t require some sort of authentication or permission on the user’s end, a hackery sort who’s within Bluetooth range can use its CTKD connection to derive its own competing key. With that connection, according to the researchers, this sort of ersatz authentication can also allow bad actors to weaken the encryption that these keys use in the first place — which can open its owner up to more attacks further down the road, or perform “man in the middle” style attacks that snoop on unprotected data being sent by the phone’s apps and services.
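In very simplified terms, the fix amounts to refusing key downgrades. The toy model below is an illustration only, not the Bluetooth specification or any real pairing stack; it shows the kind of overwrite check the Bluetooth SIG's guidance points toward, where a cross-transport derived key should not replace an existing key that is stronger or was authenticated.

```python
# Toy model of a key-overwrite policy for CTKD-derived keys.
# A simplification for illustration only, not the Bluetooth spec or a real stack.
from dataclasses import dataclass

@dataclass
class PairingKey:
    transport: str        # "BR/EDR" or "LE"
    authenticated: bool   # did pairing involve user confirmation?
    key_length: int       # effective encryption key length in bytes

def may_overwrite(existing: PairingKey, derived: PairingKey) -> bool:
    """Refuse to replace an existing key with a weaker cross-transport key."""
    if derived.key_length < existing.key_length:
        return False  # would weaken encryption
    if existing.authenticated and not derived.authenticated:
        return False  # would downgrade an authenticated key
    return True

# A key derived by an unauthenticated nearby device must not displace
# the stronger key the user originally established.
original = PairingKey("LE", authenticated=True, key_length=16)
attacker_derived = PairingKey("BR/EDR", authenticated=False, key_length=7)
print(may_overwrite(original, attacker_derived))  # False
```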

Another article:

Patches are not immediately available at the time of writing. The only way to protect against BLURtooth attacks is to control the environment in which Bluetooth devices are paired, in order to prevent man-in-the-middle attacks, or pairings with rogue devices carried out via social engineering (tricking the human operator).

However, patches are expected to be available at some point. When they are, they will most likely be integrated as firmware or operating system updates for Bluetooth-capable devices.

The timeline for these updates is, for the moment, unclear, as device vendors and OS makers usually work on different timelines, and some may not prioritize security patches as highly as others. The number of vulnerable devices is also unclear and hard to quantify.

Many Bluetooth devices can’t be patched.

Final note: this seems to be another example of simultaneous discovery:

According to the Bluetooth SIG, the BLURtooth attack was discovered independently by two groups of academics from the École Polytechnique Fédérale de Lausanne (EPFL) and Purdue University.

Researchers aim to improve code patching in embedded systems

Three Purdue University researchers and their teammates at the University of California, Santa Barbara and Swiss Federal Institute of Technology Lausanne have received a DARPA grant to fund research that will improve the process of patching code in vulnerable embedded systems.


“Many embedded systems, like computer systems running in trucks, airplanes and medical devices, run old code for which the source code and the original compilation toolchain are unavailable,” Antonio Bianchi, assistant professor of computer science at Purdue University said.

“Many old software components running in these systems are known to contain vulnerabilities; however, patching them to fix these vulnerabilities is not always possible or easy.”

Without source code, patching a vulnerability necessitates editing the binary code directly, Bianchi said. Additionally, even in a system that has been patched, there is no guarantee that the patch will not interfere with the original functionality of the device. Because of these difficulties, he said, the code running in embedded systems is often left unpatched, even when it is known to be vulnerable.

Ensuring the patch doesn’t interfere with device functionality

The team’s proposed approach entails defining and verifying a set of properties that a patch must have to ensure it doesn’t interfere with the device’s original functionality. The research also aims to develop automatic and minimal code patching for devices that may be vulnerable to cyberattacks.

Minimizing modifications, Bianchi said, means that minimal resources are needed to verify the patched code and that the device's functionality is not harmed. In addition, the team will develop new ways to test the patched code that do not require running it on real hardware.

Intel, SAP, and Citrix release critical security updates

August 2020 Patch Tuesday was expectedly observed by Microsoft and Adobe, but many other software firms decided to push out security updates as well. Apple released iCloud for Windows updates and Google pushed out fixes to Chrome. They were followed by Intel, SAP and Citrix. It's not unusual for Intel to take advantage of a Patch Tuesday; this time it released 18 advisories. Among the fixed flaws are DoS, information disclosure and EoP issues.


What are the benefits of automated, cloud-native patch management?

Could organizations recoup their share of more than $1 billion per quarter by moving away from legacy solutions to cloud-native patch management and endpoint hardening? A new report from Sedulo Group says yes.


The 2020 TCO Study of Microsoft WSUS & SCCM report shows organizations using Microsoft endpoint management for patching and hardening spend nearly 2x as much as organizations using SaaS-based patch management platforms.

Microsoft System Center Configuration Manager (SCCM) and Microsoft Windows Server Update Services (WSUS) currently manage over 175 million endpoints and cost organizations more than $625 million per month to manage versus a cloud-native approach.

The report defines the hidden costs of legacy patching, analyzing several factors that can impact TCO such as the hardware, software, licensing, training, and personnel unique to an organization. Based on this analysis, the hardware requirements and operational costs for WSUS and SCCM have the ability to push the total organizational cost burden to over $6.6 million, or $11 per endpoint per month for typical customers.
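As a back-of-the-envelope check, those two figures are consistent if one assumes a customer with roughly 50,000 endpoints over a 12-month horizon; that endpoint count and time frame are inferences for illustration, not numbers stated in the report.

```python
# Back-of-the-envelope check of the TCO figures above.
# The endpoint count and 12-month horizon are assumptions used for illustration.
endpoints = 50_000
months = 12
cost_per_endpoint_per_month = 11  # USD, per the report

total_burden = endpoints * months * cost_per_endpoint_per_month
print(f"${total_burden:,}")  # $6,600,000
```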

The report found that the most significant cost savings were prevalent in “scenarios where multiple OS are in use, or workforces consist of heavily virtualized or entirely remote-based staff.”

“It’s not just operating systems that need to be regularly patched. Almost any piece of software can serve as an attacker’s entry point to a network, and each has its own patching or updating mechanism. It’s almost impossible for an administrator to learn in a timely manner when one of these apps has become vulnerable, and it’s very time-consuming to apply a patch on all instances of an app on the network,” Mitja Kolsek, co-founder of 0patch, told Help Net Security.

“I believe the optimal patching model for today’s organizations with complex, ever-changing network topology, countless software products, and attackers with 0-day and N-day vulnerabilities targeting them, comprises a cloud-based patching service for official vendor updates, combined with a cloud-based micropatching service for fixing critical 0-day vulnerabilities and N-day vulnerabilities on end-of-support systems. I envision future patching services to merge these two complementary concepts and even provide micropatches as an alternative to official vendor updates.”

The report highlights that “selecting a SaaS-based patch management solution over a legacy provider minimizes the risk of financial impact.” Cloud-native patching and endpoint hardening platforms reduce the impact of unplanned expenses and the total cost burden over time while providing greater value than WSUS or SCCM solutions by being able to rapidly deploy patches and easily meet the security needs of hybrid and remote workforces.

“Many organizations lack the ability to properly manage endpoints and are often paying too much for tools that simply cannot deliver enough value,” said Jay Prassl, CEO, Automox. “This study puts a spotlight on the cost burden that on-premise patching solutions create, and how making the switch to a cloud-native platform enables cost savings, increased capabilities, and the scalability today’s ever-changing businesses need to properly secure their workforces.”

Attackers are bypassing F5 BIG-IP RCE mitigation – you might want to patch after all

Attackers are bypassing a mitigation for the BIG-IP TMUI RCE vulnerability (CVE-2020-5902) originally provided by F5 Networks, NCC Group’s Research and Intelligence Fusion Team has discovered.

“Early data made available to us, as of 08:05 on July 8, 2020, is showing of ~10,000 Internet exposed F5 devices that ~6,000 were made potentially vulnerable again due to the bypass,” they warned.

F5 Networks has updated the security advisory to reflect this discovery and to provide an updated version of the mitigation. The advisory has also been updated with helpful notes regarding the impact of the flaw, the various mitigations, as well as indicators of compromise.

CVE-2020-5902 exploitation attempts

CVE-2020-5902 was discovered and privately disclosed by Positive Technologies researcher Mikhail Klyuchnikov.

F5 Networks released patches and published mitigations last Wednesday and PT followed with more information.

Security researchers were quick to set up honeypots to detect exploitation attempts and, a few days later, after several exploits had been made public, those attempts started.

Some were reconnaissance attempts, some tried to deliver backdoors, DDoS bots, coin miners, web shells, etc. Some were attempts to scrape admin credentials off vulnerable devices in an automated fashion.

There’s also a Metasploit module for CVE-2020-5902 exploitation available (and in use).

What now?

Any organization that applied the original, incomplete mitigation instead of patching their F5 BIG-IP boxes should take action again, either by applying the updated mitigation or, better, by installing the patched versions.

They should also check whether their devices have been compromised in the interim.
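One quick, read-only check an organization can run is whether the TMUI login page is reachable from the internet at all, since exposure of the management interface is what enabled mass exploitation. The sketch below tests only exposure, not the vulnerability or the bypass itself, and the host name is a placeholder.

```python
# Sketch: check whether a BIG-IP TMUI login page is reachable from outside.
# The host is a placeholder; this tests exposure only, not CVE-2020-5902 itself.
import requests  # pip install requests

HOST = "bigip.example.com"  # placeholder

def tmui_exposed(host: str) -> bool:
    try:
        # Self-signed certificates are common on management interfaces,
        # hence verify=False for this quick heuristic check.
        resp = requests.get(f"https://{host}/tmui/login.jsp", timeout=10, verify=False)
    except requests.RequestException:
        return False
    return resp.status_code == 200 and "BIG-IP" in resp.text

if __name__ == "__main__":
    if tmui_exposed(HOST):
        print(f"{HOST}: TMUI reachable - restrict access and apply the fixed version")
    else:
        print(f"{HOST}: TMUI not reachable from this vantage point")
```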

Massive complexity endangers enterprise endpoint environments

There's a massive amount of complexity plaguing today's enterprise endpoint environments. The number of agents piling up on enterprise endpoint devices (up on average) is hindering IT and security's ability to maintain foundational security hygiene practices, such as patching critical vulnerabilities, and may actually weaken endpoint security defenses, Absolute reveals.


Also, critical endpoint controls like encryption and antivirus agents, or VPNs, are prone to decay, leaving them unable to protect vulnerable devices, data, and users – with more than one in four enterprise devices found to have at least one of these controls missing or out of compliance.

Increasing security spend does not guarantee security

In addition to heightening risk exposure, the failure of critical endpoint controls to deliver their maximum intended value also undermines security investments and, ultimately, wastes endpoint security spend.

According to Gartner, “Boards and senior executives are asking the wrong questions about cybersecurity, leading to poor investment decisions. It is well-known to most executives that cybersecurity is falling short. There is a consistent drumbeat directed at CIOs and CISOs to address the limitations, and this has driven a number of behaviors and investments that will also fall short.”

“What has become clear with the insights uncovered in this year’s report is that simply increasing security spend annually is not guaranteed to make us more secure,” said Christy Wyatt, President and CEO of Absolute.

“It is time for enterprises to increase the rigor around measuring the effectiveness of the investments they’ve made. By incorporating resilience as a key metric for endpoint health, and ensuring they have the ability to view and measure Endpoint Resilience, enterprise leaders can maximize their return on security investments.”

The challenges of maintaining resilience

Without the ability to self-heal, critical controls suffer from fragility and lack of resiliency. Also, endpoint resilience is dependent not just on the health of single endpoint applications, but also combinations of apps.

The massive amount of complexity uncovered means that even the most well-functioning endpoint agents are at risk of collision or failure once deployed across today’s enterprise endpoint environments.

IT and security teams need intelligence into whether individual endpoint controls, as well as various combinations of controls, are functioning effectively and maintaining resilience in their own unique endpoint environment.

Single vendor application pairings not guaranteed to work seamlessly together

Applying the criteria for application resilience to same-vendor pairings of leading endpoint protection and encryption apps revealed widely varying average health and compliance rates among these pairings.

The net-net here is that sourcing multiple endpoint agents from a single vendor does not guarantee that those apps will not ultimately collide or decay when deployed alongside one another.


Progress in Windows 10 migration

Much progress was made in Windows 10 migration, but fragmentation and patching delays leave organizations potentially exposed. Our data showed that while more than 75 percent of endpoints had made the migration to Windows 10 (up from 54 percent last year), the average Windows 10 enterprise device was more than three months behind in applying the latest security patches – perhaps unsurprisingly, as the data also identified more than 400 Windows 10 build releases across enterprise devices.

This delay in patching is especially concerning in light of a recent study that shows 60 percent of data breaches are the result of a known vulnerability with a patch available, but not applied.

Relying on fragile controls and unpatched devices

Fragile controls and unpatched devices are being relied on to protect remote work environments. With the rise of remote work environments in the wake of the COVID-19 outbreak, as of May 2020, one in three enterprise devices is now being used heavily (more than 8 hours per day).

The data also shows a 176 percent increase in the number of enterprise devices with collaboration apps installed as of May 2020, versus pre-COVID-19. This means the average attack surface, and potential vulnerabilities, has expanded significantly across enterprises.

Most malware in Q1 2020 was delivered via encrypted HTTPS connections

67% of all malware in Q1 2020 was delivered via encrypted HTTPS connections, and 72% of encrypted malware was classified as zero day, meaning it would have evaded signature-based antivirus protection, according to WatchGuard.


These findings show that without HTTPS inspection of encrypted traffic and advanced behavior-based threat detection and response, organizations are missing up to two-thirds of incoming threats. The report also highlights that the UK was a top target for cyber criminals in Q1, earning a spot in the top three countries for the five most widespread network attacks.

“Some organizations are reluctant to set up HTTPS inspection due to the extra work involved, but our threat data clearly shows that a majority of malware is delivered through encrypted connections and that letting traffic go uninspected is simply no longer an option,” said Corey Nachreiner, CTO at WatchGuard.

“As malware continues to become more advanced and evasive, the only reliable approach to defense is implementing a set of layered security services, including advanced threat detection methods and HTTPS inspection.”

Monero cryptominers surge in popularity

Five of the top ten domains distributing malware in Q1 either hosted or controlled Monero cryptominers. This sudden jump in cryptominer popularity could simply be due to its utility; adding a cryptomining module to malware is an easy way for online criminals to generate passive income.

Flawed-Ammyy and Cryxos malware variants join top lists

The Cryxos trojan was third on a top-five encrypted malware list and also third on its top-five most widespread malware detections list, primarily targeting Hong Kong. It is delivered as an email attachment disguised as an invoice and will ask the user to enter their email and password, which it then stores.

Flawed-Ammyy is a support scam where the attacker uses the Ammyy Admin support software to gain remote access to the victim’s computer.

Three-year-old Adobe vulnerability appears in top network attacks

An Adobe Acrobat Reader exploit that was patched in August 2017 appeared in a top network attacks list for the first time in Q1. This vulnerability resurfacing several years after being discovered and resolved illustrates the importance of regularly patching and updating systems.

Mapp Engage, AT&T and Bet365 targeted with spear phishing campaigns

Three new domains hosting phishing campaigns appeared on a top-ten list in Q1 2020. They impersonated digital marketing and analytics product Mapp Engage, online betting platform Bet365 (this campaign was in Chinese) and an AT&T login page (this campaign is no longer active at the time of the report’s publication).

COVID-19 impact

Q1 2020 was only the start of the massive changes to the cyber threat landscape brought on by the COVID-19 pandemic. Even in these first three months of 2020, we still saw a massive rise in remote workers and attacks targeting individuals.

Malware hits and network attacks decline. Overall, there were 6.9% fewer malware hits and 11.6% fewer network attacks in Q1, despite a 9% increase in the number of Fireboxes contributing data. This could be attributed to fewer potential targets operating within the traditional network perimeter with worldwide work-from-home policies in full force during the pandemic.

Increased attacks and the power of a fully staffed cybersecurity team

The cybersecurity landscape is constantly evolving, and even more so during this time of disruption. According to ISACA’s survey, most respondents believe that their enterprise will be hit by a cyberattack soon – with 53 percent believing it is likely they will experience one in the next 12 months.


Cyberattacks continuing to increase

The survey found cyberattacks are also continuing to increase, with 32 percent of respondents reporting an increase in the number of attacks relative to a year ago. However, there is a glimmer of hope: the share of respondents reporting an increase continues to decline over time; last year, just over 39 percent answered the same way.

While attacks are going up, with the top attack types reported as social engineering (15 percent), advanced persistent threat (10 percent) and ransomware and unpatched systems (9 percent each), respondents believe that cybercrime remains underreported.

Sixty-two percent of professionals believe that enterprises are failing to report cybercrime, even when they have a legal or contractual obligation to do so.

“These survey results confirm what many cybersecurity professionals have known for some time, and in particular during this health crisis—that attacks have been increasing and are likely to impact their enterprise in the near term,” says Ed Moyle, founding partner, SecurityCurve.

“It also reveals some hard truths our profession needs to face around the need for greater transparency and communication around these attacks.”

Security program tools

Among the tools used in security programs for fighting these attacks are AI and machine learning solutions, and the survey asked about these for the first time this year. While these options are available to incorporate into security solutions, only 30 percent of those surveyed use these tools as a direct part of their operations capability.

The survey also found that while the number of respondents indicating they are significantly understaffed fell by seven percentage points from last year, a majority of organizations (62 percent) remain understaffed. Understaffed security teams and those struggling to bring on new staff are less confident in their ability to respond to threats.

Only 21 percent of “significantly understaffed” respondents report that they are completely or very confident in their organization’s ability to respond to threats, whereas those who indicated their enterprise was “appropriately staffed” have a 50 percent confidence level.

Cybersecurity hiring and retention

The impact goes even further, with the research finding that enterprises struggling to fill roles experience more attacks and that the length of time it takes to hire is a factor. For example, 35 percent of respondents in enterprises taking three months to hire reported an increase in attacks, as did 38 percent of those taking six months or more.

Additionally, 42 percent of organizations that are unable to fill open security positions are experiencing more attacks this year.

“Security controls come down to three things—people, process and technology—and this research spotlights just how essential people are to a cybersecurity team,” says Sandy Silk, Director of IT Security Education & Consulting, Harvard University, and ISACA cybersecurity expert.

“It is evident that cybersecurity hiring and retention can have a very real impact on the security of enterprises. Cybersecurity teams need to think differently about talent, including seeking non-traditional candidates with diverse educational levels and experience.”

Sensitive data is piling up on enterprise devices, Windows 10 machines behind on patching

Directly after the WHO declared COVID-19 a global pandemic, an estimated 16 million US employees were sent home and instructed to work remotely, while governments around the world implemented widespread school closures impacting over 90 percent of the world’s student population, Absolute reveals.


This placed IT and security teams under immediate pressure to quickly stand up work-from-home or learn-from-home environments to ensure continued productivity, connectivity, and security.

“COVID-19 marks the beginning of a new era where we believe the nature of work will be forever changed,” said Christy Wyatt, President and CEO of Absolute.

“As this crisis took hold, we saw our customers mobilize quickly to get devices into the hands of students and employees and navigate the challenges of standing up remote work and distance learning programs. What has become resoundingly clear is there has never been a more critical time for having undeletable endpoint resilience.”

Sensitive data is building up on enterprise devices

There has been a 46 percent increase in the number of items of sensitive data – such as Personally Identifiable Information (PII) and Protected Health Information (PHI) – identified on enterprise endpoints, compared to pre-COVID-19. Compounded by the pre-existing gaps in endpoint security and health, this means enterprise organizations are at heightened risk.

Enterprises at heightened risk of data breaches or compliance violations

On average, one in four enterprise endpoint devices have a critical security application (anti-malware, encryption, VPN, or client management) that is missing, inactive or out-of-date.

With the significant increases in sensitive data being stored on enterprise endpoint devices, enterprises are putting themselves at risk of legal compliance violations and data breaches as COVID-19 cyber attacks accelerate.


Employee and student device usage continues to rise during the pandemic

The data shows a nearly 50 percent increase in the amount of heavy device usage – 8+ hours per day – across enterprise organizations, jumping to an increase of 62 percent in heavy education device usage. The average number of hours education endpoint devices are being used daily is also up 27 percent.

Patch management plaguing both enterprise and education IT teams

Device health sees slight improvement, but patch management continues to plague both enterprise and education IT teams. The average enterprise endpoint device running Windows 10 continues to be nearly 3 months behind in applying the latest patch, with that delay spiking to more than 180 days since a patch has been applied to the average student Windows 10 device – leaving students and employees vulnerable.

April 2020 Patch Tuesday forecast: Uncertainty reigns, but patching endures through pandemic

I should have reserved the title of last month’s article, “Let’s put the madness behind us,” for this month. Of course, it has a completely different meaning now in the wake of the COVID-19 pandemic chaos. The biggest change and challenge for most of us is managing and securing an IT environment while working from home.


Extending the edge of the corporate network through VPNs has taxed many environments, placing greater reliance on collaboration and communication tools. And with that, vulnerabilities have surfaced, and in some cases, exploitation has occurred. Let’s look at some important events since last patch Tuesday.

The cyber threat of COVID-19

COVID-19 has been not only a threat in a physical sense, but also generated one of the larger cybersecurity threats in recent memory. Attackers have built on the public’s need for the latest, global COVID-19 information by creating widespread phishing attacks. These phishing attacks often contain downloaders which exploit known vulnerabilities.

Many of these attacks are posing as the World Health Organization, National Institutes of Health, or other trusted sources for information. During this crisis it remains a priority to make employees aware of these attacks and to continue to apply the software updates needed to protect your systems.

Attacks on collaboration software

I mentioned recent attacks on collaboration software, with Zoom unfortunately being the leader in the news. Several vulnerabilities concerning passwords and privilege escalation have been discovered in this widely used application, and the overall security of the product has been questioned by many.

Attackers have been able to interrupt live sessions. In this time of working from home, the need for regular interaction to accomplish our jobs is more important than ever, and we need to trust the tools we are using. Zoom has been responding rapidly, providing updates to combat this recent wave of attacks.

Windows SMBv3 vulnerability

Two days after March Patch Tuesday, Microsoft released an update for the Windows SMBv3 vulnerability associated with CVE-2020-0796.

This vulnerability exists in Windows 10 1903 and 1909 and garnered a lot of attention because it received the highest Common Vulnerability Scoring System (CVSS) score of 10. It does not require user authentication and could be used to propagate a worm. Please make sure you’ve applied this update.
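A simple way to confirm that status on a given host is sketched below: it flags Windows 10 builds 18362/18363 (1903/1909) that do not show the out-of-band fix in their installed-hotfix list. KB4551762 is the update generally associated with CVE-2020-0796, but treat that identifier, and the use of wmic, as assumptions to verify against Microsoft's advisory.

```python
# Sketch for Windows admins: flag hosts on a build affected by CVE-2020-0796
# (Windows 10 1903/1909, builds 18362/18363) that lack the out-of-band fix.
# KB4551762 and the "wmic qfe" call are assumptions to verify against
# Microsoft's advisory; run on the host itself with Python for Windows.
import subprocess
import sys

AFFECTED_BUILDS = {18362, 18363}   # Windows 10 1903 and 1909
FIX_KB = "KB4551762"               # update associated with CVE-2020-0796

def installed_hotfixes() -> str:
    result = subprocess.run(
        ["wmic", "qfe", "get", "HotFixID"],
        capture_output=True, text=True, check=True,
    )
    return result.stdout

def main() -> None:
    build = sys.getwindowsversion().build  # only available on Windows
    if build not in AFFECTED_BUILDS:
        print(f"Build {build}: not in the affected set")
        return
    if FIX_KB in installed_hotfixes():
        print(f"Build {build}: {FIX_KB} present")
    else:
        print(f"Build {build}: {FIX_KB} missing - patch or disable SMBv3 compression")

if __name__ == "__main__":
    main()
```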

Windows 10

Microsoft delayed the end-of-support date for the Enterprise and Education versions of Windows 10 1709 from April 14 until October 13. Per Microsoft, this will remove at least one burden for those who were in the process of updating to a new edition. Of course, this means that both Windows 10 1709 and 1803 will reach end-of-support within a month of each other – 1803 ends November 10 so plan accordingly!

While on the subject of Windows 10, the release of Windows 10 2004 may be happening soon and there is cause for concern with so many people working from home. There is no control over the update being applied on a system running Home edition, so for employees, or their children doing schoolwork, this update could be very disruptive. Watch for more information from Microsoft and let your employees know what to expect.

The IT world is changing rapidly and as we’ve seen with Zoom, Microsoft and others, both policies and patch releases are being adapted to address the situation. The entire work-from-home scenario is forcing vendors to continuously assess the security state of their applications, so I anticipate we will see more releases addressing a smaller number of vulnerabilities as they are discovered and fixed.

April 2020 Patch Tuesday forecast

  • Microsoft should provide their regular updates across the board for the latest Windows 10 workstations and servers as well as the usual applications, i.e. Office, SharePoint, etc. Be on the lookout for a fix to the font vulnerability reported in Advisory 20006, Type 1 Font Parsing Remote Code Execution Vulnerability.
  • Mozilla provided security updates this week for Firefox, Firefox ESR and Thunderbird. We may not see anything from them next week.
  • Likewise, Google released a security update for Chrome this week, so I don’t expect to see anything on Patch Tuesday.
  • There are no pre-announcements for Adobe Acrobat, Reader, or Flash but I wouldn’t rule out an update next week.

We should have a smaller set of updates than usual released next week. But with the rising number of attacks coupled with the chaos surrounding the COVID-19 pandemic, it is more important than ever to protect our work-from-home employees. Once again, patch endures.

Qualys VMDR: Discover, prioritize, and patch critical vulnerabilities in real time

In this podcast, Prateek Bhajanka, VP of Product Management, Vulnerability Management, Detection and Response at Qualys, discusses how you can significantly accelerate an organization’s ability to respond to threats.


Qualys VMDR enables organizations to automatically discover every asset in their environment, including unmanaged assets appearing on the network, inventory all hardware and software, and classify and tag critical assets. VMDR continuously assesses these assets for the latest vulnerabilities and applies the latest threat intel analysis to prioritize actively exploitable vulnerabilities.

Here’s a transcript of the podcast for your convenience.

Hi everyone. This is Prateek Bhajanka, VP of Product Management, Vulnerability Management, Detection and Response at Qualys. Today I’m going to talk about the new concept that Qualys has introduced in the market. That is vulnerability management detection and response, which talks about the entire lifecycle of vulnerability management using a single integrated workflow in the same platform altogether.


Security is only as strong as the weakest link that you have in your organization. There could be so many assets and devices which are on the network, which are connected to the enterprise network, which are consuming your enterprise resources, which you may not even know of. You will not be able to secure anything that you do not know of. That’s the reason the VMDR concept picks up the problem of vulnerability management right from the bottom itself where it is helping you discover the assets which are connected, or which are getting connected to your enterprise network.

No matter whether it is getting connected using VPN, or locally, or through a network, as soon as a device is getting connected, it will be discovered by the sensors that are located in the network, which can tell you that these are the new assets which are connected and then you can go about inventorying them. You can maintain the asset inventory of those devices. Then the next step is that if you look at performing vulnerability management, then you go ahead and perform vulnerability assessment, vulnerability management of those devices, the existing ones, the ones which are already discovered and the ones which are now getting discovered. Then identify all the vulnerabilities which are existing in those assets, and then as it is perceived in the market, that vulnerability is a number game, but vulnerability management is no longer a number game.

The reason is, if you look at the statistics over the last 10 years, you would see that the total number of vulnerabilities which get discovered in a year, maybe let’s say 15,000 to 16,000 of vulnerabilities that are getting discovered, out of those vulnerabilities, only a handful, like 1000 vulnerabilities get exploited. That means the fraction of vulnerabilities which are getting exploited are not more than 10 to 12%. Let’s say that you have a thousand vulnerabilities in your organization, and even if you fixed 900 vulnerabilities, you cannot say that you have implemented vulnerability management effectively because the rest of the hundred vulnerabilities could be all the way more riskier than the 900 vulnerabilities that you fixed, and the rest hundred vulnerabilities that you left could be the vulnerabilities which are getting exploited in the wild.

Now we are bridging the gap and with the concept of VMDR, we are not just calculating these thousand vulnerabilities for you, but we are also helping you understand what hundred vulnerabilities are getting exploited in the wild using various formats. It could be malware, it could be ransomware, it could be nation-state attacks, it could be a remote code execution. So, what are the vulnerabilities that you should pay immediate attention to, so that you can prioritize your efforts because you have limited amount of remediation efforts, limited number of personnel, limited number of resources to work on vulnerability management, so that you would be able to focus on the areas which would be all the way more impactful then what it is today. So, right from asset discovery to asset inventory to vulnerability management, and then prioritizing those vulnerabilities on the basis of the threat which are active in the wild.


Right now, so far what we are doing is problem identification, but we may not be actually solving the problem. How to solve that problem? With the concept of VMDR, we are also adding response capabilities in the same platform, so that it is not just about identifying the problem and leaving it on the table, but it is also about going and implementing the fixes. If you see a particular vulnerability, you would also be able to see which particular patch can be implemented in order to remediate this particular vulnerability.

That kind of correlation from CVE to the missing patch tells you the exact patches that you need to deploy so that this particular vulnerability can be remediated. It also tells you the list of prioritized assets on the basis of various real-time threat indicators, on the basis of various attack surfaces.

Once you have the vulnerability data, while we are doing the scanning, you have a lot of asset context that you can use to filter the number of vulnerabilities. When I say that you divide the context into two parts: internal and external. Your external context would be your threat intelligence feed that is coming from so many different sources or which may be inbuilt in the platform itself. And this threat intelligence is an external context because this is not taking into account your asset context or your internal organization context. So this will help you identify the vulnerabilities which are getting exploited in the wild today, which are expected to get exploited in the wild, for which there are some kind of chatter going around in the dark web, and that these are the vulnerabilities for which the exploits have been developed, the proof of concept is available, and so many things. This is very external.

Now, the internal context. Out of 1000 vulnerabilities, let’s say, on the basis of external context, you are able to prioritize or filter out, 800 vulnerabilities and now you’re left with 200 vulnerabilities. But how to go down further, how to streamline your efforts and prioritize your efforts.

Now comes the internal context. Is this particular vulnerability on a running kernel or a non-running kernel? Of course, I would like to focus my efforts on the running kernels first, because those are the kernels which would be exposed to any outsider. This is the asset context I would be putting in. What are the vulnerabilities which are already mitigated by the existing configuration? Let's say, the BlueKeep vulnerability. BlueKeep is a vulnerability on port 3389. If network level authentication is already enabled on the network, that means I do not need to worry about the BlueKeep vulnerability.


If that is already enabled, I can also filter out those vulnerabilities on the assets that have been tagged as having the BlueKeep vulnerability. Then there is the question of whether a vulnerability is remotely discoverable or not, because vulnerabilities which are remotely discoverable can be remotely discovered by attackers as well. That means it's a priority that you should go ahead and fix those vulnerabilities first. On the basis of the many other internal context filters that are available with the VMDR concept and VMDR platform, you would be able to identify those hundred vulnerabilities out of a thousand vulnerabilities which you should pay immediate attention to.
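That two-stage filtering can be sketched in a few lines of code. The example below is a simplification of the idea, not Qualys VMDR itself, and the sample findings are invented.

```python
# Simplified sketch of the prioritization described above: external threat
# context first, then internal asset context. An illustration of the idea,
# not Qualys VMDR code; the sample data is invented.
from dataclasses import dataclass

@dataclass
class Finding:
    cve: str
    exploited_in_wild: bool      # external context: threat intelligence
    remotely_discoverable: bool  # attacker-facing context
    on_running_kernel: bool      # internal context
    mitigated_by_config: bool    # internal context, e.g. NLA for BlueKeep

findings = [
    Finding("CVE-2019-0708", True,  True,  True,  True),   # BlueKeep, but NLA enabled
    Finding("CVE-2020-0796", True,  True,  True,  False),  # exploited, unmitigated
    Finding("CVE-0000-0000", False, False, False, False),  # placeholder, low priority
]

def prioritize(items: list[Finding]) -> list[Finding]:
    # Stage 1: external context - keep what attackers are actually using.
    active = [f for f in items if f.exploited_in_wild]
    # Stage 2: internal context - drop what the environment already neutralizes.
    return [
        f for f in active
        if f.on_running_kernel and not f.mitigated_by_config and f.remotely_discoverable
    ]

for f in prioritize(findings):
    print("patch first:", f.cve)
```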

With the click of a button on the console, you would be able to deploy the remediation measures from the console itself, so that the time to remediation is reduced to the minimum possible. The ideal time to remediation, as our Chief Product Officer likes to say, is zero, because the average number of days before a vulnerability gets exploited in the wild keeps shrinking. It has now come down to seven.

You cannot afford a significant delay between when a vulnerability gets discovered and when it gets patched. Putting all of this together, from asset discovery to asset inventory to vulnerability management, then prioritizing on the basis of the threats which are active, and then going about remediating and fixing those problems: this is the concept of vulnerability management, detection and response.

Organizations struggle with patching endpoints against critical vulnerabilities

Less than 50 percent of organizations can patch vulnerable systems swiftly enough to protect against critical threats and zero-day attacks, and 81 percent have suffered at least one data breach in the last two years, according to Automox.


The research surveyed 560 IT operations and security professionals at enterprises with between 500 and 25,000 employees, across more than 15 industries to benchmark the state of endpoint patching and hardening.

While most enterprises want to prioritize patching and endpoint hardening, they are inhibited by the pace of digital transformation and modern workforce evolution, citing difficulty in patching systems belonging to mobile employees and remote offices, inefficient patch testing, lack of visibility into endpoints, and insufficient staffing in SecOps and IT operations to successfully do so.

Missing patches and configurations are at the center of data breaches

The report confirmed that four out of five organizations have suffered at least one data breach in the last two years. When asked about the root causes, respondents placed phishing attacks (36%) at the top of the list, followed by:

  • Missing operating systems patches (30%)
  • Missing application patches (28%)
  • Operating system misconfigurations (27%)

With missing patches and configurations cited more frequently than such high-profile issues as insider threats (26%), credential theft (22%), and brute force attacks (17%), three of the four most common issues can be addressed simply with better cyber hygiene.

Enterprises should patch within 24 hours

When critical vulnerabilities are discovered, cybercriminals can typically weaponize them within seven days. To ensure protection from the attacks that inevitably follow, security experts recommend that enterprises patch and harden all vulnerable systems within 72 hours.

Zero-day attacks, which emerge with no warning, pose an even greater challenge, and enterprises should aim to patch and harden vulnerable systems within 24 hours. Currently:

  • Less than 50% of enterprises can meet the 72-hour standard, and only about 20% can meet the 24-hour threshold for zero-days.
  • 59% agree that zero-day threats are a major issue for their organization because their processes and tools do not enable them to respond quickly enough.
  • Only 39% strongly agree that their organizations can respond fast enough to critical and high severity vulnerabilities to remediate successfully.
  • 15% of systems remained unpatched after 30 days.
  • Almost 60% harden desktops, laptops and servers only monthly or annually, which is an invitation to adversaries.

Good cyber hygiene means endpoints are scanned and assessed on a regular basis and, if problems are found, promptly patched or reconfigured. Automation dramatically speeds up these processes by enabling IT operations and SecOps staff to patch and harden more systems with less effort, while reducing the amount of system and application downtime needed for patching and hardening. Organizations that have fully automated endpoint patching and hardening are outperforming others in basic cyber hygiene tasks.
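
As a rough illustration of what that automation looks like, here is a self-contained sketch of a hygiene cycle. The Endpoint model and remediation steps are assumptions for this example rather than any vendor's agent API.

```python
# A minimal, self-contained sketch of an automated cyber hygiene loop.
# The Endpoint model and remediation steps are illustrative assumptions.

from dataclasses import dataclass, field

@dataclass
class Endpoint:
    name: str
    missing_patches: list = field(default_factory=list)
    misconfigurations: list = field(default_factory=list)

def hygiene_cycle(endpoints):
    """Assess each endpoint and remediate whatever the assessment finds."""
    for ep in endpoints:
        for patch in list(ep.missing_patches):
            # A real tool would download and install the patch here,
            # ideally inside an approved maintenance window.
            ep.missing_patches.remove(patch)
            print(f"patched {patch} on {ep.name}")
        for setting in list(ep.misconfigurations):
            # A real tool would push the hardened configuration here.
            ep.misconfigurations.remove(setting)
            print(f"hardened {setting} on {ep.name}")

fleet = [
    Endpoint("laptop-042", missing_patches=["KB4551762"]),      # example patch ID
    Endpoint("web-srv-03", misconfigurations=["SMBv1 enabled"]),
]
hygiene_cycle(fleet)
```

Run on a daily schedule instead of monthly or annually, a loop like this is what makes the 72-hour and 24-hour windows above realistic.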


The modern workforce presents a cyber hygiene dilemma

Survey respondents are more confident in their ability to maintain cyber hygiene for on-premises computers and servers than for remote and mobile systems such as servers on Infrastructure as a Service (IaaS) cloud platforms, mobile devices (smartphones and tablets), and computers at remote locations. In fact, they rated their ability to maintain cyber hygiene for Bring Your Own Device (BYOD) hardware lowest of all IT components.

These patterns can be explained by the fact that most existing patch management tools don’t work well with cloud-based endpoints, and that virtual systems are very dynamic and therefore harder to monitor and protect than physical ones.

“Phishing has and will continue to be an issue for many organizations. As the Automox Cyber Hygiene Index highlights, 36% of data breaches involved phishing as the initial access technique used by attackers. Detecting phishing is extremely difficult, but giving your users the ability to report suspicious messages along with proper training goes a long way. You want your users to be part of your security team, and enabling them to report suspicious messages is one step towards this goal,” Josh Rickard, Swimlane Research Engineer, told Help Net Security.

“The combination of robust filtering and user enablement can drastically help with the detection of phishing attacks, but once they have been reported, you need automation to process and respond to them. More importantly, you need a platform that can automate and orchestrate across multiple tools and services. Using security orchestration, automation and response (SOAR) for phishing alerts enables security teams to automatically process reported messages, make a determination based on multiple intelligence services/tools, respond by removing a message from one user’s (or all users’) mailboxes, and even search for additional messages with similar attributes throughout the organization. Having the ability to automate and orchestrate this response is critical for security teams and enables them to put their focus on other higher-value security-related issues,” Rickard concluded.
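
The workflow Rickard describes (process a reported message, check it against intelligence sources, remove confirmed phish from every mailbox) can be sketched in a few lines. The helpers below are stand-ins for real reputation and mail APIs, not Swimlane's actual platform.

```python
# A minimal sketch of SOAR-style phishing triage.
# lookup_url_reputation() is a stand-in for a real threat intelligence service.

def lookup_url_reputation(url):
    # Placeholder verdict logic; a real playbook would query several services.
    return "malicious" if "evil.example" in url else "unknown"

def triage_reported_message(message, mailboxes):
    """Process one user-reported message and respond automatically."""
    verdicts = [lookup_url_reputation(u) for u in message["urls"]]
    if "malicious" in verdicts:
        # Remove the message from every mailbox, then a real playbook
        # would also hunt for look-alike messages across the organization.
        for mailbox in mailboxes:
            mailbox["messages"] = [
                m for m in mailbox["messages"]
                if m["subject"] != message["subject"]
            ]
        return "removed"
    return "needs-analyst-review"

reported = {"subject": "Reset your password", "urls": ["http://evil.example/login"]}
mailboxes = [{"owner": "alice", "messages": [reported]},
             {"owner": "bob", "messages": [dict(reported)]}]
print(triage_reported_message(reported, mailboxes))   # removed
print(sum(len(m["messages"]) for m in mailboxes))     # 0 messages left
```

In a real deployment the same playbook would also record indicators in a case, notify the reporting user, and escalate anything it cannot confidently classify.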

Scientists expose another security flaw in Intel processors

Computer scientists at KU Leuven have once again exposed a security flaw in Intel processors. Jo Van Bulck, Frank Piessens, and their colleagues in Austria, the United States, and Australia gave the manufacturer one year’s time to fix the problem.

Load Value Injection

Plundervolt, Zombieload, Foreshadow: in the past couple of years, Intel has had to issue quite a few patches for vulnerabilities that computer scientists at KU Leuven have helped to expose. “All measures that Intel has taken so far to boost the security of its processors have been necessary, but they were not enough to ward off our new attack,” says Jo Van Bulck from the Department of Computer Science at KU Leuven.

Like the previous attacks, the new technique – dubbed Load Value Injection – targets the ‘vault’ of computer systems with Intel processors: SGX enclaves.

“To a certain extent, this attack picks up where our Foreshadow attack of 2018 left off. A particularly dangerous version of this attack exploited the vulnerability of SGX enclaves, so that the victim’s passwords, medical information, or other sensitive information was leaked to the attacker.

“Load Value Injection uses that same vulnerability, but in the opposite direction: the attacker’s data are smuggled – ‘injected’ – into a software program that the victim is running on their computer. Once that is done, the attacker can take over the entire program and acquire sensitive information, such as the victim’s fingerprints or passwords.”

Giving Intel enough time to fix the problem

The vulnerability was discovered as early as 4 April 2019. Nevertheless, the researchers and Intel agreed to keep it secret for almost a year. Responsible disclosure embargoes are not unusual in cybersecurity, although they are usually lifted after a shorter period of time.

“We wanted to give Intel enough time to fix the problem. In certain scenarios, the vulnerability we exposed is very dangerous and extremely difficult to deal with because, this time, the problem did not just pertain to the hardware: the solution also had to take software into account. Therefore, hardware updates like the ones issued to resolve the previous flaws were no longer enough. This is why we agreed upon an exceptionally long embargo period with the manufacturer.”

“Intel ended up taking extensive measures that force the developers of SGX enclave software to update their applications. However, Intel has notified them in time. End-users of the software have nothing to worry about: they only need to install the recommended updates.”

“Our findings show, however, that the measures taken by Intel make SGX enclave software between 2 and 19 times slower.”

What are SGX enclaves?

Computer systems are made up of different layers, which makes them very complex. Every layer also contains millions of lines of computer code. As this code is still written manually, the risk of errors is significant.

If such an error occurs, the entire computer system is left vulnerable to attacks. You can compare it to a skyscraper: if one of the floors becomes damaged, the entire building might collapse.

Viruses exploit such errors to gain access to sensitive or personal information on the computer, from holiday pictures and passwords to business secrets.

In order to protect their processors against this kind of intrusion, IT company Intel introduced an innovative technology in 2015: Intel Software Guard eXtensions (Intel SGX). This technology creates isolated environments in the computer’s memory, so-called enclaves, where data and programs can be used securely.

“If you look at a computer system as a skyscraper, the enclaves form a vault”, researcher Jo Van Bulck explains. “Even when the building collapses the vault should still guard its secrets – including passwords or medical data.”

The technology seemed watertight until August 2018, when researchers at KU Leuven discovered a breach. Their attack was dubbed Foreshadow. In 2019, the Plundervolt attack revealed another vulnerability. Intel has released updates to resolve both flaws.

Combat complexity to prevent cybersecurity fatigue

In today’s security landscape, the average company uses more than 20 security technologies. While vendor consolidation is steadily increasing, with 86 percent of organizations using between 1 and 20 cybersecurity vendors, more than 20 percent feel that managing a multi-vendor environment is very challenging, a share that has increased by 8 percent since 2017, according to Cisco’s CISO Benchmark Report, for which 2,800 security professionals from 13 countries around the globe were surveyed.

Other notable findings:

  • Forty-two percent of respondents are suffering from cybersecurity fatigue, defined as virtually giving up on proactively defending against malicious actors.
  • Over 96 percent of fatigue sufferers say that managing a multi-vendor environment is challenging, with complexity being one of the main causes of burnout.
  • Beyond having to respond to too many alerts and struggling with vendor complexity, suffering a more impactful breach (in terms of hours of downtime) also increases cyber fatigue.

combat cybersecurity complexity

Increasing investments to combat cybersecurity complexity

To combat cybersecurity complexity, security professionals are increasing investments in automation to simplify and speed up response times in their security ecosystems; using cloud security to improve visibility into their networks; and sustaining collaboration between networking, endpoint and security teams.

“As organizations increasingly embrace digital transformation, CISOs are placing higher priority in adopting new security technologies to reduce exposure against malicious actors and threats. Often, many of these solutions don’t integrate, creating substantial complexity in managing their security environment,” said Steve Martino, Senior Vice President and CISO, Cisco.

“To address this issue, security professionals will continue steady movement towards vendor consolidation, while increasing reliance on cloud security and automation to strengthen their security posture and reduce the risk of breaches.”

Additional CISO challenges and opportunities for improvement

Workload protection for all user and device connections across the network was found extremely challenging — Forty-one percent of the surveyed organizations found data centers were extremely difficult to defend, and 39 percent said they struggled to secure applications. The most troublesome place to defend data was the public cloud, with 52 percent finding it very or extremely challenging to secure, and 50 percent claiming private cloud infrastructure was a top security challenge.

Security professionals struggle to secure the growing mobile workforce and ubiquitous personal devices — More than half (52 percent) of respondents stated mobile devices are now very or extremely challenging to defend. Adopting zero-trust technologies can help secure managed and unmanaged devices without slowing down employees.

Adoption of zero-trust technologies to secure access to the network, applications, users, devices and workloads needs to increase — Only 27 percent of organizations are currently using multi-factor authentication (MFA), a valuable zero-trust technology for securing the workforce. Survey respondents from the following countries showed the highest MFA adoption rates, in this order: USA, China, Italy, India, Germany, and UK. Micro-segmentation, a zero-trust approach to securing workload access, had the lowest adoption, at only 17 percent of respondents.

Breaches due to an unpatched vulnerability caused higher levels of data loss — A key concern for 2020 is that 46 percent of organizations, up from 30 percent in last year’s report, had an incident caused by an unpatched vulnerability. Sixty-eight percent of organizations breached from an unpatched vulnerability suffered losses of 10,000 data records or more last year. In contrast, for those who said they suffered a breach from other causes, only 41 percent lost 10,000 or more records in the same timeframe.


Security pros improving their security posture

  • Collaboration between network and security teams remains high — Ninety-one percent of respondents reported they’re very or extremely collaborative.
  • Security practitioners are realizing the benefits of automation for solving their skills shortage problem as they adopt solutions with greater machine learning and artificial intelligence capabilities — Seventy-seven percent of survey respondents are planning to increase automation to simplify and speed up response times in their security ecosystems.
  • Cloud security adoption is increasing, improving effectiveness and efficiency — Eighty-six percent of respondents say utilizing cloud security increased visibility into their networks.

Recommendations for CISOs

  • Employ a layered defense, which should include MFA, network segmentation, and endpoint protection.
  • Gain the highest levels of visibility to bolster data governance, lower risk, and increase compliance.
  • Focus on cyber hygiene: shore up defenses, update and patch devices, and conduct drills and training.
  • Implement a zero-trust framework to build security maturity.
  • To reduce complexity and alert overload, adopt an integrated platform approach when managing multiple security solutions.

80% of successful breaches are from zero-day exploits

Organizations are not making progress in reducing their endpoint security risk, especially against new and unknown threats, a Ponemon Institute study reveals.

endpoint security risk

68% of IT security professionals say their company experienced one or more endpoint attacks that compromised data assets or IT infrastructure in 2019, an increase from 54% of respondents in 2017.

Zero-day attacks continue to increase in frequency

Of those incidents that were successful, 80% were new or unknown zero-day attacks. These attacks either involved the exploitation of undisclosed vulnerabilities or the use of new malware variants that signature-based detection solutions do not recognize. Zero-day attacks continue to increase in frequency and are expected to more than double this year.

These attacks are also inflicting more bottom-line business damage. The study found that the average cost per endpoint breach increased to $9M in 2019, up more than $2M since 2018.

“Corporate endpoint breaches are skyrocketing and the economic impact of each attack is also growing due to sophisticated actors bypassing enterprise antivirus solutions,” said Larry Ponemon, Chairman of Ponemon Institute.

“Over half of cybersecurity professionals say their organizations are ineffective at thwarting major threats today because their endpoint security solutions are not effective at detecting advanced attacks.”

The third annual study surveyed 671 IT security professionals responsible for managing and reducing their organization’s endpoint security risk.

Increasing vulnerability during patch gaps

In addition to expressing concern over zero-day threats, respondents noted increasing vulnerability during patch gaps. In fact, 40% of companies say it’s taking longer to patch, with an average patch gap of 97 days due to the number of patches and their complexity.

Patch exploits will continue to be a hot-button issue in 2020 as the last remaining organizations upgrade to Windows 10 on the heels of Windows 7 end of life, and patch frequency increases.

An extra layer of security added to antivirus solutions

The shift to Windows 10 is also ushering in new enterprise security strategies that can be effective in thwarting more advanced threats. With Windows Defender AV built into the Windows 10 operating system, 80% of organizations report using or planning to use Defender AV for savings over their legacy antivirus solution.

Cost savings are being reallocated towards an added layer of advanced threat protection in endpoint stacks and an increase in IT resources. 51% of cybersecurity professionals say they’ve added an extra layer of security to their antivirus solutions.

Furthermore, since 2017 the number of IT departments reporting they have ample resources to minimize endpoint threats has increased from 36% to 44%.


“The move to Windows 10 provides the perfect opportunity for organizations to retool their endpoint security to better defend against the zero-day attacks and advanced threats that are evading legacy antivirus in 2020 and pose the biggest risk to their business,” said Andrew Homer, VP of Security Strategy at Morphisec.

“Forward thinking cybersecurity professionals are shifting to the free antivirus capability built into Windows 10 and reallocating their cost savings into an additional layer of advanced threat protection and increased IT resources.”

EDR adoption

The study found that half of the companies who have adopted EDR cite costly customization (55%) and false-positive alerts (60%) as significant challenges.

In addition, of IT departments that haven’t adopted EDR yet, 65% say lack of confidence in the ability to prevent zero-day threats and 61% note security staffing limitations as the top reasons to avoid adoption.

The importance of proactive patch management

IT teams appreciate it when vendors or security researchers discover new vulnerabilities and develop patches for them. So do attackers. The same information that lets IT teams know where they may be vulnerable so they can take action, also lets attackers know where the weaknesses are – providing an opportunity and a map to guide them so they can develop an exploit.

That means that once a vulnerability is disclosed, the clock starts ticking and it becomes a race for organizations to patch or mitigate vulnerable systems before they can be compromised.

While zero day attacks capture media attention with exciting headlines, the reality is that most attacks target known vulnerabilities for which patches or updates exist. According to the 2019 Verizon Data Breach Investigations Report, the average IT team patches fewer than 40% of affected systems within 30 days of discovering a vulnerability. However, cybercriminals can often develop an exploit for a publicly disclosed vulnerability within a matter of weeks or even days.

The gap between a working exploit being developed and the necessary patch being applied is a period of heightened, and avoidable, exposure to risk. One of the primary problems is a disconnect between the priorities of IT and security teams: where security teams take a proactive approach, the IT teams responsible for implementing patches tend to take a more reactive approach, potentially hindering the patch management program overall.

Reactive patch management

IT teams are busy. Patching vulnerable systems and applications is just one part of a very long list of tasks the IT team is responsible for. Everything is important on some level and it all needs to get done, so it’s understandable that patching may not always be the highest priority.

The problem is that if everything is a priority, then nothing is. Frequently, IT teams find themselves in a vicious cycle of constantly putting out fires – running from urgent issue to urgent issue because they never make the time to approach the situation proactively.

Risk assessment and context

The reality is that not every vulnerability is urgent – and that even the urgent ones aren’t necessarily a top priority for every vulnerable system or application. You need to have the right context to understand your exposure to risk.

You might have 100 systems affected by a vulnerability rated as “Critical”. If 84 of those systems don’t contain sensitive data and are not directly connected to other vulnerable or sensitive systems, they aren’t a top priority. Of the remaining 16, if 5 of those are systems that are public facing and you have other mitigating security controls in place, they also don’t need to be a top priority. The remaining 11 – the ones that are vulnerable, contain sensitive data or critical business functions, and are connected to the public internet – are the systems you should focus on first.

Eleven is a much more manageable number than 100. By addressing just these 11 systems, you greatly reduce your attack surface and your exposure to risk. Having context enables you to prioritize effectively.
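
Expressed as code, the triage above is just a filter over an asset inventory. The attributes below (sensitive, internet_facing, mitigated) are invented for illustration; the point is that context, not severity alone, produces the shortlist.

```python
# A minimal sketch of the context-based triage described above, using
# invented attributes to reproduce the 100 -> 11 narrowing in the example.

from dataclasses import dataclass

@dataclass
class System:
    name: str
    sensitive: bool        # holds sensitive data or critical business functions
    internet_facing: bool  # reachable from the public internet
    mitigated: bool        # other compensating controls already in place

def top_priority(vulnerable_systems):
    """Keep only the systems worth patching first."""
    return [
        s for s in vulnerable_systems
        if s.sensitive and s.internet_facing and not s.mitigated
    ]

# 84 low-value internal systems, 5 public-facing systems with mitigations,
# and 11 sensitive, exposed, unmitigated systems: 100 in total.
fleet = (
    [System(f"internal-{i}", False, False, False) for i in range(84)]
    + [System(f"dmz-{i}", True, True, True) for i in range(5)]
    + [System(f"crown-jewel-{i}", True, True, False) for i in range(11)]
)
print(len(top_priority(fleet)))  # 11 of the original 100
```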

Proactive patch management

In an ideal world, all of your vulnerable systems would be patched, but in the real world you don’t have to patch every vulnerability right now. Proactive patch management is focused on protecting the systems and applications that are most important from a business perspective and reducing the overall attack surface.

You must at least be aware of the vulnerabilities in the first place, though. You need to have an accurate IT asset inventory and comprehensive visibility so you know where all of your systems and applications are, and what they’re connected to. Armed with that information, you can prioritize your efforts based on context and potential impact, and be proactive about patching and updating the systems that need it the most.