XDR: Unifying incident detection, response and remediation

According to IBM’s Cost of a Data Breach Report 2020, the average time it took a company to identify and contain a breach in 2019 was 279 days, compared with 266 days in 2018, and the average over the past five years has hovered around 280 days. In other words, things haven’t gotten much better. It’s clear that time is not on CISOs’ side and they need to act fast.


What’s holding organizations back when it comes to detecting and remediating data breaches?

Let’s consider the top challenges facing security operations centers (SOCs). First, there are too many alerts, which makes it difficult to prioritize those that deserve immediate attention and investigation.
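Risk-based triage is one common way to cut through alert volume. The sketch below is purely illustrative — the field names (`severity`, `asset_criticality`) and weights are hypothetical, not taken from any particular SOC product — but it shows the basic idea of scoring alerts so the few that deserve immediate investigation rise to the top:

```python
# Hypothetical sketch of risk-based alert triage: score each alert by
# severity and asset criticality so the highest-risk items surface first.
# Field names and weights are illustrative, not from any product.

SEVERITY_WEIGHT = {"low": 1, "medium": 3, "high": 7, "critical": 10}

def triage(alerts):
    """Return alerts sorted so the highest-risk items come first."""
    def score(alert):
        return SEVERITY_WEIGHT[alert["severity"]] * alert["asset_criticality"]
    return sorted(alerts, key=score, reverse=True)

alerts = [
    {"id": 1, "severity": "low",      "asset_criticality": 5},
    {"id": 2, "severity": "critical", "asset_criticality": 2},
    {"id": 3, "severity": "high",     "asset_criticality": 4},
]
queue = triage(alerts)
# Highest combined score first: high on a critical asset (7*4) outranks
# a critical alert on a low-value asset (10*2).
```

Even a toy scheme like this makes the trade-off explicit: prioritization is a function of both the alert and the asset it fired on, not severity alone.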

Also, there’s no unified view of the security information generated by the layers of tools deployed by most large enterprises. Finally, these problems are compounded by the fact that organizations are using hybrid on-premises and cloud architectures, as well as purely cloud environments.

Another major obstacle facing SOCs is that threat hunting and investigations are still manually intensive activities. They are complicated by the fact that the data sources SOCs use are decentralized and must be accessed from different consoles.

SOCs also lack visibility into a very significant component of threat hunting: identity. It has taken an even more prominent role now that so many people are working remotely due to COVID-19.

The analysis, control and response planes in current security architectures are not integrated. In other words, analytics are separated from the administration and investigation stack, which is also separated from the tools used to intercept adversaries and shut down an attack.

Enter XDR

A new architecture has emerged called XDR, which stands for “extended detection and response.” Research firm Gartner listed XDR as one of its top 9 security and risk trends for 2020. XDR flips the current security model on its head by replacing the traditional top-down approach with a bottom-up approach to deliver more precise and higher fidelity results.

The primary driver behind XDR is its fusing of analytics with detection and response. The premise is that these functions are not and should not be separate. By bringing them together, XDR promises to deliver many benefits.

The first is a precise response to threats. Instead of keeping logs in a separate silo, with XDR they can be used to immediately drive response actions with higher fidelity and deeper knowledge of the details surrounding an incident. For example, the traditional SIEM approach is based on monitoring network log data for threats and responding on the network.

Unless a threat is simple, like commodity malware that can be easily cleaned up, remediation is typically delayed until a manual investigation is performed. XDR, on the other hand, provides SOCs both the visibility and ability to not just respond but also remediate. SOC operators can take precise rather than broad actions, and not just across the network, but also the endpoint and other areas.

Because XDR seeks to fuse the analysis, control and response planes, it provides a unified view of threats. Instead of forcing SOCs to use multiple interfaces to threat hunt and investigate, event data and analytics are brought together in XDR to provide the full context needed to precisely respond to an incident.

Unlike the SIEM model, which centralizes logs for SOCs to figure out what’s important, XDR begins with a view of what’s important and then uses logs to inform response and remediation actions. This is fundamental to how XDR inverts traditional SIEM and SOC workflows.

Another important benefit of XDR is that it provides SOCs the ability to investigate and respond to incidents from the same security technology platform. For example, an alert or analytics indicator might be generated from the endpoint which initiates an investigative workflow that is then augmented with network logs or other system logs that are part of the XDR platform for greater context.
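That augmentation step — enriching an endpoint alert with network logs from the same platform — can be pictured with a small sketch. This is not any vendor’s API; the host names, timestamps, and fields are made up to illustrate correlating events by host and time window:

```python
# Illustrative sketch of augmenting an endpoint alert with network logs
# held on the same platform: pull log entries for the same host within a
# window around the alert time. All names and values are hypothetical.

from datetime import datetime, timedelta

def augment(alert, network_logs, window_minutes=10):
    """Return network log entries for the alert's host near the alert time."""
    t0 = alert["time"] - timedelta(minutes=window_minutes)
    t1 = alert["time"] + timedelta(minutes=window_minutes)
    return [e for e in network_logs
            if e["host"] == alert["host"] and t0 <= e["time"] <= t1]

alert = {"host": "ws-042", "time": datetime(2020, 9, 1, 12, 0)}
logs = [
    {"host": "ws-042", "time": datetime(2020, 9, 1, 12, 4), "dst": "findresults.site"},
    {"host": "ws-042", "time": datetime(2020, 9, 1, 9, 0),  "dst": "intranet"},
    {"host": "db-001", "time": datetime(2020, 9, 1, 12, 1), "dst": "backup"},
]
context = augment(alert, logs)  # only the 12:04 entry for ws-042 matches
```

Because both data sources live on one platform, this kind of join is a query rather than a console-hopping exercise.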

Instead of moving between different consoles, all the data sources are in one place. XDR enables SOC operators to resolve and close out a workflow on the same technology platform where it was initiated.

Currently, most organizations have tools that can initiate a workflow and others that can augment a workflow, but very few that can actually resolve a workflow. The goal of XDR is to provide a single environment where incidents can be initiated, investigated and remediated.

Finally, by fusing analytics, the network and the endpoint, SOCs can respond to incidents across a variety of control planes, and customize actions based on the event, the system criticality, the adversary activity, etc.

What XDR makes possible

With XDR, SOCs can force a re-logon or a logoff through integration with IAM tools. They can contain a host because they are directly connected to the endpoint. Using network analysis and visibility, XDR can provide deeper insight and context into threats, including whether they are moving laterally, have exfiltrated data, and more.
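A response playbook that maps events and system criticality to actions across those control planes might look like the following sketch. The event types and action names (`force_logoff`, `isolate_host`, `block_domain`) are placeholders for whatever a real IAM, endpoint, or network integration exposes:

```python
# Hypothetical response playbook: choose a control-plane action based on
# the event type and the criticality of the affected system. Action and
# event names are illustrative placeholders, not a real product's API.

def choose_action(event_type: str, criticality: str) -> str:
    if event_type == "credential_misuse":
        # IAM integration: invalidate the session, force re-authentication
        return "force_logoff"
    if event_type == "lateral_movement" and criticality == "high":
        # Endpoint integration: contain the host directly
        return "isolate_host"
    if event_type == "c2_beacon":
        # Network integration: a surgical, network-based remediation
        return "block_domain"
    # Default: route to a human for manual investigation
    return "open_ticket"
```

The point is not the specific rules but that one layer can select among IAM, endpoint, and network actions — the customization by event and system criticality the paragraph above describes.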

Ultimately, XDR makes it possible for SOCs to respond to incidents in ways that were not possible in the past, such as taking more surgical network-based remediation actions.

Making XDR a reality requires implementing a horizontal plane that connects all existing security silos to unify analysis, control, and response – which won’t happen overnight. The benefits of XDR, however, are well worth the effort.

Organizations plan to use AI and ML to tackle unknown attacks faster

Wipro published a report that provides fresh insights into how AI will be leveraged in defender strategies as more organizations contend with sophisticated cyberattacks and work to become more resilient.


Organizations need to tackle unknown attacks

There has been an increase in R&D, with 49% of worldwide cybersecurity-related patents filed in the last four years focused on AI and ML applications. Nearly half of organizations are expanding cognitive detection capabilities in their security operations centers (SOCs) to tackle unknown attacks.

The report also illustrates a paradigm shift towards cyber resilience amid the rise in global remote work. It considers the impact of the COVID-19 pandemic on the cybersecurity landscape around the globe and provides a path for organizations to adapt to this new normal.

The report draws on the participation of 194 organizations and 21 partner academic, institutional and technology organizations over four months of research.

Global macro trends in cybersecurity

  • Nation-state attacks target the private sector: 86% of all nation-state attacks fall under the espionage category, and 46% of them are targeted at private companies.
  • Evolving threat patterns have emerged in the consumer and retail sectors: 47% of the suspicious social media profiles and domains detected as active in 2019 were in these sectors.

Cyber trends sparked by the global pandemic

  • Cyber hygiene proved difficult during remote work enablement: 70% of organizations faced challenges in maintaining endpoint cyber hygiene and 57% in mitigating VPN and VDI risks.
  • Emerging post-COVID cybersecurity priorities: 87% of the surveyed organizations are keen on implementing zero trust architecture and 87% are planning to scale up secure cloud migration.

Micro trends: An inside-out enterprise view

  • Low confidence in cyber resilience: 59% of the organizations understand their cyber risks but only 23% of them are highly confident about preventing cyberattacks.
  • Strong cybersecurity spend due to board oversight & regulations: 14% of organizations have a security budget of more than 12% of their overall IT budgets.

Micro trends: Best cyber practices to emulate

  • Laying the foundation for a cognitive SOC: 49% of organizations are adding cognitive detection capabilities to their SOC to tackle unknown attacks.
  • Concerns about OT infrastructure attacks increasing: 65% of organizations are performing log monitoring of Operational Technology (OT) and IoT devices as a control to mitigate increased OT risks.

Meso trends: An overview on collaboration

  • Fighting cyber-attacks demands stronger collaboration: 57% of organizations are willing to share only IoCs and 64% consider reputational risks to be a barrier to information sharing.
  • Cyber-attack simulation exercises serve as a strong wakeup call: 60% participate in cyber simulation exercises coordinated by industry regulators, CERTs and third-party service providers, and 79% of organizations have a dedicated cyber insurance policy in place.

Future of cybersecurity

  • 5G security is the emerging area for patent filing: 7% of the worldwide patents filed in the cyber domain in the last four years have been related to 5G security.

Vertical insights by industry

  • Banking, financial services & insurance: 70% of financial services enterprises said that new regulations are fuelling an increase in security budgets, with 54% attributing higher budgets to board intervention.
  • Communications: 71% of organizations consider cloud-hosting risk a top risk.
  • Consumer: 86% of consumer businesses said email phishing is a top risk, and 75% of enterprises said a bad cyber event would damage their brand reputation in the marketplace.
  • Healthcare & life sciences: 83% of healthcare organizations highlighted maintaining endpoint cyber hygiene as a challenge, and 71% said that breaches reported by peers have led to increased security budget allocation.
  • Energy, natural resources and utilities: 71% of organizations reported that OT/IT integration would bring new risks.
  • Manufacturing: 58% said they are not confident about preventing risks from supply chain providers.

Bhanumurthy B.M, President and Chief Operating Officer, Wipro said, “There is a significant shift in global trends like rapid innovation to mitigate evolving threats, strict data privacy regulations and rising concern about breaches.

“Security is ever changing and the report brings more focus, enablement, and accountability on executive management to stay updated. Our research not only focuses on what happened during the pandemic but also provides foresight toward future cyber strategies in a post-COVID world.”

Network visibility critical in increasingly complex environments

Federal IT leaders across the country voiced the importance of network visibility in managing and securing their agencies’ increasingly complex and hybrid networks, according to Riverbed.


Of 200 participating federal government IT decision makers and influencers, 90 percent consider their networks to be moderately-to-highly complex, and 32 percent say that increasing network complexity is the greatest challenge an IT professional faces when managing their agency’s network without visibility.

Driving this network complexity are Cloud First and Cloud Smart initiatives that make it an imperative for federal IT to modernize its infrastructure with cloud transformation and “as-a-service” adoption.

More than 25 percent of respondents are still in the planning stages of their priority modernization projects, though 87 percent of survey respondents recognize that network visibility is a strong or moderate enabler of cloud infrastructure.

Network visibility can help expedite the evaluation process to determine what goes onto an agency’s cloud and what data and apps stay on-prem; it also allows clearer, ongoing management across the networks to enable smooth transitions to cloud, multi-cloud and hybrid infrastructures.

Accelerated move to cloud

The COVID-19 pandemic has further accelerated modernization and cloud adoption to support the massive shift of the federal workforce to telework – a recent Market Connections study indicates that 90 percent of federal employees are currently teleworking and that 86 percent expect to continue to do so at least part-time after the pandemic ends.

The rapid adoption of cloud-based services and solutions and an explosion of new endpoints accessing agency networks during the pandemic generated an even greater need for visibility into the who, what, when and where of traffic. In fact, 81 percent of survey respondents noted that the increasing use of telework accelerated their agency’s use and deployment of network visibility solutions, with 25 percent responding “greatly.”

“The accelerated move to cloud was necessary because the majority of federal staff were no longer on-prem, creating significant potential for disruption to citizen services and mission delivery,” said Marlin McFate, public sector CTO at Riverbed.

“This basically took IT teams from being able to see, to being blind. All of their users were now outside of their protected environments, and they no longer had control over the internet connections, the networks employees were logging on from or who or what else had access to those networks. To be able to securely maintain networks and manage end-user experience, you have to have greater visibility.”

Visibility drives security

Lack of visibility into agency networks and the proliferation of apps and endpoints designed to improve productivity and collaboration expands the potential attack surface for cyberthreats.

Ninety-three percent of respondents believe that greater network visibility facilitates greater network security and 96 percent believe network visibility is moderately or highly valuable in assuring secure infrastructure.

Further, respondents ranked cybersecurity as their agency’s number one priority that can be improved through better network visibility, and automated threat detection was identified as the most important feature of a network visibility solution (24 percent), followed by advanced reporting features (14 percent), and automated alerting (13 percent).

“Network visibility is the foundation of cybersecurity and federal agencies have to know what’s on their network so they can rapidly detect and remediate malicious actors. And while automation enablement calls for an upfront time investment, it can significantly improve response time not only for cyber threat detection but also network issues that can hit employee productivity,” concluded McFate.

SecOps teams turn to next-gen automation tools to address security gaps

SOCs across the globe are most concerned with advanced threat detection and are increasingly looking to next-gen automation tools like AI and ML technologies to proactively safeguard the enterprise, Micro Focus reveals.


Growing deployment of next-gen tools and capabilities

The report’s findings show that over 93 percent of respondents employ AI and ML technologies with the leading goal of improving advanced threat detection capabilities, and that over 92 percent of respondents expect to use or acquire some form of automation tool within the next 12 months.

These findings indicate that as SOCs continue to mature, they will deploy next-gen tools and capabilities at an unprecedented rate to address gaps in security.

“The odds are stacked against today’s SOCs: more data, more sophisticated attacks, and larger surface areas to monitor. However, when properly implemented, AI technologies, such as unsupervised machine learning, are helping to fuel next-generation security operations, as evidenced by this year’s report,” said Stephan Jou, CTO Interset at Micro Focus.

“We’re observing more and more enterprises discovering that AI and ML can be remarkably effective and augment advanced threat detection and response capabilities, thereby accelerating the ability of SecOps teams to better protect the enterprise.”

Organizations relying on the MITRE ATT&CK framework

As the volume of threats rises, the report finds that 90 percent of organizations are relying on the MITRE ATT&CK framework as a tool for understanding attack techniques, and that the most common reason for relying on the knowledge base of adversary tactics is detecting advanced threats.

Further, the scale of technology needed to secure today’s digital assets means SOC teams are relying more heavily on tools to effectively do their jobs.

With so many responsibilities, the report found that SecOps teams are using numerous tools to help secure critical information, with organizations widely using 11 common types of security operations tools and with each tool expected to exceed 80% adoption in 2021.

Key observations

  • COVID-19: During the pandemic, security operations teams have faced many challenges. The biggest has been the increased volume of cyberthreats and security incidents (45 percent globally), followed by higher risks due to workforce usage of unmanaged devices (40 percent globally).
  • Most severe SOC challenges: Approximately 1 in 3 respondents cite the two most severe challenges for the SOC team as prioritizing security incidents and monitoring security across a growing attack surface.
  • Cloud journeys: Over 96 percent of organizations use the cloud for IT security operations, and on average nearly two-thirds of their IT security operations software and services are already deployed in the cloud.

Layered security becomes critical as malware attacks rise

Despite an 8% decrease in overall malware detections in Q2 2020, 70% of all attacks involved zero-day malware – variants that circumvent antivirus signatures – which represents a 12% increase over the previous quarter, WatchGuard found.


Malware detections during Q2 2020

Attackers are continuing to leverage evasive and encrypted threats. Zero-day malware made up more than two-thirds of the total detections in Q2, while attacks sent over encrypted HTTPS connections accounted for 34%. This means that organizations that are not able to inspect encrypted traffic will miss a full one-third of incoming threats.

Even though the percentage of threats using encryption decreased from 64% in Q1, the volume of HTTPS-encrypted malware increased dramatically. It appears that more administrators are taking the necessary steps to enable HTTPS inspection, but there’s still more work to be done.

“Businesses aren’t the only ones that have adjusted operations due to the global COVID-19 pandemic – cyber criminals have too,” said Corey Nachreiner, CTO of WatchGuard.

“The rise in sophisticated attacks, despite the fact that overall malware detections declined in Q2 2020, likely due to the shift to remote work, shows that attackers are turning to more evasive tactics that traditional signature-based anti-malware defences simply can’t catch.

“Every organization should be prioritising behaviour-based threat detection, cloud-based sandboxing, and a layered set of security services to protect both the core network, as well as remote workforces.”

JavaScript-based attacks are on the rise

The scam script Trojan.Gnaeus made its debut at the top of WatchGuard’s top 10 malware list for Q2, making up nearly one in five malware detections. Gnaeus malware allows threat actors to hijack control of the victim’s browser with obfuscated code and forcibly redirect victims away from their intended web destinations to domains under the attacker’s control.

Another popup-style JavaScript attack, JS.PopUnder, was one of the most widespread malware variants last quarter. In this case, an obfuscated script scans a victim’s system properties and blocks debugging attempts as an anti-detection tactic.

To combat these threats, organizations should prevent users from loading a browser extension from an unknown source, keep browsers up to date with the latest patches, use reputable adblockers and maintain an updated anti-malware engine.

Attackers increasingly use encrypted Excel files to hide malware

XML-Trojan.Abracadabra is a new addition to the top 10 malware detections list, showing a rapid growth in popularity since the technique emerged in April.

Abracadabra is a malware variant delivered as an encrypted Excel file with the password “VelvetSweatshop”, the default password for Excel documents. Once opened, Excel automatically decrypts the file and a macro VBA script inside the spreadsheet downloads and runs an executable.

The use of a default password allows this malware to bypass many basic antivirus solutions since the file is encrypted and then decrypted by Excel. Organizations should never allow macros from an untrusted source, and leverage cloud-based sandboxing to safely verify the true intent of potentially dangerous files before they can cause an infection.
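One useful triage fact here: an unencrypted .xlsx is a ZIP archive, while a password-encrypted OOXML file (like the VelvetSweatshop samples) is wrapped in an OLE2 compound file. The sketch below is a minimal heuristic built on that distinction, not a full parser; note that legacy .xls files are also OLE2, so it flags those for deeper inspection too:

```python
# Minimal triage heuristic, assuming only file-format magic bytes:
# unencrypted modern .xlsx files begin with the ZIP signature ("PK..."),
# while encrypted OOXML files are stored in an OLE2 compound file. A
# file named .xlsx with an OLE2 header therefore warrants extra scrutiny
# (e.g. sandbox detonation) before it reaches a user. Legacy .xls files
# are OLE2 as well, so this flags those too.

ZIP_MAGIC = b"PK\x03\x04"
OLE2_MAGIC = b"\xd0\xcf\x11\xe0\xa1\xb1\x1a\xe1"

def xlsx_looks_encrypted(header: bytes) -> bool:
    """True if a file with an .xlsx name has an OLE2 (possibly encrypted) header."""
    return header.startswith(OLE2_MAGIC)
```

A mail gateway or sandbox pre-filter could apply this check to the first 8 bytes of every attachment named .xlsx and route OLE2-wrapped files to deeper analysis.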

An old, highly exploitable DoS attack makes a comeback

A six-year-old DoS vulnerability affecting WordPress and Drupal made an appearance on a list of top 10 network attacks by volume in Q2. This vulnerability is particularly severe because it affects every unpatched Drupal and WordPress installation and creates DoS scenarios in which bad actors can cause CPU and memory exhaustion on underlying hardware.

Despite the high volume of these attacks, they were hyper-focused on a few dozen networks primarily in Germany. Since DoS scenarios require sustained traffic to victim networks, this means there’s a strong likelihood that attackers were selecting their targets intentionally.

Malware domains leverage command and control servers to wreak havoc

Two new destinations made the top malware domains list in Q2. The most common was findresults[.]site, which uses a C&C server for a Dadobra trojan variant that creates an obfuscated file and an associated registry entry to ensure the attack runs, and can exfiltrate sensitive data and download additional malware when users start up Windows systems.

One user alerted the WatchGuard team to Cioco-froll[.]com, which uses another C&C server to support an Asprox botnet variant, often delivered via PDF document, and provides a C&C beacon to let the attacker know it has gained persistence and is ready to participate in the botnet.

DNS firewalling can help organizations detect and block these kinds of threats independent of the application protocol for the connection.
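A toy illustration of that idea: refang defanged indicators like findresults[.]site into real domain names, then block any DNS lookup for a listed domain or one of its subdomains, regardless of the application protocol riding on top. The blocklist handling below is illustrative, not any DNS firewall’s actual API:

```python
# Toy DNS-firewall sketch: refang defanged indicators (e.g.
# "findresults[.]site") and block lookups for a listed domain or any
# subdomain of it. Illustrative only — not a real product's API.

BLOCKLIST_RAW = ["findresults[.]site", "cioco-froll[.]com"]

def refang(indicator: str) -> str:
    """Turn a defanged indicator back into a resolvable domain name."""
    return indicator.replace("[.]", ".").lower()

BLOCKLIST = {refang(d) for d in BLOCKLIST_RAW}

def is_blocked(qname: str) -> bool:
    """Check the query name and every parent domain against the blocklist."""
    qname = qname.lower().rstrip(".")
    parts = qname.split(".")
    return any(".".join(parts[i:]) in BLOCKLIST for i in range(len(parts)))
```

Checking parent domains as well as the exact name is what catches beacons to attacker-controlled subdomains (e.g. a lookup for a host under cioco-froll[.]com).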

Researchers develop AI technique to protect medical devices from anomalous instructions

Researchers at Ben-Gurion University of the Negev have developed a new AI technique that will protect medical devices from malicious operating instructions in a cyberattack as well as other human and system errors.


Complex medical devices such as CT (computed tomography), MRI (magnetic resonance imaging) and ultrasound machines are controlled by instructions sent from a host PC.

Abnormal or anomalous instructions introduce many potentially harmful threats to patients, such as radiation overexposure, manipulation of device components or functional manipulation of medical images. Threats can occur due to cyberattacks, human errors such as a technician’s configuration mistake or host PC software bugs.

Dual-layer architecture: AI technique to protect medical devices

As part of his Ph.D. research, BGU researcher Tom Mahler has developed a technique using artificial intelligence that analyzes the instructions sent from the PC to the physical components using a new architecture for the detection of anomalous instructions.

“We developed a dual-layer architecture for the protection of medical devices from anomalous instructions,” Mahler says.

“The architecture focuses on detecting two types of anomalous instructions: (1) context-free (CF) anomalous instructions which are unlikely values or instructions such as giving 100x more radiation than typical, and (2) context-sensitive (CS) anomalous instructions, which are normal values or combinations of values, of instruction parameters, but are considered anomalous relative to a particular context, such as mismatching the intended scan type, or mismatching the patient’s age, weight, or potential diagnosis.

“For example, a normal instruction intended for an adult might be dangerous [anomalous] if applied to an infant. Such instructions may be misclassified when using only the first, CF, layer; however, by adding the second, CS, layer, they can now be detected.”
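The two layers can be sketched schematically. This is not the BGU team’s actual models — their layers are learned anomaly detectors and classifiers — but a simplified rule-based stand-in that shows why the CS layer catches what the CF layer cannot. The dose limits are made-up numbers for illustration only:

```python
# Schematic of the dual-layer idea (not the BGU team's actual models):
# the context-free (CF) layer flags instructions whose values are
# implausible for any patient, while the context-sensitive (CS) layer
# re-checks values that are individually normal against the clinical
# context. All limits below are invented for illustration.

CF_DOSE_RANGE = (1, 500)                      # plausible for any patient
CS_DOSE_LIMIT = {"infant": 20, "adult": 500}  # context-dependent ceilings

def is_anomalous(instruction: dict) -> bool:
    dose = instruction["dose"]
    lo, hi = CF_DOSE_RANGE
    if not lo <= dose <= hi:                  # layer 1: context-free check
        return True
    context = instruction["patient_type"]     # layer 2: context-sensitive check
    return dose > CS_DOSE_LIMIT[context]
```

A dose of 100 passes the CF layer — it is a normal value in the abstract — but is flagged by the CS layer when the patient is an infant, which is exactly the misclassification Mahler describes the second layer catching.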

Improving anomaly detection performance

The research team evaluated the new architecture in the CT domain using 8,277 recorded CT instructions. They evaluated the CF layer with 14 different unsupervised anomaly detection algorithms, then evaluated the CS layer for four different types of clinical objective contexts, using five supervised classification algorithms for each context.

Adding the second CS layer to the architecture improved the overall anomaly detection performance from an F1 score of 71.6%, using only the CF layer, to between 82% and 99%, depending on the clinical objective or the body part.

Furthermore, the CS layer enables the detection of CS anomalies, using the semantics of the device’s procedure, an anomaly type that cannot be detected using only the CF layer.

62% of blue teams have difficulty stopping red teams during adversary simulation exercises

New Exabeam research shows that 62 percent of blue teams have difficulty stopping red teams during adversary simulation exercises.


Respondents named threat detection, incident response and flexibility/openness to change while working remotely as the top three areas that blue teams must improve upon. This indicates an increase in technical and adaptability challenges since the same study was performed in 2019, where the focus fell heavily on teamwork and communication.

While 37 percent of blue teams always or often catch these ‘bad actors,’ 55 percent say they only succeed sometimes, and 7 percent rarely or never achieve this feat. On a positive note, these numbers indicate a trend in the right direction compared to last year’s study, which showed one-third rarely or never catching red teams.

Companies increasing security investment

The fact that less than half of blue teams are stopping bad actors a majority of the time demonstrates the priority organizations must place on constantly evaluating and adjusting their security investments to keep up with today’s digital adversaries.

The study indicates that many companies are consciously taking these steps, with 50 percent increasing security investment and 30 percent adding to their security infrastructure as a result of these exercises. Seventeen percent have done both, and just 2 percent have not adjusted their security tools or budget in response.

Interestingly, the frequency and approach to red team/blue team tests vary widely. On average, organizations conduct red team exercises every five months – breaking down to 26 percent once a month, another quarter every 2-6 months, 32 percent every 7-11 months and 8 percent once a year.

Just 7 percent don’t utilize red teams at all. Blue team exercise frequency understandably reflected similar percentages and averaged out to every six months.

Many companies use the ‘purple team’ approach, in which the red and blue teams come from their own staff and work together to determine security preparedness. One-third run these simulations every 2-6 months, while 50 percent perform them every 7-11 months, and 12 percent report yearly tests. Again, only 7 percent do not have purple teams in place.


Internal and external red teams equally effective

Also new to 2020’s report, 92 percent of respondents tap external red teams without prior knowledge of their internal security systems to help their teams prepare for real-life cyberattacks. However, 54 percent found internal and external red teams equally effective, with a slightly higher percentage (24 percent) citing internal red teams as more effective than external (19 percent).

“An additional study recently reported that more than 80 percent of businesses have experienced a successful cyberattack since the start of the pandemic. Paired with the fact that just over a third of respondents are frequently stopping simulated attacks, these trends illuminate the security fallout caused by the remote work shift, tighter budgets and increasingly sophisticated attack techniques,” said Steve Moore, chief security strategist, Exabeam.

“These red team/blue team exercises can be valuable proof points when presenting budgetary and technological needs to the C-suite and board to help keep up with these changes. While there is always room for teams and security postures to mature, it is extremely encouraging that so many companies are regularly performing these tests to identify their weak spots and shore up their defenses.”

In addition to threat detection, incident response and flexibility, communication and teamwork (41 percent), knowledge of threats/tactics (38 percent) and persistence (20 percent) were also listed as valuable skills blue teams should focus on.

Integrated cloud-native security platforms can overcome limitations of traditional security products

To close security gaps caused by rapidly changing digital ecosystems, organizations must adopt an integrated cloud-native security platform that incorporates artificial intelligence, automation, threat intelligence, threat detection and data analytics capabilities, according to 451 Research.


Cloud-native security platforms are essential

The report clearly defines how to create a scalable, adaptable, and agile security posture built for today’s diverse and disparate IT ecosystems. And it warns that legacy approaches and MSSPs cannot keep up with the speed of digital transformation.

  • Massive change is occurring. Over 97 percent of organizations reported they are underway with, or expecting, digital transformation progress in the next 24 months, and over 41 percent are allocating more than 50 percent of their IT budgets to projects that grow and transform the business.
  • Security platforms enable automation and orchestration capabilities across the entire IT stack, streamlining and optimizing security operations, improving productivity, enabling higher utilization of assets, increasing the ROI of security investments and helping address interoperability challenges created by isolated, multi-vendor point products.
  • Threat-driven and outcome-based security platforms address the full attack continuum, compared with legacy approaches that generally focus on defensive blocking of a single vector.
  • Modern security platforms leverage AI and ML to solve some of the most prevalent challenges for security teams, including expertise shortages, alert fatigue, fraud detection, behavioral analysis, risk scoring, correlating threat intelligence, detecting advanced persistent threats, and finding patterns in increasing volumes of data.
  • Modern security platforms are positioned to deliver real-time, high-definition visibility with an unobstructed view of the entire IT ecosystem, providing insights into the company’s assets, attack surface, risks and potential threats and enabling rapid response and threat containment.

451 Senior Analyst Aaron Sherrill noted, “The impact of an ever-evolving IT ecosystem combined with an ever-evolving threat landscape can be overwhelming to even the largest, most well-funded security teams, including those at traditional MSSPs.

“Unfortunately, a web of disparate and siloed security tools, a growing expertise gap and an overwhelming volume of security events and alerts continue to plague internal and service provider security teams of every size.

“The consequences of these challenges are vast, preventing security teams from gaining visibility, scaling effectively, responding rapidly and adapting quickly. Today’s threat and business landscape demands new approaches and new technologies.”

How to deliver effective cybersecurity today

“Delivering effective cybersecurity today requires being able to consume a growing stream of telemetry and events from a wide range of signal sources,” said Dustin Hillard, CTO, eSentire.

“It requires being able to process that data to identify attacks while avoiding false positives and negatives. It requires equipping a team of expert analysts and threat hunters with the tools they need to investigate incidents and research advanced, evasive attacks.

“Most importantly, it requires the ability to continuously upgrade detection and defenses. These requirements demand changing the technology foundations upon which cybersecurity solutions are built—moving from traditional security products and legacy MSSP services to modern cloud-native platforms.”

Sherrill further noted, “Cloud-native security platforms optimize the efficiency and effectiveness of security operations by hiding complexity and bringing together disparate data, tools, processes, workflows and policies into a unified experience.

“Infused with automation and orchestration, artificial intelligence and machine learning, big data analytics, multi-vector threat detection, threat intelligence, and machine and human collaboration, cloud-native security platforms can provide the vehicle for scalable, adaptable and agile threat detection, hunting, and response. And when combined with managed detection and response services, organizations are able to quickly bridge expertise and resource gaps and attain a more comprehensive and impactful approach to cybersecurity.”

Most malware in Q1 2020 was delivered via encrypted HTTPS connections

67% of all malware in Q1 2020 was delivered via encrypted HTTPS connections, and 72% of that encrypted malware was classified as zero day, meaning it would have evaded signature-based antivirus protection, according to WatchGuard.


These findings show that without HTTPS inspection of encrypted traffic and advanced behavior-based threat detection and response, organizations are missing up to two-thirds of incoming threats. The report also highlights that the UK was a top target for cyber criminals in Q1, earning a spot in the top three countries for the five most widespread network attacks.

“Some organizations are reluctant to set up HTTPS inspection due to the extra work involved, but our threat data clearly shows that a majority of malware is delivered through encrypted connections and that letting traffic go uninspected is simply no longer an option,” said Corey Nachreiner, CTO at WatchGuard.

“As malware continues to become more advanced and evasive, the only reliable approach to defense is implementing a set of layered security services, including advanced threat detection methods and HTTPS inspection.”

Monero cryptominers surge in popularity

Five of the top ten domains distributing malware in Q1 either hosted or controlled Monero cryptominers. This sudden jump in cryptominer popularity could simply be due to its utility; adding a cryptomining module to malware is an easy way for online criminals to generate passive income.

Flawed-Ammyy and Cryxos malware variants join top lists

The Cryxos trojan was third on a top-five encrypted malware list and also third on its top-five most widespread malware detections list, primarily targeting Hong Kong. It is delivered as an email attachment disguised as an invoice and will ask the user to enter their email and password, which it then stores.

Flawed-Ammyy is a support scam where the attacker uses the Ammyy Admin support software to gain remote access to the victim’s computer.

Three-year-old Adobe vulnerability appears in top network attacks

An Adobe Acrobat Reader exploit that was patched in August 2017 appeared in a top network attacks list for the first time in Q1. This vulnerability resurfacing several years after being discovered and resolved illustrates the importance of regularly patching and updating systems.

Mapp Engage, AT&T and Bet365 targeted with spear phishing campaigns

Three new domains hosting phishing campaigns appeared on a top-ten list in Q1 2020. They impersonated digital marketing and analytics product Mapp Engage, online betting platform Bet365 (this campaign was in Chinese) and an AT&T login page (this campaign is no longer active at the time of the report’s publication).

COVID-19 impact

Q1 2020 was only the start of the massive changes to the cyber threat landscape brought on by the COVID-19 pandemic. Even in those first three months of 2020, there was a massive rise in remote workers and in attacks targeting individuals.

Malware hits and network attacks decline. Overall, there were 6.9% fewer malware hits and 11.6% fewer network attacks in Q1, despite a 9% increase in the number of Fireboxes contributing data. This could be attributed to fewer potential targets operating within the traditional network perimeter with worldwide work-from-home policies in full force during the pandemic.

Increasing awareness of cyber risks among SMBs to boost MDR revenues

The increasing number of sophisticated cyber threats will lead to a rise in demand for Managed Detection and Response (MDR) solutions from small and medium businesses. The market size is poised to grow at a CAGR of 16.4% between 2019 and 2024, with revenues expected to reach $1.9 billion, according to Frost & Sullivan.


“The rise in the number and complexity of threats has made internal management of information security increasingly laborious and expensive. In this context, outsourcing is being viewed as a strategic ally in securely managing IT environments in line with companies’ business strategies,” said Mauricio Chede, Senior Industry Analyst, Frost & Sullivan.

“MDR providers offer organizations the technology, process, and people to enable the proactive monitoring of their customer security environment and 24/7 threat detection to help mitigate security breaches, even more so during COVID-19.”

Chede added: “MDR providers must demonstrate trustworthiness in remediation without interrupting a customer’s business operations. They must adapt themselves to the customer’s needs and budget, understanding the vertical they are in and providing detection and response solutions in the shortest period of time, along with custom reports. Personal interaction through email or telephone with an assigned analyst is also a differentiating factor.”


For further revenue opportunities, MDR vendors should:

  • Improve the quality of their solutions and offer new services to compete with new market participants and increase revenues.
  • Develop customizable MDR solutions at affordable prices to attract small and midsized businesses.
  • Explore the merger and acquisition of competitors to enhance regional presence and maximize revenues.
  • Offer consulting and value-added services to help clients take advantage of digital transformation initiatives.

Average bandwidth of DDoS attacks increasing, APIs and applications under attack

The volume and complexity of attacks continued to grow in the first quarter of 2020, according to Link11.


There was an increasing number of high-volume attacks in Q1 2020, with 51 attacks over 50 Gbps. The average bandwidth of attacks also rose, reaching 5.0 Gbps versus 4.3 Gbps in the same quarter of 2019.

Key findings

  • Maximum bandwidth nearly doubles: In Q1 2020, the maximum bandwidth nearly doubled in comparison to the previous year; the biggest attack stopped was 406 Gbps. In Q1 2019 the maximum bandwidth peaked at 224 Gbps.
  • Complex multi-vector attacks rising: The share of multi-vector attacks rose to 64% in Q1 2020, up from 47% in Q1 2019. 66% of all multi-vector attacks combined two to three vectors. More importantly, there were 19 attacks that used 10 or more different DDoS vectors, compared to no reported attacks of this scale in 2019.
  • Most frequently misused DDoS vectors: The most frequently used DDoS vectors in Q1 2020 were DNS Reflection, CLDAP, NTP and WS-Discovery.
  • DDoS attackers increasingly abuse public cloud services: Nearly half of all DDoS attacks (47%) in Q1 2020 used public cloud server-based botnets, compared to 31% in the previous year.
  • APIs and applications under attack: As companies build new applications and services from multiple sources using APIs, they are becoming increasingly vulnerable to Layer 7 attacks, which are typically ‘low and slow’ compared to network layer attacks.
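The multi-vector statistics above can be reproduced from per-attack vector lists. A minimal sketch of that classification (the sample records and names are illustrative, not Link11's actual telemetry):

```python
# Classify DDoS attacks by the number of distinct vectors they combine.
# The sample records below are illustrative, not real attack data.
attacks = [
    {"id": 1, "vectors": {"DNS Reflection"}},
    {"id": 2, "vectors": {"CLDAP", "NTP"}},
    {"id": 3, "vectors": {"DNS Reflection", "NTP", "WS-Discovery"}},
    {"id": 4, "vectors": {f"vec{i}" for i in range(12)}},  # a 12-vector attack
]

multi = [a for a in attacks if len(a["vectors"]) > 1]
share_multi = len(multi) / len(attacks)  # share of multi-vector attacks
two_to_three = sum(1 for a in multi if 2 <= len(a["vectors"]) <= 3)
ten_plus = sum(1 for a in attacks if len(a["vectors"]) >= 10)

print(f"multi-vector share: {share_multi:.0%}")   # 75%
print(f"2-3 vectors (of multi): {two_to_three}")  # 2
print(f"10+ vectors: {ten_plus}")                 # 1
```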


Marc Wilczek, COO of Link11 said: “The threat landscape is changing as a result of the COVID-19 outbreak. With more people working remotely, there is a greater emphasis on virtual networks which need to be accessible from multiple locations.

“This is creating the perfect scenario for DDoS attackers to overwhelm networks and cause serious disruption. To address this, organizations need to be more proactive in their approach to DDoS protection, in order to respond to these ever-evolving threats.”

Cloud-enabled threats are on the rise, sensitive data is moving between cloud apps

44% of malicious threats are cloud enabled, meaning that cybercriminals see the cloud as an effective method for subverting detection, according to Netskope.


“We are seeing increasingly complex threat techniques being used across cloud applications, spanning from cloud phishing and malware delivery, to cloud command and control and ultimately cloud data exfiltration,” said Ray Canzanese, Threat Research Director at Netskope.

“Our research shows the sophistication and scale of the cloud enabled kill chain increasing, requiring security defenses that understand thousands of cloud apps to keep pace with attackers and block cloud threats. For these reasons, any enterprise using the cloud needs to modernize and extend their security architectures.”

Enterprises using a variety of apps

89% of enterprise users are in the cloud, actively using at least one cloud app every day. Cloud storage, collaboration, and webmail apps are among the most popular in use.

Enterprises also use a variety of apps in those categories – 142 on average – indicating that while enterprises may officially sanction a handful of apps, users tend to gravitate toward a much wider set in their day-to-day activities. Overall, the average enterprise uses over 2,400 distinct cloud services and apps.

Top 5 cloud app categories

  • Cloud storage
  • Collaboration
  • Webmail
  • Consumer
  • Social media

Top 10 most popular cloud apps

  • Google Drive
  • YouTube
  • Microsoft Office 365 for Business
  • Facebook
  • Google Gmail
  • Microsoft Office 365 SharePoint
  • Microsoft Office 365 Outlook.com
  • Twitter
  • Amazon S3
  • LinkedIn

Threats are mostly cloud based

44% of threats are cloud-based. Attackers are moving to the cloud to blend in, increase success rates and evade detections.

Attackers launch attacks through cloud services and apps using familiar techniques including scams, phishing, malware delivery, command and control, formjacking, chatbots, and data exfiltration. Of these, the two most popular cloud threat techniques are phishing and malware delivery.

Top 5 targeted cloud apps

  • Microsoft Office 365 for Business
  • Box
  • Google Drive
  • Microsoft Azure
  • Github

Data policy violations come from cloud storage

Over 50% of data policy violations come from cloud storage, collaboration, and webmail apps, and the types of data being detected are primarily DLP rules and policies related to privacy, healthcare, and finance.

This shows that users are moving sensitive data across multiple dimensions among a wide variety of cloud services and apps, including personal instances and unmanaged apps in violation of organizational policies.

The risk of lateral data movement

20% of users move data laterally between cloud apps, such as copying a document from OneDrive to Google Drive or sharing it via Slack. More importantly, the data crosses many boundaries: moving between cloud app suites, between managed and unmanaged apps, between app categories, and between app risk levels.

Moreover, 37% of the data that users move across cloud apps is sensitive. In total, lateral data movement has been tracked among 2,481 different cloud services and apps, indicating the scale and the variety of cloud use across which sensitive information is being dispersed.
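Figures like the 20% and 37% above come from aggregating per-user transfer events across app boundaries. A hedged sketch of that aggregation (the event schema and records are hypothetical, not Netskope's data model):

```python
# Aggregate cross-app data movements: which users move data laterally,
# and what fraction of laterally moved objects are flagged sensitive.
# The event records are hypothetical, for illustration only.
events = [
    {"user": "alice", "src": "OneDrive", "dst": "Google Drive", "sensitive": True},
    {"user": "alice", "src": "OneDrive", "dst": "OneDrive",     "sensitive": False},
    {"user": "bob",   "src": "Box",      "dst": "Slack",        "sensitive": False},
    {"user": "carol", "src": "Gmail",    "dst": "Gmail",        "sensitive": True},
]

# Lateral movement = data crossing from one app to a different app.
lateral = [e for e in events if e["src"] != e["dst"]]
lateral_users = {e["user"] for e in lateral}
sensitive_share = sum(e["sensitive"] for e in lateral) / len(lateral)

print(sorted(lateral_users))     # ['alice', 'bob']
print(f"{sensitive_share:.0%}")  # 50%
```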

Protecting remote workers

One-third of enterprise users work remotely on any given day, across more than eight locations on average, accessing both public and private apps in the cloud. This trend has contributed to the inversion of the traditional network, with users, data, and apps now on the outside.

It also shows increasing demand on legacy VPNs and questions the availability of defenses to protect remote workers.

What is flowing through your enterprise network?

Since Edward Snowden’s revelations of sweeping internet surveillance by the NSA, the push to encrypt the web has been unrelenting.


Bolstered by Google’s various initiatives (e.g., its prioritizing of websites that use encryption in Google Search results, making Chrome mark HTTP sites as “not secure,” and tracking of worldwide HTTPS usage), CloudFlare’s Universal SSL offer and the advent of Let’s Encrypt, nearly seven years later various sources put the percentage of encrypted internet traffic between 80% and 90% across all platforms.

That’s good news for end users who wish their interactions with various websites to be safe from eavesdropping by third parties – whether they be hackers, companies or governments.

Exploited encryption

But with the sweet comes the sour: criminals are exploiting users’ erroneous belief that a site with HTTPS in its URL can be considered completely safe to trick them into trusting phishing sites.

According to SophosLabs, nearly one-third of malware and unwanted applications enter the enterprise network through TLS-encrypted flows.

Also, nearly a quarter of malware now communicates over HTTPS connections, making it more difficult for businesses to spot active infections within their networks, especially because – a recent survey has revealed – only 3.5% of organizations are actually decrypting their network traffic to properly inspect it.

Why so few? What’s stopping them? The number one reason is that they are concerned about firewall performance, but they also cite privacy concerns, degraded user experience (websites not loading properly) and complexity as important factors for their decision to not do it.

Covert malicious activity

Malware that communicates via TLS-secured connections includes well-known and nasty malware families like TrickBot, IcedID and Dridex.

The use of transport-layer encryption is just one of the methods for keeping the malware’s existence on compromised systems secret, but it helps it covertly download additional modules and configuration files and send the collected data to an outside server.

“We’ve also observed that, increasingly, more malicious functions are being orchestrated from the command and control server, rather than implemented in the malware binary, and the C2s make decisions about what the malware should do next based on the exfiltrated data, which increases the volume of network traffic,” Sophos researcher Luca Nagy pointed out.

“Malware authors also want to empower their binaries with newer features and refresh them more often, which also increases the need for secure network communication, to prevent network-level protection tools from discovering an active infection inside the network every time it downloads an updated version of itself.”

Performance before protection? It doesn’t have to be

Some respondents in the previously mentioned survey were also unaware of the need to decrypt network traffic, even though it’s (or should be) common knowledge that malware often uses encrypted connections for communication.

Connections to “safe” destinations like financial websites may, perhaps, be exempted from inspection, but most other encrypted traffic coming in and going out of the corporate network should be decrypted and analyzed.
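An exemption policy like the one described can be expressed as a simple category lookup. A minimal sketch, where the category names and bypass list are assumptions for illustration, not any vendor's actual policy engine:

```python
# Decide whether to decrypt a TLS session based on its destination category.
# The categories and the bypass list are illustrative policy choices.
BYPASS_CATEGORIES = {"finance", "healthcare"}  # privacy-sensitive destinations

def should_decrypt(category: str) -> bool:
    """Decrypt and inspect everything except explicitly exempted categories."""
    return category not in BYPASS_CATEGORIES

print(should_decrypt("finance"))       # False: exempted from inspection
print(should_decrypt("file-sharing"))  # True: decrypt and inspect
```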

The problem with this is that many firewall offerings are not up to the task of inspecting a huge volume of encrypted sessions without causing applications to break or degrade network performance.

Not all, though: Sophos’ XG Firewall, with its new “Xstream” architecture, was designed from the ground up with performance in mind, allowing users to decrypt and see all traffic at close to wire speed.

A new firewall for your traffic decryption needs

“With Sophos XG Firewall, IT managers can immediately deploy TLS inspection without concerns over performance or breaking incompatible devices on the network, and they can turn it on for different parts of the network with flexible policy setting options,” Dan Schiappa, chief product officer at Sophos, told Help Net Security.


“We’ve created the ability to inspect all TLS traffic across all protocols and ports, eliminating enormous security blind spots. Sophos XG Firewall scans all TLS encrypted traffic – not just web traffic. This is important because criminals are constantly trying to avoid attention and use non-standard communication ports to evade detection.”
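One simple heuristic consistent with that point about non-standard ports is flagging TLS sessions on ports where TLS is not normally expected. A hedged sketch over flow records (the record fields and the "standard" port set are assumptions, not Sophos' detection logic):

```python
# Flag TLS sessions on non-standard ports, where malware often hides
# command-and-control traffic. Records and port set are illustrative.
STANDARD_TLS_PORTS = {443, 465, 563, 636, 853, 993, 995, 8443}

flows = [
    {"dst_port": 443,  "is_tls": True},
    {"dst_port": 4444, "is_tls": True},   # TLS on an unusual port: suspicious
    {"dst_port": 80,   "is_tls": False},
]

suspicious = [
    f for f in flows
    if f["is_tls"] and f["dst_port"] not in STANDARD_TLS_PORTS
]
print(len(suspicious))  # 1
```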

Other new features include support for TLS 1.3 (which many other solutions don’t have); FastPath policy controls that accelerate performance of SD-WAN applications and traffic, including Voice over IP, SaaS and others, to up to wire speed; and an enhanced Deep Packet Inspection (DPI) engine that dynamically risk-assesses traffic streams and matches them to the appropriate threat scanning level.

Schiappa also said that they’ve wired data science and threat intel much deeper than ever before: AI-enhanced threat intelligence from SophosLabs provides insights needed to understand and adjust defenses to protect against a constantly changing threat landscape.

Finally, user-friendliness should not be discounted: Sophos XG Firewall is simple to use and manage on a single cloud-based platform – Sophos Central – where organizations can easily layer and manage multiple firewalls as well as synchronize their security applications.

What makes some organizations more cyber resilient than others?

Despite higher levels of investment in advanced cybersecurity technologies over the past three years, less than one-fifth of organizations are effectively stopping cyberattacks and finding and fixing breaches fast enough to lower the impact, according to a report from Accenture.


Based on a survey of more than 4,600 enterprise security practitioners around the globe, the study explores the extent to which organizations prioritize security, the effectiveness of current security efforts, and the impact of new security-related investments.

Many are not cyber resilient

From detailed modeling of cybersecurity performance, the study identified a group of elite “leaders” — 17% of the research sample — that achieve significantly better results from their cybersecurity technology investments than other organizations.

Leaders were characterized as among the highest performers in at least three of the following four categories: stop more attacks, find breaches faster, fix breaches faster and reduce breach impact. The study identified a second group, comprising 74% of the respondents, as “non-leaders” — average performers in terms of cyber resilience but far from being laggards.

“Our analysis identifies a group of standout organizations that appear to have cracked the code of cybersecurity when it comes to best practices,” said Kelly Bissell, who leads Accenture Security globally. “Leaders in our survey are far quicker at detecting a breach, mobilizing their response, minimizing the damage and getting operations back to normal.”

For instance, leaders were four times more likely than non-leaders to detect a breach in less than one day (88% vs. 22%). And when defenses fail, 96% of the leaders fixed breaches in 15 days or less, on average, whereas 64% of non-leaders took 16 days or longer to remediate a breach — with nearly half of those taking more than a month.

The differences between leaders and non-leaders

Among the key differences in cybersecurity practices between leaders and non-leaders, the report identified:

  • Leaders focused more of their budget allocations on sustaining what they already have, whereas the non-leaders place significantly more emphasis on piloting and scaling new capabilities.
  • Leaders were nearly three times less likely to have had more than 500,000 customer records exposed through cyberattacks in the last 12 months (15% vs. 44%).
  • Leaders were more than three times as likely to provide users of security tools with required training for those tools (30% vs. 9%).

The study also found that 83% believe that organizations need to think beyond securing just their own enterprises and take better steps to secure their vendor ecosystems in order to become cyber resilient.


Additionally, cybersecurity programs designed to protect data and other key assets actively cover only about 60% of an organization’s business ecosystem (which includes vendors and other business partners), yet 40% of breaches come through that ecosystem.

“The sizable number of vendor relationships that most organizations have poses a significant challenge to their ability to monitor that business ecosystem,” Bissell said. “Yet, given the large percentage of breaches that originate in an organization’s supply chain, companies need to ensure that their cyber defenses stretch beyond their own walls.”

Product News: Encrypted Traffic Insights with Corelight

The NSA recently issued an advisory to enterprises that adopt ‘break and inspect’ technologies to gain visibility over encrypted traffic, warning them of the potential risks of such an approach. In fact, decrypting and re-encrypting traffic through a proxy device, firewall, or intrusion detection or prevention system (IDS/IPS) that doesn’t properly validate transport layer security (TLS) certificates will weaken the end-to-end protection that TLS encryption provides to end users, drastically increasing the likelihood that threat actors will target them in man-in-the-middle (MitM) attacks, Bleeping Computer reported.

“This is why companies like Corelight invest into features like SSH Inference to inform defenders while protecting privacy,” explained Richard Bejtlich, principal security strategist at Corelight. “Our new sensor feature profiles Secure Shell traffic to identify account access, file transfers, keystroke typing, and other activities, all while preserving default encryption and without modifying any endpoint software. I believe security teams will have to increasingly incorporate these sorts of solutions, rather than downgrading or breaking encrypted traffic,” he continued.

Corelight, in fact, has just recently unveiled the new capabilities of its network traffic analysis (NTA) solutions for cybersecurity, the Corelight Encrypted Traffic Collection (ETC). ETC will empower threat hunters and security analysts with rich and actionable insights for encrypted traffic, without the need to ‘break and inspect’.

Effectively able to read the network’s ‘body language,’ the tool will single out the behaviour of malicious activity even when decryption is not an option. Rather than simply detecting threats, the data that ETC can provide will allow enterprises to make critical, informed security decisions.

Capabilities

Availing itself of both Corelight’s Research Team packages and the curated packages from the open-source Zeek community, ETC will provide:

SSH client brute force detection – supports threat hunting for Access techniques by revealing when a client makes excessive authentication attempts.

SSH authentication bypass detection – reveals when a client and server switch to a non-SSH protocol, a tactic used in Access attempts.

SSH client keystroke detection – reveals an interactive session where a client sends user-driven keystrokes to the server, which may be an indication of Command and Control activity.

SSH client file activity detection – reveals a file transfer occurring during the session where the client sent a sequence of bytes to the server or vice versa, which could indicate either Staging or Exfiltration activity.

SSH scan detection – accelerates threat hunting for Access techniques by inferring scanning activity based on how often a single service is scanned.

SSL certificate monitoring – extends Zeek’s existing certificate monitoring capabilities to help defenders limit attack surface, find vulnerabilities, and enforce internal policy.

Encryption detection – accelerates threat hunting by finding unencrypted traffic over commonly encrypted ports/protocols as well as custom/pre-negotiated sessions.
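The brute-force inference described above can be approximated from connection metadata alone, without decrypting anything. A minimal sketch counting authentication attempts per client (the record shape and threshold are assumptions for illustration, not Corelight's implementation):

```python
from collections import Counter

# Count SSH authentication attempts per client IP and flag clients
# making excessive attempts, using only connection metadata.
# The records and threshold below are illustrative assumptions.
AUTH_THRESHOLD = 5

attempts = [
    "10.0.0.5", "10.0.0.5", "10.0.0.5",
    "10.0.0.5", "10.0.0.5", "10.0.0.5",  # 6 attempts from one client
    "10.0.0.9",                          # 1 attempt: normal
]

counts = Counter(attempts)
brute_forcers = sorted(ip for ip, n in counts.items() if n > AUTH_THRESHOLD)
print(brute_forcers)  # ['10.0.0.5']
```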

For more technical information, you can read Corelight’s blog detailing the new capabilities.