Ransomware, the headliner of the previous half-year, walked off stage: only 1 percent of emails analyzed contained this kind of malware. Every third email, meanwhile, contained spyware, which is used by threat actors to steal payment data or other sensitive info to then put it on sale in the darknet or blackmail its owner.
Downloaders, intended for the installation of additional malware, and backdoors, granting cybercriminals remote access to victims’ computers, also made it to top-3. They are followed by banking Trojans, whose share in the total amount of malicious attachments showed growth for the first time in a while.
Opened email lets spy in
According to the data, in H1 2020, 43 percent of the malicious mails on the radars of Group-IB Threat Detection System had attachments with spyware or links leading to their downloading.
Another 17 percent contained downloaders, while backdoors and banking Trojans came third with a 16- and 15-percent shares, respectively. Ransomware, which in the second half of 2019 hid in every second malicious email, almost disappeared from the mailboxes in the first six months of this year with a share of less than 1 percent.
These findings confirm adversaries’ growing interest in Big Game Hunting. Ransomware operators have switched from attacks en masse on individuals to corporate networks. Thus, when attacking large companies, instead of infecting the computer of a separate individual immediately after the compromise, attackers use the infected machine to move laterally in the network, escalate the privileges in the system and distribute ransomware on as many hosts as possible.
Top-10 tools used in attacks were banking Trojan RTM (30%); spyware LOKI PWS (24%), AgentTesla (10%), Hawkeye (5%), and Azorult (1%); and backdoors Formbook (12%), Nanocore (7%), Adwind (3%), Emotet (1%), and Netwire (1%).
The new instruments detected in the first half of the year included Quasar, a remote access tool based on the open source; spyware Gomorrah that extracts login credentials of users from various applications; and 404 Keylogger, a software for harvesting user data that is distributed under malware-as-a-service model.
Almost 70 percent of malicious files were delivered to the victim’s computer with the help of archives, another 18% percent of malicious files were masked as office documents (with .doc, .xls and .pdf file extensions), while 14% more were disguised as executable files and scripts.
In the first six months of 2020, a total of 9 304 phishing web resources were blocked, which is an increase of 9 percent compared to the previous year. The main trend of the observed period was the two-fold surge in the number of resources using safe SSL/TLS connection – their amount grew from 33 percent to 69 percent in just half a year.
This is explained by the cybercriminals’ desire to retain their victim pool – the majority of web browsers label websites without SSL/TLS connection as a priori dangerous, which has a negative impact on the effectiveness of phishing campaigns.
Experts predict that the share of web-phishing with insecure connection will continue to decrease, while websites that do not support SSL/TLS will become an exception.
Just as it was the case in the second half of 2019, in the first half of this year, online services like ecommerce websites turned out to be the main target of web-phishers. In the light of global pandemic and the businesses’ dive into online world, the share of this phishing category increased to remarkable 46 percent.
The attractiveness of online services is explained by the fact that by stealing user login credentials, threat actors also gain access to the data of bank cards linked to user accounts.
Online services are followed by email service providers (24%), whose share, after a decline in 2019, resumed growth in 2020, and financial organizations (11%). Main web-phishing target categories also included payment services, cloud storages, social networks, and dating websites.
The leadership in terms of the number of phishing resources registered has persistently been held by .com domain zone – it accounts for nearly a half (44%) of detected phishing resources in the review period. Other domain zones popular among the phishers included .ru (9%), .br (6%), .net (3%) and .org (2%).
“The beginning of this year was marked by changes in the top of urgent threats that are hiding in malicious emails,” comments CERT-GIB deputy head Yaroslav Kargalev.
“Ransomware operators have focused on targeted attacks, choosing large victims with a higher payment capacity. The precise elaboration of these separate attacks affected the ransomware share in the top threats distributed via email en masse.
“Their place was taken by backdoors and spyware, with the help of which threat actors first steal sensitive information and then blackmail the victim, demanding a ransom, and, in case the demand is refused, releasing the info publicly.
“The ransomware operators’ desire to make a good score is likely to result in the increase of the number of targeted attacks. As email phishing remains the main channel of their distribution, the urgency of securing mail communication is more relevant than ever.”
There have been significant shifts in DDoS attack patterns in the first half of 2020, a Neustar report reveals. There has been a 151% increase in the number of DDoS attacks compared to the same period in 2019. These included the largest and longest attacks that Neustar has ever mitigated at 1.17 Terabits-per-second (Tbps) and 5 days and 18 hours respectively.
These figures are representative of the growing number, volume and intensity of network-type cyberattacks as organizations shifted to remote operations and workers’ reliance on the internet increased.
DDoS attacks becoming increasingly intense and sophisticated
Large DDoS attacks are bigger, more intense, and happening in greater numbers than ever before. There has been a noticeable spike in large attacks across the industry, most notably the 2.3 Tbps attack targeting an Amazon Web Services client in February – the largest volumetric DDoS attack on record.
The total number of attacks increased by over two and a half times during January through June of 2020 compared to the same period in 2019. The increase was felt across all size categories, with the biggest growth happening at opposite ends of the scale – the number of attacks sized 100 Gbps and above grew a whopping 275% and the number of very small attacks, sized 5 Gbps and below, increased by more than 200%.
Overall, small attacks sized 5 Gbps and below represented 70% of all attacks mitigated between January and June of 2020.
“While large volumetric attacks capture attention and headlines, bad actors increasingly recognise the value of striking at low enough volume to bypass the traffic thresholds that would trigger mitigation to degrade performance or precision target vulnerable infrastructure like a VPN,” said Michael Kaczmarek, Neustar VP of Security Products.
“These shifts put every organization with an internet presence at risk of a DDoS attack – a threat that is particularly critical with global workforces reliant on VPNs for remote login. VPN servers are often left vulnerable, making it simple for cybercriminals to take an entire workforce offline with a targeted DDoS attack.”
The rise in smaller DDoS attacks has been matched by increases in attack sophistication and intensity. 52% of threats mitigated by Neustar leveraged three vectors or more, with the number of attacks featuring a single vector essentially nonexistent.
New amplification methods and attacks of higher intensity targeted at critical pieces of web infrastructure were also tracked. The previous high-water mark of 500 millions-of-packets-per-second (Mpps) was topped this year, with an attack of over 800 Mpps recorded.
“The dependency and growth in online communications since COVID-19 has fundamentally changed what organizations must do to succeed,” said Brian McCann, President, Neustar Security Solutions.
“There is no one-size-fits-all solution for security, but having a reliable cloud service that ensures availability and security for all services and users has proven to be a critical difference between barely surviving and thriving in this rapidly changing environment.”
Ongoing impact of COVID-19 on cyberthreats and industry web traffic
The precipitous rise in DDoS attacks mirrors the growth in internet traffic seen during the pandemic. Internet use is up between 50% and 70% and streaming media rose more than 12% in the first quarter of 2020. This has meant that attackers of all types, whether serious cybercriminals or bored teenagers stuck at home, have had more screen time to be disruptive.
In a study of one of the largest cybercrime sites by Cambridge University’s Cybercrime Centre, they found that the number of attacks enacted by the website went up sharply at the start of the pandemic and associated lockdown. They also found that instead of existing cybercriminals staging more attacks, it was new attackers driving the increase in DDoS attacks.
The corresponding attacks, like internet traffic, have not been evenly spread across all websites. It’s well known that ecommerce and gaming websites have received a lot of negative attention from hackers, but there are other industries that have been hit hard by cybercriminals over the last six months.
Healthcare organizations contain sensitive patient information and a growing number of IoT devices that are easily exploited. Combined with the additional pressure of the pandemic, hospitals have become some of the most desirable targets for cybercriminals.
Industries that have seen a lot of growth during the pandemic, like online gambling, have also been ripe for cyberthreats. Most notably, online video has seen an incredible rise in both usage and DDoS attacks.
Omdia has reported an additional 200 billion hours of Netflix viewing or Zoom video calls over initial 2020 forecasts. Where traffic rises, so too do attacks; Neustar attack mitigations for this vertical increased by 461% over the last six months.
“While 2020 has brought radical changes in behaviour to consumers and criminals alike, it is naïve to assume that actions of either audience will revert completely to pre-pandemic norms after this crisis passes,” added Kaczmarek.
“Mitigating these increasingly sophisticated DDoS attacks will continue to be a necessary part of doing business online. At a time when many organizations could do with less worry, fully managed services can take the pressure off and ensure critical digital assets are safe and secure.”
The report highlights several emerging attacker tactics seen across the industry, including an increase in burst and pulse DDoS attacks, broadening abuse of built-in network protocols such as ARMS, WS-DD, CoAP and Jenkins to launch DDoS amplification attacks that can be carried out with limited resources and cause significant disruptions, NXNS attacks targeting DNS servers, RangeAmp attacks targeting Content Delivery Networks (CDNs), and a resurgence of Mirai-like malware capable of building large botnets through the exploitation of poorly secured IoT devices.
Companies are losing money to criminals who are launching Business Email Compromise (BEC) attacks as a more remunerative line of business than retail-accounts phishing, APWG reveals. High-ticket BEC attacks Agari reported average wire transfer loss from BEC attacks smashed all previous frontiers, spiking from $54,000 in the first quarter to $80,183 in Q2 2020 as spearphishing gangs reached for bigger returns. Scammers also requested funds in 66 percent of BEC attack in the form of … More
The post Phishing gangs mounting high-ticket BEC attacks, average loss now $80,000 appeared first on Help Net Security.
As web services, cloud storage, and big-data services continue expanding and finding their way into our lives, the gigantic hardware infrastructures they rely on–known as data centers – need to be improved to keep up with the current demand.
One promising solution for improving the performance and reducing the energy load associated with reading and writing large amounts of data is to confer storage devices with some computational capabilities and offload part of the data read/write process from CPUs.
A new way of implementing a key-value store
In a recent study, researchers from Daegu Gyeongbuk Institute of Science and Technology (DGIST), Korea, describe a new way of implementing a key-value store in solid state drives (SSDs), which offers many advantages over a more widely used method.
A key-value store (also known as key-value database) is a way of storing, managing, and retrieving data in the form of key-value pairs. The most common way to implement one is through the use of a hash function, an algorithm that can quickly match a given key with its associated stored data to achieve fast read/write access.
One of the main problems of implementing a hash-based key-value store is that the random nature of the hash function occasionally leads to long delays (latency) in read/write operations. To solve this problem, the researchers from DGIST implemented a different paradigm, called “log-structured merge-tree (LSM).” This approach relies on ordering the data hierarchically, therefore putting an upper bound on the maximum latency.
Letting storage devices compute some operations by themselves
In their implementation, nicknamed “PinK,” they addressed the most serious limitations of LSM-based key-value stores for SSDs. With its optimized memory use, guaranteed maximum delays, and hardware accelerators for offloading certain sorting tasks from the CPU, PinK represents a novel and effective take on data storage for SSDs in data centers.
Professor Sungjin Lee, who led the study, remarks: “Key-value store is a widely used fundamental infrastructure for various applications, including Web services, artificial intelligence applications, and cloud systems. We believe that PinK could greatly improve the user-perceived performance of such services.”
So far, experimental results confirm the performance gains offered by this new implementation and highlight the potential of letting storage devices compute some operations by themselves.
“We believe that our study gives a good direction of how computational storage devices should be designed and built and what technical issues we should address for efficient in-storage computing,” Prof Lee concludes.
Anti-vaccine websites, which could play a key role in promoting public hesitancy about a potential COVID-19 vaccine, are far more likely to be found via independent search engines than through an internet giant like Google. Misinformed while looking for privacy The study, led by researchers at Brighton and Sussex Medical School (BSMS), showed that independent search engines returned between 3 and 16 anti-vaccine websites in the first 30 results, while Google.com returned none. Lead author … More
The post Users turn to independent search engines for privacy, but also get misinformation appeared first on Help Net Security.
The global pandemic has seen the web take center stage. Banking, retail and other industries have seen large spikes in web traffic, and this trend is expected to become permanent.
Global brands fail to implement security controls
As attackers ramp up efforts to exploit this crisis, a slew of high-profile attacks on global brands and record-breaking fines for GDPR breaches have had little impact on client-side security and data protection deployments.
In many cases, this data leakage is taking place via whitelisted, legitimate applications, without the website owner’s knowledge. What this report indicates is that data risk is everywhere and effective controls are rarely applied.
Key findings highlight the scale of vulnerability and that the majority of global brands fail to deploy adequate security controls to guard against client-side attacks.
This website supply chain leverages client-side connections that operate outside the span of effective control in 98% of sampled websites. The client-side is a primary attack vector for website attacks today.
Websites expose data to an average of 17 domains
Despite increasing numbers of high-profile breaches, forms, found on 92% of websites expose data to an average of 17 domains. This is PII, credentials, card transactions, and medical records.
While most users would reasonably expect this data to be accessible to the website owner’s servers and perhaps a payment clearing house, the analysis shows that this data is exposed to nearly 10X more domains than intended.
Nearly one-third of websites studied expose data to more than 20 domains. This provides some insight into how and why attacks like Magecart, formjacking and card skimming continue largely unabated.
No attack is more widespread than XSS
Standards-based security controls exist that can prevent these attacks. They are infrequently applied.
Unfortunately, despite high-profile risks and the availability of controls, there has been no significant increase in the adoption of security capable of preventing client-side attacks:
- Over 99% of websites are at risk from trusted, whitelisted domains like Google Analytics. These can be leveraged to exfiltrate data, underscoring the need for continuous PII leakage monitoring and prevention. This has significant implications for data privacy, and by extension, GDPR and CCPA.
- 30% of the websites analyzed had implemented security policies – an encouraging 10% increase over 2019. However…
- Only 1.1% of websites were found to have effective security in place – an 11% decline from 2019. It indicates that while deployment volume went up, effectiveness declined more steeply. The attackers have the upper hand largely because we are not playing effective defense.
To address attacks such as XSS, Magecart and other card skimming exploits found in modern eCommerce environments, the use of client-side web security methods is beginning to emerge as a particularly useful practice.
Obviously, enterprise teams should integrate client-side protections with desired server-side countermeasures to ensure a full risk management profile (e.g., the client-side is a poor selection point to stop denial of service).
Several standards-based client-side security approaches have begun to mature that are worth examining from the perspective of website security and protection of browser sessions from malicious exploits. The best client-side security platforms automate implementation of these standards-based controls with emphasis on simplicity of administration. A typical, representative platform is used to demonstrate necessary client-side security controls.
Content security policy
To understand client-side security platforms, it helps to first explore the specifics of a standard approach known as a content security policy (CSP). This is a standard that is designed to address several types of web breaches such as cross-site scripting, click-jacking and form-jacking (all described earlier in this article series). CSP is also designed to reduce the risk of client-side malware injected from an infected advertising ecosystem.
CSPs are implemented as standard directives involving HTTP headers or page tags that specify which domains, subdomains, and resources a browser can load from a website. CSP use is consistent with the browsers any user would likely use including Chrome, Firefox, Safari, and Edge. The goal is that if malicious code is resident on a site, then visitors to that site would be prevented by the CSP from being directed to the hacker’s domain.
Figure 1. Content security policy
The example shown above in Figure 1 is taken directly from the original W3 recommendation. The CSP code can be interpreted as follows: Each source expression represents the location where content of the type specified is allowed to be pulled. To illustrate this whitelist security operation, consider that the
self keyword-source designation, in the example above, represents the set of URIs in the origin as the protected website.
Companies like Google have rolled out CSP successfully and are using it to stop attacks against their web applications daily. However, CSP is deployed only lightly in most web application environments. The challenge with CSP implementation has been its complex administration. Tala Security researchers have found, for example, that roughly two percent of website operators in the top Alexa 1000 websites deploy the standard to prevent client-side attacks. Assisting with this administrative challenge is a primary motivation for client-side platforms.
Client-side security protection results from using CSP can websites can be quite impressive. Here are some observed statistics from the Tala Security research team based on their experiences with client-side security support:
- Images – The average website in the Alexa 1000 loads images from roughly sixteen different external domains. The
img-srcdirective in CSP blocks images from any unwanted or potentially malicious sites.
- Stylesheets – The average website in the Alexa 1000 loads stylesheets from roughly two different external domains. The
style-srcdirective in CSP blocks stylesheet loads from any unwanted or potentially malicious sites.
- Fonts – The average website in the Alexa 1000 loads images from roughly one-and-a-half different external domains. The
font-srcdirective in CSP blocks font downloads from any unwanted or potentially malicious sites.
- Media – The average website in the Alexa 1000 loads images from different external domains. The
media-srcdirective in CSP blocks font downloads from any unwanted or potentially malicious sites.
An additional applicable cyber security standard from the World Wide Web Consortium (W3C) is known as subresource integrity (SRI). This standard is designed to validate resources being served up by any third party on a visited website. Such third parties include content distribution networks (CDNs), where it has not been uncommon to find malicious code being offered up to unsuspecting websites.
Client-side security platform
Client-side security platforms will make use of both CSP and SRI to provide effective client-side protections. The goal of these platform is to provide policy-based mitigation of fine-grained behavior for third-party sources where content is being served. Client-side platforms can then watch for any data collection suggestive of the attacks used by Magecart (and similar groups).
The client browser mitigation should be implemented based on artificial intelligence-based classification and learning. The software should install quickly and easily. Commercial platforms should support implementation for many target environments including Apache Nginx, IIS, NodeJS, and others. Performance and latency impacts should also be essentially non-existent and non-affecting of the user experience. Specific capabilities included in a commercial platform should include:
- Indicator evaluation – The selected platform should be designed to evaluate many different indicators of a web page’s architecture to analyze code, content, connections, and data exchange.
- Behavioral and risk modeling – The platform should include support for analysis to inform a behavioral and risk modeling task designed to highlight normal behavior and expose vulnerabilities.
- Operational improvement – Insights gained from the platform evaluation and modeling should be made available to help prevent client-side attacks such as XSS, Magecart, and the like.
The operation of world-class client side security platforms should include an on-going interaction between four different activities: Build, Monitor, Block, and Respond. The connection flow between these different lifecycle phases is depicted below.
Figure 2. Commercial client-side security lifecycle
Client-side security platforms should implement some type of information model that can be used to analyze the different behaviors on pages from the customer’s website to be protected. The security objective for such extraction should be to explicitly identify any sources of code and content on these web pages, as well as to find any data exchange support options that could involve sensitive data.
The resultant behavioral information model will thus provide a functional baseline on which to perform the necessary client-side risk management. The goal obviously should be to determine in real-time whether the site is vulnerable to attacks, third-party insertion, or other advanced breaches. As one would expect, performance of such behavioral modeling and protection in real-time complements any existing server-side security tools.
Contributing author: Aanand Krishnan, CEO, Tala Security.
As should be evident to anyone in the cyber security industry, the wide range of available web security solutions from commercial vendors will necessarily have varying degrees of effectiveness against different threats.
A premise of this article is that client-side security has been under-represented in these solutions – and to see this, it helps to briefly examine the specifics of the well-known web security solutions in use today, and their respective emphases.
Web Application Firewalls (WAFs)
The design of web application firewalls (WAFs) addressed the fact that the target of most malicious activity is not always the infrastructure surrounding a web hosting environment, but rather the application itself. By manipulating or exploiting security weaknesses in the critical applications of a business, bad actors could gain access to the most valuable assets.
WAFs are built to track the specifics of an application protocol versus the most foundational focus of an IDS/IPS. A WAF has the great advantage of being able to line up closely with the back-and-forth between user and application so that weird commands or other unusual behavior can be identified easily. Doing this properly is easier said than done, but a WAF positioned on the server side of an application architecture can be helpful.
Figure 1. WAF architecture
One challenge to WAF operation is the complexity of dealing with the incessant pace of change for applications in a modern DevOps environment. Another challenge, however, is a WAF’s inability to detect and mitigate client-side security exploits. Like an IDS/IPS, when exploit code finds its way to the user’s browser, mitigation of subsequent attack behavior is no longer in the purview of the server-side controls.
Secure Sockets Layer (SSL)
The use of Secure Sockets Layer (SSL) is an important contribution to secure eCommerce because it provides strong protection for the provision of user credential information – and, in particular, credit card numbers over the Internet. Virtually all eCommerce purchases today require some form of credit card exchange, and SSL has been invaluable in reducing the risk of this data being inappropriately observed in transit.
The infrastructure supporting SSL is surprisingly complex, and has required cooperation between various different organizations including the eCommerce vendor, the hosting provider, the browser companies, and security entities known as Certification Authorities (CAs.) Nevertheless, the SSL infrastructure for modern on-line transactional business is strong, and has benefited companies such as Amazon.com in a profound manner.
Figure 2. SSL architecture
While SSL has been a great success, its focus has been on the confidentiality of credentials, and not on the prevention of malicious attacks – especially on the client-side. Sadly, too much user training has incorrectly advised users that if they see evidence of SSL in action, that the “security issues” are covered. This might be true for avoidance of credit card sniffing attack in transit, but it is definitely not true for most web attacks, including client-side exploits.
Intrusion Detection Systems (IDS)
The most traditional means for protecting endpoints and infrastructure from security attacks involves insertion of an intrusion detection system (IDS) or intrusion prevention system (IPS) in-line with access to these components. The earliest IDS/IPS systems were built to detect attacks based on signatures of known methods, but more recent systems have been designed to include some more behavioral attributes.
Nevertheless, all IDS/IPS platforms inspect live session traffic to determine whether a given activity should be prevented from starting or terminated while on-going. This man-in-the-middle (MITM) approach its respective benefits and drawbacks, but it is common – especially since such functionality is regularly integrated into a next-generation firewall or gateway. The resulting architectural set-up for most enterprise looks as follows:
Figure 3. IDS/IPS architecture for web security
One challenge with any inspection-based solution is that encrypted communications traverse the MITM security with impunity. Another is that normal downloads to the client are not easily differentiated from malicious ones. Once a bad script finds its way past the IDS/IPS onto a client browser, the malware can run without the gateway security having any idea it is occurring. This does not remove the need for MITM security, but it does highlight a major weakness.
The provision of security for client-side attacks requires a new type of focus, one not found in many commercial solutions. It requires that security protections either pre-install on the client, or travel to the client in a dynamic manner based on the transaction being protected – usually a user with a browser visiting a website. The traditional deployment of client-side security for enterprise users has involved the following types of solutions:
Traditional anti-malware – The use of signatures as the basis for detecting malware continues to be a mainstay of modern enterprise security – and this extends to web application security. Many CISO-led teams rely on their anti-malware vendor to help reduce the risk of malware that might have been downloaded from a website. As one might expect, even with behavioral enhancements, this remains a weak control.
Virtual containers – The use of virtualized client computing environments, sometimes referred to generally as virtual containers, supports the idea that if malware finds its way to the endpoint, then it cannot reach real assets. This approach requires deployment of endpoint virtualized software, which often requires some work to minimize impact to application performance or use.
Web isolation – This technique involves a MITM gateway being positioned between the client and the website. Such processing can be software-only, or for higher assurance, implemented in hardware. The use of MITM gateways is shown here as a client-side protection because it extends the virtualization concept to the gateway.
Off-line detonation – The use of virtualized, off-line detonation is a useful means for detecting downloadable malware, and is commonly found in protection schemes for email attachments. It is also implemented frequently as part of a MITM gateway, and like isolation, complements the use of controls more specifically designed to protect the browsing session from website-born malware.
That such an assortment of methods exists is both good news and bad news for enterprise security managers: On the one hand, it is good news because these are all sensible controls, each with successful vendors supporting a range of enterprise customers. But it is also bad news in the sense that none address the problem of flexible, policy-based security policy enforcement for applications executing on the client browser.
In the next article, we’ll describe several standard application-specific controls that have emerged to address the risk of attacks such as Magecart, card skimming, and other web application and eCommerce-born exploits. The technology will be explained in the context of a typical client-side security platform, which implements content security policies, subresource integrity, and other security safeguards that should be of interest to the security team.
Contributing author: Aanand Krishnan, CEO, Tala Security.
Threats to web security are explained in this first of a three-part article series, and client-side security is shown to address a commonly missed class of cyber attack exemplified by Magecart. Traditional solutions to web security are outlined, followed by a new approach based on client-side standards such as content security policy and subresource integrity. These emerging approaches are explained in the context of a representative client-side security platform.
Perhaps the most salient aspect of cybersecurity as a professional discipline is its continuous cycle of change. That is, as cyber attacks emerge that challenge the confidentiality, integrity, or availability of some on-line resource, corresponding protection solutions are invented to reduce the risk. Once these solutions become integrated into the underlying fabric of the resource of interest, new cyber-attacks emerge, and new solutions are invented – and the cycle continues.
In some cases, new protective cyber solutions have the side benefit of anticipating new forms of malicious attacks – and where this works, security risks are often avoided in a wide range of different scenarios. Two-factor authentication, for example, was created in response to password guessing, but is now an important component in the design of new Internet of Things (IoT) machine-to-machine application protocols to reduce risk.
Nowhere is this process of introducing and mitigating cyber risk more obvious than in web security – also referred to as web application security. With valuable assets being provisioned and managed increasingly through web-based interfaces, the value of web-based exploits continues to rise. One consequence of this rise is that despite the many technologies available to protect web resources, the gap between offense and defense is growing.
A main premise in this technical series is that this web security gap stems from the fact that most application execution occurs on the modern browser. The web security community has long recognized the need to deploy functional controls to safeguard the server-side vulnerability of web servers delivering content and capability to client browsers. Too little attention, however, has been placed on this client-side vulnerability, which is attractive to attackers and largely ignored by today’s security infrastructure.
The three parts that follow in our series are intended to help address this oversight. In Part 1, we offer an introduction to the most common cyber attacks that target websites today. Part 2 then provides an overview of the web security solutions that are deployed in most production environments today. Finally, Part 3 offers an introduction to how a representative client-side security solution can help rectify the client-side weaknesses exploited by bad actors today.
Common attacks on websites
Commensurate with Tim Berners-Lee’s idea in the early 1990s to layer hypertext protocols and markup languages onto the Internet protocol (IP) came the emergence of offensive means to attack the infrastructure, systems, and applications that make up what is now called the web. And thus was born the discipline of web security, which can be defined as the set of protective measures required to manage the security risk of web-based computing.
As one would expect, the taxonomy of web security issues quickly grew in several directions, but early focus was on avoiding denial of service attacks, protecting hosting infrastructure, and ensuring free flow of web content to users. Such focus on availability corresponded to the observation that if a website was down or not working properly, then eCommerce transactions would not occur – which had obvious revenue implications.
In addition to these infrastructure concerns, however, came a growing recognition that application-level security issues might have severe consequences – often to the privacy of customers visiting a website. Thus was born the so-called web application threat, which quickly evolved from a small concern to a massive security challenge. Even today, finding sites with exploitable vulnerabilities in their web applications remains an easy task.
Several standard attack strategies have emerged in recent years that have been difficult to eradicate. These nagging problems prey on the complexity of many web application designs, and on the relative inexperience and ignorance of many web software administrators. Below, we describe these strategies – four in total – that continue to drive risk into eCommerce infrastructure and to cause challenges for many enterprise security teams:
Cross-Site Scripting (XSS)
The most common application-level web security attack is called cross-site scripting or just XSS. A cross-site attack involves a technique known as injection – where the attacker finds a way to get scripts running on a target website. The ultimate goal is for that targeted web application to send the attacker’s code to some unknowing user’s browser. The XSS attack works best when a website accepts, processes, and uses input without much checking.
Figure 1. XSS Attack Schema
Organizations such as Open Web Application Security Project (OWASP) suggest various defenses against XSS attacks. Their suggestions, many of which continue to be ignored by practitioners, involve common-sense coding and web administrative procedures that improve the processing of data from users. Most involve better validation of input data on the server side, which is a welcome security control and should be present in any web ecosystem.
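The core of the defensive advice above is output encoding: untrusted input must never be interpolated into markup verbatim. The sketch below (a hypothetical `render_greeting` helper, not from any real framework) contrasts the vulnerable pattern with the safe one using Python's standard-library `html.escape`:

```python
import html

def render_greeting(user_input: str) -> str:
    # Vulnerable pattern (commented out): untrusted input pasted straight
    # into markup, so "<script>..." would execute in the visitor's browser.
    #   return f"<p>Hello, {user_input}</p>"

    # Safer pattern: escape HTML metacharacters before rendering, so the
    # payload is displayed as inert text rather than executed.
    return f"<p>Hello, {html.escape(user_input)}</p>"

payload = "<script>alert(document.cookie)</script>"
print(render_greeting(payload))
```

Output encoding like this complements, rather than replaces, server-side input validation: validation narrows what is accepted, while encoding neutralizes whatever still reaches the page.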
Content and Ad injection
The challenge of dealing with content and ad injection attacks, also known as malvertising, has increased substantially in recent years. This should come as no surprise given the rise of the on-line advertising ecosystem as a force in modern business. Some estimates have the size of on-line advertising now reaching aggregate levels as high as $100B. Hackers and criminals understand this trend – and take advantage of exploitable weaknesses.
The way malvertising works follows a similar pattern to XSS attacks: Malicious actors find ways to inject their code onto websites through legitimate advertising networks. The goal, again similar to XSS, is to target visitors to the site, usually with the intent to redirect their browsers to some targeted website that has been planted with malware and that forms the basis for whatever attack is desired, such as credential theft.
Many observers have referenced the injection process as involving something called a drive-by download. This term references a user viewing an advertisement using a browser with an exploitable vulnerability (which is sadly a common scenario). While the user interacts with the ad, a redirection process is initiated whereby the malicious software finds its way to the unsuspecting visitor to the site.
Figure 2. Drive-By Download via Malvertising
The traditional solution to this problem involves placing a control such as a web application firewall (WAF) in-line with the access. The WAF would be programmed to use signature or behavioral analysis to stop malicious code execution from untrusted sources. As with XSS security, this server-side protection is commonly found in advertising ecosystems as a primary control. Such emphasis can address malvertising, but might not work for all forms of attacks.
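As a rough illustration of the signature-matching half of what a WAF does (greatly simplified – a real WAF combines thousands of signatures with behavioral scoring, and the patterns below are hypothetical):

```python
import re

# Hypothetical, deliberately tiny signature set for illustration only.
SIGNATURES = [
    re.compile(r"<script\b", re.IGNORECASE),      # inline script injection
    re.compile(r"\bonerror\s*=", re.IGNORECASE),  # event-handler injection
    re.compile(r"javascript:", re.IGNORECASE),    # javascript: URL scheme
]

def waf_allows(request_payload: str) -> bool:
    """Return True if no signature matches; False means block the request."""
    return not any(sig.search(request_payload) for sig in SIGNATURES)

print(waf_allows("name=alice"))                                # allowed
print(waf_allows("<script>steal(document.cookie)</script>"))   # blocked
```

The limitation the article points out follows directly from this design: a signature engine inspects traffic passing through the gateway, but it has no visibility into third-party scripts that the browser itself fetches and executes after the page loads.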
Card skimming (Magecart)

The hacking group Magecart emerged several years ago, terrorizing websites with an attack known as card skimming. Normally, hacking groups tend to come and go quickly, but Magecart hit a nerve with its targeted breaches of enterprise websites and web applications. A wide range of organizations saw their sites formjacked, and security solutions were not immediately evident to most victims.
Figure 3. Magecart Card Skimming
Total web traffic, broken down by the world’s largest industries, has shifted as the COVID-19 pandemic has unfolded over the past several weeks, according to Imperva.
Based on a weekly average compared to Jan. 19, 2020 traffic, industries that experienced an increase in web traffic from March 1 through March 22, 2020 include:
- News (+64%)
- Food and beverages (+34%)
- Retail (+28%)
- Gaming (+28%)
- Law and government (+17%)
- Education (+17%)
Industries that faced a decrease in web traffic from March 1 through March 22, 2020 include:
- Sports (-46%)
- Adult (-42%)
- Travel (-41%)
- Automotive (-35%)
- Financial services (-7%)
- Gambling (-3%)
- Healthcare (-3%)
Spikes in attacks on government and law sectors
The report revealed spikes in attacks against the government and law sectors as the United States launched its Democratic primaries, along with early signs of changes in industry traffic and attack trends due to COVID-19. Key findings between Feb. 1 and Feb. 29, 2020 include:
- First sign of a shift in web usage as COVID-19 spreads globally. During the month of February, Imperva began monitoring how and whether the cross-border spread of COVID-19 started to affect traffic and attack trends across multiple industries and countries. Traffic changes were detected in the News (+10%), Travel (-5%) and Finance (-5%) industries; however, there were no major changes in the number of attacks per industry and country.
- The United States and New Zealand both experienced spikes in attacks on government and law sectors. Within the United States, there was a 10% increase in the average number of attacks per site in these sectors as the Democratic primary season picked up. The top three countries of origin outside of the United States were Russia (22%), Ukraine (12%) and China (9%), and 99% of attacks overall were carried out by bots. Additionally, New Zealand experienced an 800% spike in attacks on Feb. 17 and 18.
- Web attacks originating from cloud platforms saw a 27% decline for the second month in a row. While attackers are still using cloud platforms to disseminate attacks, there was a 20% increase in attacks originating from web hosting services.
- India is the country with the highest number of spam attacks. Comment spam attacks in India are twice as common as those in other frequently spammed countries, including Canada, Spain, the United Kingdom and the United States.
“This new research from the Cyber Threat Index is a testament to the rapidly changing security landscape, and we can expect to see some of these threats—particularly attacks on government and law sectors—continue to proliferate as we inch closer to the 2020 U.S. presidential election,” said Nadav Avital, head of security research at Imperva.
“Government websites will only become an even bigger target to malicious actors, so organizations must prepare now before it’s too late. We’ll continue to monitor how this space evolves and provide recommendations for the right course of action.”
The February 2020 Index score of 782—on a scale of zero to 1000—is the highest to date, rising from 776 in January 2020.