A rise in consumer digital traffic has corresponded with a rise in fraud attacks, Arkose Labs reveals. As the year progresses and more people than ever are online, historically ‘normal’ online behavioral patterns are no longer applicable and holiday levels of digital traffic continue to occur on a near daily basis.
Fraudsters are exploiting old fraud modeling frameworks that fail to take today’s realities into account, attempting to blend in with trusted traffic and carry out attacks undetected.
“As the world becomes increasingly digital as a result of COVID-19, fraudsters are deploying an alarming volume of attacks, and continually devising new and more sophisticated ways of carrying out their attacks,” said Vanita Pandey, VP of Marketing and Strategy at Arkose Labs.
“The high fraud levels that accompany high traffic volumes are likely here to stay, even after the pandemic ends. It’s crucial that businesses are aware of the top attack trends so that they can be more vigilant than ever to successfully identify and stop fraud over the long-term.”
Bot attacks and credential stuffing skyrocket
Q3 2020 saw the highest levels of bot attacks on record: 1.3 billion attacks were detected in total, with 64% occurring on logins and 85% emanating from desktop computers.
Due to the widespread availability of usernames, email addresses and passwords from years of data breaches, as well as easy access to automated tools to carry out attacks at scale, credential stuffing emerged as a main driver of attack traffic. 770 million automated credential stuffing attacks were detected and stopped by Arkose Labs in Q3.
For ecommerce, every day is Black Friday
The rise in digital traffic for most of 2020 means businesses have been dealing with holiday season levels of traffic since March. With every day now resembling Black Friday, some retailers are better equipped to handle the onslaught of holiday season traffic and fraud.
However, it remains to be seen if a holiday sales bump will occur this year, given already record high traffic levels for many ecommerce businesses.
While much of 2019 saw a marked shift from automated attacks to human sweatshop-driven attacks, automated attacks dominated much of 2020, with Q3 seeing a particularly high spike. This trend is likely to revert to more targeted attacks in Q4, as fraudsters typically employ low-cost human attackers during the holiday shopping season to commit attacks that require human nuance and intelligence.
Europe emerges as the top attacking region
Nearly half of all attacks in Q3 of 2020 originated from Europe, with over 10 million sweatshop attacks coming from Russia and 7 million coming from the United Kingdom.
Many European countries, such as the United Kingdom, France, Italy and Germany, are among those whose GDP has shrunk the most since the global pandemic began. A surge in attacks from the nations suffering the biggest dips in economic output highlights the economic drivers that spur fraud.
Pandey said, “COVID-19 has sent the world into turmoil, upending digital traffic patterns and introducing long-lasting consequences. Habits formed during 2020 – namely conducting commerce, school, work and even socializing entirely online – will be difficult to let go of, so fraud teams must be capable of quickly cutting through digital traffic noise and spotting even the most subtle signs of attacks. In particular, using targeted friction to deter malicious activity will be key in the months and years ahead.”
76% of Americans believe they’ve encountered disinformation firsthand and 20% say they’ve shared information later shown to be incorrect or intentionally misleading, according to research released by NortonLifeLock.
Disinformation, or false information intended to mislead or deceive people, is commonly spread by social media users and bots – automated accounts controlled by software – with the intent to sow division among people, create confusion, and undermine confidence in the news surrounding major current events, such as the 2020 U.S. presidential election, COVID-19 and social justice movements.
“Disinformation campaigns can spread like wildfire on social media and have a long-lasting impact, as people’s opinions and actions may be influenced by the false or misleading information being circulated,” said Kats.
Fact-checking helps stop the spread of disinformation
No matter who or what posts the information, fact-checking is a best practice for consumers to help stop the spread of disinformation. According to the online survey of more than 2,000 US adults, 53% of Americans often question whether information they see on social media is disinformation or fact.
86% of Americans agree that disinformation can greatly influence someone’s opinion, yet only 58% acknowledge that it could influence them.
Although 82% of Americans are very concerned about the spread of disinformation, 21% still say social media companies do not have the right to remove it from their platform, with Republicans being almost twice as likely as Democrats to feel this way (25% vs. 13%).
“From disinformation campaigns to deepfakes, it’s becoming increasingly difficult for people to tell real from fake online,” added Kats. “It’s important to maintain a healthy dose of skepticism and to fact check multiple sources – especially before sharing something – to help avoid spreading disinformation.”
- More than a third of Americans don’t know the true purpose of disinformation. Only 62% of Americans know that disinformation is created to cause a divide or rift between people; 72% of both Republicans and Democrats believe disinformation is created for political gain.
- 79% of Americans believe social media companies have an obligation to remove disinformation from their platforms, with the majority of Democrats (87%), Republicans (75%) and Independents (75%) supporting this.
- Democrats and Republicans disagree on who spreads disinformation the most, with Republicans most commonly stating news media outlets are most likely to spread disinformation (36%), and Democrats stating it’s U.S. politicians (28%).
- Disinformation has taken a toll on relationships, with many Americans having argued with someone (36%), unfriended/unfollowed someone on social media (30%), or taken a break from social media altogether (28%) because of disinformation.
ManageEngine unveiled findings from a report that analyzes behaviors related to personal and professional online usage patterns.
Security restrictions on corporate devices
The report combines a series of surveys conducted among nearly 1,500 employees amid the pandemic as many people were accelerating online usage due to remote work and stay-at-home orders. The findings evaluate users’ web browsing habits, opinions about AI-based recommendations, and experiences with chatbot-based customer service.
“This research illuminates the challenges of unsupervised employee behaviors, and the need for behavioral analytics tools to help ensure business security and productivity,” said Rajesh Ganesan, vice president at ManageEngine.
“While IT teams have played a crucial role in supporting remote work and business continuity during the pandemic, now is an important time to evaluate the long-term effectiveness of current strategies and augment data analytics to IT operations that will help sustain seamless, secure operations.”
Risky online behaviors could compromise corporate data and devices
Notably, 37% of respondents say that there are no security restrictions on their corporate devices. Risky online activities such as visiting unsecured websites, sharing personal information, and downloading third-party software could therefore pose potential threats.
For example, 54% said they would still visit a website after receiving a warning about potential insecurities. Such risky behavior is also notable among younger generations, including 42% of people aged 18-24 and 40% of those aged 25-34.
Remote work has its hiccups, but IT teams have been responsive
79% of respondents say they experience at least one technology issue weekly while working from home. The most common issues include slowed functionality and download speeds (40%) and unreliable connectivity (25%).
However, IT teams have been committed to solving these challenges. For example, 75% of respondents say it’s been easy to communicate with their IT teams to resolve these issues. Chatbots, AI, and automation are becoming increasingly more effective and trusted.
76% said their experience with chatbot-based support has been “excellent” or “satisfactory,” and 55% said their issue was resolved in a timely manner. As it relates to artificial intelligence, 67% say they trust these solutions to make recommendations for them.
The increasing comfort with automation technologies can help IT teams support both front- and back-end business functions, especially during times of increased online activity due to the pandemic.
As organizations settle into long-term remote working, new attack vectors for opportunistic cyberattackers and new challenges for network administrators have been introduced, Nuspire reveals.
Now six months into the pandemic, attackers have pivoted away from COVID-19 themes, instead exploiting other prominent media themes, such as the upcoming U.S. election, to wreak havoc.
Increase in both botnet and exploit activity
There was an increase in both botnet and exploit activity over the course of Q2 2020 by 29% and 13% respectively—that’s more than 17,000 botnet and 187,000 exploit attacks a day.
While attackers targeted remote work technology at the source to gain access to the enterprise in Q1 2020, tactics shifted toward leveraging botnets to obtain a foothold in the network. Home routers are typically not monitored by IT teams and have therefore become a viable attack method that avoids detection while infiltrating corporate networks.
“Threat vectors will continue to evolve as the uncertainty of our world continues to play out. That’s why our team analyzes the latest threat intelligence daily and uses this data to engage in proactive threat hunting and response to ensure our clients have the upper hand.”
- The ZeroAccess botnet made a resurgence in Q2, coming in second for most used botnet. ZeroAccess was originally terminated in 2013 but has made rare resurgences over the last seven years.
- There was a significant spike (a 1,310% peak mid-quarter) in exploit attempts against Shellshock, a vulnerability discovered in 2014, demonstrating that attackers still target old vulnerabilities to catch outdated operating systems and unpatched systems.
- A new signature, dubbed MSOffice Sneaky, was identified after its release during Q2. It involves documents containing malicious macros that reach out to command-and-control servers to download malware of the attacker’s choosing. This attack vector is especially dangerous when remote employees disconnect from their VPN.
- DoublePulsar, the exploit developed by the NSA, continues to dominate the exploit chart, accounting for 72% of all exploit attempts witnessed at Nuspire.
There’s no denying that the way people have been using the Internet and online stores has changed over the last couple of months. As consumers change their online habits, the distinction between human and bot behavior is becoming increasingly blurred, presenting cybersecurity teams with an even bigger challenge than before when it comes to differentiating humans from bots, and good bot behavior from bad.
In the past, businesses have just blocked all bot activity. That approach simply does not work today. In 2020, businesses must find a way of navigating the new bot landscape. Otherwise, at best, they risk blocking good bots and legitimate customers and, at worst, they risk bots taking over customer accounts and tarnishing their brand reputation.
The problem with bots
Why are bad bots so bad? Well, bad bots are created by bad actors to maximize personal gain from techniques such as card cracking and credential stuffing, which are used across multiple industries.
Credential stuffing involves using stolen passwords and usernames to hijack accounts—the hacker buys a list of leaked passwords and then has a bot input these passwords on other sites to try to gain access. With research revealing that more than 50% of internet users reuse the same password for multiple accounts, there’s a good chance of success.
Doing this manually won’t get results, but a bot can try thousands of credentials every minute. Hijacked accounts can then be used to commit fraud or are sold on. (Spotify and Netflix users that find random people added to their family accounts are often victims of this type of attack.)
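That speed gap is exactly what velocity-based defenses exploit. As an illustration (not from the article), here is a minimal sketch of a per-source failed-login counter over a sliding window; the window size and threshold are assumed values, not figures from the research:

```python
import time
from collections import defaultdict, deque

# Illustrative thresholds: a human rarely fails login more than a few
# times per minute, while a credential-stuffing bot fails hundreds.
WINDOW_SECONDS = 60
MAX_FAILURES_PER_WINDOW = 10

_failures = defaultdict(deque)  # source IP -> timestamps of failed logins


def record_failed_login(source_ip, now=None):
    """Record a failed login and return True if the source looks automated."""
    now = time.time() if now is None else now
    window = _failures[source_ip]
    window.append(now)
    # Drop failures that have fallen out of the sliding window.
    while window and now - window[0] > WINDOW_SECONDS:
        window.popleft()
    return len(window) > MAX_FAILURES_PER_WINDOW
```

A flagged source would typically face step-up friction such as a CAPTCHA or MFA challenge rather than a hard block, so a forgetful human is not locked out.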
Card cracking bots, on the other hand, are used to create fake profiles and buy goods with stolen credit card details; the idea is to run through a list of stolen credit cards and find those that are still valid. Again, doing this manually is impossible, but bots make finding the needle in the haystack simple.
These two techniques cause two major problems for businesses. The first is reputational damage: even if a business itself was not subject to a data breach, if breached details are used on its website, consumers will hold that business accountable. The second is the impact on customers’ trust and loyalty. Every user affected is likely to view that business as untrustworthy, with many thinking twice about using its services again. And these negative brand perceptions can stick, causing customers to vote with their feet.
It’s about the journey
There are of course some red flags that are indicative of bot behavior, which every business must look out for. Speed is a giveaway—bots are programmed to act faster than any human possibly could. But unknown IP addresses or traffic from unexpected countries can also be characteristic of bot behavior.
However, as the landscape becomes more complex, businesses need to go one step further. They must analyze what an average user journey looks like, and then consider what an unusual journey could look like.
For online retailers, a customer is likely to search for stock levels in a few different postcodes—but if a user is searching for every postcode in the UK, this could be indicative of bot behavior. It is also likely that a human would forget their username and password combination a couple of times—but not ten thousand times.
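That journey-level view can be approximated by measuring the breadth of activity in a session. The sketch below is illustrative only; the event schema and thresholds are assumptions, not figures from the article:

```python
def looks_like_bot_journey(session_events,
                           max_distinct_postcodes=20,
                           max_failed_logins=50):
    """Flag a session whose breadth of activity exceeds plausible human use.

    session_events: list of (event_type, value) tuples, e.g.
    ("postcode_search", "SW1A 1AA") or ("failed_login", "user@example.com").
    Thresholds are illustrative assumptions.
    """
    # A human checks a few postcodes; a scraper enumerates them all.
    postcodes = {v for t, v in session_events if t == "postcode_search"}
    # A human forgets a password a couple of times, not thousands.
    failures = sum(1 for t, _ in session_events if t == "failed_login")
    return len(postcodes) > max_distinct_postcodes or failures > max_failed_logins
```

In practice such rules would be one signal among many, combined with speed and IP reputation, rather than a sole verdict.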
It’s clear that the “block all bots” approach doesn’t work in today’s complex environment. Rather, businesses must focus on the intent of their website traffic, through looking at user journeys. Only then will businesses truly be able to start drawing distinctions between good and bad bot behavior and human and non-human traffic.
Many businesses are at risk from bot attacks, despite an awareness of the problem and a widely held belief that they have the problem under control, Netacea reveals.
Global businesses at risk from bot attacks
The research surveyed businesses across the travel, entertainment, e-commerce and financial services sectors. It found a high awareness of how bot attacks could negatively affect a business, with over 70% understanding the most common attacks, including credential stuffing and card cracking, and 76% stating they have been attacked by bots.
However, these same businesses estimated that only around 15% of their web application resources are taken up by bots. With over half of web traffic today generated by bots, this implies that businesses are unaware of a great deal of the bot traffic on their sites.
Businesses were also wholly unaware of the marketplaces where their customers’ usernames and passwords can be bought and sold, with only 1% of respondents being familiar with them.
Entertainment sites most confident
Online entertainment sites, including gaming and streaming, were the most confident that they had not been hit by bot attacks, with over half claiming not to have been attacked in the last year.
Just over 20% of e-commerce sites claimed to not have been affected, while financial services and travel sites were the most aware of the ubiquity of attacks—fewer than 5% said that they had not been the victim of an attack.
Lack of visibility may be down to a lack of responsibility
This lack of visibility may be down to a lack of responsibility: only one in ten businesses say that bot mitigation is the responsibility of a single department or person. Almost two thirds say it is the responsibility of four or more departments, making passing the problem along—or even ignoring it completely—much more of a possibility.
“Current circumstances mean that businesses are relying on their online presence more than ever before,” said Andy Still, CTO, Netacea. “This also means more opportunities for online criminal enterprises looking to increase their profits. And while the majority of businesses are not oblivious to the problem of bot attacks, the inevitable conclusion of this research is that this awareness is not leading to action.”
“High profile attacks, such as ransomware that locks down sites completely, have dominated the headlines recently, which may have led to this complacency. Bot attacks, while more subtle, can be just as devastating to a business, as accounts are stolen and sold on, card fees become crippling, and bad decisions are made on the basis of faulty data,” cautioned Still.
The research did reveal some good news—nearly all businesses were either investing in, or planning to invest in bot management, and almost none were cutting back on this vital security measure.
Software vulnerabilities are more likely to be discussed on social media before they’re revealed on a government reporting site, a practice that could pose a national security threat, according to computer scientists at the U.S. Department of Energy’s Pacific Northwest National Laboratory.
At the same time, those vulnerabilities present a cybersecurity opportunity for governments to more closely monitor social media discussions about software gaps, the researchers assert.
“Some of these software vulnerabilities have been targeted and exploited by adversaries of the United States. We wanted to see how discussions around these vulnerabilities evolved,” said lead author Svitlana Volkova, senior research scientist in the Data Sciences and Analytics Group at PNNL.
“Social cybersecurity is a huge threat. Being able to measure how different types of vulnerabilities spread across platforms is really needed.”
Social media – especially GitHub – leads the way
Their research showed that a quarter of the software vulnerabilities discussed on social media from 2015 through 2017 appeared there before landing in the National Vulnerability Database, the official U.S. repository for such information. Further, for this segment of vulnerabilities, it took an average of nearly 90 days for a gap discussed on social media to show up in the national database.
The research focused on three social platforms – GitHub, Twitter and Reddit – and evaluated how discussions about software vulnerabilities spread on each of them. The analysis showed that GitHub, a popular networking and development site for programmers, was by far the most likely of the three sites to be the starting point for discussion about software vulnerabilities.
It makes sense that GitHub would be the launching point for discussions about software vulnerabilities, the researchers wrote, because GitHub is a platform geared towards software development.
The researchers found that for nearly 47 percent of the vulnerabilities, discussions started on GitHub before moving to Twitter and Reddit. For about 16 percent of the vulnerabilities, these discussions started on GitHub even before the vulnerabilities were published to official sites.
Codebase vulnerabilities are common
The research points to the scope of the issue, noting that nearly all commercial software codebases contain open-source components and that nearly 80 percent of codebases include at least one vulnerability.
Further, each commercial software codebase contains an average of 64 vulnerabilities. The National Vulnerability Database, which curates and publicly releases vulnerabilities known as Common Vulnerabilities and Exposures “is drastically growing,” the study says, “and includes more than 100,000 known vulnerabilities to date.”
In their paper, the researchers discuss which U.S. adversaries might take note of such vulnerabilities. They mention Russia, China and others, and note that there are differences in how the three platforms are used within those countries when exploiting software vulnerabilities.
According to the study, cyberattacks in 2017 later linked to Russia involved more than 200,000 victims, affected more than 300,000 computers, and caused about $4 billion in damages.
“These attacks happened because there were known vulnerabilities present in modern software,” the study says, “and some Advanced Persistent Threat groups effectively exploited them to execute a cyberattack.”
Bots or human: Both pose a threat
The researchers also distinguished between social media traffic generated by humans and automated messages from bots. A social media message crafted by an actual person and not generated by a machine will likely be more effective at raising awareness of a software vulnerability, the researchers found, emphasizing that it was important to differentiate the two.
“We categorized users as likely bots or humans, by using the Botometer tool,” the study says, “which uses a wide variety of user-based, friend, social network, temporal, and content-based features to perform bot vs. human classification.”
The tool is especially useful in separating bots from human discussions on Twitter, a platform that the researchers noted can be helpful for accounts seeking to spread an agenda.
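Botometer itself relies on a large supervised feature set behind an API, but the flavor of the approach can be conveyed with a deliberately simplified, rule-based stand-in. Everything below (feature choices, weights, cutoffs) is an illustrative assumption, not Botometer’s actual model:

```python
def bot_score(tweets_per_day, followers, friends, has_default_profile_image):
    """Toy bot-likelihood score in [0, 1] from a few account features.

    A rule-of-thumb stand-in for the feature families Botometer draws on
    (temporal, network, profile); weights and cutoffs are illustrative.
    """
    score = 0.0
    if tweets_per_day > 100:  # superhuman posting tempo
        score += 0.4
    if friends > 0 and followers / friends < 0.1:  # follows many, followed by few
        score += 0.3
    if has_default_profile_image:  # unpersonalized profile
        score += 0.3
    return min(score, 1.0)
```

A real classifier would learn such weights from labeled accounts rather than hard-coding them, but the intuition is the same: no single feature is decisive, while several together are telling.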
Ultimately, awareness of social media’s ability to spread information about software vulnerabilities provides a heads-up for institutions, the study says.
“Social media signals preceding official sources could potentially allow institutions to anticipate and prioritize which vulnerabilities to address first,” it says.
“Furthermore, quantification of the awareness of vulnerabilities and patches spreading in online social environments can provide an additional signal for institutions to utilize in their open source risk-reward decision making.”
Bad bot traffic has increased compared to previous years, comprising almost one quarter (24.1%) of all website traffic and most heavily impacting the financial services industry, according to Imperva.
Bad bot traffic increases to highest levels ever
In 2019, bad bot traffic comprised 24.1% of all website traffic, rising 18.1% from the year prior. Good bot traffic accounted for 13.1% of traffic—a 25.1% decrease from 2018—while 62.8% of all website traffic came from humans.
Financial services industry hit hardest by bad bots
Every industry has a unique bot problem ranging from account takeover attacks and credential stuffing to content and price scraping. The top 5 industries with the most bad bot traffic include financial services (47.7%), education (45.7%), IT and services (45.1%), marketplaces (39.8%), and government (37.5%).
Moderate to sophisticated bad bots make up almost three quarters of bad bot traffic
Advanced persistent bots (APBs) continue to plague websites and often avoid detection by cycling through random IP addresses, entering through anonymous proxies, changing their identities, and mimicking human behavior. In 2019, 73.7% of bad bot traffic was APBs.
More than half of bad bots claim to be Google Chrome
Continuing to follow browser popularity trends, bad bots impersonated the Chrome browser 55.4% of the time. The use of data centers declined again in 2019, accounting for 70% of bad bot traffic—down from 73.6% in 2018.
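Those two facts combine into a simple heuristic: a client claiming to be a residential Chrome browser but connecting from a data-center IP range is suspect. The sketch below is illustrative; the ranges shown are documentation-only addresses, and in practice the list would come from published cloud-provider IP feeds:

```python
import ipaddress

# Illustrative (not real) data-center ranges; real deployments would load
# published cloud-provider IP lists here.
DATACENTER_NETS = [
    ipaddress.ip_network("203.0.113.0/24"),
    ipaddress.ip_network("198.51.100.0/24"),
]


def suspicious_chrome_claim(user_agent, source_ip):
    """Flag a Chrome user-agent claim arriving from a data-center address."""
    claims_chrome = "Chrome/" in user_agent
    ip = ipaddress.ip_address(source_ip)
    from_datacenter = any(ip in net for net in DATACENTER_NETS)
    return claims_chrome and from_datacenter
```

Since user-agent strings are trivially forged, this check is a red flag to feed into a broader scoring system, not a verdict on its own.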
For the third year in a row, the most blocked country is Russia
In 2019, 21.1% of country-specific blocks targeted Russia, followed closely by China at 19%. Despite this, with most bad bot traffic emanating from data centers, the United States remains the “bad bot superpower”, with 45.9% of attacks coming from the country.
“We closely monitor how malicious bots iterate to evade detection and commit a wide range of attacks, and this year’s findings have revealed the next evolution: Bad Bots as-a-Service,” said Kunal Anand, CTO at Imperva.
“Bad Bots as-a-Service is an attempt by bot operators to legitimize their role and appeal to organizations facing increased pressure to stay ahead of competition. It’s critical that businesses spanning all industries learn which threats are most pervasive in their field and take the necessary steps to protect themselves.”
Bad bots interact with applications in the same way a legitimate user would, making them harder to detect and prevent. They enable high-speed abuse, misuse, and attacks on websites, mobile apps, and APIs. They allow bot operators, attackers, unsavory competitors, and fraudsters to perform a wide array of malicious activities.
Such activities include web scraping, competitive data mining, personal and financial data harvesting, brute-force login, digital ad fraud, spam, transaction fraud, and more.
Despite a previous warning by Ben-Gurion University of the Negev (BGU) researchers, who exposed vulnerabilities in 911 systems due to DDoS attacks, the next generation of 911 systems that now accommodate text, images and video still have the same or more severe issues.
In the study the researchers evaluated the impact of DDoS attacks on the current (E911) and next generation 911 (NG911) infrastructures in North Carolina. The research was conducted by Dr. Mordechai Guri, head of research and development, BGU Cyber Security Research Center (CSRC), and chief scientist at Morphisec Technologies, and Dr. Yisroel Mirsky, senior cyber security researcher and project manager at the BGU CSRC.
Implementation of NG911
In recent years, organizations have experienced countless DDoS attacks, during which internet-connected devices are flooded with traffic – often generated by many computers or phones called “bots” that have been infected with malware and act in concert with one another. When an attacker ties up all available connections with malicious traffic, no legitimate information – like a 911 call in a real emergency – can make it through.
“In this study, we found that only 6,000 bots are sufficient to significantly compromise the availability of a state’s 911 services and only 200,000 bots can jeopardize the entire United States,” Dr. Guri explains.
When telephone customers dial 911 on their landlines or mobile phones, the telephone companies’ systems make the connection to the appropriate call center. Due to the limitations of original E911, the U.S. has been slowly transitioning the older circuit-switched 911 infrastructure to a packet-switched VoIP infrastructure, NG911.
It improves reliability by enabling load balancing between emergency call centers, or public safety answering points (PSAPs). It also expands 911 service capabilities, enabling the public to call over VoIP and to transmit text, images, video, and data to PSAPs.
A number of states have implemented this and nearly all other states have begun planning or have some localized implementation of NG911.
Prevention of possible future DDoS attacks targeting 911
Many internet companies have taken significant steps to safeguard against this sort of online attack. For example, Google Shield is a service that protects news sites from attacks by using Google’s massive network of internet servers to filter out attacking traffic, while allowing through only legitimate connections. However, phone companies have not done the same.
To demonstrate how DDoS attacks could affect 911 call systems, the researchers created a detailed simulation of North Carolina’s 911 infrastructure, and a general simulation of the entire U.S. emergency-call system.
Using only 6,000 infected phones, it is possible to effectively block 911 calls from 20% of the state’s landline callers and half of its mobile customers. “In our simulation, even people who called back four or five times would not be able to reach a 911 operator to get help,” Dr. Guri says.
The countermeasures that exist today are difficult and not without flaws. Many involve blocking certain devices from calling 911, which carries the risk of preventing a legitimate call for help. But they indicate areas where further inquiry – and collaboration between researchers, telecommunications companies, regulators, and emergency personnel – could yield useful breakthroughs.
For example, cellphones might be required to run monitoring software to blacklist or block themselves from making fraudulent 911 calls. Alternatively, 911 systems could examine identifying information of incoming calls and prioritize those made from phones that are not trying to mask themselves.
“Many say that the new NG911 solves the DDoS problem because callers can be connected to PSAPs around the country, not just locally,” Dr. Mirsky explains. “Nationally, with complete resource sharing, the rate that callers give up trying — called the ‘despair rate’ — is still significant: 15% with 6,000 bots and 43% with 50,000 bots.
“But the system would still need to communicate locally to dispatch police, medical and fire services. As a result, the despair rate is more likely to be 56% with 6,000 bots – worse than using the original E911 infrastructure.”
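The qualitative dynamic behind the despair rate can be conveyed with a toy Monte Carlo model. This is emphatically not the researchers’ simulation; the trunk counts, retry limit, and contention model are all illustrative assumptions, meant only to show how more bots tying up lines raises the fraction of legitimate callers who give up:

```python
import random


def despair_rate(trunks, bots, callers, retries=5, seed=0):
    """Toy model of bots and legitimate callers competing for phone trunks.

    Each attempt, some number of trunks is tied up by redialing bots; a
    caller gets through only if a free trunk remains. Callers give up
    ("despair") after `retries` failed attempts. All parameters are
    illustrative, not values from the study.
    """
    rng = random.Random(seed)
    gave_up = 0
    for _ in range(callers):
        for _ in range(retries):
            busy = min(trunks, rng.randint(0, bots))  # trunks held by bots
            if busy < trunks:  # at least one free trunk: call connects
                break
        else:  # every retry found all trunks busy
            gave_up += 1
    return gave_up / callers
```

Running it with a bot population far exceeding the trunk count drives the give-up fraction toward one, mirroring the study’s finding that despair rates climb sharply with botnet size.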
According to Dr. Guri, “We believe that this research will assist the respective organizations, lawmakers and security professionals in understanding the scope of this issue and aid in the prevention of possible future attacks on the 911 emergency services. It is critical that 911 services always be available – to respond quickly to emergencies and give the public peace of mind.”
A sharp increase (57%) in high-risk vulnerabilities drove the threat index score up 8% from December 2019 to January 2020, according to the Imperva Cyber Threat Index.
Following the release of Oracle’s Critical Patch Update – which included 19 MySQL vulnerabilities – there was an unusual increase in the vulnerabilities risk component within the Index.
Specifically, there was a 57% increase in vulnerabilities that can be accessed remotely with no authentication required, have a public exploit available, or are trending in social media, meaning they pose an especially high level of risk to businesses.
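The Index’s three criteria translate naturally into a triage predicate. The sketch below uses the criteria as described, but the record structure and field names are assumptions for illustration:

```python
from dataclasses import dataclass


@dataclass
class Vulnerability:
    """Minimal triage record; boolean fields mirror the Index's criteria."""
    cve_id: str
    remotely_exploitable_no_auth: bool
    public_exploit_available: bool
    trending_on_social_media: bool


def is_high_risk(v):
    """Meeting any one of the three criteria flags the vulnerability."""
    return (v.remotely_exploitable_no_auth
            or v.public_exploit_available
            or v.trending_on_social_media)


def triage(vulns):
    """Partition a batch so high-risk items are patched first."""
    high = [v for v in vulns if is_high_risk(v)]
    low = [v for v in vulns if not is_high_risk(v)]
    return high, low
```

Treating the criteria as an "any of" disjunction matches the article’s framing: each condition on its own materially raises the chance of real-world exploitation.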
A spike in public cloud web attacks
Web attacks originating from the public cloud saw a 16% spike from November to December 2019. AWS was the top source of attacks, responsible for 94% of all web attacks coming from public clouds. This suggests that public cloud companies should be auditing malicious behavior on their platforms.
Bots used the Coronavirus hype for spamming
In the same month that the coronavirus outbreak first came to light, two new spam campaigns that relied on the hype around coronavirus were observed.
These messages lure people to enter a site that tracks the spread of the virus and also offers the sale of shady pharmaceuticals.
Latest Citrix bug gained more press than hacker interest
Despite widespread concern over the recent Citrix Application Delivery Controller bug, it was only ranked as the 176th most frequent attack vector seen this month.
For comparison, high-profile attack vectors such as this typically rank among the top 20. The Citrix bug accounted for 200,000 attacks detected, while the top attack vector in January accounted for over two billion attacks.
The adult industry was the victim of higher-risk attacks
More than half (51%) of the attacks against the adult industry were remote code execution (RCE). These attacks pose an elevated risk because a remote attacker can run malicious code to hijack the server and access its data.
Most attacks target sources within the same country
Most of the top 10 countries in which attacks originated were targeting sites within the same country. The exceptions were attackers from Germany and China who targeted U.S.-based websites.
This can be attributed in part to the fact that many websites under attack from different regions are located in U.S. data centers. This finding shows that even cyber attacks conducted by foreign adversaries often appear to originate locally.
The Council to Secure the Digital Economy (CSDE), a partnership between global technology, communications, and internet companies supported by USTelecom—The Broadband Association and the Consumer Technology Association (CTA), released the International Botnet and IoT Security Guide 2020, a comprehensive set of strategies to protect the global digital ecosystem from the growing threat posed by botnets, malware and distributed attacks.
Emotet had a 730% increase in activity in September after being in a near-dormant state, Nuspire discovered. Emotet, a modular banking Trojan, has added features to steal the contents of victims’ inboxes and steal credentials for sending outbound emails. Those credentials are sent to other bots in its botnet, which are then used to transmit Emotet attack messages. When Emotet returned in September, it appeared alongside TrickBot and Ryuk ransomware.