There’s an old adage in information security: “Every company gets penetration tested, whether or not they pay someone for the pleasure.” Many organizations that do hire professionals to test their network security posture unfortunately tend to focus on fixing vulnerabilities hackers could use to break in. But judging from the proliferation of help-wanted ads for offensive pentesters in the cybercrime underground, today’s attackers have exactly zero trouble gaining that initial intrusion: The real challenge seems to be hiring enough people to help everyone profit from the access already gained.
One of the most common ways such access is monetized these days is through ransomware, which holds a victim’s data and/or computers hostage unless and until an extortion payment is made. But in most cases, there is a yawning gap of days, weeks or months between the initial intrusion and the deployment of ransomware within a victim organization.
That’s because it usually takes time and a good deal of effort for intruders to get from a single infected PC to seizing control over enough resources within the victim organization that it makes sense to launch the ransomware.
This includes pivoting from or converting a single compromised Microsoft Windows user account to an administrator account with greater privileges on the target network; the ability to sidestep and/or disable any security software; and gaining the access needed to disrupt or corrupt any data backup systems the victim firm may have.
Each day, millions of malware-laced emails are blasted out containing booby-trapped attachments. If the attachment is opened, the malicious document proceeds to quietly download additional malware and hacking tools to the victim machine (here’s one video example of a malicious Microsoft Office attachment from the malware sandbox service any.run). From there, the infected system will report home to a malware control server operated by the spammers who sent the missive.
At that point, control over the victim machine may be transferred or sold multiple times between different cybercriminals who specialize in exploiting such access. These folks are very often contractors who work with established ransomware groups, and who are paid a set percentage of any eventual ransom payments made by a victim company.
THE DOCTOR IS IN
Enter subcontractors like “Dr. Samuil,” a cybercriminal who has maintained a presence on more than a dozen top Russian-language cybercrime forums over the past 15 years. In a series of recent advertisements, Dr. Samuil says he’s eagerly hiring experienced people who are familiar with tools used by legitimate pentesters for exploiting access once inside of a target company — specifically, post-exploit frameworks like the closely-guarded Cobalt Strike.
“You will be regularly provided select accesses which were audited (these are about 10-15 accesses out of 100) and are worth a try,” Dr. Samuil wrote in one such help-wanted ad. “This helps everyone involved to save time. We also have private software that bypasses protection and provides for smooth performance.”
From other classified ads he posted in August and September 2020, it seems clear Dr. Samuil’s team has some kind of privileged access to financial data on targeted companies that gives them a better idea of how much cash the victim firm may have on hand to pay a ransom demand. To wit:
“There is huge insider information on the companies which we target, including information if there are tape drives and clouds (for example, Datto that is built to last, etc.), which significantly affects the scale of the conversion rate.
– experience with cloud storage, ESXi.
– experience with Active Directory.
– privilege escalation on accounts with limited rights.
* Serious level of insider information on the companies with which we work. There are proofs of large payments, but only for verified LEADs.
* There is also a private MEGA INSIDE, which I will not write about here in public, and it is only for experienced LEADs with their teams.
* We do not look at REVENUE / NET INCOME / Accountant reports, this is our MEGA INSIDE, in which we know exactly how much to confidently squeeze to the maximum in total.
According to cybersecurity firm Intel 471, Dr. Samuil’s ad is hardly unique, and several other seasoned cybercriminals who are customers of popular ransomware-as-a-service offerings are hiring subcontractors to farm out some of the grunt work.
“Within the cybercriminal underground, compromised accesses to organizations are readily bought, sold and traded,” Intel 471 CEO Mark Arena said. “A number of security professionals have previously sought to downplay the business impact cybercriminals can have to their organizations.”
“But because of the rapidly growing market for compromised accesses and the fact that these could be sold to anyone, organizations need to focus more on efforts to understand, detect and quickly respond to network compromises,” Arena continued. “That covers faster patching of the vulnerabilities that matter, ongoing detection and monitoring for criminal malware, and understanding the malware you are seeing in your environment, how it got there, and what it has or could have dropped subsequently.”
WHO IS DR. SAMUIL?
In conducting research for this story, KrebsOnSecurity learned that Dr. Samuil is the handle used by the proprietor of multi-vpn[.]biz, a long-running virtual private networking (VPN) service marketed to cybercriminals who are looking to anonymize and encrypt their online traffic by bouncing it through multiple servers around the globe.
MultiVPN is the product of a company called Ruskod Networks Solutions (a.k.a. ruskod[.]net), which variously claims to be based in the offshore company havens of Belize and the Seychelles, but which appears to be run by a guy living in Russia.
The domain registration records for ruskod[.]net were long ago hidden by WHOIS privacy services. But according to Domaintools.com [an advertiser on this site], the original WHOIS records for the site from the mid-2000s indicate the domain was registered by a Sergey Rakityansky.
This is not an uncommon name in Russia or in many surrounding Eastern European nations. But a former business partner of MultiVPN who had a rather public falling out with Dr. Samuil in the cybercrime underground told KrebsOnSecurity that Rakityansky is indeed Dr. Samuil’s real surname, and that he is a 32- or 33-year-old currently living in Bryansk, a city located approximately 200 miles southwest of Moscow.
Neither Dr. Samuil nor MultiVPN have responded to requests for comment.
Cyber threat intelligence (CTI) sharing is a critical tool for security analysts. It takes the learnings from a single organization and shares them across the industry to strengthen the security practices of all.
By sharing CTI, security teams can alert each other to new findings across the threat landscape and flag active cybercrime campaigns and indicators of compromise (IOCs) that the cybersecurity community should be immediately aware of. As this intel spreads, organizations can work together to build upon each other’s defenses to combat the latest threat. This creates a herd-like immunity for networks as defensive capabilities are collectively raised.
Blue teams need to act more like red teams
A recent survey by Exabeam showed that 62 percent of blue teams have difficulty stopping red teams during adversary simulation exercises. A blue team is charged with defending one network. Its members know the ins and outs of that network better than any red team or cybercriminal, so they are well-equipped to spot abnormalities and IOCs and act fast to mitigate threats.
But blue teams have a bigger disadvantage: they mostly work in silos consisting only of members of their immediate team. They typically don’t share their threat intelligence with other security teams, vendors, or industry groups. This means they see cyber threats through a single lens, lacking the broader view of the real threat landscape external to their organization.
This disadvantage is where red teams and cybercriminals thrive. Not only do they choose the rules of the game – the when, where, and how the attack will be executed – they share their successes and failures with each other to constantly adapt and evolve tactics. They thrive in a communications-rich environment, sharing frameworks, toolkits, guidelines, exploits, and even offering each other customer support-like help.
For blue teams to move from defense to prevention, they need to take defense to the attacker’s front door. This proactive approach can only work by having timely, accurate, and contextual threat intelligence. And that requires a community, not a company. But many companies are hesitant to join the CTI community. The SANS 2020 Cyber Threat Intelligence Survey shows that more than 40% of respondents both produce and consume intelligence, leaving much room for improvement over the next few years.
Common challenges for beginning a cyber threat intelligence sharing program
One of the biggest challenges to intelligence sharing is that businesses don’t understand how sharing some of their network data can actually strengthen their own security over time. Much like the early days of open-source software, there’s a fear that if you have anything open to exposure it makes you inherently more vulnerable. But as open source eventually proved, more people collaborating in the open can lead to many positive outcomes, including better security.
Another major challenge is that blue teams don’t have the lawless luxury of sharing threat intelligence with reckless abandon: they have legal teams, and legal teams aren’t thrilled with the notion of admitting to IOCs on their network. There is also a lot of business-sensitive information that shouldn’t be shared, and the legal team is right to protect it.
The opportunity is in finding an appropriate line to walk, where you can share intelligence that contributes to bolstering cyber defense in the larger community without doing harm to your organization.
If you’re new to CTI sharing and want to get involved, here are a few pieces of advice.
Clear it with your manager
If you or your organization are new to CTI sharing, the first thing to do is to get your manager’s blessing before you move forward. Being overconfident in your organization’s appetite to share its network data (especially if leadership doesn’t understand the benefits) can be a costly, yet avoidable, mistake.
Start sharing small
Don’t start by asking permission to share details on a data exfiltration event that currently has your company in crisis mode. Instead, ask if it’s OK to share a range of IPs that have been brute-forcing logins on your site. Or perhaps you’ve seen a recent surge of phishing emails originating from a new domain and want to share that. Make continuous, small asks and report back any useful findings.
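That first small ask can be as simple as a script run over your own logs. Below is a minimal sketch with a made-up log format (real formats vary by SSH daemon, web server, or identity provider) that pulls out the repeat-offender IPs worth sharing:

```python
import re
from collections import Counter

# Hypothetical log lines for illustration; adapt the regex to your own format.
LOG_LINES = [
    "2020-09-01T12:00:01 FAILED LOGIN user=admin src=203.0.113.7",
    "2020-09-01T12:00:02 FAILED LOGIN user=admin src=203.0.113.7",
    "2020-09-01T12:00:03 FAILED LOGIN user=root src=203.0.113.7",
    "2020-09-01T12:00:04 FAILED LOGIN user=bob src=198.51.100.22",
]

def brute_force_ips(lines, threshold=3):
    """Return source IPs with at least `threshold` failed logins."""
    pattern = re.compile(r"FAILED LOGIN user=\S+ src=(\S+)")
    counts = Counter(m.group(1) for line in lines if (m := pattern.search(line)))
    return sorted(ip for ip, n in counts.items() if n >= threshold)

print(brute_force_ips(LOG_LINES))  # ['203.0.113.7']
```

A list like this, stripped of anything business-sensitive, is exactly the low-risk starting point a legal team is likely to approve.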
Share your experience when you can’t share intelligence
When you join a CTI group, you’re going to want to show that you’re an active, engaged member. But sometimes you just don’t have any useful intelligence to share. You can still add value to the group by lending your knowledge and experience. Your perspective might change someone’s mind on their process and make them a better practitioner, thus adding to the greater good.
Demonstrate the value of sharing CTI
Tie your participation in CTI groups to any metrics that show your organization’s security posture has improved during that time. For example, highlight any time that participation in a CTI group directly led to intelligence that helped decrease alerted events or helped your team get ahead of a new attack.
There’s a CTI group for everyone
From disinformation and the dark web to medical devices and law enforcement, there’s a CTI segment for everything you ever wanted to be involved in. Some are invite-only, so the more active you are in public groups, the more likely you’ll be asked to join the groups you’ve shown interest in or provided useful intelligence to. These hyper-niche groups can provide big value to your organization, as you can get expert consulting from top minds in the field.
The more data you have, the more points you can correlate, and the faster you can do it. Joining a CTI sharing group gives you access to data you’d never otherwise know about, informing better decisions about your defensive actions. More importantly, CTI sharing makes all organizations more secure and unites us under a common cause.
The COVID-19 pandemic has brought a wave of email phishing attacks that try to trick work-at-home employees into giving away credentials needed to remotely access their employers’ networks. But one increasingly brazen group of crooks is taking your standard phishing attack to the next level, marketing a voice phishing service that uses a combination of one-on-one phone calls and custom phishing sites to steal VPN credentials from employees.
According to interviews with several sources, this hybrid phishing gang has a remarkably high success rate, and operates primarily through paid requests or “bounties,” where customers seeking access to specific companies or accounts can hire them to target employees working remotely at home.
And over the past six months, the criminals responsible have created dozens if not hundreds of phishing pages targeting some of the world’s biggest corporations. For now at least, they appear to be focusing primarily on companies in the financial, telecommunications and social media industries.
“For a number of reasons, this kind of attack is really effective,” said Allison Nixon, chief research officer at New York-based cyber investigations firm Unit 221B. “Because of the Coronavirus, we have all these major corporations that previously had entire warehouses full of people who are now working remotely. As a result the attack surface has just exploded.”
TARGET: NEW HIRES
A typical engagement begins with a series of phone calls to employees working remotely at a targeted organization. The phishers will explain that they’re calling from the employer’s IT department to help troubleshoot issues with the company’s virtual private networking (VPN) technology.
The goal is to convince the target either to divulge their credentials over the phone or to input them manually at a website set up by the attackers that mimics the organization’s corporate email or VPN portal.
Zack Allen is director of threat intelligence for ZeroFOX, a Baltimore-based company that helps customers detect and respond to risks found on social media and other digital channels. Allen has been working with Nixon and several dozen other researchers from various security firms to monitor the activities of this prolific phishing gang in a bid to disrupt their operations.
Allen said the attackers tend to focus on phishing new hires at targeted companies, and will often pose as new employees themselves working in the company’s IT division. To make that claim more believable, the phishers will create LinkedIn profiles and seek to connect those profiles with other employees from that same organization to support the illusion that the phony profile actually belongs to someone inside the targeted firm.
“They’ll say ‘Hey, I’m new to the company, but you can check me out on LinkedIn’ or Microsoft Teams or Slack, or whatever platform the company uses for internal communications,” Allen said. “There tends to be a lot of pretext in these conversations around the communications and work-from-home applications that companies are using. But eventually, they tell the employee they have to fix their VPN and can they please log into this website.”
The domains used for these pages often invoke the company’s name, followed or preceded by hyphenated terms such as “vpn,” “ticket,” “employee,” or “portal.” The phishing sites also may include working links to the organization’s other internal online resources to make the scheme seem more believable if a target starts hovering over links on the page.
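That naming pattern is mechanical enough to monitor for. The sketch below is purely illustrative (the brand name and term list are invented), enumerating the hyphenated brand-plus-term permutations a defender might watch for in new-registration or certificate-transparency feeds:

```python
# Hypothetical brand and term list; real monitoring would pull candidates
# from certificate-transparency logs or new-domain-registration feeds.
BRAND = "examplecorp"
TERMS = ["vpn", "ticket", "employee", "portal"]

def candidate_domains(brand, terms, tld=".com"):
    """Enumerate hyphenated brand+term permutations worth watching."""
    out = []
    for t in terms:
        out.append(f"{brand}-{t}{tld}")   # brand followed by the term
        out.append(f"{t}-{brand}{tld}")   # term preceding the brand
    return out

for d in candidate_domains(BRAND, TERMS):
    print(d)
```

Matching newly observed domains against a list like this gives defenders a chance to flag a lookalike before the vishing call ever happens.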
Allen said a typical voice phishing or “vishing” attack by this group involves at least two perpetrators: One who is social engineering the target over the phone, and another co-conspirator who takes any credentials entered at the phishing page and quickly uses them to log in to the target company’s VPN platform in real-time.
Time is of the essence in these attacks because many companies that rely on VPNs for remote employee access also require employees to supply some type of multi-factor authentication in addition to a username and password — such as a one-time numeric code generated by a mobile app or text message. And in many cases, those codes are only good for a short duration — often measured in seconds or minutes.
But these vishers can easily sidestep that layer of protection, because their phishing pages simply request the one-time code as well.
Allen said it matters little to the attackers if the first few social engineering attempts fail. Most targeted employees are working from home or can be reached on a mobile device. If at first the attackers don’t succeed, they simply try again with a different employee.
And with each passing attempt, the phishers can glean important details from employees about the target’s operations, such as company-specific lingo used to describe its various online assets, or its corporate hierarchy.
Thus, each unsuccessful attempt actually teaches the fraudsters how to refine their social engineering approach with the next mark within the targeted organization, Nixon said.
“These guys are calling companies over and over, trying to learn how the corporation works from the inside,” she said.
NOW YOU SEE IT, NOW YOU DON’T
All of the security researchers interviewed for this story said the phishing gang is pseudonymously registering their domains at just a handful of domain registrars that accept bitcoin, and that the crooks typically create just one domain per registrar account.
“They’ll do this because that way if one domain gets burned or taken down, they won’t lose the rest of their domains,” Allen said.
More importantly, the attackers are careful to do nothing with the phishing domain until they are ready to initiate a vishing call to a potential victim. And when the attack or call is complete, they disable the website tied to the domain.
This is key because many domain registrars will only respond to external requests to take down a phishing website if the site is live at the time of the abuse complaint. This requirement can stymie efforts by companies like ZeroFOX that focus on identifying newly-registered phishing domains before they can be used for fraud.
“They’ll only boot up the website and have it respond at the time of the attack,” Allen said. “And it’s super frustrating because if you file an abuse ticket with the registrar and say, ‘Please take this domain away because we’re 100 percent confident this site is going to be used for badness,’ they won’t do that if they don’t see an active attack going on. They’ll respond that according to their policies, the domain has to be a live phishing site for them to take it down. And these bad actors know that, and they’re exploiting that policy very effectively.”
SCHOOL OF HACKS
Both Nixon and Allen said the object of these phishing attacks seems to be to gain access to as many internal company tools as possible, and to use those tools to seize control over digital assets that can quickly be turned into cash. Primarily, that includes any social media and email accounts, as well as associated financial instruments such as bank accounts and any cryptocurrencies.
Nixon said she and others in her research group believe the people behind these sophisticated vishing campaigns hail from a community of young men who have spent years learning how to social engineer employees at mobile phone companies and social media firms into giving up access to internal company tools.
Traditionally, the goal of these attacks has been gaining control over highly-prized social media accounts, which can sometimes fetch thousands of dollars when resold in the cybercrime underground. But this activity gradually has evolved toward more direct and aggressive monetization of such access.
On July 15, a number of high-profile Twitter accounts were used to tweet out a bitcoin scam that earned more than $100,000 in a few hours. According to Twitter, that attack succeeded because the perpetrators were able to social engineer several Twitter employees over the phone into giving away access to internal Twitter tools.
Nixon said it’s not clear whether any of the people involved in the Twitter compromise are associated with this vishing gang, but she noted that the group showed no signs of slacking off after federal authorities charged several people with taking part in the Twitter hack.
“A lot of people just shut their brains off when they hear the latest big hack wasn’t done by hackers in North Korea or Russia but instead some teenagers in the United States,” Nixon said. “When people hear it’s just teenagers involved, they tend to discount it. But the kinds of people responsible for these voice phishing attacks have now been doing this for several years. And unfortunately, they’ve gotten pretty advanced, and their operational security is much better now.”
PROPER ADULT MONEY-LAUNDERING
While it may seem amateurish or myopic for attackers who gain access to a Fortune 100 company’s internal systems to focus mainly on stealing bitcoin and social media accounts, that access — once established — can be re-used and re-sold to others in a variety of ways.
“These guys do intrusion work for hire, and will accept money for any purpose,” Nixon said. “This stuff can very quickly branch out to other purposes for hacking.”
For example, Allen said he suspects that once inside of a target company’s VPN, the attackers may try to add a new mobile device or phone number to the phished employee’s account as a way to generate additional one-time codes for future access by the phishers themselves or anyone else willing to pay for that access.
Nixon and Allen said the activities of this vishing gang have drawn the attention of U.S. federal authorities, who are growing concerned over indications that those responsible are starting to expand their operations to include criminal organizations overseas.
“What we see now is this group is really good on the intrusion part, and really weak on the cashout part,” Nixon said. “But they are learning how to maximize the gains from their activities. That’s going to require interactions with foreign gangs and learning how to do proper adult money laundering, and we’re already seeing signs that they’re growing up very quickly now.”
WHAT CAN COMPANIES DO?
Many companies now make security awareness and training an integral part of their operations. Some firms even periodically send test phishing messages to their employees to gauge their awareness levels, and then require employees who miss the mark to undergo additional training.
Such precautions, while important and potentially helpful, may do little to combat these phone-based phishing attacks that tend to target new employees. Both Allen and Nixon — as well as others interviewed for this story who asked not to be named — said the weakest link in most corporate VPN security setups these days is the method relied upon for multi-factor authentication.
One multi-factor option — physical security keys — appears to be immune to these sophisticated scams. The most commonly used security keys are inexpensive USB-based devices. A security key implements a form of multi-factor authentication known as Universal 2nd Factor (U2F), which allows the user to complete the login process simply by inserting the USB device and pressing a button on the device. The key works without the need for any special software drivers.
The allure of U2F devices for multi-factor authentication is that even if an employee who has enrolled a security key for authentication tries to log in at an impostor site, the company’s systems simply refuse to request the security key if the user isn’t on their employer’s legitimate website, and the login attempt fails. Thus, the second factor cannot be phished, either over the phone or Internet.
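The origin binding can be illustrated with a toy model. The following is NOT real U2F/WebAuthn cryptography (real keys use per-site public-key pairs and signed challenges); it is only a minimal sketch of why a response produced at a look-alike domain never verifies at the legitimate one:

```python
import hashlib
import hmac

# Toy illustration of origin binding: the device's response covers the
# origin the browser reports, so a response generated at a phishing site
# can never verify against the legitimate site's expected origin.
KEY = b"device-secret"  # stand-in for the key material on the device

def device_sign(challenge: bytes, origin: str) -> bytes:
    """What the 'device' returns for a given challenge at a given origin."""
    return hmac.new(KEY, challenge + origin.encode(), hashlib.sha256).digest()

def server_verify(challenge: bytes, response: bytes) -> bool:
    """The server only accepts responses bound to its own origin."""
    expected = device_sign(challenge, "https://vpn.example.com")
    return hmac.compare_digest(expected, response)

chal = b"random-challenge"
good = device_sign(chal, "https://vpn.example.com")       # real site
bad = device_sign(chal, "https://example-vpn-portal.com")  # phishing site
print(server_verify(chal, good), server_verify(chal, bad))  # True False
```

The phished one-time code in the earlier attacks has no such binding, which is exactly why the vishers can relay it in real time.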
In July 2018, Google disclosed that it had not had any of its 85,000+ employees successfully phished on their work-related accounts since early 2017, when it began requiring all employees to use physical security keys in place of one-time codes.
Probably the most popular maker of security keys is Yubico, which sells a basic U2F Yubikey for $20. It offers regular USB versions as well as those made for devices that require USB-C connections, such as Apple’s newer macOS systems. Yubico also sells more expensive keys designed to work with mobile devices. [Full disclosure: Yubico was recently an advertiser on this site].
Nixon said many companies will likely balk at the price tag associated with equipping each employee with a physical security key. But she said as long as most employees continue to work remotely, this is probably a wise investment given the scale and aggressiveness of these voice phishing campaigns.
“The truth is some companies are in a lot of pain right now, and they’re having to put out fires while attackers are setting new fires,” she said. “Fixing this problem is not going to be simple, easy or cheap. And there are risks involved if you somehow screw up a bunch of employees accessing the VPN. But apparently these threat actors really hate Yubikey right now.”
DomainTools announced the addition of four new members to its leadership team to guide the company through its next phase of growth and expansion.
The new appointments include:
- James Reynolds, Chief Technology Officer
- Jeff Day, Chief Commercial Officer
- Jackie Abrams, Vice President of Product
- Jill Boon, Vice President of People
These executive appointments will enable DomainTools to more rapidly invest in new growth opportunities, accelerate delivery of its product roadmap and drive expansion of its go-to-market efforts.
“Seasoned leaders with proven track records, like James, Jeff, Jackie, and Jill, position the company to take advantage of the recent traction we have seen with our pioneering domain risk scoring technology as well as our market-leading SIEM and Orchestration integrations, while delivering the highest quality experience to our customers,” said Tim Chen, CEO of DomainTools.
“I am thrilled to welcome each of our new team members who bring with them unique experiences that play a significant role in our global growth strategy as the market-leading cyber threat intelligence organization.”
In his role as CTO, James Reynolds leads all aspects of technology strategy and development, encompassing R&D, engineering, and platform. With almost three decades of software development experience, much of it in the security industry, Reynolds has served in multiple senior positions, including CTO and other executive roles at companies such as Synopsys, SECUDE, FICO, and Deutsche Bank.
Experienced across cloud, network, application, and cybersecurity, Reynolds also worked at DARPA, leading the research and development of a cloud-based CyberWar Platform.
Jeff Day joins DomainTools with two decades of technology and go-to-market experience focused on cloud and high-growth technology companies. Jeff has held senior marketing leadership roles at AWS, Apptio, Highspot, and HP, including building AWS’s technology partner marketing organization from scratch to support thousands of partners worldwide.
Additionally, Jeff has held a variety of engineering, product, sales, and channel roles across Intel, HP, and Sun Microsystems. Jeff’s experience building world-class teams and driving operational excellence will help lead DomainTools through the next stage of growth and scale.
Jackie Abrams was promoted to VP of Product in March of this year and brings a wealth of security industry knowledge to the organization. Jackie’s focus is on aligning closely with customers so DomainTools solutions can continue to address important security and threat intelligence challenges.
She is responsible for the long-term product vision and is driving innovative solutions to help make the Internet a safer place. With deep OSINT research and digital investigations experience, Jackie previously built products and services in support of mobile threat assessment for text messages and message senders.
Jackie is an active supporter of the Messaging Malware Mobile Anti-Abuse Working Group (M3AAWG), where she collaborates with threat intelligence and abuse mitigation practitioners in the digital services and ISP space.
Jill Boon joins the DomainTools executive team as VP of People and is responsible for designing and delivering global human resources and recruiting strategies. She comes from a background both in extreme high growth tech startups, like Porch and Mighty AI, and in enterprise global organizations such as Amazon.
At Porch, she helped grow the company from 35 to 500+ in under two years, and, at Mighty AI, she served as an integral member of the company’s leadership team through a successful acquisition by Uber. Great people and teams have been the backbone of DomainTools for 20 years, and Jill will build on that legacy as the company continues to scale.
The past few weeks have seen a large number of new domain registrations beginning with the word “reopen” and ending with U.S. city or state names. The largest number of them were created just hours after President Trump sent a series of all-caps tweets urging citizens to “liberate” themselves from new gun control measures and state leaders who’ve enacted strict social distancing restrictions in the face of the COVID-19 pandemic. Here’s a closer look at who and what appear to be behind these domains.
KrebsOnSecurity began this research after reading a fascinating Reddit thread over the weekend on several “reopen” sites that seemed to be engaged in astroturfing, which involves masking the sponsors of a message or organization to make it appear as though it originates from and is supported by grassroots participants.
The Reddit discussion focused on a handful of new domains — including reopenmn.com, reopenpa.com, and reopenva.com — that appeared to be tied to various gun rights groups in those states. Their registrations roughly coincided with demonstrations in Minnesota, California and Tennessee, where people showed up over the past few days to protest quarantine restrictions.
Suspecting that these were but a subset of a larger corpus of similar domains registered for every state in the union, KrebsOnSecurity ran a domain search report at DomainTools [an advertiser on this site], requesting any and all domains registered in the past month that begin with “reopen” and end in “.com.”
That lookup returned approximately 150 domains; in addition to those named after the individual 50 states, some of the domains refer to large American cities or counties, and others to more general concepts, such as “reopeningchurch.com” or “reopenamericanbusiness.com.”
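That kind of lookup reduces to a simple prefix/suffix filter once you have a feed of newly registered domains. A minimal sketch, with an invented feed standing in for a commercial new-registration data source:

```python
# Hypothetical newly-registered-domain feed; a real feed would come from a
# commercial provider (e.g., a WHOIS/zone-data service) rather than a list.
NEW_DOMAINS = [
    "reopenmn.com", "reopenpa.com", "reopenva.com",
    "reopeningchurch.com", "example.com", "reopen.org",
]

def matching(domains, prefix="reopen", suffix=".com"):
    """Keep domains that begin with `prefix` and end with `suffix`."""
    return [d for d in domains if d.startswith(prefix) and d.endswith(suffix)]

print(matching(NEW_DOMAINS))
# ['reopenmn.com', 'reopenpa.com', 'reopenva.com', 'reopeningchurch.com']
```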
Many of the domains are still dormant, leading to parked pages and registration records obscured behind privacy protection services. But a review of other details about these domains suggests a majority of them are tied to various gun rights groups, state Republican Party organizations, and conservative think tanks, religious and advocacy groups.
For example, reopenmn.com forwards to minnesotagunrights.org, but the site’s WHOIS registration records (obscured since the Reddit thread went viral) point to an individual living in Florida. That same Florida resident registered reopenpa.com, a site that forwards to the Pennsylvania Firearms Association, and urges the state’s residents to contact their governor about easing the COVID-19 restrictions.
Reopenpa.com is tied to a Facebook page called Pennsylvanians Against Excessive Quarantine, which sought to organize an “Operation Gridlock” protest at noon today in Pennsylvania among its 68,000 members.
Both the Minnesota and Pennsylvania gun advocacy sites include the same Google Analytics tracker in their source code: UA-60996284. A cursory Internet search on that code shows it also is present on reopentexasnow.com, reopenwi.com and reopeniowa.com.
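This kind of shared-tracker correlation is easy to automate. Below is a minimal sketch (the URL list and function names are hypothetical, and the regex covers only classic "UA-" Analytics property IDs) that fetches a set of pages and groups them by any Analytics ID they have in common:

```python
import re
from collections import defaultdict
from urllib.request import urlopen

# Classic Google Analytics property IDs look like UA-XXXXXXXX or UA-XXXXXXXX-N
GA_ID = re.compile(r"UA-\d{4,10}(?:-\d+)?")

def extract_ga_ids(html: str) -> set:
    """Return the set of Google Analytics IDs embedded in a page's HTML."""
    return set(GA_ID.findall(html))

def group_sites_by_tracker(urls):
    """Fetch each site and group the URLs by shared Analytics ID.
    Only IDs appearing on more than one site are returned, since those
    are the ones suggesting common ownership or administration."""
    groups = defaultdict(set)
    for url in urls:
        try:
            html = urlopen(url, timeout=10).read().decode("utf-8", "replace")
        except OSError:
            continue  # skip unreachable sites
        for ga_id in extract_ga_ids(html):
            groups[ga_id].add(url)
    return {gid: sites for gid, sites in groups.items() if len(sites) > 1}
```

Any sites sharing an ID in the returned mapping would warrant the same kind of manual follow-up described above.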
More importantly, the same code shows up on a number of other anti-gun control sites registered by the Dorr Brothers, real-life brothers who have created nonprofits (in name only) across dozens of states that are so extreme in their stance they make the National Rifle Association look like a liberal group by comparison.
This 2019 article at cleveland.com quotes several 2nd Amendment advocates saying the Dorr brothers simply seek “to stir the pot and make as much animosity as they can, and then raise money off that animosity.” The site dorrbrotherscams.com also is instructive here.
A number of other sites — such as reopennc.com — seem to exist merely to sell t-shirts, decals and yard signs with such slogans as “Know Your Rights,” “Live Free or Die,” and “Facts not Fear.” WHOIS records show the same Florida resident who registered this North Carolina site also registered one for New York — reopenny.com — just a few minutes later.
Some of the concept reopen domains — including reopenoureconomy.com (registered Apr. 15) and reopensociety.com (Apr. 16) — trace back to FreedomWorks, a conservative group that the Associated Press says has been holding weekly virtual town halls with members of Congress, “igniting an activist base of thousands of supporters across the nation to back up the effort.”
Reopenoc.com — which advocates for lifting social restrictions in Orange County, Calif. — links to a Facebook page for Orange County Republicans, and has been chronicling the street protests there. The messaging on Reopensc.com — urging visitors to digitally sign a reopen petition to the state governor — is identical to the message on the Facebook page of the Horry County, SC Conservative Republicans.
Reopenmississippi.com was registered on April 16 to In Pursuit of LLC, an Arlington, Va.-based conservative group with a number of former employees who currently work at the White House or in cabinet agencies. A 2016 story from USA Today says In Pursuit Of LLC is a for-profit communications agency launched by billionaire industrialist Charles Koch.
Many of the reopen sites that have redacted names and other information about their registrants nevertheless hold other clues, mainly based on precisely when they were registered. Each domain registration record includes a date and timestamp down to the second that the domain was registered. By grouping the timestamps for domains that have obfuscated registration details and comparing them to domains that do include ownership data, we can infer more information.
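The timestamp-grouping technique can be sketched in a few lines: sort the registration records chronologically and start a new cluster whenever the gap to the previous registration exceeds some window. (The domains and times below are illustrative, not the actual records.)

```python
from datetime import datetime, timedelta

def cluster_by_time(registrations, window=timedelta(hours=1)):
    """Group (domain, registered_at) pairs into bursts: a new cluster
    starts whenever the gap since the previous registration exceeds
    `window`. Domains registered in one tight burst plausibly share a
    registrant, even when WHOIS details are redacted."""
    ordered = sorted(registrations, key=lambda r: r[1])
    clusters, current = [], []
    for domain, ts in ordered:
        if current and ts - current[-1][1] > window:
            clusters.append(current)
            current = []
        current.append((domain, ts))
    if current:
        clusters.append(current)
    return clusters
```

Comparing the membership of each burst against the handful of domains that do expose a registrant name is what lets the redacted records be attributed.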
For example, more than 50 reopen domains were registered within an hour of each other on April 17 — between 3:25 p.m. and 4:43 p.m. ET. Most of these lack registration details, but a handful of them did (until the Reddit post went viral) include the registrant name Michael Murphy, the same name tied to the aforementioned Minnesota and Pennsylvania gun rights domains (reopenmn.com and reopenpa.com), which were registered within seconds of each other on April 8.
A Google spreadsheet documenting much of the domain information sourced in this story is available here.
No one responded to the email addresses and phone numbers tied to Mr. Murphy, who may or may not have been involved in this domain registration scheme. Those contact details suggest he runs a store in Florida that makes art out of reclaimed or discarded items, and that he operates a Web site design company in Florida.
Update, April 21, 6:40 a.m. ET: Mother Jones has published a compelling interview with Mr. Murphy, who says he registered thousands of dollars worth of “reopen” and “liberate” domains to keep them out of the hands of people trying to organize protests. KrebsOnSecurity has not been able to validate this report, but it’s a fascinating twist to this tale: How an ‘Old Hippie’ Got Accused of Astroturfing the Right-Wing Campaign to Reopen the Economy.
As much as President Trump likes to refer to stories critical of him and his administration as “fake news,” this type of astroturfing is not only dangerous to public health, but it’s reminiscent of the playbook used by Russia to sow discord, create phony protest events, and spread disinformation across America in the lead-up to the 2016 election.
This entire astroturfing campaign also brings to mind a “local news” network called Local Government Information Services (LGIS), an organization founded in 2018 which operates a huge network of hundreds of sites that purport to be local news sites in various states. However, most of the content is generated by automated computer algorithms that consume data from reports released by U.S. executive branch federal agencies.
The relatively scarce actual bylined content on these LGIS sites is authored by freelancers who are in most cases nowhere near the localities they cover. Other content not drawn from government reports often repurposes press releases from conservative Web sites, including gunrightswatch.com, taxfoundation.org, and The Heritage Foundation. For more on LGIS, check out the 2018 coverage from The Chicago Tribune and the Columbia Journalism Review.
Security experts are poring over thousands of new Coronavirus-themed domain names registered each day, but this often manual effort struggles to keep pace with the flood of domains invoking the virus to promote malware and phishing sites, as well as non-existent healthcare products and charities. As a result, domain name registrars are under increasing pressure to do more to combat scams and misinformation during the COVID-19 pandemic.
By most measures, the volume of new domain registrations that include the words “Coronavirus” or “Covid” has closely tracked the spread of the deadly virus. The Cyber Threat Coalition (CTC), a group of several thousand security experts volunteering their time to fight COVID-related criminal activity online, recently published data showing the rapid rise in new domains began in the last week of February, around the same time the Centers for Disease Control began publicly warning that a severe global pandemic was probably inevitable.
“Since March 20th, the number of risky domains registered per day has been decreasing, with a notable spike around March 30th,” wrote John Conwell, principal data scientist at DomainTools [an advertiser on this site]. “Interestingly, legitimate organizations creating domains in response to the COVID-19 crisis were several weeks behind the curve from threat actors trying to take advantage of this situation. This is a pattern DomainTools hasn’t seen before in other crises.”
Security vendor Sophos looked at telemetry from customer endpoints to illustrate the number of new COVID-related domains that actually received traffic of late. As the company noted, one challenge in identifying potentially malicious domains is that many of them can sit dormant for days or weeks before being used for anything.
“We can see a rapid and dramatic increase of visits to potentially malicious domains exploiting the Coronavirus pandemic week over week, beginning in late February,” wrote Sophos’ Rich Harang. “Even though still a minority of cyber threats use the pandemic as a lure, some of these new domains will eventually be used for malicious purposes.”
CTC spokesman Nick Espinosa said the first spike in visits was on February 25, when group members saw about 4,000 visits to the sites they were tracking.
“The following two weeks starting on March 9 saw rapid growth, and from March 23 onwards we’re seeing between 75,000 to 130,000 visits per weekday, and about 40,000 on the weekends,” Espinosa said. “Looking at the data collected, the pattern of visits are highest on Monday and Friday, and the lowest visit count is on the weekend. Our data shows that there were virtually no customer hits on COVID-related domains prior to February 23.”
Milwaukee-based Hold Security has been publishing daily and weekly lists of all COVID-19 related domain registrations (without any scoring assigned). Here’s a graph KrebsOnSecurity put together based on that data set, which also shows a massive spike in new domain registrations in the third week of March, trailing off considerably over the past couple of weeks.
Not everyone is convinced we’re measuring the right things, or that the current measurements are accurate. Neil Schwartzman, executive director of the anti-spam group CAUCE, said he believes DomainTools’ estimates on the percentage of new COVID/Coronavirus-themed domains that are malicious are too high, and that many are likely benign and registered by well-meaning people seeking to share news or their own thoughts about the outbreak.
“But there’s the rub,” he said. “Bad guys get to hide amidst the good really effectively, so each one needs to be reviewed on its own. And that’s a substantial amount of work.”
At the same time, Schwartzman said, focusing purely on domains may obscure the true size and scope of the overall threat. That’s because scammers very often will establish multiple subdomains for each domain, meaning that a single COVID-related new domain registration could eventually be tied to a number of different scammy or malicious sites.
Subdomains can not only make phishing domains appear more legitimate, but they also tend to lengthen the domain so that key parts of it get pushed off the URL bar in mobile browsers.
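To illustrate that trick, here is a rough sketch (the hostname is hypothetical) of how the registrant-controlled domain differs from what a truncated mobile URL bar shows a victim. It uses a naive last-two-labels heuristic; production code should consult the Public Suffix List (e.g., via the tldextract package) to handle multi-label suffixes like .co.uk correctly:

```python
def registrable_domain(hostname: str) -> str:
    """Naive approximation of the registrant-controlled domain: the
    last two labels of the hostname. Real code should use the Public
    Suffix List instead of this shortcut."""
    return ".".join(hostname.split(".")[-2:])

# A phishing hostname front-loaded with reassuring labels. A mobile
# URL bar may show only "secure.login.paypal.com...", but the domain
# the attacker actually registered and controls is the tail end:
host = "secure.login.paypal.com.covid19-relief.example.net"
print(registrable_domain(host))  # → example.net
```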
To that end, he said, it makes perhaps the most sense to focus on new domain registrations that have encryption certificates tied to them, since the issuance of an SSL certificate for a domain is usually a sign that it is about to be put to use. As noted in previous stories here, roughly 75 percent of all phishing sites now display the padlock (i.e., their URLs begin with “https://”), mainly because the major Web browsers display security alerts on sites that don’t.
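Certificate-transparency logs make that approach practical, since each new certificate is publicly logged at issuance. Below is a minimal sketch that filters already-fetched log entries for recently issued certificates matching a keyword. The dict fields (`name_value`, `not_before`) mirror the JSON output of the public crt.sh lookup service, but treat that format as an assumption to verify rather than a documented contract:

```python
from datetime import datetime, timedelta

def recent_keyword_certs(entries, keyword, days=7, now=None):
    """Given certificate-transparency entries (dicts with 'name_value'
    and 'not_before' fields, as in crt.sh-style JSON output), return
    the names of certs issued within `days` that contain `keyword`."""
    now = now or datetime.utcnow()
    hits = []
    for e in entries:
        issued = datetime.strptime(e["not_before"], "%Y-%m-%dT%H:%M:%S")
        recent = now - issued <= timedelta(days=days)
        if recent and keyword in e["name_value"].lower():
            hits.append(e["name_value"])
    return hits
```

Feeding such a filter a daily pull of new certificates gives a much earlier warning than watching registrations alone, for exactly the reason Schwartzman describes.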
Schwartzman said more domain registrars should follow the example of Los Angeles-based Namecheap Inc., which last month pledged to stop accepting the automated registration of website names that include words or phrases tied to the COVID-19 pandemic. Since then, a handful of other registrars have said they plan to manually review all such registrations going forward.
The Internet Corporation for Assigned Names and Numbers (ICANN), the organization that oversees the registrar industry, recently sent a letter urging registrars to be more proactive, but stopped short of mandating any specific actions.
Schwartzman called ICANN’s response “weak tea.”
“It’s absolutely ludicrous that ICANN hasn’t stepped up, and they will bear significant responsibility for any deaths that may happen as a result of all this,” Schwartzman said. “This is a CYA response at best, and dictates to no one that they should do anything.”
Michael Daniel, president of the Cyber Threat Alliance — a cybersecurity industry group that’s also been working to fight COVID-19 related fraud — agreed, saying more pressure needs to be applied to the registrar community.
“It’s really hard to do anything about this unless the registrars step up and do something on their own,” Daniel said. “It’s either that or the government gets involved. That doesn’t mean some [registrars] aren’t doing what they can, but in general what the industry is doing is nowhere near as fast as the bad guys are generating these domains.”
The U.S. government may well soon get more involved. Earlier this week, Senators Cory Booker (D-N.J.), Maggie Hassan (D-N.H.) and Mazie K. Hirono (D-Hawaii) sent letters to eight domain name company leaders, demanding to know what they were doing to combat the threat of malicious domains, and urging them to do more.
“As cybercriminals and other malevolent actors seek to take advantage of the Coronavirus pandemic, it is critical that domain name registrars like yours (1) exercise diligence and ensure that only legitimate organizations can register Coronavirus-related domain names and domain names referencing online communications platforms; (2) act quickly to suspend, cancel, or terminate registrations for domains that are involved in unlawful or harmful activity; and (3) cooperate with law enforcement to help bring to justice cybercriminals profiting from the Coronavirus pandemic,” the senators wrote.
As Covid-19 spreads across the globe and countries do their best to slow down the infection rate, cybercriminals’ onslaught against worried users is getting more intense by the day. The latest scheme includes a malicious Android tracker app that supposedly allows users to keep an eye on the spread of the virus, but locks victims’ phones and demands money to unlock them.
Also, as many have already discovered, the spread of potentially very dangerous disinformation is reaching massive proportions.
Ransomware disguised as Fake Covid-19 tracker app
The DomainTools security research team is warning about a malicious domain (coronavirusapp[.]site) distributing a fake Coronavirus outbreak tracker app (Covid 19 Tracker), which purportedly provides users with tracking and statistical information about Covid-19, including heatmap visuals.
Once downloaded and run, the app locks the screen of the device and shows a ransom note claiming that the phone has been encrypted and that all the contents (contacts, pictures, videos, etc.) will be erased if the victim does not pay $100 in Bitcoin in the next 48 hours.
“Since Android Nougat has rolled out, there is protection in place against this type of attack. However, it only works if you have set a password. If you haven’t set a password on your phone to unlock the screen, you’re still vulnerable to the CovidLock ransomware,” the researchers noted.
But there is good news for those who fell for the trick: the researchers have reverse engineered the decryption key and will make it public (check the update at the end of this item).
This is not the first time that cybercriminals have taken advantage of the public’s demand for Covid-19 information in the helpful form of a global map: earlier this month Malwarebytes researchers warned about a site that delivers information-stealing malware while purportedly showing users updated coronavirus cases on a global map.
Many cybersecurity companies have detected a considerable increase of coronavirus-related domains registered globally, some of which are bound to be used for phishing, malware delivery, snake oil peddling and disinformation.
The latter has become quite a problem, as fake news spreads fast through social networks.
Users are urged to check the source of each piece of information they receive and to get their information directly from official sources like the World Health Organization, which is, by the way, actively fighting the “infodemic” of fake coronavirus-themed news online.
For those who really want to see the spread of Covid-19 in map form, Microsoft has created a web portal for tracking infections across the globe, based on official sources.
UPDATE (March 16, 2020, 8:35 a.m. PT):
DomainTools has published an in-depth analysis of the fake Covid 19 Tracker app (i.e., the CovidLock malware), as well as the decryption key victims can use to unlock their device and decrypt its contents: 4865083501.
“CovidLock’s author did not bother implementing any type of obfuscation of the key in the application’s source code. While it’s easy to write about how this is not sophisticated from a malware development standpoint, it’s important to note that CovidLock is still effective at its lock-screen attack,” they noted.
Security pros anticipate automation will reduce IT security headcount, but not replace human expertise
The majority of companies (77 percent) continue to use or plan to use automation in the next three years, according to a Ponemon Institute and DomainTools survey.
The biggest takeaway in this year’s study is that 51 percent of respondents now believe that automation will decrease headcount in the IT security function, an increase from 30 percent in last year’s study. Further, employees’ concerns about losing their jobs to automation have increased to 37 percent, up from last year’s 28 percent.
Meanwhile, the cybersecurity skills shortage continues to be a problem: 69 percent of organizations’ IT security functions are understaffed, a slight improvement over last year’s 75 percent.
Mixed opinions about automation
The adoption of automation tools for cybersecurity this past year has had mixed reviews. Overall, 74 percent agree that automation enables IT security staff to focus on more serious vulnerabilities and overall network security. Interestingly, automation highlights a renewed focus on the importance of the human role in security. Of respondents:
- Only 40 percent believe automation reduces human error
- Half believe automation will make jobs more complex
- Fifty-four percent think automation will never replace human intuition and hands-on experience
- Seventy-four percent (a rise from last year’s 68 percent) say that automation is not capable of certain tasks done by IT security staff.
The number one roadblock cited by companies that considered automation but do not plan to adopt it is a lack of in-house expertise (53 percent), followed by a heavy reliance on legacy IT environments.
“The perspective around the effects of automated technologies for IT security continues to shift year after year,” said Dr. Larry Ponemon, chairman and founder of the Ponemon Institute.
“As adoption of automation becomes more mainstream and improves the effectiveness and efficiency of IT security staff, they are anticipating that they will be able to accomplish more with fewer bodies.
“What is likely is for there to be a consolidation of existing roles, rather than an elimination. This means better opportunities for employees to up-level their current skills to create more value-added roles as the human side of security remains as important as ever.”
The benefits of automation
The report revealed that regulatory compliance standards such as GDPR and others are a growing global influence in an organization’s use of automation, with 72 percent citing that over last year’s 66 percent.
This is reflected in the need for familiarity with security regulations and standards in both entry-level and highly experienced job candidates in the US – topping the list of knowledge requirements for the first time at 81 percent.
Automation is not a quick, fix-all solution, though it is proving to deliver tangible benefits and results. A majority (60 percent) of employees state that automation is reducing stress in their lives and 43 percent say it increases productivity.
Enhancing the capabilities of security staff
Automation delivers productivity benefits such as reducing false positives and/or false negatives (43 percent), increasing the speed of analyzing threats (42 percent), and prioritizing threats and vulnerabilities (39 percent).
“Automation is already improving the productivity of security personnel across industries. We are still in the early stages of adoption and just touching the surface of how automation will enhance the capabilities of security staff and evolve security roles,” said Corin Imai, Senior Security Advisor, DomainTools.
“However, the human factor remains the most important player in information security. Automation will never fully replace human intuition and expertise, and those that become experts in deploying and managing automation solutions will have a new valuable skill set for many years to come.”
Additional trends revealed in the report include:
- Almost half of respondents (48 percent) are sharing threat intelligence to collaborate with industry peers.
- Forty-seven percent of organizations do not invest in training or onboarding of security personnel.
- Fifty-three percent of respondents have seen an increase in attackers’ use of automation.
- Only 41 percent of CEOs and/or board of directors are briefed on the use of automation.
Over the past year, deepfakes — realistic yet fake or manipulated audio and video created with machine learning models — started making headlines as a major emerging cyber threat. The first examples of deepfakes seen by the general public were mainly amateur videos created using free deepfake tools, typically of celebrities’ faces superimposed into pornographic videos.
Even though these videos were of fairly low quality and could be reasonably distinguished as illegitimate, people understood the potential impact this new technology could have on our ability to separate fact from fiction. This is especially of concern in the world of politics, where deepfake technology can be weaponized against political figures or parties to manipulate public perception and sway elections or even the stock market.
A few years ago, deepfake technology would be limited to use by nation-states with the resources and advanced technology needed to develop such a powerful tool. Now, because deepfake toolkits are freely available and easy to learn, anyone with internet access, time, and motive can churn out deepfake videos in real-time and flood social media channels with fake content.
Also, as the toolkits become smarter, they require less material to work from to generate fake content. The earlier generation of tools required hours of video and audio – big data sets – for the machine to analyze and then manipulate. This meant people in the spotlight such as politicians, celebrities, high-profile CEOs or anyone with a large web presence had a higher chance of being spoofed. Now, a video can be fabricated from a single photo. In the future, where all it takes is your Facebook profile image and an audio soundbite of you from an Instagram story, everybody becomes a target.
The reality of non-reality
Deepfakes are so powerful because they subvert a basic human understanding of reality: if you see it and hear it, it must be real. Deepfakes untether truth from reality. They also elicit an emotional response. If you see something upsetting, and then find out it was fake, you have still experienced a negative emotional reaction and formed subconscious associations between what you saw and how you felt.
This October, Governor Gavin Newsom signed California’s AB 730, known as the “Anti-Deepfake Bill,” into law with the intent to quell the spread of malicious deepfakes before the 2020 election. While a laudable effort, the law itself falls flat. It places an artificial timeline that only applies to deepfake content distributed with “actual malice” within 60 days of an election. It exempts distribution platforms from the responsibility to monitor and remove deepfake content and instead relies on producers of the videos to self-identify and claim ownership, and the burden of proof for “actual malice” will not be a clear-cut process.
This law was likely designed less to be enforced than to serve as a first step by lawmakers, showing that they understand deepfakes are a serious threat to democracy and that this battle is just beginning. Ideally, this law will influence and inform other state and federal efforts and serve as a starting template to build upon with more effective and enforceable legislation.
Deepfake technology as a business threat in 2020
To date, most of the discussion around deepfake technology has been centered around its potential for misinformation campaigns and mass manipulation fueled through social media, especially in the realm of politics. 2020 will be the year we start to see deepfakes become a real threat to the enterprise, one that cyber defense teams are not yet equipped to handle.
Spearphishing targets high-level employees, typically to trick them into completing a manual task such as paying a fake invoice, sending physical documents, or manually resetting a user’s credentials for the cybercriminal. These attacks tend to be more difficult to detect from a technology perspective, as the email doesn’t contain any suspicious links or attachments, and they are commonly used in conjunction with a BEC attack (when hackers gain control of an employee’s email account, allowing them to send emails from legitimate addresses). According to the FBI, BEC attacks have cost organizations worldwide more than $26 billion over the past three years.
Deepfakes have the ability to supercharge these attacks. Imagine receiving an email from your company’s CEO asking you to engage with some financial action, then receiving a follow-up text message from your CEO’s mobile number, and finally a voicemail with their voice addressing you by name, referencing previous conversations you’ve had with them, and all in the CEO’s voice.
There comes a point where the attack breaks the truth-barrier and it makes more sense to accept the request as real and authentic than to consider the possibility that it’s fake. Eventually, when the deepfake technology advances even further, it’s easy to imagine a scenario where you are on a video call with what you think is your CEO but is actually a deepfake video being generated in real-time. Earlier this year a CEO was deceived by an AI-generated voice into transferring $243,000 to a bank account he believed to belong to a company supplier.
Currently, the security industry has no appliances, email filters, or any technology to defend against deepfakes. There is progress being made, however. For example, Facebook, Microsoft, and university researchers launched the Deepfake Detection Challenge as a rallying cry to jumpstart the development of open source deepfake detection tools. The Defense Advanced Research Projects Agency (DARPA) announced the Semantic Forensics (SemaFor) program, which aims to develop “semantic forensics” as an additional method of defense beyond the statistical detection techniques used in the past.
The only remedy that currently exists is to educate users about these new types of attacks and to be on the alert for any behavior from the sender that seems out of the ordinary, no matter how small. Trust is no longer a luxury we can afford.
Coming off of a year of major data breaches making headline news, it’s easy to draw the conclusion that security teams are losing the cybersecurity battle, a DomainTools survey reveals.

Security teams remain confident

Security pros are reporting real progress being made as confidence in their programs continues to grow: Thirty percent of respondents gave their program an “A” grade this year, doubling over two years from 15 percent in 2017. Less than four percent …
The post Cyber threats continue to evolve, but security teams remain confident appeared first on Help Net Security.