Fraudsters redirected email and web traffic destined for several cryptocurrency trading platforms over the past week. The attacks were facilitated by scams targeting employees at GoDaddy, the world’s largest domain name registrar, KrebsOnSecurity has learned.
The incident is the latest incursion at GoDaddy that relied on tricking employees into transferring ownership and/or control over targeted domains to fraudsters. In March, a voice phishing scam targeting GoDaddy support employees allowed attackers to assume control over at least a half-dozen domain names, including transaction brokering site escrow.com.
And in May of this year, GoDaddy disclosed that 28,000 of its customers’ web hosting accounts were compromised following a security incident in Oct. 2019 that wasn’t discovered until April 2020.
This latest campaign appears to have begun on or around Nov. 13, with an attack on cryptocurrency trading platform liquid.com.
“A domain hosting provider ‘GoDaddy’ that manages one of our core domain names incorrectly transferred control of the account and domain to a malicious actor,” Liquid CEO Mike Kayamori said in a blog post. “This gave the actor the ability to change DNS records and in turn, take control of a number of internal email accounts. In due course, the malicious actor was able to partially compromise our infrastructure, and gain access to document storage.”
In the early morning hours of Nov. 18 Central European Time (CET), cryptocurrency mining service NiceHash discovered that some of the settings for its domain registration records at GoDaddy were changed without authorization, briefly redirecting email and web traffic for the site. NiceHash froze all customer funds for roughly 24 hours until it was able to verify that its domain settings had been restored to their original values.
“At this moment in time, it looks like no emails, passwords, or any personal data were accessed, but we do suggest resetting your password and activate 2FA security,” the company wrote in a blog post.
NiceHash founder Matjaz Skorjanc said the unauthorized changes were made from an Internet address at GoDaddy, and that the attackers tried to use their access to incoming NiceHash emails to perform password resets on various third-party services, including Slack and GitHub. But he said GoDaddy was impossible to reach at the time because it was undergoing a widespread system outage in which phone and email systems were unresponsive.
“We detected this almost immediately [and] started to mitigate [the] attack,” Skorjanc said in an email to this author. “Luckily, we fought them off well and they did not gain access to any important service. Nothing was stolen.”
Skorjanc said NiceHash’s email service was redirected to privateemail.com, an email platform run by Namecheap Inc., another large domain name registrar. Using Farsight Security, a service that maps changes to domain name records over time, KrebsOnSecurity queried for all domains registered at GoDaddy whose email records were altered in the past week to point to privateemail.com. Those results were then indexed against the top one million most popular websites according to Alexa.com.
The result shows that several other cryptocurrency platforms also may have been targeted by the same group, including Bibox.com, Celsius.network, and Wirex.app. None of these companies responded to requests for comment.
In response to questions from KrebsOnSecurity, GoDaddy acknowledged that “a small number” of customer domain names had been modified after a “limited” number of GoDaddy employees fell for a social engineering scam. GoDaddy said the outage between 7:00 p.m. and 11:00 p.m. PST on Nov. 17 was not related to a security incident, but rather a technical issue that materialized during planned network maintenance.
“Separately, and unrelated to the outage, a routine audit of account activity identified potential unauthorized changes to a small number of customer domains and/or account information,” GoDaddy spokesperson Dan Race said. “Our security team investigated and confirmed threat actor activity, including social engineering of a limited number of GoDaddy employees.”
“We immediately locked down the accounts involved in this incident, reverted any changes that took place to accounts, and assisted affected customers with regaining access to their accounts,” GoDaddy’s statement continued. “As threat actors become increasingly sophisticated and aggressive in their attacks, we are constantly educating employees about new tactics that might be used against them and adopting new security measures to prevent future attacks.”
Race declined to specify how its employees were tricked into making the unauthorized changes, saying the matter was still under investigation. But in the attacks earlier this year that affected escrow.com and several other GoDaddy customer domains, the assailants targeted employees over the phone, and were able to read internal notes that GoDaddy employees had left on customer accounts.
What’s more, the attack on escrow.com redirected the site to an Internet address in Malaysia that hosted fewer than a dozen other domains, including the phishing website servicenow-godaddy.com. This suggests the attackers behind the March incident — and possibly this latest one — succeeded by calling GoDaddy employees and convincing them to use their employee credentials at a fraudulent GoDaddy login page.
In August 2020, KrebsOnSecurity warned about a marked increase in large corporations being targeted in sophisticated voice phishing or “vishing” scams. Experts say the success of these scams has been aided greatly by many employees working remotely thanks to the ongoing Coronavirus pandemic.
A typical vishing scam begins with a series of phone calls to employees working remotely at a targeted organization. The phishers often will explain that they’re calling from the employer’s IT department to help troubleshoot issues with the company’s email or virtual private networking (VPN) technology.
The goal is to convince the target either to divulge their credentials over the phone or to input them manually at a website set up by the attackers that mimics the organization’s corporate email or VPN portal.
On July 15, a number of high-profile Twitter accounts were used to tweet out a bitcoin scam that earned more than $100,000 in a few hours. According to Twitter, that attack succeeded because the perpetrators were able to social engineer several Twitter employees over the phone into giving away access to internal Twitter tools.
An alert issued jointly by the FBI and the Cybersecurity and Infrastructure Security Agency (CISA) says the perpetrators of these vishing attacks compile dossiers on employees at their targeted companies using mass scraping of public profiles on social media platforms, recruiter and marketing tools, publicly available background check services, and open-source research.
The FBI/CISA advisory includes a number of suggestions that companies can implement to help mitigate the threat from vishing attacks, including:
• Restrict VPN connections to managed devices only, using mechanisms like hardware checks or installed certificates, so user input alone is not enough to access the corporate VPN.
• Restrict VPN access hours, where applicable, to mitigate access outside of allowed times.
• Employ domain monitoring to track the creation of, or changes to, corporate, brand-name domains.
• Actively scan and monitor web applications for unauthorized access, modification, and anomalous activities.
• Employ the principle of least privilege and implement software restriction policies or other controls; monitor authorized user accesses and usage.
• Consider using a formalized authentication process for employee-to-employee communications made over the public telephone network, where a second factor is used to authenticate the phone call before sensitive information can be discussed.
• Improve 2FA and OTP messaging to reduce confusion about employee authentication attempts.
• Verify web links do not have misspellings or contain the wrong domain.
• Bookmark the correct corporate VPN URL and do not visit alternative URLs on the sole basis of an inbound phone call.
• Be suspicious of unsolicited phone calls, visits, or email messages from unknown individuals claiming to be from a legitimate organization. Do not provide personal information or information about your organization, including its structure or networks, unless you are certain of a person’s authority to have the information. If possible, try to verify the caller’s identity directly with the company.
• If you receive a vishing call, document the phone number of the caller as well as the domain that the actor tried to send you to and relay this information to law enforcement.
• Limit the amount of personal information you post on social networking sites. The internet is a public resource; only post information you are comfortable with anyone seeing.
• Evaluate your settings: sites may change their options periodically, so review your security and privacy settings regularly to make sure that your choices are still appropriate.
A critical vulnerability (CVE-2020-27955) in Git Large File Storage (Git LFS), an open source Git extension for versioning large files, allows attackers to achieve remote code execution if the Windows-using victim is tricked into cloning the attacker’s malicious repository using a vulnerable Git version control tool, security researcher Dawid Golunski has discovered.
It can be exploited in a variety of popular Git clients in their default configuration – GitHub CLI, GitHub Desktop, SmartGit, SourceTree, GitKraken, Visual Studio Code, etc. – and likely other clients and development IDEs (i.e., those that install Git with the Git LFS extension by default).
“Web applications / hosted repositories running on Windows which allow users to import their repositories from a URL may also be exposed to this vulnerability,” Golunski added.
About the vulnerability (CVE-2020-27955)
Golunski found that Git LFS does not specify the full path to the git binary when executing a new git process via an exec.Command() call.
“As the exec.Command() implementation on Windows systems include the current directory, attackers may be able to plant a backdoor in a malicious repository by simply adding an executable file named: git.bat, git.exe, git.cmd or any other extension that is used on the victim’s system (PATHEXT environment dependent), in the main repo’s directory. As a result, the malicious git binary planted in this way will get executed instead of the original git binary located in a trusted path,” he explained.
The vulnerability can be triggered if the victim is tricked into cloning the attacker’s malicious repository using a vulnerable Git version control tool.
Golunski says that CVE-2020-27955 is trivial to exploit, and has released PoC exploit code, as well as video demonstrations of the exploit in action on various Git clients.
What to do?
The vulnerability affects Git LFS versions 2.12 or earlier on Windows systems (but not on Unix). According to the Git LFS maintainers, there is no workaround for this issue other than avoiding untrusted repositories.
Affected users and product vendors are advised to update to the latest Git LFS version (v2.12.1, released on Wednesday), which plugged the security hole. Git for Windows has also been updated to include this Git LFS version.
After five months in beta, the GitHub Code Scanning security feature has been made generally available to all users: for free for public repositories, as a paid option for private ones.
“So much of the world’s development happens on GitHub that security is not just an opportunity for us, but our responsibility. To secure software at scale, we need to make a base-level impact that can drive the most change; and that starts with the code,” Grey Baker, GitHub’s Senior Director of Product Management, told Help Net Security.
“Everything we’ve built previously was about responding to security incidents (dependency scanning, secret scanning, Dependabot) — reacting in real time, quickly. Our future state is about fundamentally preventing vulnerabilities from ever happening, by moving security core into the developer workflow.”
GitHub Code Scanning
The Code Scanning feature is powered by CodeQL, a powerful static analysis engine built by Semmle, which was acquired by GitHub in September 2019.
“We want developers to be able to use their tools of choice, for any of their projects on GitHub, all within the native GitHub experience they love. We’ve partnered with more than a dozen open source and commercial security vendors to date and we’ll continue to integrate code scanning with other third-party vendors through GitHub Actions and Apps,” Baker noted.
“The major value add here is that developers can work, and stay within, the code development ecosystem they’re most accustomed to while using their preferred scanning tools,” explained James Brotsos, Senior Solutions Engineer at Checkmarx.
“GitHub is an immensely popular resource for developers, so having something that ensures the security of code without hindering agility is critical. Our ability to automate SAST and SCA scans directly within GitHub repos simplifies workflows and removes tedious steps for the development cycle that can traditionally stand in the way of achieving DevSecOps.”
Checkmarx SCA (software composition analysis) helps developers discover and remedy vulnerabilities in the open source components included in an application, prioritizing them based on severity. Checkmarx SAST (static application security testing) scans proprietary code bases – even uncompiled ones – to detect new and existing vulnerabilities.
“This is all done in an automated fashion, so as soon as a pull request takes place, a scan is triggered, and results are embedded directly into GitHub. Together, these integrations paint a holistic picture of the entire application’s security posture to ensure all potential gaps are accounted for,” Brotsos added.
Leon Juranic, CTO at DefenseCode, said that they are very excited by this initiative, as it provides access to security analysis for more than 50 million GitHub users.
“Having the security analysis results displayed as code scanning alerts in GitHub provides a convenient way to triage and prioritize fixes, a process that is usually cumbersome, requiring scrolling through many pages of exported reports, going back and forth between your code and the reported results, or reviewing them in dashboards provided by the security tool. The ease of use now means you can initiate scans, view, fix, and close alerts for potential vulnerabilities in your project’s code in an environment that is already familiar and where most of your other workflows are done,” he noted.
A week ago, GitHub also announced additional support for container scanning and standards and configuration scanning for infrastructure as code, with integration by 42Crunch, Accurics, Bridgecrew, Snyk, Aqua Security, and Anchore.
The benefits and future plans
“We expect code scanning to prevent thousands of vulnerabilities from ever existing, by catching them at code review time. We envisage a world with fewer software vulnerabilities because security review is an automated part of the developer workflow,” Baker explained.
“During the code scanning beta, developers fixed 72% of the security errors found by CodeQL and reported in the code scanning pull request experience. Achieving such a high fix rate is the result of years of research, as well as an integration that makes it easy to understand each result.”
Over 12,000 repositories tried code scanning during the beta, and another 7,000 have enabled it since it became generally available, he says, and the reception has been really positive, with many highlighting valuable security finds.
“We’ll continue to iterate and focus on feedback from the community, including around access control and permissions, which are of high priority to our users,” he concluded.
As enterprises look to differentiate themselves through digital innovation, recent research found that nearly two-thirds will be prolific software producers, with code deployed daily, by 2025.
However, this increased emphasis on speed and volume comes at a price, as vulnerable software and applications are now the leading cause of security breaches.
With development cycles accelerating and software becoming more complex due to the evolution of APIs, microservices, containers, and more, automated solutions that are purpose-built for DevOps and enable developers to find and fix flaws more quickly and easily are required.
Checkmarx’s new GitHub Action integrates the company’s application security testing (AST) solutions – Checkmarx SAST (CxSAST) and Checkmarx SCA (CxSCA) – directly with GitHub code scanning, giving developers more flexibility and power to work with their preferred tools of choice to secure proprietary and open source code.
By automatically triggering SAST and SCA security scans in the event of a pull request, and embedding results directly into the GitHub CI/CD pipeline, Checkmarx streamlines developer workflows and empowers them to code more confidently without sacrificing speed and security.
“Checkmarx and GitHub share a similar mission in that we’re both focused on helping developers strike a balance between software development speed and security,” said Robert Nilsson, VP of Product Management, Checkmarx.
“The key to this lies within the power of automation, which helps to simplify the implementation and process of security testing in today’s fast-paced DevOps environments. We’re excited to bring our best-in-class, automated SAST and SCA solutions to the GitHub community and are confident this will enhance developers’ experience and ability in finding and fixing code-borne vulnerabilities.”
Key features and benefits include:
- Ability to scan raw source code before a build takes place, enabling greater efficiency between developers and AppSec teams when using GitHub Actions
- Prioritized SAST and SCA scan results to focus and expedite developer remediation efforts on vulnerabilities that pose the greatest threat
- Automated results feedback loop to eliminate the need for manual intervention when opening and closing defects
- Direct links into the Checkmarx Software Security Platform and access to its dedicated service and support resources for even more comprehensive results and coverage and
- Links to just-in-time, lesson-specific training via Checkmarx Codebashing and online resources for remediation guidance to elevate developers’ secure coding skills.
“GitHub is dedicated to providing open source and enterprise developers with the best possible software development experience,” said John Leon, VP of Business Development, GitHub. “Checkmarx’s new GitHub Action further enables the community to develop secure software, without compromising speed or quality, all within the native GitHub experience.”
GuidePoint Security released a new open source tool that enables a red team to easily build out the necessary infrastructure.
The RedCommander tool solves a major challenge for red teams around the installation and operationalization of infrastructure by combining automation scripts and other tools into a deployable package.
RedCommander is a series of Ansible Playbooks that automate the tedious tasks required to stand up covert command and control channels during a red team exercise. This open source tool is intended to be a stepping stone for more advanced configurations during red team assessments.
Once an operator spins up several servers and configures redirectors, they can leverage RedCommander to modify and monitor their command and control servers for blue team investigations by way of RedELK. The result provides the operator with a full-spectrum overview of a Red Team exercise while simultaneously centralizing logs for Indicators of Compromise (IOC) analysis.
“Exercising defensive responses is a crucial security practice for any organization,” says Alex Williams, the creator of RedCommander and a senior consultant in the GuidePoint Security Threat & Attack Simulation practice.
“RedCommander makes it easier for red teams to deploy their infrastructure in a more customized fashion, giving them a true infrastructure for success.”
Two UCLA computer scientists have shown that existing compilers, which tell quantum computers how to use their circuits to execute quantum programs, inhibit the computers’ ability to achieve optimal performance.
Specifically, their research has revealed that improving quantum compilation design could help achieve computation speeds up to 45 times faster than currently demonstrated.
Better quantum computer performance
The computer scientists created a family of benchmark quantum circuits with known optimal depths or sizes. In computer design, the smaller the circuit depth, the faster a computation can be completed.
Smaller circuits also imply more computation can be packed into the existing quantum computer. Quantum computer designers could use these benchmarks to improve design tools that could then find the best circuit design.
“We believe in the ‘measure, then improve’ methodology,” said lead researcher Jason Cong, a Distinguished Chancellor’s Professor of Computer Science at UCLA Samueli School of Engineering.
“Now that we have revealed the large optimality gap, we are on the way to develop better quantum compilation tools, and we hope the entire quantum research community will as well.”
Cong and graduate student Daniel (Bochen) Tan tested their benchmarks in four of the most used quantum compilation tools.
Tan and Cong have made the benchmarks, named QUEKO, open source and available on the software repository GitHub.
Many issues yet to be addressed
Quantum computers utilize quantum mechanics to perform a great deal of computations simultaneously, which has the potential to make them exponentially faster and more powerful than today’s best supercomputers. But many issues need to be addressed before these devices can move out of the research lab.
For example, due to the sensitive nature of how quantum circuits work, tiny environmental changes, such as small temperature fluctuations, can interfere with quantum computation. When that happens, the quantum circuits are called decoherent — which is to say they have lost the information once encoded in them.
“If we can consistently halve the circuit depth by better layout synthesis, we effectively double the time it takes for a quantum device to become decoherent,” Cong said.
“This compilation research could effectively extend that time, and it would be the equivalent to a huge advancement in experimental physics and electrical engineering,” Cong added. “So we expect these benchmarks to motivate both academia and the industry to develop better layout synthesis tools, which in turn will help drive advances in quantum computing.”
How it all started
Cong and his colleagues led a similar effort in the early 2000s to optimize integrated circuit design in classical computers. That research effectively pushed two generations of advances in computer processing speeds, using only optimized layout design, which shortened the distance between the transistors that comprise the circuit. This cost-efficient improvement was achieved without any other major investments in technological advances, such as physically shrinking the circuits themselves.
“Quantum processors in existence today are extremely limited by environmental interference, which puts severe restrictions on the length of computations that can be performed,” said Mark Gyure, executive director of the UCLA Center for Quantum Science and Engineering, who was not involved in this study.
“That’s why the recent research results from Professor Cong’s group are so important because they have shown that most implementations of quantum circuits to date are likely extremely inefficient and more optimally compiled circuits could enable much longer algorithms to be executed. This could result in today’s processors solving much more interesting problems than previously thought. That’s an extremely important advance for the field and incredibly exciting.”
Solar Security has announced the release of a new version of its app security analyzer, Solar appScreener 3.6, which supports Pascal and features improved integration with GitLab, GitHub and Bitbucket code version management and storage systems.
To meet international customers’ needs, the new version of the company’s app vulnerability and undocumented feature analyzer, Solar appScreener 3.6, now supports Pascal. The predecessor of Delphi, this language underpins a variety of legacy systems that organizations around the globe actively employ for their internal needs.
“In the 1990s, Pascal variants were widely used to develop various software solutions, from research applications to computer games. Today, its derivative, Object Pascal, underlies some Windows applications.
“Now, together with Pascal support, Solar appScreener can analyze applications in 34 programming languages, surpassing all competing systems in both domestic and international markets,” said Daniil Chernov, Head of Software Security Solutions Center at Solar Security LLC.
Tighter Solar appScreener integration with GitLab, GitHub and Bitbucket is an important step towards better automation of code vulnerability scanning. This integration allows the analyzer to monitor, in an unattended mode, the submission of a new code version in a repository, automatically start analyzing new code fragments for vulnerabilities, and then send scan results to a responsible employee.
While the above functionality previously required manual configuration, since version 3.6 it has been available out of the box. Notably, new code submissions are now monitored not via a CI/CD server, but directly from the repository by means of push and tag events. This makes life easier for companies that do not use CI/CD servers or bypass them in their development process.
The new version is also more user-friendly. For instance, its interface now supports the creation of empty projects that contain no scans but allow integrations with repositories to be configured in advance, so that code can be analyzed automatically in the future.
This feature is relevant, for example, when developers have not yet produced a stable build by the time Solar appScreener is implemented in their company, but the customer wants to start vulnerability monitoring from a more or less complete version of the app.
In addition, the interface allows for event log exporting. This is useful, for example, when an error made while starting a scan causes the analysis to run incorrectly and the customer cannot determine the cause of the failure on their own.
Now, a user can export the required log files from the system in a couple of clicks, and the Solar appScreener technical support team will quickly fix the error and help restart the process correctly.
Moreover, Solar appScreener 3.6 also supports the Prometheus monitoring toolkit and Grafana interactive visualization, which is good news for large companies that already leverage these tools for system health monitoring.
This functionality is in demand among customers that need up-to-date information on the analyzer’s state, including high process latency, failures, system load, and performance.
GitHub has made available two new security features for open and private repositories: code scanning (as a GitHub-native experience) and secret scanning.
With the former, it aims to prevent vulnerabilities from ever being introduced into software and, ideally, help developers eliminate entire bug classes forever. With the latter, it wants to make sure that developers are not inadvertently leaking secrets (e.g., cloud tokens, passwords, etc.) in their repositories.
The code scanning feature, available for setup in every GitHub repository (in the Security tab), is powered by CodeQL, a semantic code analysis engine that GitHub made available last year.
While code analysis with CodeQL is not new, this new feature makes it part of the developers’ code review workflow.
With code scanning enabled, every ‘git push’ is scanned for potential security vulnerabilities. Results are displayed in the pull request for the developer to analyze, and additional information about the vulnerability and recommendations on how to fix things are offered, so they can learn from their mistakes.
Any public project can sign up for code scanning for free – GitHub will pay for the compute resources needed.
For a peek at how this works in practice, check out this demonstration by Grey Baker, Director of Product Management at GitHub (start the video at 31:40):
Secret scanning (formerly “token scanning”) has been available for public repositories since 2018, but it can now be used for private repositories as well.
“With over ten million potential secrets identified, customers have asked to have the same capability for their private code. Now secret scanning also watches private repositories for known secret formats and immediately notifies developers when they are found,” explained Shanku Niyogi, Senior VP of Product at GitHub.
“We’ve worked with many partners to expand coverage, including AWS, Azure, Google Cloud, npm, Stripe, and Twilio.”
Software vulnerabilities are more likely to be discussed on social media before they’re revealed on a government reporting site, a practice that could pose a national security threat, according to computer scientists at the U.S. Department of Energy’s Pacific Northwest National Laboratory.
At the same time, those vulnerabilities present a cybersecurity opportunity for governments to more closely monitor social media discussions about software gaps, the researchers assert.
“Some of these software vulnerabilities have been targeted and exploited by adversaries of the United States. We wanted to see how discussions around these vulnerabilities evolved,” said lead author Svitlana Volkova, senior research scientist in the Data Sciences and Analytics Group at PNNL.
“Social cybersecurity is a huge threat. Being able to measure how different types of vulnerabilities spread across platforms is really needed.”
Social media – especially GitHub – leads the way
Their research showed that a quarter of the software vulnerabilities discussed on social media from 2015 through 2017 appeared there before landing in the National Vulnerability Database, the official U.S. repository for such information. Further, for this segment of vulnerabilities, it took an average of nearly 90 days for a gap discussed on social media to show up in the national database.
The research focused on three social platforms – GitHub, Twitter and Reddit – and evaluated how discussions about software vulnerabilities spread on each of them. The analysis showed that GitHub, a popular networking and development site for programmers, was by far the most likely of the three sites to be the starting point for discussion about software vulnerabilities.
It makes sense that GitHub would be the launching point for discussions about software vulnerabilities, the researchers wrote, because GitHub is a platform geared towards software development.
The researchers found that for nearly 47 percent of the vulnerabilities, the discussions started on GitHub before moving to Twitter and Reddit. For about 16 percent of the vulnerabilities, these discussions started on GitHub even before they were published to official sites.
Codebase vulnerabilities are common
The research points to the scope of the issue, noting that nearly all commercial software codebases contain open-source components and that nearly 80 percent of codebases include at least one vulnerability.
Further, each commercial software codebase contains an average of 64 vulnerabilities. The National Vulnerability Database, which curates and publicly releases vulnerabilities known as Common Vulnerabilities and Exposures “is drastically growing,” the study says, “and includes more than 100,000 known vulnerabilities to date.”
In their paper, the researchers discuss which U.S. adversaries might take note of such vulnerabilities. They mentioned Russia, China and others, and noted that those countries differ in how they use the three platforms when exploiting software vulnerabilities.
According to the study, cyberattacks in 2017 later linked to Russia involved more than 200,000 victims, affected more than 300,000 computers, and caused about $4 billion in damages.
“These attacks happened because there were known vulnerabilities present in modern software,” the study says, “and some Advanced Persistent Threat groups effectively exploited them to execute a cyberattack.”
Bots or human: Both pose a threat
The researchers also distinguished between social media traffic generated by humans and automated messages from bots. A social media message crafted by an actual person and not generated by a machine will likely be more effective at raising awareness of a software vulnerability, the researchers found, emphasizing that it was important to differentiate the two.
“We categorized users as likely bots or humans, by using the Botometer tool,” the study says, “which uses a wide variety of user-based, friend, social network, temporal, and content-based features to perform bot vs. human classification.”
The tool is especially useful in separating bots from human discussions on Twitter, a platform that the researchers noted can be helpful for accounts seeking to spread an agenda.
Ultimately, awareness of social media’s ability to spread information about software vulnerabilities provides a heads-up for institutions, the study says.
“Social media signals preceding official sources could potentially allow institutions to anticipate and prioritize which vulnerabilities to address first,” it says.
“Furthermore, quantification of the awareness of vulnerabilities and patches spreading in online social environments can provide an additional signal for institutions to utilize in their open source risk-reward decision making.”
Fugue open sources Regula to evaluate Terraform for security misconfigurations and compliance violations
Regula rules are written in Rego, the open source policy language employed by the Open Policy Agent project, and can be integrated into CI/CD pipelines to prevent cloud infrastructure deployments that may violate security and compliance best practices.
“Developers design, build, and modify their own cloud infrastructure environments, and they increasingly own the security and compliance of that infrastructure,” said Josh Stella, co-founder and CTO of Fugue.
“Fugue builds solutions that empower engineers operating in secure and regulated cloud environments, and Regula quickly and easily checks that their Terraform scripts don’t violate policy—before they deploy infrastructure.”
Regula initially supports rules that validate Terraform scripts written for AWS infrastructure, and includes mapping to CIS AWS Foundations Benchmark controls where relevant. Regula also includes helper libraries that enable users to easily build their own rules that conform to enterprise policies.
At launch, Fugue has provided examples of Regula working with GitHub Actions for CI/CD, and with Fregot, a tool that enables developers to easily evaluate Rego expressions, debug code, and test policies. Fugue open sourced Fregot in November 2019.
Regula can identify serious cloud misconfiguration risk contained in Terraform scripts, many of which may not be flagged by common compliance standards. The initial release of Regula includes rules that can identify dangerously permissive IAM policies and security group rules, VPCs with flow logs disabled, EBS volumes with encryption disabled, and untagged cloud resources.
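To illustrate the kind of check Regula performs, here is a minimal sketch in Python rather than Rego, flagging EBS volumes that do not enable encryption. The input shape below is a simplified, hypothetical stand-in for a parsed Terraform plan, not Regula's actual input schema.

```python
# Simplified, hypothetical stand-in for a parsed Terraform plan
plan = {
    "resources": [
        {"address": "aws_ebs_volume.data", "type": "aws_ebs_volume",
         "values": {"encrypted": False}},
        {"address": "aws_ebs_volume.logs", "type": "aws_ebs_volume",
         "values": {"encrypted": True}},
    ]
}

def unencrypted_ebs_volumes(plan: dict) -> list:
    """Return addresses of EBS volumes that do not enable encryption."""
    return [r["address"] for r in plan["resources"]
            if r["type"] == "aws_ebs_volume"
            and not r["values"].get("encrypted", False)]

print(unencrypted_ebs_volumes(plan))  # ['aws_ebs_volume.data']
```

Regula expresses the same logic declaratively in Rego, so the rule can run inside OPA and be shared across pipelines.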
Regula works independently of Fugue, but can be integrated with Fugue for end-to-end cloud infrastructure security and compliance. The Fugue SaaS product integrates cloud security into each stage of the software development life cycle.
Fugue enables users to visualize cloud infrastructure environments, detect post-deployment misconfigurations and automatically enforce critical resource configurations through self-healing.
Both Regula and Fugue utilize the open-source Rego policy language, and developers can easily create their own rules for Regula and Fugue using a similar syntax. In addition to Fugue Enterprise, Fugue offers Developer, a free tier available to individual engineers who need to ensure continuous security and compliance of their cloud infrastructure environments.
GitHub, the world’s largest open source code repository and leading software development platform, has launched GitHub Security Lab, a program aimed at researchers, maintainers, and companies that want to contribute to the overall security of open source software. “Our team will lead by example, dedicating full-time resources to finding and reporting vulnerabilities in critical open source projects,” said Jamie Cool, VP of Product Management, Security at GitHub.
Orvis, a Vermont-based retailer that specializes in high-end fly fishing equipment and other sporting goods, leaked hundreds of internal passwords on Pastebin.com for several weeks last month, exposing credentials the company used to manage everything from firewalls and routers to administrator accounts and database servers, KrebsOnSecurity has learned. Orvis says the exposure was inadvertent, and that many of the credentials were already expired.
Based in Sunderland, Vt., and founded in 1856, privately-held Orvis is the oldest mail-order retailer in the United States. The company has approximately 1,700 employees, 69 retail stores and 10 outlets in the US, and 18 retail stores in the UK.
In late October, this author received a tip from Wisconsin-based security firm Hold Security that a file containing a staggering number of internal usernames and passwords for Orvis had been posted to Pastebin.
Reached for comment about the source of the document, Orvis spokesperson Tucker Kimball said it was only available for a day before the company had it removed from Pastebin.
“The file contains old credentials, so many of the devices associated with the credentials are decommissioned and we took steps to address the remaining ones,” Kimball said. “We are leveraging our existing security tools to conduct an investigation to determine how this occurred.”
However, according to Hold Security founder Alex Holden, this enormous password file was actually posted to Pastebin on two separate occasions last month, the first on Oct. 4 and the second on Oct. 22. That finding was corroborated by 4iq.com, a company that aggregates information from leaked databases online.
Orvis did not respond to follow-up requests for comment via phone and email; the last two email messages sent by KrebsOnSecurity to Orvis were returned simply as “blocked.”
It’s not unusual for employees or contractors to post bits of sensitive data to public sites like Pastebin and Github, but the credentials file apparently published by someone working at or for Orvis is by far the most extreme example I’ve ever witnessed.
For instance, included in the Pastebin files from Orvis were plaintext usernames and passwords for just about every kind of online service or security product the company has used, including:
-Data backup services
-Multiple firewall products
-Call recording services
-Orvis wireless networks (public and private)
-Employee wireless phone services
-Oracle database servers
-Microsoft 365 services
-Microsoft Active Directory accounts and passwords
-Battery backup systems
-Mobile payment services
-Door and alarm codes
-Apple ID credentials
By all accounts, this was a comprehensive goof: The Orvis credentials file even contained the combination to a locked safe in the company’s server room.
The only clue about the source of the Orvis password file is a notation at the top of the document that reads “VT Technical Services.”
Holden said this particular exposure also highlights the risk posed by third parties, as the leak most likely did not originate with Orvis staff itself.
“This is a continuously growing trend of exposures created not by the victims but by those that they consider to be trusted partners,” Holden said.
It’s fairly remarkable that a company can spend millions on all the security technology under the sun and have all of it potentially undermined by one ill-advised post to Pastebin, but that is certainly the reality we live in today.
Long gone are the days when one could post something for a few hours to a public document hosting service and expect nobody to notice. Today there are a number of third-party services that regularly index and preserve such postings, regardless of how ephemeral those posts may be.
“Pastebin and other similar repositories are constantly being monitored and any data put out there will be preserved no matter how brief the posting is,” Holden said. “In the current threat landscape, we see data exposures nearly as often as we see data breaches. These exposures vary in scope and impact, and this particular one is as bad as they come without specific data exposures.”
If you’re responsible for securing your organization’s environment, it would be an excellent idea to build tools that monitor Pastebin, GitHub and other sites where employees sometimes publish sensitive corporate data, inadvertently or otherwise, for mentions of your domains and brands. There are many ways to do this; here’s one example.
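A minimal sketch of the matching half of such a monitor is below. The watchlist terms are hypothetical placeholders, and the paste-fetching step is left abstract, since retrieving new pastes in bulk typically requires a paid scraping API or a third-party indexing service.

```python
import re

# Hypothetical terms that indicate your organization's data may be exposed
WATCHLIST = ["example.com", "ExampleCorp", "vpn.example.com"]

def find_exposures(paste_text: str, watchlist=WATCHLIST) -> list:
    """Return the watchlist terms that appear in a paste (case-insensitive)."""
    return [term for term in watchlist
            if re.search(re.escape(term), paste_text, re.IGNORECASE)]

# A real monitor would poll newly published pastes and alert on any hits;
# here we just scan a sample string.
sample = "backup creds: admin / hunter2 @ vpn.example.com"
print(find_exposures(sample))  # ['example.com', 'vpn.example.com']
```

Note that a substring match on a bare domain also fires on its subdomains, which is usually what you want for this kind of alerting; escaping each term keeps dots from acting as regex wildcards.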
Have you built such monitoring tools for your organization or employer? If so, please feel free to sound off about your approach in the comments below.