The financial services industry has the best flaw fix rate across six industries and leads a majority of industries in uncovering flaws within open source components, Veracode reveals.
Fixing open source flaws is critical because the attack surface of applications is much larger than developers expect when open source libraries are pulled in indirectly as dependencies of other components.
The findings came as a result of an analysis of 130,000 applications from 2,500 companies.
Fixing open source flaws
The research found that financial services organizations have the smallest proportion of applications with flaws and the second-lowest prevalence of severe flaws behind the manufacturing sector.
The sector also has the highest fix rate among all industries, fixing 75% of flaws. Still, the research found that financial services firms require about six and a half months to resolve half of the flaws they find, making them slower to remediate than other industries.
“However, developers in the financial services industry are often limited by the nature of the environments they are working in, as applications tend to be older, have a medium flaw density, and aren’t consistently following DevSecOps practices compared to other industries.
“With some additional training and sticking to best practices, they can quickly remediate issues and start to reduce security debt.”
Financial services specific findings
The research found compelling evidence that certain developer behaviors associated with DevSecOps yield substantial benefits to software security. The findings detail that financial services firms:
- Are a leading industry when it comes to fixing flaws in their open source software and establishing strong scan cadences.
- Are middle-of-the-road when it comes to scanning frequency and integrating security testing, and are less likely than other industries to use dynamic analysis (DAST) scanning technology to uncover vulnerabilities.
- Outperform averages across all industries in dealing with issues related to cryptography, input validation, Cross-Site Scripting, and credentials management – all things related to protecting users of financial applications.
Offensive Security has released Kali Linux 2020.4, the latest version of its popular open source penetration testing platform. You can download it or upgrade to it.
Kali Linux 2020.4 changes
The changes in this version include: ZSH is now Kali’s default shell on desktop and cloud images, while Bash remains the default shell on other platforms (ARM, containers, NetHunter, WSL) for the time being. Users can, of course, switch to whichever shell they prefer.
The importance of applications and digital services has skyrocketed in 2020. Connectivity and resilience are imperative to keeping people connected and business moving forward. Visibility into network traffic, especially in distributed edge environments and with malicious attacks on the rise, is a critical part of ensuring uptime and performance.
“NS1 created pktvisor to address our need for more visibility across our global anycast network,” said Shannon Weyrick, VP of architecture at NS1. “By efficiently summarizing and collecting key metrics at all of our edge locations we gain a deep understanding of traffic patterns in real time, enabling rich visualization and fast automation which further increase our resiliency and performance. We are big users of and believers in open source software. As this tool will benefit other organizations leveraging distributed edge architectures, we’ve made it open and we invite the developer community to help drive future updates and innovation.”
More about pktvisor
Pktvisor summarizes network traffic in real time directly on edge nodes using Apache DataSketches. The summary information can be visualized locally via the included CLI UI and simultaneously collected centrally over HTTP into the time series database of your choice, to drive global visualizations and automation.
- Packet counts and rates (w/percentiles), breakdown by ingress/egress, protocol
- DNS counts and rates, breakdown by protocol, response code
- Cardinality: Source and destination IP, DNS Qname
- DNS transaction timings (w/percentiles)
- Top 10 heavy hitters for IPs and ports; DNS Qnames, Qtypes, Result Codes; slow DNS transactions, NX, SRVFAIL, REFUSED Qnames; and GeoIP and ASN
The metrics pktvisor provides can help network and security teams by supplementing existing metrics to help understand traffic patterns, identify attacks, and gather information on how to mitigate them.
Available as a Docker container, it is easy to install and has low network and storage requirements. Due to its summarizing design, the amount of data collected is a function of the number of hosts being monitored, not of traffic rates, so spikes or even DDoS attacks will not overwhelm downstream collection systems.
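The bounded-memory idea behind this summarizing design can be illustrated with the classic Misra-Gries heavy-hitters algorithm — a toy stand-in for the Apache DataSketches structures pktvisor actually uses, with invented IP addresses:

```python
def misra_gries(stream, k):
    """Keep at most k-1 counters no matter how long the stream is or how
    many distinct items it contains -- state stays bounded, like a sketch."""
    counters = {}
    for item in stream:
        if item in counters:
            counters[item] += 1
        elif len(counters) < k - 1:
            counters[item] = 1
        else:
            # No room: decrement every counter; drop those that reach zero.
            for key in list(counters):
                counters[key] -= 1
                if counters[key] == 0:
                    del counters[key]
    return counters

# Six packets from one source, a few from others: the heavy hitter survives
# even though we never stored more than one counter (k=2).
stream = ["10.0.0.5"] * 6 + ["10.0.0.7"] * 2 + ["10.0.0.9"] * 2
print(misra_gries(stream, k=2))
```

Because the number of counters is capped by `k`, a traffic spike changes only the counter values, not the amount of state shipped downstream — the property the article describes.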
The machine identity attack surface is exploding, with a rapid increase in all types of machine identity-related security events in 2018 and 2019, according to Venafi. For example, the number of reported machine identity-related cyberattacks grew by over 400% during this two-year period.
“We have seen machine use skyrocket in organizations over the last five years, but many businesses still focus their security controls primarily on human identity management,” said Kevin Bocek, VP of security strategy and threat intelligence at Venafi.
“Digital transformation initiatives are in jeopardy because attackers are able to exploit wide gaps in machine identity management strategies. The COVID-19 pandemic is driving faster adoption of cloud, hybrid and microservices architectures, but protecting machine identities for these projects is often an afterthought.
“The only way to mitigate these risks is to build machine identity management programs that are as comprehensive as customer, partner and employee identity and access management strategies.”
- Between 2015 and 2019, the number of reported cyberattacks that used machine identities grew by more than 700%, with this amount increasing by 433% between the years 2018 and 2019 alone.
- From 2015 to 2019, the number of vulnerabilities involving machine identities grew by 260%, increasing by 125% between 2018 and 2019.
- The use of commodity malware that abuses machine identities doubled between the years 2018 and 2019 and grew 300% over the five years leading up to 2019.
- Between 2015 and 2019, the number of reported advanced persistent threats (APTs) that used machine identities grew by 400%. Reports of these attacks increased by 150% between 2018 and 2019.
“As our use of cloud, hybrid, open source and microservices architectures increases, there are many more machine identities on enterprise networks—and this rising number correlates with the accelerated number of threats,” said Yana Blachman, threat intelligence researcher at Venafi.
“As a result, every organization’s machine identity attack surface is getting much bigger. Although many threats or security incidents frequently involve a machine identity component, too often these details do not receive enough attention and aren’t highlighted in public reports.
“This lack of focus on machine identities in cyber security reporting has led to a lack of data and focus on this crucial area of security. As a result, the trends we are seeing in this report are likely just the tip of the iceberg.”
Some of the world’s most skilled nation-state cyber adversaries and notorious ransomware gangs are deploying an arsenal of new open-sourced tools, actively exploiting corporate email systems and using online extortion to scare victims into paying ransoms, according to a report from Accenture.
The report examines the tactics, techniques and procedures employed by some of the most sophisticated cyber adversaries and explores how cyber incidents could evolve over the next year.
“Since COVID-19 radically shifted the way we work and live, we’ve seen a wide range of cyber adversaries changing their tactics to take advantage of new vulnerabilities,” said Josh Ray, who leads Accenture Security’s cyber defense practice globally.
“The biggest takeaway from our research is that organizations should expect cybercriminals to become more brazen as the potential opportunities and pay-outs from these campaigns climb to the stratosphere.
“In such a climate, organizations need to double down on putting the right controls in place and on leveraging reliable cyber threat intelligence to understand and expel the most complex threats.”
Sophisticated adversaries mask identities with off-the-shelf tools
Throughout 2020, CTI analysts have observed suspected state-sponsored and organized criminal groups using a combination of off-the-shelf tooling — including “living off the land” tools, shared hosting infrastructure and publicly developed exploit code — and open source penetration testing tools at unprecedented scale to carry out cyberattacks and hide their tracks.
For example, Accenture tracks the patterns and activities of an Iran-based hacker group referred to as SOURFACE (also known as Chafer or Remix Kitten). Active since at least 2014, the group is known for its cyberattacks on the oil and gas, communications, transportation and other industries in the U.S., Israel, Europe, Saudi Arabia, Australia and other regions.
CTI analysts have observed SOURFACE using legitimate Windows functions and freely available tools such as Mimikatz for credential dumping. This technique is used to steal user authentication credentials like usernames and passwords to allow attackers to escalate privileges or move across the network to compromise other systems and accounts while disguised as a valid user.
According to the report, it is highly likely that sophisticated actors, including state-sponsored and organized criminal groups, will continue to use off-the-shelf and penetration testing tools for the foreseeable future as they are easy to use, effective and cost-efficient.
New, sophisticated tactics target business continuity
The report notes how one notorious group has aggressively targeted systems supporting Microsoft Exchange and Outlook Web Access, then used the compromised systems as beachheads within a victim’s environment to hide traffic, relay commands, compromise e-mail, steal data and gather credentials for espionage efforts.
Operating from Russia, the group, referred to as BELUGASTURGEON (also known as Turla or Snake), has been active for more than 10 years and is associated with numerous cyberattacks aimed at government agencies, foreign policy research firms and think tanks across the globe.
Ransomware feeds new profitable, scalable business model
Ransomware has quickly become a more lucrative business model in the past year, with cybercriminals taking online extortion to a new level by threatening to publicly release stolen data or sell it and name and shame victims on dedicated websites.
The criminals behind the Maze, Sodinokibi (also known as REvil) and DoppelPaymer ransomware strains are the pioneers of this growing tactic, which is delivering bigger profits and resulting in a wave of copycat actors and new ransomware peddlers.
Additionally, the infamous LockBit ransomware emerged earlier this year, which — in addition to copying the extortion tactic — has gained attention due to its self-spreading feature that quickly infects other computers on a corporate network.
The motivations behind LockBit appear to be financial, too. CTI analysts have tracked the cybercriminals behind it on Dark Web forums, where they advertise regular updates and improvements to the ransomware and actively recruit new members, promising a portion of the ransom money.
The success of these hack-and-leak extortion methods, especially against larger organizations, means they will likely proliferate for the remainder of 2020 and could foreshadow future hacking trends in 2021. In fact, CTI analysts have observed recruitment campaigns on a popular Dark Web forum from the threat actors behind Sodinokibi.
Microsoft and Adobe released out-of-band security updates for Visual Studio Code, the Windows Codecs Library, and Magento.
All the updates fix vulnerabilities that could be exploited for remote code execution, but the good news is that none of them are being actively exploited by attackers (yet!).
“To exploit this vulnerability, an attacker would need to convince a target to clone a repository and open it in Visual Studio Code. Attacker-specified code would execute when the target opens the malicious ‘package.json’ file,” Microsoft explained.
If the target uses an account with administrative privileges, the attacker can take complete control of the affected system.
Microsoft has also fixed an RCE flaw (CVE-2020-17022) in the way the Microsoft Windows Codecs Library handles objects in memory, which could be triggered by a program processing a specially crafted image file.
It only affects Windows 10 users, and only if they installed the optional HEVC or “HEVC from Device Manufacturer” media codecs from Microsoft Store.
“Affected customers will be automatically updated by Microsoft Store. Customers do not need to take any action to receive the update,” the company noted, and explained that “servicing for store apps/components does not follow the monthly ‘Update Tuesday’ cadence, but are offered whenever necessary.”
Among the plugged Magento security holes are two critical ones:
- CVE-2020-24407 – a file upload allow list bypass that could be exploited to achieve code execution
- CVE-2020-24400 – an SQL injection that could allow arbitrary read or write access to the database
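The Magento flaw lives in PHP, but SQL injection looks the same in any language: attacker input becomes part of the query text instead of staying data. A minimal Python sketch of the vulnerable and the safe pattern, using an in-memory SQLite table with made-up rows:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, is_admin INTEGER)")
conn.execute("INSERT INTO users VALUES ('alice', 0), ('bob', 1)")

user_input = "alice' OR '1'='1"

# Vulnerable pattern: attacker input is spliced into the SQL text itself,
# so the quote in the input rewrites the WHERE clause.
unsafe = conn.execute(
    f"SELECT name FROM users WHERE name = '{user_input}'").fetchall()

# Safe pattern: the driver binds the value separately; the quote is just data.
safe = conn.execute(
    "SELECT name FROM users WHERE name = ?", (user_input,)).fetchall()

print(unsafe)  # every row comes back: the injection succeeded
print(safe)    # no rows: no user is literally named that
```

Parameterized queries (or an ORM that uses them) are the standard mitigation for this vulnerability class.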
COVID-19 has forced developer agility into overdrive, as the tech industry’s quick push to adapt to changing dynamics has accelerated digital transformation efforts and necessitated the rapid introduction of new software features, patches, and functionalities.
During this time, organizations across both the private and public sector have been turning to open source solutions as a means to tackle emerging challenges while retaining the rapidity and agility needed to respond to evolving needs and remain competitive.
Since well before the pandemic, software developers have leveraged open source code as a means to speed development cycles. The ability to use pre-made packages of code rather than build software from the ground up has enabled them to save valuable time. However, the rapid adoption of open source has not come without its own security challenges, which developers and organizations must address to use it safely.
Here are some best practices developers should follow when implementing open source code to promote security:
Know what and where open source code is in use
First and foremost, developers should create and maintain a record of where open source code is being used across the software they build. Applications today are usually designed using hundreds of unique open source components, which then reside in their software and workspaces for years.
As these open source packages age, there is an increasing likelihood of vulnerabilities being discovered in them and publicly disclosed. If the use of components is not closely tracked against the countless new vulnerabilities discovered every year, software leveraging these components becomes open to exploitation.
Attackers understand all too well how often teams fall short in this regard, and intrusions via known open source vulnerabilities are a highly common source of breaches. Tracking open source code usage, along with vigilance around updates and vulnerabilities, will go a long way toward mitigating security risk.
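As a rough illustration of such tracking, here is a minimal sketch that audits a requirements.txt-style manifest against an advisory list. The package names, versions, and CVE entry are all invented; real SCA tools work from curated vulnerability databases rather than a hardcoded dictionary:

```python
# Hypothetical advisory feed: (package, version) -> advisory text.
KNOWN_VULNERABLE = {
    ("examplelib", "1.2.0"): "CVE-2020-0000: remote code execution",
}

def parse_manifest(text):
    """Parse pinned 'name==version' lines, skipping comments and blanks."""
    deps = {}
    for line in text.splitlines():
        line = line.strip()
        if line and not line.startswith("#") and "==" in line:
            name, version = line.split("==", 1)
            deps[name.strip().lower()] = version.strip()
    return deps

def audit(deps):
    """Return only the dependencies that match a known advisory."""
    return {name: KNOWN_VULNERABLE[(name, ver)]
            for name, ver in deps.items()
            if (name, ver) in KNOWN_VULNERABLE}

manifest = """\
# pinned dependencies
examplelib==1.2.0
otherlib==4.1.3
"""
findings = audit(parse_manifest(manifest))
print(findings)
```

The point is the inventory itself: once usage is recorded, checking it against each newly disclosed vulnerability becomes a lookup instead of an archaeology project.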
Understand the risks before adopting open source
Aside from tracking vulnerabilities in the code that’s already in use, developers must research open source components before adopting them in the first place. While an obvious first step is ensuring that there are no known vulnerabilities in the component in question, other factors focused on the longevity of the software being built should also be considered.
Teams should carefully consider the level of support offered for a given component. It’s important to get satisfactory answers to questions such as:
- How often is the component patched?
- Are the patches of high quality and do they address the most pressing security issues when released?
- Once implemented, are they communicated effectively and efficiently to the user base?
- Is the group or individual who built the component a trustworthy source?
Leverage automation to mitigate risk
It’s no secret that COVID-19 has altered developers’ working conditions. In fact, 38% of developers are now releasing software monthly or faster, up from 27% in 2018. But this increased pace often comes paired with unwanted budget cuts and organizational changes. As a result, the imperative to “do more with less” has become a rallying cry for business leaders. In this context, it is indisputable that automation across the entire IT security portfolio has skyrocketed to the top of the list of initiatives designed to improve operational efficiency.
While already an important asset for achieving true DevSecOps agility, automated scanning technology has become near-essential for any organization attempting to stay secure while leveraging open source code. Manually tracking and updating open source vulnerabilities across an organization’s entire software suite is hard work that only increases in difficulty with the scale of an organization’s software deployments. And what was inefficient in normal times has become unfeasible in the current context.
Automated scanning technologies alleviate the burden of open source security by handling processes that would otherwise take up precious time and resources. These tools are able to detect and identify open source components within applications, provide detailed risk metrics regarding open source vulnerabilities, and flag outdated libraries for developers to address. Furthermore, they provide detailed insight into thousands of public open source vulnerabilities, security advisories and bugs, to ensure that when components are chosen they are secure and reputable.
Finally, these tools help developers prioritize and triage remediation efforts once vulnerabilities are identified. Equipped with the knowledge of which vulnerabilities present the greatest risk, developers are able to allocate resources most efficiently to ensure security does not get in the way of timely release cycles.
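That triage step can be sketched in a few lines: split findings at a severity threshold and work the urgent queue highest-risk first. The finding IDs, component names, and CVSS scores below are invented for illustration:

```python
def triage(findings, threshold=7.0):
    """Split findings into an urgent queue (highest CVSS first) and a backlog."""
    urgent = sorted((f for f in findings if f["cvss"] >= threshold),
                    key=lambda f: f["cvss"], reverse=True)
    backlog = [f for f in findings if f["cvss"] < threshold]
    return urgent, backlog

findings = [
    {"id": "VULN-1", "cvss": 5.3, "component": "string-formatter"},
    {"id": "VULN-2", "cvss": 9.8, "component": "deserializer"},
    {"id": "VULN-3", "cvss": 7.5, "component": "xml-parser"},
]
urgent, backlog = triage(findings)
print([f["id"] for f in urgent])   # riskiest first
```

Real triage weighs more than a CVSS number (exploitability, reachability, business impact), but even this simple ordering keeps scarce remediation time pointed at the greatest risk.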
Confidence in a secure future
When it comes to open source security, vigilance is the name of the game. Organizations must be sure to reiterate the importance of basic best practices to developers as they push for greater speed in software delivery.
While speed has long been understood to come at the cost of software security, this type of outdated thinking cannot persist, especially when technological advancements in automation have made such large strides in eliminating this classically understood tradeoff. By following the above best practices, organizations can be more confident that their COVID-19 driven software rollouts will be secure against issues down the road.
Misconfigured or unsecured databases exposed on the open web are a fact of life. We hear about some of them because security researchers tell us how they discovered them, pinpointed their owners and alerted them, but many others are found by attackers first.
It used to take months to scan the Internet looking for open systems, but attackers now have access to free and easy-to-use scanning tools that can find them in less than an hour.
“There’s no way to leave unsecured data online without opening the data up to attack. This is why it’s crucial to always enable security and authentication features when setting up databases, so that your organization avoids this risk altogether.”
What do attackers do with exposed databases?
Bressers has been involved in the security of products and projects – especially open-source – for a very long time. In the past two decades, he created the product security division at Progeny Linux Systems and worked as a manager of the Red Hat product security team and headed the security strategy in Red Hat’s Platform Business Unit.
He now manages bug bounties, penetration testing and security vulnerability programs for Elastic’s products, as well as the company’s efforts to improve application security, add new and improve existing security features as needed or requested by customers.
The problem with exposed Elasticsearch (MariaDB, MongoDB, etc.) databases, he says, is that they are often left unsecured by developers by mistake and companies don’t discover the exposure quickly.
“The scanning tools do most of the work, so it’s up to the attacker to decide if the database has any data worth stealing,” he noted, and pointed out that this isn’t hacking, exactly – it’s mining of open services.
Attackers can quickly exfiltrate the accessible data, hold it for ransom, sell it to the highest bidder, modify it or simply delete it all.
“Sometimes there’s no clear advantage or motive. For example, this summer saw a string of cyberattacks, called the Meow Bot attacks, that have affected at least 25,000 databases so far. The attacker replaced the contents of every afflicted database with the word ‘meow’ but has not been identified, nor has the purpose of the attacks been revealed,” he explained.
Advice for organizations that use clustered databases
Open-source database platforms such as Elasticsearch have built-in security to prevent attacks of this nature, but developers often disable those features in haste or due to a lack of understanding that their actions can put customer data at risk, Bressers says.
“The most important thing to keep in mind when trying to secure data is having a clear understanding of what you are securing and what it means to your organization. How sensitive is the data? What level of security needs to be applied? Who should have access?” he explained.
“Sometimes working with a partner who is an expert at running a modern database is a more secure alternative than doing it yourself. Sometimes it’s not. Modern data management is a new problem for many organizations; make sure your people understand the opportunities and challenges. And most importantly, make sure they have the tools and training.”
Secondly, he says, companies should set up external scanning systems that continuously check for exposed databases.
“These may be the same tools used by attackers, but they immediately notify security teams when a developer has mistakenly left sensitive data unlocked. For example, a free scanner is available from Shadowserver.”
Elastic offers information and documentation on how to enable the security features of Elasticsearch databases and prevent exposure, he adds and points out that security is enabled by default in their Elasticsearch Service on Elastic Cloud and cannot be disabled.
Defense in depth
No organization will ever be 100% safe, but steps can be taken to decrease a company’s attack surface. “Defense in depth” is the name of the game, Bressers says, and in this case, it should include the following security layers:
- Discovery of data exposure (using the previously mentioned external scanning systems)
- Strong authentication (SSO or usernames/passwords)
- Prioritization of data access (e.g., HR may only need access to employee information and the accounting department may only need access to budget and tax data)
- Deployment of monitoring infrastructures and automated solutions that can quickly identify potential problems before they become emergencies, isolate infected databases, and flag to support and IT teams for next steps
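The “prioritization of data access” layer above can be illustrated with a toy role-to-dataset policy; the role and dataset names are invented for the example:

```python
# Invented roles and datasets: each team gets only what its job requires.
ACCESS_POLICY = {
    "hr":         {"employee_records"},
    "accounting": {"budgets", "tax_filings"},
    "security":   {"audit_logs"},
}

def can_access(role, dataset):
    """Deny by default: unknown roles and unlisted datasets get nothing."""
    return dataset in ACCESS_POLICY.get(role, set())

print(can_access("accounting", "budgets"))  # permitted: it's their data
print(can_access("hr", "tax_filings"))      # denied: least privilege
```

A production deployment would express this in the database’s own role-based access controls, but the deny-by-default shape is the same.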
He also advises organizations that don’t have the internal expertise to set security configurations and manage a clustered database to hire service providers that can handle data management and have a strong security portfolio, and to always have a mitigation plan in place and rehearse it with their IT and security teams, so that when something does happen, they can execute a swift and intentional response.
After five months in beta, the GitHub Code Scanning security feature has been made generally available to all users: for free for public repositories, as a paid option for private ones.
“So much of the world’s development happens on GitHub that security is not just an opportunity for us, but our responsibility. To secure software at scale, we need to make a base-level impact that can drive the most change; and that starts with the code,” Grey Baker, GitHub’s Senior Director of Product Management, told Help Net Security.
“Everything we’ve built previously was about responding to security incidents (dependency scanning, secret scanning, Dependabot) — reacting in real time, quickly. Our future state is about fundamentally preventing vulnerabilities from ever happening, by moving security into the core of the developer workflow.”
GitHub Code Scanning
The Code Scanning feature is powered by CodeQL, a powerful static analysis engine built by Semmle, which was acquired by GitHub in September 2019.
“We want developers to be able to use their tools of choice, for any of their projects on GitHub, all within the native GitHub experience they love. We’ve partnered with more than a dozen open source and commercial security vendors to date and we’ll continue to integrate code scanning with other third-party vendors through GitHub Actions and Apps,” Baker noted.
“The major value add here is that developers can work in, and stay within, the code development ecosystem to which they’re most accustomed while using their preferred scanning tools,” explained James Brotsos, Senior Solutions Engineer at Checkmarx.
“GitHub is an immensely popular resource for developers, so having something that ensures the security of code without hindering agility is critical. Our ability to automate SAST and SCA scans directly within GitHub repos simplifies workflows and removes tedious steps for the development cycle that can traditionally stand in the way of achieving DevSecOps.”
Checkmarx’s SCA (software composition analysis) helps developers discover and remediate vulnerabilities in the open source components included in their applications, prioritizing them based on severity. Checkmarx SAST (static application security testing) scans proprietary code bases – even uncompiled ones – to detect new and existing vulnerabilities.
“This is all done in an automated fashion, so as soon as a pull request takes place, a scan is triggered, and results are embedded directly into GitHub. Together, these integrations paint a holistic picture of the entire application’s security posture to ensure all potential gaps are accounted for,” Brotsos added.
Leon Juranic, CTO at DefenseCode, said that they are very excited by this initiative, as it provides access to security analysis for more than 50 million GitHub users.
“Having the security analysis results displayed as code scanning alerts in GitHub provides a convenient way to triage and prioritize fixes – a process that is usually cumbersome, requiring scrolling through many pages of exported reports, going back and forth between your code and the reported results, or reviewing them in dashboards provided by the security tool. The ease of use now means you can initiate scans, view, fix, and close alerts for potential vulnerabilities in your project’s code in an environment that is already familiar and where most of your other workflows are done,” he noted.
A week ago, GitHub also announced additional support for container scanning and standards and configuration scanning for infrastructure as code, with integration by 42Crunch, Accurics, Bridgecrew, Snyk, Aqua Security, and Anchore.
The benefits and future plans
“We expect code scanning to prevent thousands of vulnerabilities from ever existing, by catching them at code review time. We envisage a world with fewer software vulnerabilities because security review is an automated part of the developer workflow,” Baker explained.
“During the code scanning beta, developers fixed 72% of the security errors found by CodeQL and reported in the code scanning pull request experience. Achieving such a high fix rate is the result of years of research, as well as an integration that makes it easy to understand each result.”
Over 12,000 repositories tried code scanning during the beta, and another 7,000 have enabled it since it became generally available, he says. The reception has been very positive, with many users highlighting valuable security finds.
“We’ll continue to iterate and focus on feedback from the community, including around access control and permissions, which are of high priority to our users,” he concluded.
According to a recent study, only a minority of software developers actually work at software development companies. In other words, nowadays virtually every company builds software in some form or another.
As a professional in the field of information security, it is your task to protect information, assets, and technologies. Obviously, the software built by or for your company that is collecting, transporting, storing, processing, and finally acting upon your company’s data, is of high interest. Secure development practices should be enforced early on and security must be tested during the software’s entire lifetime.
Within the (ISC)² Common Body of Knowledge for CISSPs, software development security is listed as an individual domain. Several standards and practices covering security in the Software Development Lifecycle (SDLC) are available: ISO/IEC 27034:2011, ISO/IEC TR 15504, or NIST SP 800-64 Revision 2, to name a few.
All of the above ask for continuous assessment and control of artifacts on the source-code level, especially regarding coding standards and Common Weakness Enumerations (CWE), but only briefly mention static application security testing (SAST) as a possible way to address these issues. In the search for possible concrete tools, NIST provides SP 500-268 v1.1 “Source Code Security Analysis Tool Function Specification Version 1.1”.
In May 2019, NIST withdrew the aforementioned SP 800-64 Rev. 2, and NIST SP 500-268 was published over nine years ago. This seems symptomatic of an underlying issue: the standards cannot keep up with the rapid pace of development and change in the field.
A good example is the rise of the Rust programming language, which addresses a major source of security issues in the classically used C++ – namely, memory management. Major players in the field such as Microsoft and Google saw great advantages and announced that they would steer future development toward Rust. Yet while the standards note that some development languages are superior to others, neither the mechanisms used by Rust nor Rust itself is mentioned.
In the field of Static Code Analysis, the information in NIST SP 500-268 is not wrong, but the paper simply does not mention advances in the field.
Let us briefly discuss two aspects: First, the wide use of open source software gave us insight into a vast quantity of source code changes and the reasoning behind them (security, performance, style). On top of that, we have seen increasing capacities of CPU power to process this data, accompanied by algorithmic improvements. Nowadays, we have a large lake of training data available. To use our company as an example, in order to train our underlying model for C++ alone, we are scanning changes in over 200,000 open source projects with millions of files containing rich history.
Secondly, in the past decade, we’ve witnessed tremendous advances in machine learning. We see tools like GPT-3 and their applications in source code being discussed widely. Classically, static source code analysis was the domain of Symbolic AI—facts and rules applied to source code. The realm of source code is perfectly suited for this approach since software source code has a well-defined syntax and grammar. The downside is that these rules were developed by engineers, which limits the pace in which rules can be generated. The idea would be to automate the rule construction by using machine learning.
Recently, we see research in the field of machine learning being applied to source code. Again, let us use our company as an example: By using the vast amount of changes in open source, our system looks out for patterns connected to security. It presents possible rules to an engineer together with found cases in the training set—both known and fixed, as well as unknown.
Also, the system supports parameters in the rules. Possible values for these parameters are collected by the system automatically. As a practical example, taint analysis follows incoming data to its use inside of the application to make sure the data is sanitized before usage. The system automatically learns possible sources, sanitization, and sink functions.
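The source/sanitizer/sink model behind taint analysis can be sketched in a few lines. The function names below are hypothetical stand-ins for the parameter values such a system would learn automatically:

```python
# Hypothetical learned parameter sets for a taint rule; a system like the
# one described would populate these automatically from training data.
SOURCES = {"read_request_param"}    # functions that introduce untrusted data
SANITIZERS = {"escape_sql"}         # functions that clean the data
SINKS = {"run_query"}               # functions where unclean data is dangerous

def check_flow(call_chain):
    """Flag a flow that reaches a sink from a source without a sanitizer."""
    tainted = False
    for fn in call_chain:
        if fn in SOURCES:
            tainted = True
        elif fn in SANITIZERS:
            tainted = False
        elif fn in SINKS and tainted:
            return f"warning: unsanitized data reaches {fn}"
    return "ok"

print(check_flow(["read_request_param", "run_query"]))                # flagged
print(check_flow(["read_request_param", "escape_sql", "run_query"]))  # ok
```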
Back to the NIST Special Papers: With the withdrawal of SP 800-64 Rev 2, users were pointed to NIST SP 800-160 Vol 1 for the time being until a new, updated white paper is published. This was at the end of May 2019. The nature of these papers is to only describe high-level best practices, list some examples, and stay rather vague in concrete implementation. Yet, the documents are the basis for reviews and audits. Given the importance of the field, it seems as if a major component is missing. It is also time to think about processes that would help us to keep up with the pace of technology.
Over a year has passed since Nmap was last updated, but this weekend Gordon “Fyodor” Lyon announced Nmap 7.90.
Nmap is a widely used free and open-source network scanner.
The utility is used for network inventorying, port scanning, managing service upgrade schedules, monitoring host or service uptime, etc.
It works on most operating systems: Linux, Windows, macOS, Solaris, and BSD.
First and foremost, Nmap 7.90 comes with Npcap 1.0.0, the first completely stable version of the raw packet capturing/sending driver for Windows.
Prior to Npcap, Nmap used WinPcap, but that driver hadn’t been updated since 2013, didn’t always work on Windows 10, and depended on long-deprecated Windows APIs.
“While we created Npcap for Nmap, it turns out that many other projects and companies had the same need. Wireshark switched to Npcap with their big 3.0.0 release last February, and Microsoft publicly recommends Npcap for their Azure ATP (Advanced Threat Protection) product,” Lyon explained.
“We introduced the Npcap OEM program allowing companies to license Npcap OEM for use within their products or for company-internal use with commercial support and deployment automation. This project that was expected to be a drain on our resources (but worthwhile since it makes Nmap so much better) is now helping to fund the Nmap project. The Npcap OEM program has also helped ensure Npcap’s stability by deploying it on some of the fastest networks at some of the largest enterprises in the world.”
Nmap 7.90 also comes with:
- New fingerprints for better OS and service/version detection
- 3 new NSE scripts, new protocol libraries and payloads for host discovery, port scanning and version detection
- 70+ smaller bug fixes and improvements
- Build system upgrades and code quality improvements
“We also created a special ‘Nmap OEM Edition’ for the companies who license Nmap to handle host discovery within their products. We have been selling such licenses for more than 20 years and it’s about time OEMs have an installer more customized to their needs,” Lyon added.
Microsoft has released a new report outlining enterprise cyberattack trends in the past year (July 2019 – June 2020) and offering advice on how organizations can protect themselves.
Based on over 8 trillion daily security signals and observations from the company’s security and threat intelligence experts, the Microsoft Digital Defense Report 2020 draws a distinction between attacks mounted by cybercriminals and those by nation-state attackers.
The cybercrime threat
In the past year, cybercriminals:
- Were quick to exploit the fear and uncertainty associated with COVID-19 – as well as the popularity of some SaaS offerings and other services – as lures in phishing emails
- Exploited the lack of basic security hygiene and well-known vulnerabilities to gain access to enterprise systems and networks
- Exploited supply chain (in)security by hitting vulnerable third-party services, open source software and IoT devices and using them as a way into the target organization
More often than not, phishing emails impersonate a well-known service such as Office 365 (Microsoft), Zoom, Amazon or Apple, in an attempt to harvest login credentials.
“While credential phishing and BEC continue to be the dominant variations, we also see attacks on a user’s identity and credential being attempted via password reuse and password spray attacks using legacy email protocols such as IMAP and SMTP,” Microsoft noted.
The attackers’ reason for exploiting these legacy authentication protocols is simple: they don’t support multi-factor authentication (MFA). Microsoft advises enabling MFA and disabling legacy authentication.
Cybercriminals are also:
- Increasingly using cloud services and compromised email and web hosting infrastructures to orchestrate phishing campaigns
- Rapidly changing campaigns (sending domains, email addresses, content templates, and URL domains)
- Constantly changing and evolving payload delivery mechanisms (poisoned search results, custom 404 pages hosting phishing payloads, etc.)
One of the biggest and most disruptive cybercrime threats in the past year was ransomware – particularly “human-operated” ransomware wielded by gangs that target organizations they believe will part with big sums if affected.
These gangs sweep the internet for easy entry points or use commodity malware to gain access to company networks and change ransomware payloads and attack tools depending on the “terrain” they landed in (and to avoid attribution).
“Ransomware criminals are intimately familiar with systems management concepts and the struggles IT departments face. Attack patterns demonstrate that cybercriminals know when there will be change freezes, such as holidays, that will impact an organization’s ability to make changes (such as patching) to harden their networks,” Microsoft explained.
“They’re aware of when there are business needs that will make businesses more willing to pay ransoms than take downtime, such as during billing cycles in the health, finance, and legal industries. Targeting networks where critical work was needed during the COVID-19 pandemic, and also specifically attacking remote access devices during a time when unprecedented numbers of people were working remotely, are examples of this level of knowledge.”
Some of them have even shortened their in-network dwell time before deploying the ransomware, going from initial entry to ransoming the entire network in less than 45 minutes.
Gerrit Lansing, Field CTO, Stealthbits, commented that the speed at which a targeted ransomware attack can happen is really determined by one thing: how quickly an adversary can compromise administrative privileges in Microsoft Active Directory.
“Going from initial infiltration to total ownership of Active Directory can be a matter of seconds. Once these privileges are compromised, an adversary’s ability to deploy ransomware to all machines joined to Active Directory is unfettered, which explains how an adversary can go from initial infiltration to total ransomware infection in such a short period of time,” he noted.
Finally, to counter the threat of supply chain insecurity, Microsoft advises companies to:
- Vet their service providers thoroughly
- Use systems to automatically identify open source software components and vulnerabilities in them
- Map IoT assets, apply security policies to reduce the attack surface, use a separate network for IoT devices, and be familiar with all exposed interfaces
The company has been following and mapping the activities of a number of nation-state actors and has found that – based on the nation state notifications they deliver to their customers – the attackers’ primary targets are not in the critical infrastructure sectors.
Instead, the top targeted industry sectors are non-governmental organizations (advocacy groups, human rights organizations, nonprofit organizations, etc.) and professional services (consulting firms and contractors):
Microsoft found the most common attack techniques used by nation-state actors in the past year are reconnaissance, credential harvesting, malware, and VPN exploits. Web shell-based attacks are also on the rise.
The report delineates steps organizations can take to counter each of these threats as well as to improve their security and the security of their remote workforce.
“Given the leap in attack sophistication in the past year, it is more important than ever that we take steps to establish new rules of the road for cyberspace; that all organizations, whether government agencies or businesses, invest in people and technology to help stop attacks; and that people focus on the basics, including regular application of security updates, comprehensive backup policies, and, especially, enabling MFA. Our data shows that enabling MFA would alone have prevented the vast majority of successful attacks,” the Microsoft Security Team concluded.
Win-KeX provides a Kali Desktop Experience for Windows Subsystem for Linux (WSL 2), and version 2.0 comes with useful features.
Win-KeX 2.0 features
- Win-KeX SL (Seamless Edition) – no more borders
- Sound support
- Multi-session support
- KeX sessions can be run as root
- Able to launch “kex” from anywhere – no more cd-ing into the Kali filesystem required
- Shared clipboard – cut and paste content between Kali and Windows apps
Win-KeX now supports concurrent sessions:
- Win-KeX as unprivileged user
- Win-KeX as root user
- Win-KeX SL
Two dedicated modes
Win-KeX now supports two dedicated modes:
1. Win-KeX Window mode is the classic Win-KeX look and feel with one dedicated window for the Kali Linux desktop. To launch Win-KeX in Window mode with sound support, type:
kex --win -s
2. Win-KeX SL mode provides a seamless integration of Kali Linux into the Windows desktop, with the Windows Start menu at the bottom and the Kali panel at the top of the screen. All applications are launched in their own windows, sharing the same desktop as Windows applications. To launch Win-KeX in SL mode with sound support, type:
kex --sl -s
Microsoft has open-sourced OneFuzz, its own internal continuous developer-driven fuzzing platform, allowing developers around the world to receive fuzz testing results directly from their build system.
Fuzzing is an automated software testing technique that involves entering random, unexpected, malformed and/or invalid data into a computer program. The goal is to reveal exceptions (e.g., crashes, memory leaks, etc.) and unexpected behaviors that could affect the program’s security and performance.
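A minimal fuzzing loop illustrates the idea. The target function and its bug below are invented for the example; real fuzzers like those aggregated by OneFuzz add coverage tracking and smarter input generation on top of this basic pattern:

```python
import random
import string

def parse_age(text: str) -> int:
    """Toy target with a bug: crashes on any non-numeric input."""
    return int(text)  # raises ValueError on malformed data

def fuzz(target, iterations=1000, seed=0):
    """Feed random strings to the target and collect inputs that crash it."""
    rng = random.Random(seed)  # seeded for reproducible runs
    crashes = []
    for _ in range(iterations):
        data = "".join(rng.choice(string.printable)
                       for _ in range(rng.randint(0, 8)))
        try:
            target(data)
        except Exception as exc:
            crashes.append((data, type(exc).__name__))
    return crashes

found = fuzz(parse_age)
print(f"{len(found)} crashing inputs found")
```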
Azure-powered continuous developer-driven fuzzing
Project OneFuzz is an extensible, self-hosted Fuzzing-As-A-Service platform for Azure that aggregates several existing fuzzers and (through automation) bakes in crash detection, coverage tracking and input harnessing.
The tool is used by Microsoft’s internal teams to strengthen the security development of Windows, Microsoft Edge, and other software products.
“Traditionally, fuzz testing has been a double-edged sword for developers: mandated by the software-development lifecycle, highly effective in finding actionable flaws, yet very complicated to harness, execute, and extract information from,” Microsoft Security principal security software engineering lead Justin Campbell and senior director for special projects management Mike Walker noted.
“That complexity required dedicated security engineering teams to build and operate fuzz testing capabilities making it very useful but expensive. Enabling developers to perform fuzz testing shifts the discovery of vulnerabilities to earlier in the development lifecycle and simultaneously frees security engineering teams to pursue proactive work.”
The tool’s capabilities
As the two explained, OneFuzz will allow developers to launch fuzz jobs – ranging in size from a few virtual machines to thousands of cores – with a single command line baked into the build system.
The tool’s features include:
- Composable fuzzing workflows: Open source allows users to onboard their own fuzzers, swap instrumentation, and manage seed inputs.
- Built-in ensemble fuzzing: By default, fuzzers work as a team to share strengths, swapping inputs of interest between fuzzing technologies.
- Programmatic triage and result deduplication: It provides unique flaw cases that always reproduce.
- On-demand live-debugging of found crashes: It lets users summon a live debugging session on-demand or from their build system.
- Transparent design that allows introspection into every stage.
- Detailed telemetry: Easy monitoring of all fuzzing activity
- Multi-platform by design: Fuzzing can be performed on Windows and various Linux OSes, using one’s own OS build, kernel, or nested hypervisor
- Crash reporting notification callbacks: Currently supporting Azure DevOps Work Items and Microsoft Teams messages
- Code Coverage KPIs: Users can monitor their progress and motivate testing using code coverage as a key metric.
OneFuzz will be available to the rest of the world in a few days (via GitHub). Microsoft will continue to update and expand it with contributions from the company’s various teams, and welcomes contributions and suggestions from the wider open-source community.
COVID-19 has upended the way we do all things. In this interview, Mike Bursell, Chief Security Architect at Red Hat, shares his view of which IT security changes are ongoing and which changes enterprises should prepare for in the coming months and years.
How has the pandemic affected enterprise edge computing strategies? Has the massive shift to remote work created problems when it comes to scaling hybrid cloud environments?
The pandemic has caused major shifts in the ways we live and work, from video calls to increased use of streaming services, forcing businesses to embrace new ways to be flexible, scalable, efficient and cost-saving. It has also exposed weaknesses in the network architectures that underpin many companies, as they struggle to cope with remote working and increased traffic. We’re therefore seeing both an accelerated shift to edge computing, which takes place at or near the physical location of either the end-user or the data source, and further interest in hybrid cloud strategies which don’t require as much on-site staff time.
Changing your processes to make the most of this without damaging your security posture requires thought and, frankly, new policies and procedures. Get your legal and risk teams involved – but don’t forget your HR department. HR has a definite role to play in allowing your key employees to continue to do the job you need them to do, but in ways that are consonant with the new world we’re living in.
However, don’t assume that these will be – or should be! – short-term changes. If you can find more efficient or effective ways of managing your infrastructure, without compromising your risk profile while also satisfying new staff expectations, then everyone wins.
What would you say are the most significant challenges for enterprises that want to build secure and future-proof application infrastructures?
One challenge is that although some of the technology is now quite mature, the processes for managing it aren’t, yet. And by that I don’t just mean technical processes, but how you arrange your teams and culture to suit new ways of managing, deploying, and (critically) automating your infrastructure. Add to this new technologies such as confidential computing (using Trusted Execution Environments to protect data in use), and there is still a lot of change.
The best advice is to plan for change – technical, process and culture – but do not, whatever you do, leave security till last. It has to be front and centre of any plans you make. One concrete change that you can make immediately is taking your security people off just “fire-fighting duty”, where they have to react to crises as they come in: businesses can consider how to use them in a more proactive way.
People don’t scale, and there’s a global shortage of security experts. So, you need to use the ones that you have as effectively as you can, and, crucially, give them interesting work to do, if you plan to retain them. It’s almost guaranteed that there are ways to extend their security expertise into processes and automation which will benefit your broader teams. At the same time, you can allow those experts to start preparing for new issues that will arise, and investigating new technologies and methodologies which they can then reapply to business processes as they mature.
How has cloud-native management evolved in the last few years and what are the current security stumbling blocks?
One of the areas of both maturity and immaturity is in terms of workload isolation. We can think of three types: workload from workload isolation (preventing workloads from interfering with each other – type 1); host from workload isolation (preventing workloads from interfering with the host – type 2); workload from host isolation (preventing hosts from interfering with workloads – type 3).
The technologies for types 1 and 2 are really quite mature now, with containers and virtual machines combining a variety of hardware and software techniques such as virtualization, cgroups and SELinux. On the other hand, protecting workloads from malicious or compromised hosts is much more difficult, meaning that regulators – and sensible enterprises! – are unwilling to have some workloads execute in the public cloud.
Technologies like secure and measured boot, combined with TPM capabilities by projects such as Keylime (which is fully open source) are beginning to address this, and we can expect major improvement as confidential computing (and open source projects like Enarx which uses TEEs) matures.
In the past few years, we’ve seen a huge interest in Kubernetes deployments. What common mistakes are organizations making along the way? How can they be addressed?
One of the main mistakes we see businesses make is attempting to deploy Kubernetes without the appropriate level of in house expertise. Kubernetes is an ecosystem, rather than a one-off executable, that relies on other services provided by open source projects. It requires IT teams to fully understand the architecture that is made up of applications and network layers.
Once implemented, businesses must also maintain the ecosystem in parallel to any software running on top. When it comes to implementation, businesses are advised to follow open standards – those decided upon by the open source Kubernetes community as a whole, rather than a specific vendor. This will prevent teams from running into unexpected roadblocks, and helps to ensure a smooth learning curve for new team members.
Another mistake organizations can make is ignoring small but important details, such as the backwards compatibility of Kubernetes with older versions. It’s easy to overlook the fact that older versions may lack important security updates, so IT teams must be mindful when merging code across versions, and check regularly for available updates.
Open source remains one of the building blocks of enterprise IT. What’s your take on the future of open source code in large business networks?
Open source is here to stay, and that’s a good thing, not least for security. The more security experts there are to look at code, the more likely that bugs will be found and fixed. Of course, security experts are thin on the ground, and busy, so it’s important that large enterprises make a commitment to getting involved with open source and committing resources to it.
Another source of confusion is thinking that just because a project is open source, it’s ready to use. There’s a difference between an open source project and an enterprise product which is based on that project. In the latter case, you get all the benefits of testing, patching, upgrading, vulnerability processes, version management and support. In the former case, you need to manage everything yourself – including ensuring that you have sufficient expertise in house to cope with any issues that come up.
Columbia University researchers have released Crylogger, an open source dynamic analysis tool that shows which Android apps feature cryptographic vulnerabilities.
They also used it to test 1,780 popular Android apps from the Google Play Store, and the results were abysmal:
- All apps break at least one of the 26 crypto rules
- 1,775 apps use an unsafe pseudorandom number generator (PRNG)
- 1,764 apps use a broken hash function (SHA1, MD2, MD5, etc.)
- 1,076 apps use the CBC operation mode (which is vulnerable to padding oracle attacks in client-server scenarios)
- 820 apps use a static symmetric encryption key (hardcoded)
Each of the tested apps, with an instrumented crypto library, was run in Crylogger, which logs the parameters passed to the crypto APIs during execution and then checks their legitimacy offline against a list of crypto rules.
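To illustrate the offline checking step, here is a toy version of that rule-matching in Python. The log format and rules below are simplified assumptions modeled on the violations the researchers describe, not Crylogger's actual implementation:

```python
# Hash algorithms the researchers flag as broken.
BROKEN_HASHES = {"MD2", "MD5", "SHA1"}

def check_call(entry):
    """Apply simplified crypto rules to one logged API call (a dict)."""
    issues = []
    if entry.get("api") == "MessageDigest" and entry.get("algorithm") in BROKEN_HASHES:
        issues.append("broken hash function")
    if "CBC" in entry.get("transformation", ""):
        issues.append("CBC mode (padding-oracle risk in client-server use)")
    if entry.get("key_source") == "constant":
        issues.append("static (hardcoded) encryption key")
    return issues

# Hypothetical execution log of crypto API calls captured at runtime.
log = [
    {"api": "MessageDigest", "algorithm": "MD5"},
    {"api": "Cipher", "transformation": "AES/CBC/PKCS5Padding", "key_source": "constant"},
    {"api": "Cipher", "transformation": "AES/GCM/NoPadding", "key_source": "keystore"},
]
for entry in log:
    print(entry, "->", check_call(entry) or "ok")
```

Because the checks run on parameters actually observed at runtime, this dynamic approach catches misuses that only occur on executed code paths.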
“Cryptographic (crypto) algorithms are the essential ingredients of all secure systems: crypto hash functions and encryption algorithms, for example, can guarantee properties such as integrity and confidentiality,” the researchers explained.
“A crypto misuse is an invocation to a crypto API that does not respect common security guidelines, such as those suggested by cryptographers or organizations like NIST and IETF.”
To confirm that the cryptographic vulnerabilities flagged by Crylogger can actually be exploited, the researchers manually reverse-engineered 28 of the tested apps and found that 14 of them are vulnerable to attacks (even though some issues may be considered out-of-scope by developers because they require privilege escalation for effective exploitation).
Comparing the results of Crylogger (a dynamic analysis tool) with those of CryptoGuard (an open source static analysis tool for detecting crypto misuses in Java-based applications) when testing 150 apps, the researchers found that the former flags some issues that the latter misses, and vice versa.
The best thing for developers would be to test their applications with both before they offer them for download, the researchers noted. Also, Crylogger can be used to check apps submitted to app stores.
“Using a dynamic tool on a large number of apps is hard, but Crylogger can refine the misuses identified with static analysis because, typically, many of them are false positives that cannot be discarded manually on such a large number of apps,” they concluded.
As noted at the beginning of this piece, too many apps break too many cryptographic rules. What’s more, too many app and library developers are choosing to effectively ignore these problems.
The researchers emailed 306 developers of Android apps that violate 9 or more of the crypto rules: only 18 developers answered back, and only 8 of them continued to communicate after that first email and provided useful feedback on their findings. They also contacted 6 developers of popular Android libraries and received answers from 2 of them.
The researchers chose not to reveal the names of the vulnerable apps and libraries because they fear that information would benefit attackers, but they shared enough to show that these issues affect all types of apps: from media streaming and newspaper apps, to file and password managers, authentication apps, messaging apps, and so on.
GuidePoint Security released a new open source tool that enables a red team to easily build out the necessary infrastructure.
The RedCommander tool solves a major challenge for red teams around the installation and operationalization of infrastructure by combining automation scripts and other tools into a deployable package.
RedCommander is a series of Ansible Playbooks that automate the tedious tasks required to stand up covert command and control channels during a red team exercise. This open source tool is intended to be a stepping stone for more advanced configurations during red team assessments.
Once an operator spins up several servers and configures redirectors, they can leverage RedCommander to modify and monitor their command and control servers for blue team investigations by way of RedELK. The result provides the operator with a full-spectrum overview of a Red Team exercise while simultaneously centralizing logs for Indicators of Compromise (IOC) analysis.
“Exercising defensive responses is a crucial security practice for any organization,” says Alex Williams, the creator of RedCommander and a senior consultant in the GuidePoint Security Threat & Attack Simulation practice.
“RedCommander makes it easier for red teams to deploy their infrastructure in a more customized fashion, giving them a true infrastructure for success.”
GrammaTech has released Swap Detector, an open source tool that enables developers and DevOps teams to identify errors due to swapped function arguments, which can also be present in deployed code.
Modern software development involves the use of third-party APIs, libraries, and/or frameworks that are complex, rapidly evolving, and sometimes poorly documented. According to industry estimates, open source components can represent up to 90% of the code in the average application. Meanwhile, API usage errors are a common source of security and reliability vulnerabilities.
“Traditional static-analysis techniques do not take advantage of the vast wealth of information on what represents error-free coding practices available in the open-source domain,” says Alexey Loginov, VP of Research at GrammaTech. “With Swap Detector we applied Big Data analysis techniques, what we call Big Code analysis, to the Fedora RPM open-source repository to baseline correct API usage. This allowed us to develop error-detection capabilities that far exceed the scalability and accuracy of conventional approaches to program analysis.”
Swap Detector consumes input information about a call site, and optionally, function declaration information pertaining to that call site. If it detects a potential swapped-argument error at that call site, it outputs an appropriate warning message and a score for the warning.
The Swap Detector interface integrates with a variety of static analysis tools, such as Clang Static Analyzer, Clang-Tidy, and PyLint. Although initially focused on C/C++ programs, Swap Detector is applicable to programs in other languages, and is especially beneficial for languages that are interpreted rather than compiled.
Swap Detector uses multiple error-detection techniques, layered together to increase accuracy. For example, it compares argument names used in call sites with the parameter names used in corresponding declarations. In addition, it uses “Big Code” techniques, applying statistical information about usages of “known good” API-usage patterns collected from a large corpus of code, and flagging usages that are statistically anomalous as potential errors.
To improve the precision of the reported warnings, Swap Detector applies false-positive reduction strategies to the output of both techniques.
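The argument-name comparison technique can be sketched as follows; this is an illustrative heuristic in Python, not GrammaTech's actual algorithm or scoring:

```python
import difflib

def swap_score(arg_names, param_names):
    """Heuristic swap check: if swapping two adjacent call-site argument
    names makes them match the declared parameter names better, flag the
    pair. Returns a list of suspect (i, i+1) index pairs."""
    def similarity(a, b):
        return difflib.SequenceMatcher(None, a, b).ratio()
    suspects = []
    for i in range(len(arg_names) - 1):
        current = (similarity(arg_names[i], param_names[i])
                   + similarity(arg_names[i + 1], param_names[i + 1]))
        swapped = (similarity(arg_names[i], param_names[i + 1])
                   + similarity(arg_names[i + 1], param_names[i]))
        if swapped > current:
            suspects.append((i, i + 1))
    return suspects

# Call site copy(src, dst, n) checked against a declaration copy(dest, src, n):
print(swap_score(["src", "dst", "n"], ["dest", "src", "n"]))  # prints [(0, 1)]
```

A real tool would combine a score like this with the statistical "known good" usage data described above to suppress false positives.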
Kali Linux 2020.3 changes
New features include:
- Kali NetHunter – Kali’s mobile pentesting platform/app – has been augmented with Bluetooth Arsenal, which combines a set of Bluetooth tools in the app with pre-configured workflows and use cases. “You can use your external adapter for reconnaissance, spoofing, listening to and injecting audio into various devices, including speakers, headsets, watches, or even cars,” Offensive Security explained
- Kali NetHunter now also supports Nokia 3.1 and Nokia 6.1 phones
- The team has pre-generated 19 ARM images (“alternate flavors” of Kali for different ARM hardware) but has also refreshed build-scripts for ARM devices, so that users can quickly self-generate images for those devices (39 in total)
- Win-KeX (Windows + Kali Desktop EXperience) provides a persistent-session GUI
There’s also some visual changes/upgrades:
- The design of Kali’s GNOME desktop environment has been improved
- There are new themed icons for tools
- Improved support for HiDPI (High Dots Per Inch) displays
A new default shell in the offing
Last but not least, one big announcement: the company aims to switch bash (aka “Bourne-Again SHell”) with ZSH as Kali’s default shell.
ZSH is based on the same shell as bash, but has additional features and support for plugins and themes.
The switch is scheduled to happen in the next iteration of the distro. In the meantime, users are urged to try it out and offer feedback.
“We hope we have the right balance of design and functionality, but we know these typically don’t get done perfectly the first time. And, we don’t want to overload the default shell with too many features, as lower powered devices will then struggle, or it may be hard on the eyes to read,” the company explained.
“We will be doing extensive testing during this next cycle so we reserve the right to delay the default change, or change direction all together.”
Wondering about the future of Kali Linux? Read our interview with Jim O’Gorman, Chief Content and Strategy officer at Offensive Security and leader of the Kali team.
There has been a massive 430% surge in next generation cyber attacks aimed at actively infiltrating open source software supply chains, Sonatype has found.
Rise of next-gen software supply chain attacks
According to the report, 929 next generation software supply chain attacks were recorded from July 2019 through May 2020. By comparison 216 such attacks were recorded in the four years between February 2015 and June 2019.
The difference between “next generation” and “legacy” software supply chain attacks is simple but important: next generation attacks like Octopus Scanner and electron-native-notify are strategic and involve bad actors intentionally targeting and surreptitiously compromising “upstream” open source projects so they can subsequently exploit vulnerabilities when they inevitably flow “downstream” into the wild.
Conversely, legacy software supply chain attacks like Equifax are tactical and involve bad actors waiting for new zero day vulnerabilities to be publicly disclosed and then racing to take advantage in the wild before others can remediate.
“Our research shows that commercial engineering teams are getting faster in their ability to respond to new zero day vulnerabilities. Therefore, it should come as no surprise that next generation supply chain attacks have increased 430% as adversaries are shifting their activities ‘upstream’ where they can infect a single open source component that has the potential to be distributed ‘downstream’ where it can be strategically and covertly exploited.”
Speed remains critical when responding to legacy software supply chain attacks
According to the report, enterprise software development teams differ in their response times to vulnerabilities in open source software components:
- 47% of organizations became aware of new open source vulnerabilities after a week, and
- 51% of organizations took more than a week to remediate the open source vulnerabilities
The researchers also discovered that prioritizing improved risk management practices need not come at the expense of developer productivity. This year’s report reveals that high performing development teams are 26x faster at detecting and remediating open source vulnerabilities, and deploy changes to code 15x more frequently than their peers.
High performers are also:
- 59% more likely to be using automated software composition analysis (SCA) to detect and remediate known vulnerable OSS components across the SDLC
- 51% more likely to centrally maintain a software bill of materials (SBOMs) for applications
- 4.9x more likely to successfully update dependencies and fix vulnerabilities without breakage
- 33x more likely to be confident that OSS dependencies are secure (i.e., no known vulnerabilities)
Other findings from the report include:
- 1.5 trillion component download requests projected in 2020 across all major open source ecosystems
- 10% of Java OSS component downloads by developers had known security vulnerabilities
- 11% of open source components developers build into their applications are known to be vulnerable, with 38 vulnerabilities discovered on average
- 40% of npm packages contain dependencies with known vulnerabilities
- New open source zero-day vulnerabilities are exploited in the wild within 3 days of public disclosure
- The average enterprise sources code from 3,500 OSS projects including over 11,000 component releases.
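To illustrate what automated software composition analysis (SCA) does at its core, the sketch below checks a project’s pinned dependencies against a local advisory list. The package names, versions, and advisories are invented for the example; a real SCA tool would query a live vulnerability database such as the ones these reports draw on.

```python
# Minimal SCA sketch: flag pinned dependencies that match a known
# advisory. All package names, versions, and advisories below are
# made up for illustration; real tools query live vulnerability feeds.

ADVISORIES = {
    # package name -> versions with known vulnerabilities (hypothetical)
    "examplelib": {"1.0.0", "1.0.1"},
    "otherlib": {"2.3.0"},
}

def parse_requirements(lines):
    """Parse simple 'name==version' pins, ignoring blanks and comments."""
    deps = {}
    for line in lines:
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        name, _, version = line.partition("==")
        deps[name] = version
    return deps

def find_vulnerable(deps):
    """Return the subset of dependencies matching a known advisory."""
    return {
        name: version
        for name, version in deps.items()
        if version in ADVISORIES.get(name, set())
    }

requirements = [
    "examplelib==1.0.1",  # matches an advisory -> flagged
    "otherlib==2.4.0",    # patched version -> not flagged
]
print(find_vulnerable(parse_requirements(requirements)))
# {'examplelib': '1.0.1'}
```

The high-performer practices above amount to running a check like this automatically across the SDLC, against continuously updated advisory data rather than a static list.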
“We found that high performers are able to simultaneously achieve security and productivity objectives,” said Gene Kim, DevOps researcher and author of The Unicorn Project. “It’s fantastic to gain a better understanding of the principles and practices of how this is achieved, as well as their measurable outcomes.”
“It was really exciting to find so much evidence that this much-discussed tradeoff between security and productivity is really a false dichotomy. With the right culture, workflow, and tools development teams can achieve great security and compliance outcomes together with class-leading productivity,” said Dr. Stephen Magill, Principal Scientist at Galois & CEO of MuseDev.
Guardicore unveiled new capabilities for Infection Monkey, its free, open source breach and attack simulation (BAS) tool that maps to the MITRE ATT&CK knowledge base and tests network adherence to the Forrester Zero Trust framework.
Infection Monkey is a self-propagating testing tool that hundreds of information technology teams from across the world use to test network adherence to the zero trust framework, and find weaknesses in their on-premises and cloud-based data centers.
Over the past four years, Infection Monkey has gained significant momentum and popularity among the cybersecurity community. With more than 3,200 stars on GitHub, Infection Monkey is trusted by large enterprises, educational institutions, and more, and has garnered praise from distinguished industry analysts such as Dr. Chase Cunningham, principal analyst at Forrester.
“In cyberspace and in cyber warfare exploitation, attacks succeed because they locate and leverage the weak points in systems and networks. In order to defend from this type of attack cycle, it is necessary to continually test the system for those likely weak points. But this can be difficult, especially when dealing with large infrastructures that are bridged between cloud, non-cloud, on premises, off premises, and a wide variety of other potential configurations. Infection Monkey is one of the most well-aligned tools that fits this need. I’m a huge fan,” writes Cunningham in Cyber Warfare – Truth, Tactics, and Strategies.
Expanded MITRE ATT&CK techniques and reporting
Cybersecurity experts and enterprise DevSecOps teams continue to rely on the MITRE-developed ATT&CK framework as the foundation for network security tests and assessments.
Infection Monkey 1.9.0 now offers a total of 32 MITRE ATT&CK techniques available for testing. These new attack techniques enable cybersecurity professionals to exhaustively test their network like never before while also empowering them to easily communicate steps towards actionable remediation with all relevant stakeholders, from IT to the C-suite.
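As a rough illustration of ATT&CK-aligned reporting (this is not Infection Monkey’s actual code or output format), a simulation finding can be tagged with the technique ID it exercised so that results are communicated in MITRE’s shared vocabulary. The findings and statuses below are invented for the example; the technique IDs themselves are real ATT&CK identifiers.

```python
# Illustrative only: tag simulated attack findings with MITRE ATT&CK
# technique IDs so a report can group results by technique. Findings
# and statuses are invented; the IDs are real ATT&CK technique IDs.
from collections import defaultdict

findings = [
    {"technique": "T1110", "name": "Brute Force", "status": "used"},
    {"technique": "T1021", "name": "Remote Services", "status": "used"},
    {"technique": "T1068", "name": "Exploitation for Privilege Escalation",
     "status": "unused"},
]

def group_by_status(findings):
    """Group technique IDs by whether the simulation exercised them."""
    report = defaultdict(list)
    for finding in findings:
        report[finding["status"]].append(finding["technique"])
    return dict(report)

print(group_by_status(findings))
# {'used': ['T1110', 'T1021'], 'unused': ['T1068']}
```

Grouping results by a common framework like this is what lets findings travel from IT teams to the C-suite without losing meaning.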
As the cybersecurity skills gap continues to widen and IT teams find themselves short-staffed, Infection Monkey 1.9.0 includes several interface improvements that ensure the tool can be implemented easily, and deliver value immediately, without additional staff or training.
Infection Monkey’s user interface has been significantly upgraded for configuration, making it easier than ever to set up a variety of different test scenarios on the network. In addition, Infection Monkey 1.9.0 now runs more stealthily to avoid interruptions in attack simulations, and improve coverage rates.
Secure by default
Guardicore is committed to ensuring that Infection Monkey offers the highest standards of quality and safety as a tool. Because it is deployed in enterprise production data centers and cloud environments, delivering a secure and stable tool is a top priority.
Therefore, Infection Monkey 1.9.0 now requires a secure login by default and has also been verified as secure by Snyk, a security firm that continuously scans for vulnerabilities in software dependencies. Users can be confident that the tool is safe and secure for deployment.
“Our mission with Infection Monkey is to equip cybersecurity professionals with a valuable open source tool that helps improve their security posture against cybercriminals,” said Shay Nehmad, Team Lead and Open Source Software Developer, Guardicore. “With this new version, we have made it easier than ever to use the tool’s sophisticated features.”