Over a year has passed since Nmap was last updated, but this weekend Gordon “Fyodor” Lyon announced Nmap 7.90.
Nmap is a widely used free and open-source network scanner.
The utility is used for network inventory, port scanning, managing service upgrade schedules, monitoring host or service uptime, and more.
It works on most operating systems: Linux, Windows, macOS, Solaris, and BSD.
First and foremost, Nmap 7.90 comes with Npcap 1.0.0, the first completely stable version of the raw packet capturing/sending driver for Windows.
Prior to Npcap, Nmap used WinPcap, but that driver hadn’t been updated since 2013, didn’t always work on Windows 10, and depended on long-deprecated Windows APIs.
“While we created Npcap for Nmap, it turns out that many other projects and companies had the same need. Wireshark switched to Npcap with their big 3.0.0 release last February, and Microsoft publicly recommends Npcap for their Azure ATP (Advanced Threat Protection) product,” Lyon explained.
“We introduced the Npcap OEM program allowing companies to license Npcap OEM for use within their products or for company-internal use with commercial support and deployment automation. This project that was expected to be a drain on our resources (but worthwhile since it makes Nmap so much better) is now helping to fund the Nmap project. The Npcap OEM program has also helped ensure Npcap’s stability by deploying it on some of the fastest networks at some of the largest enterprises in the world.”
Nmap 7.90 also comes with:
- New fingerprints for better OS and service/version detection
- 3 new NSE scripts, new protocol libraries and payloads for host discovery, port scanning and version detection
- 70+ smaller bug fixes and improvements
- Build system upgrades and code quality improvements
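Nmap’s scanning engine is far more sophisticated than this, but the core idea behind the simplest technique it supports, a TCP connect port scan, can be sketched in a few lines of Python (the function name and parameters here are invented for illustration):

```python
import socket

def tcp_connect_scan(host, ports, timeout=0.5):
    """Report which of the given TCP ports accept a connection.

    This mimics only a full TCP connect scan; Nmap also supports
    SYN scans, UDP scans, OS fingerprinting, NSE scripting, etc.
    """
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            # connect_ex returns 0 on success instead of raising
            if s.connect_ex((host, port)) == 0:
                open_ports.append(port)
    return open_ports
```

Needless to say, scans like this should only ever be run against hosts you are authorized to probe.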
“We also created a special ‘Nmap OEM Edition’ for the companies who license Nmap to handle host discovery within their products. We have been selling such licenses for more than 20 years and it’s about time OEM’s have an installer more customized to their needs,” Lyon added.
Since Edward Snowden’s revelations of sweeping internet surveillance by the NSA, the push to encrypt the web has been unrelenting.
Bolstered by Google’s various initiatives (e.g., prioritizing websites that use encryption in Google Search results, making Chrome mark HTTP sites as “not secure,” and tracking worldwide HTTPS usage), CloudFlare’s Universal SSL offering and the advent of Let’s Encrypt, the push has paid off: nearly seven years later, various sources put the share of encrypted internet traffic between 80% and 90% across all platforms.
That’s good news for end users who wish their interactions with various websites to be safe from eavesdropping by third parties – whether they be hackers, companies or governments.
But with the sweet comes the sour: criminals are exploiting users’ erroneous belief that a site with HTTPS in its URL can be considered completely safe to trick them into trusting phishing sites.
According to SophosLabs, nearly one-third of malware and unwanted applications enter the enterprise network through TLS-encrypted flows.
Also, nearly a quarter of malware now communicates over HTTPS connections, making it more difficult for businesses to spot active infections within their networks, especially because – a recent survey has revealed – only 3.5% of organizations are actually decrypting their network traffic to properly inspect it.
Why so few? What’s stopping them? The number one reason is concern about firewall performance, but respondents also cite privacy concerns, degraded user experience (websites not loading properly) and complexity as important factors in their decision not to do it.
Covert malicious activity
Malware that communicates via TLS-secured connections includes well-known and nasty malware families like TrickBot, IcedID and Dridex.
The use of transport-layer encryption is just one of the methods for keeping the malware’s presence on compromised systems secret, but it also helps the malware covertly download additional modules and configuration files and send the collected data to an outside server.
“We’ve also observed that, increasingly, more malicious functions are being orchestrated from the command and control server, rather than implemented in the malware binary, and the C2s make decisions about what the malware should do next based on the exfiltrated data, which increases the volume of network traffic,” Sophos researcher Luca Nagy pointed out.
“Malware authors also want to empower their binaries with newer features and refresh them more often, which also increases the need for secure network communication, to prevent network-level protection tools from discovering an active infection inside the network every time it downloads an updated version of itself.”
Performance before protection? It doesn’t have to be
Some respondents in the previously mentioned survey were also unaware of the need to decrypt network traffic, even though it’s (or should be) common knowledge that malware often uses encrypted connections for communication.
Connections to “safe” destinations like financial websites may, perhaps, be exempted from inspection, but most other encrypted traffic coming in and going out of the corporate network should be decrypted and analyzed.
The problem with this is that many firewall offerings are not up to the task of inspecting a huge volume of encrypted sessions without causing applications to break or degrade network performance.
Not all, though: Sophos’ XG Firewall, with its new “Xstream” architecture, was designed from the ground up with performance in mind, allowing users to decrypt and inspect all traffic at close to wire speed.
A new firewall for your traffic decryption needs
“With Sophos XG Firewall, IT managers can immediately deploy TLS inspection without concerns over performance or breaking incompatible devices on the network, and they can turn it on for different parts of the network with flexible policy setting options,” Dan Schiappa, chief product officer at Sophos, told Help Net Security.
“We’ve created the ability to inspect all TLS traffic across all protocols and ports, eliminating enormous security blind spots. Sophos XG Firewall scans all TLS encrypted traffic – not just web traffic. This is important because criminals are constantly trying to avoid attention and use non-standard communication ports to evade detection.”
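Spotting TLS on non-standard ports is possible because a TLS session opens with a well-defined record header, regardless of the TCP port it travels over. A simplified sketch of that idea in Python (real DPI engines parse far more of the handshake than this heuristic does):

```python
def looks_like_tls_client_hello(data: bytes) -> bool:
    """Heuristic check for the start of a TLS handshake.

    A TLS record begins with content type 0x16 (handshake), a 2-byte
    record version (0x03 0x01 through 0x03 0x04 in practice), and a
    2-byte length. The first handshake message a client sends on a
    new connection should be a ClientHello (handshake type 0x01).
    """
    if len(data) < 6:
        return False
    content_type, ver_major, ver_minor = data[0], data[1], data[2]
    handshake_type = data[5]
    return (content_type == 0x16
            and ver_major == 0x03
            and 0x01 <= ver_minor <= 0x04
            and handshake_type == 0x01)
```

A check like this fires whether the connection targets port 443 or some arbitrary high port, which is exactly why inspecting only “web” ports leaves blind spots.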
Other new features include support for TLS 1.3 (which many other solutions don’t have); FastPath policy controls that accelerate performance of SD-WAN applications and traffic, including Voice over IP, SaaS and others, to up to wire speed; and an enhanced Deep Packet Inspection (DPI) engine that dynamically risk-assesses traffic streams and matches them to the appropriate threat scanning level.
Schiappa also said that they’ve wired data science and threat intel much deeper than ever before: AI-enhanced threat intelligence from SophosLabs provides insights needed to understand and adjust defenses to protect against a constantly changing threat landscape.
Finally, user-friendliness should not be discounted: Sophos XG Firewall is simple to use and manage on a single cloud-based platform – Sophos Central – where organizations can easily layer and manage multiple firewalls as well as synchronize their security applications.
Group-IB is a known quantity in the information security arena: in the sixteen years since its inception, the company – now headquartered in Singapore – has detected and detailed many high-profile threats, performed over a thousand successful investigations across the globe and gained widespread recognition for helping private and public entities and law enforcement worldwide track down and prosecute cybercriminals.
To be able to do that, it has been steadily building an international infrastructure for threat detection, hunting and investigating cybercrime around the world. This infrastructure includes, among other things:
- The largest computer forensics laboratory in Eastern Europe
- An early warning system for proactive cyber defense based on their own threat intelligence, attribution and incident response practices
- A certified emergency response service (CERT-GIB), which is a member of the Forum of Incident Response and Security Teams (FIRST) and Trusted Introducer
- Databases containing extensive threat and threat actor information
The company was, at the beginning, mostly a provider of digital forensics and cyber investigation services. In time, though, they realized that the solutions available to organizations were not keeping pace with the ever-morphing threat landscape, so they decided to work on and offer their own.
It all started with the creation of Group-IB Threat Intelligence (TI), an attack attribution and prediction system and service based on data collected from a wide variety of sources (investigations, network sensors, honeypots, OSINT, card shops, and much more) and on automated information extraction and correlation technologies, supported by expert analysts, incident responders and investigators around the world.
It was followed by:
- Group-IB Threat Detection System (TDS) – A threat-actor-centric (instead of malware-centric) detection and proactive threat hunting solution
- Secure Bank – A fraud and attack prevention solution for the financial services industry, which detects threats like account takeovers, credit fraud, malicious web injections, banking trojans, remote access software, social engineering, etc. (keeps more than 100 million banking customers secure by monitoring 16 million online banking sessions every day)
- Secure Portal – A fraud and attack prevention solution for ecommerce websites and online services (prevents account takeovers, identifies fake accounts and blocks bots, fraudulent activities, fraudulent ticket sales, and so on)
- Brand Protection – A service designed to detect and eliminate threats to one’s brand on the Internet (brand abuse, Internet fraud, copyright infringement, counterfeiting)
- Anti-Piracy – intelligence-driven protection of content online
Most of these solutions are powered by Group-IB TI. More recently, though, they gained another thing in common: an integrated Graph Network Analysis system for cybercrime investigations, threat attribution, and detection of phishing and fraud.
Graph Network Analysis
Many threat intelligence solutions have graph-making capabilities, and the company considered a number of graph network analysis providers before finally deciding to develop its own tool for mapping adversary infrastructure, Group-IB CTO and Head of Threat Intelligence Dmitry Volkov told Help Net Security.
None of the considered solutions gathered and used the wide variety of current and historic data Group-IB experts deem crucial for creating a complete picture. None of them offered automated graph creation or could reliably identify and exclude irrelevant results. Finally, none allowed operators to specify the ownership timeframe of the entered suspicious domain, IP address, email or SSL certificate fingerprint.
“Domain name and IP addresses change ownership – today they are used by a threat actor, tomorrow by a legitimate company or a random individual, so the timeframe within which the threat actor owned the suspicious domain name or IP address is very important information for the creation of a relevant and accurate graph,” Volkov explained.
The interface of the graph network analysis tool
The user decides how wide to cast the net by specifying the number of steps the tool should take when identifying direct links between elements; in automated mode, the tool builds the graph of links to the searched element on its own. And if the “refine” option is switched on, it automatically removes from the resulting graph all the elements it deems irrelevant.
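The step-limited expansion described above is essentially a bounded breadth-first walk over infrastructure links. A toy sketch in Python, with a deliberately crude “refine” heuristic (dropping heavily shared nodes, such as shared-hosting IPs) standing in for Group-IB’s proprietary relevance logic:

```python
from collections import deque

def expand(links, seed, steps, refine=False, max_degree=3):
    """Bounded BFS from `seed` over an adjacency dict.

    `links` maps an indicator (domain, IP, email, cert fingerprint)
    to the indicators directly connected to it. With refine=True,
    nodes linked from more than `max_degree` places are treated as
    shared infrastructure and excluded -- a crude stand-in for the
    real tool's relevance filtering.
    """
    graph = {seed}
    frontier = deque([(seed, 0)])
    while frontier:
        node, depth = frontier.popleft()
        if depth == steps:
            continue  # reached the step limit along this path
        for neigh in links.get(node, ()):
            if refine and len(links.get(neigh, ())) > max_degree:
                continue  # likely shared, not actor-specific
            if neigh not in graph:
                graph.add(neigh)
                frontier.append((neigh, depth + 1))
    return graph
```

One step surfaces only direct links; each additional step pulls in another ring of connected infrastructure, which is why the tool lets the operator control the radius.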
The graph network analysis tool attributing the search element to a specific threat actor
Analysts and investigators who don’t trust the tool to create a graph that contains all the crucial elements can always turn “refine” off and specify one step to build the graph themselves and then remove irrelevant elements from it.
Though, Volkov pointed out, after performing numerous manual checks and consistently seeing that the tool did a great job when allowed to do it automatically, their own experts have come to trust and prefer that option.
Improving graph accuracy
“The initial goal was just to create a useful tool for our internal analysts, and we didn’t plan to incorporate it in our products. But some of our clients saw how we were using it to do our research in-house and wanted to be able to do the same, so we decided to share it,” Volkov shared.
The company’s developers and experts have been working on the Graph Network Analysis tool for the past few years. The first version was good, but very slow. In time, they managed to improve both the speed and the effectiveness by experimenting with different types of data and different approaches to data enrichment, processing and correlation.
There are still two versions of the tool: a standalone one that’s used by Group-IB’s experts and one that’s incorporated in the company’s products. New features are first added and tested on the former, then incorporated in the latter if they prove useful.
Group-IB is constantly working on enriching the tool with data and designing new algorithms using machine learning to improve the graph’s accuracy.
“All of Group-IB’s products are being constantly fine-tuned thanks to the permanent monitoring of the cyberspace for new threats and our incident response operations and cyber investigations,” Volkov pointed out. “And we’re always analyzing existing solutions on the market, pinpointing their weak spots and shortcomings, thinking of ways to eliminate them and striving to provide the best technologies to our customers.”
The tool’s capabilities
Mapping adversary infrastructure and (hopefully) identifying the threat actor has many advantages for the targeted organization and its customers, but also for other organizations, their customers and, in general, the wider populace.
“The main goal of network graph analysis is to track down projects that cybercriminals carried out in the past — legal and illegal projects that bear similarities, links in their infrastructure, and connections to the infrastructure involved in the incident being investigated,” Volkov explained.
If the users are very lucky and a cybercriminal’s legal project is detected, discovering their real identity becomes simple. If only illegal projects are detected, that goal becomes more difficult to achieve.
But even if the identity of the attacker remains elusive, discovering details about their previous attacks can help pinpoint their preferred tactics, techniques, procedures, tools and malware, and that information can be handy for disrupting ongoing attacks or even preventing those that are yet to be launched (e.g., by identifying attacker infrastructure at the preparation stage).
The tool can be leveraged by SOC/CERT analysts, threat hunters, threat intelligence analysts and digital forensic specialists, and it’s great for improving the speed of incident response, fast cybercrime investigations, proactive phishing and global threat hunting, and pinpointing malicious servers hidden behind proxy services.
It’s also used for IoC enrichment and event correlation (i.e., discovering when certain attacks are linked and are likely different stages of a single multiphase attack).
Group-IB Graph Network Analysis was designed based on indicators of compromise discovered and collected by the company’s cybercrime investigators, incident responders and malware analysts in the last 16 years.
To this core, many other data sets have been added or made available through data-sharing agreements and subscriptions, containing:
- Domain registration data
- DNS records (domain records, files, profiles, tags)
- Service banners (domains, redirections, error codes)
- Service fingerprints on IP addresses (which services are running and which ports are open)
- Hidden registration data (IDs, hosting providers)
- Historic registration data and that related to hosting transfers
- SSL certificate registration data
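Service banners like those in the data sets above are typically collected by connecting to a port and reading whatever the service announces about itself. A bare-bones sketch in Python (real collection pipelines speak many protocols and handle TLS, retries and rate limits far more carefully):

```python
import socket

def grab_banner(host, port, timeout=2.0, max_bytes=256):
    """Connect to host:port and return the first bytes the service sends.

    Many services (SSH, SMTP, FTP...) announce themselves unprompted;
    the banner often reveals the software name and version, which is
    what makes it useful for fingerprinting.
    """
    with socket.create_connection((host, port), timeout=timeout) as s:
        s.settimeout(timeout)
        try:
            return s.recv(max_bytes)
        except socket.timeout:
            return b""  # service waits for the client to speak first
```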
They have also made an effort to come up with new methods of extracting data that is not available using ordinary means. “We cannot reveal details for obvious reasons, but in some cases, mistakes made by hackers during domain registration or server configuration help us discover their emails, pseudonyms, or backend addresses,” Volkov said.
An advantage for all threat hunters
The tool queries both the company’s internal databases and external sources of information (e.g., WHOIS, public sandboxes, etc.) and the whole network graph creation happens in mere seconds.
And everybody wins in the scenario where the tool is used by Group-IB’s clients.
“By giving visibility to our clients, we reduce our analysts’ load and get interesting feedback from our clients. When they do the analyses themselves, they may achieve results that are more interesting and relevant to them, and when they share those results with us, we have a better understanding about the threats that target organizations in their industry, sector or geographic region,” Volkov concluded.
“This allows us to tune our research capabilities and detection engines to improve our whole ecosystem and, on a global scale, it improves our detection, prevention and hunting processes for every client.”
As the technologies we rely on continue to evolve, they are changing at a rate that outpaces our ability to protect them. This increasing risk potential necessitates a change in approach and the ability for organizations to automate more of their network security operations to reduce their cyber-attack surface.
One of the primary ways this issue is compounded is the widely acknowledged labor shortage of IT security specialists, which results in overworked staff and more misconfigurations caused by human error. Security analysts and engineers spend the vast majority of their time worrying about vulnerabilities, but Gartner believes that through 2023, 99 percent of all firewall breaches will be the result of misconfigurations, not flaws. IBM likewise noted a 424 percent increase in data breaches due to cloud misconfigurations caused by human error.
These findings make a proven case for enterprises to automate network security policy management processes to reduce human error and improve efficiency, but some organizations are still leery of making the automation transition for fear of losing control over their IT security visibility and decision making. Luckily, they don’t need to choose between automation and maintaining control.
Organizations can protect against these concerns by beginning with a form of automation that matches their current IT security capabilities, then advancing to increasing methods of automation as their confidence and technical maturity level grows.
Improve network control, reduce complexity and errors
Some organizations may believe that automating network security operations will reduce their visibility and control over policies, change processes and ability to comply with security and privacy regulations. However, automation can actually provide more control by eliminating guesswork and manual management for these areas, which reduces the likelihood of misconfigurations and increased risk.
Network security policy automation provides numerous benefits to organizations including minimizing human error; increasing operational efficiency while reducing security costs; streamlining the friction between DevOps and SecOps; increasing overall security agility; and decreasing compliance violations by proactively checking against regulation and internal compliance measures prior to implementing new changes.
Create a customized approach to network security automation
I recognize that not every organization is ready to fully automate security processes out of the gate. Therefore, I recommend they first acknowledge their current IT security maturity and then define how they want to evolve their automated processes over time. These decisions should be based on the company’s business goals, staffing resources, customer needs and technical sophistication.
The next step is to place the company on an automation transformation curve to determine its technology advancement path. I like to think of the automation spectrum as having four key stages, which improve security process time and efficiency:
1. Design Automation: Offers a basic level of automation, where security specialists still manually monitor and react to environmental changes. Meanwhile, the automated system provides intelligent design recommendations to suggest network security improvements, and auto-generated compliance and risk-scoring reports to improve workflows and correction time.
2. Implementation Automation: Continues to improve speed and efficiency by also providing automated network security rule implementation, verification and documentation. This stage is still primarily driven by operator control but increases automation to enable security specialists to direct their attention to more critical needs.
3. Zero-Touch Automation: The network system now monitors and reacts to environmental changes, but the security specialists remain in control of global policies. At this stage, implementation changes are deployed to all devices automatically, and intent-based standards and golden rule guardrails can be easily defined to alleviate time-consuming routine changes.
4. Adaptive Security Enforcement: For some time, our industry has considered zero-touch automation the end-state, but now a new stage goes beyond this type of automation to create a truly adaptive network security model. This automation approach is scalable across systems and automatically recalibrates global security policies as it auto-detects any underlying network and infrastructure changes. This approach also enables businesses to maintain control over security operations, while maximizing efficiencies and gaining continuing compliance with security policies.
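The “golden rule guardrails” mentioned in stage 3 can be thought of as invariants that every proposed change is checked against before deployment. A minimal Python sketch; the rule format and the specific guardrail here are invented for illustration:

```python
# Each rule is (action, src, dst, port); "*" is a wildcard.
GOLDEN_RULES = [
    # Guardrail: never allow anything to reach management SSH.
    ("deny", "*", "mgmt-net", 22),
]

def matches(field, pattern):
    """A field matches an identical pattern or the wildcard."""
    return pattern == "*" or field == pattern

def violates_guardrails(proposed_rule, guardrails=GOLDEN_RULES):
    """Return the first "deny" guardrail a proposed "allow" rule
    would contradict, or None if the change is safe to deploy."""
    action, src, dst, port = proposed_rule
    if action != "allow":
        return None
    for g in guardrails:
        g_action, g_src, g_dst, g_port = g
        if (g_action == "deny"
                and matches(src, g_src)
                and matches(dst, g_dst)
                and matches(port, g_port)):
            return g
    return None
```

Running every proposed change through a check like this before it touches a device is what lets routine changes flow through automatically while the invariants stay under human control.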
This multi-staged approach allows organizations to match their pace of automation to meet their current network security capabilities and future ambitions. To determine where to start, enterprises should survey the type of processes they want to fully automate, partially automate or remain untouched. Then the company can automate within their comfort level to move as fast as their systems allow.
Explore the next frontier of network security automation
I believe that the new frontier of network security automation will help enterprises move beyond zero-touch implementation to continuously adapt their security processes to gain real-time visibility and control over global network changes, achieve new levels of efficiency, and free up IT security resources for more strategic initiatives.
This adaptive network security model also provides the flexibility needed to respond to critical incidents and apply additional changes across all environments as they occur. Businesses shouldn’t have to choose between speed and security, and by continuously monitoring and adapting their network systems, protecting global policies across all environments and maintaining compliance, they won’t have to make any tradeoffs.
There is an automated network security policy management solution to meet the needs and capabilities of every organization. Organizations don’t need to fear automation as a threat to their control or visibility over hybrid network environments. By selecting the right form of automation for their current needs, enterprises can reduce human error and improve their security agility now while they prepare for the future.