With up to 75 percent of remote device management projects deemed “not successful” in 2020, IoT deployment has fallen short of its full potential.
Path to IoT project success
However, a new wave of affordable silicon that provides a wide array of features and functionality, in conjunction with the maturation of pre-packaged software, will lead to a substantial increase in IoT project success in the upcoming year, predict experts at Sequitur Labs.
According to Verified Market Research, the global IoT market size was valued at $212.1 Billion in 2018 and is expected to witness a growth of 25.68% to reach $1.3 trillion by 2026.
While there are many reasons for IoT deployment struggles, the most common ones involve project complexity, lack of required skills and the inability to implement effective security.
Recent improvements enable vendors to build a new generation of functionality into their solutions and device updates, which should drive a substantial increase in the success of IoT projects.
Sequitur Labs, which is heavily involved in the IoT security space, expects several advancements in 2021 to move the industry forward in key areas.
Improved industrial IoT remote device management and control
COVID-19 has not only forced people to work remotely, it has also accelerated the need to configure, control and manage industrial devices remotely. As a result, the vast majority of industrial endpoints are expected to support IP-based networks (like Ethernet and Wi-Fi) rather than purpose-built networks (for example, Modbus or Profibus).
Because these devices can be connected to the internet, they will also need to boot securely, update securely, support system recovery, and protect sensitive applications and data storage.
Increased cloud integration
Smart device platforms from Google (Google Assistant), Amazon (Alexa) and Apple (Apple HomeKit) have emerged as the central communications points in the connected home. Each of these vendors requires compliance from its ecosystem partners in order to join its solution.
With the number of connected devices in the home accelerating, the need for device security will become more critical than ever in the coming year.
Increased deployment of IoT for medical devices
Medical products such as remote monitoring devices and sensors for medical equipment are accelerating in adoption. The benefits include lower medical management costs, reduction in hospital stay time and effective equipment monitoring.
The risk of a corrupted or compromised device is high in this industry, and as sheer volumes of remotely monitored and controlled products increase, so do security needs.
Device authentication, secure monitoring for updates, maintenance and health diagnostics, and protection against remote attacks will drive the need for purpose-based solutions in this industry.
“There is huge potential in the deployment of IoT devices into industries that will improve the way people work, communicate and live. However, successful implementation will be limited if these devices cannot be used securely,” said Philip Attfield, CEO, Sequitur Labs.
“The advances in securing remote devices over the past year will lead to incredible innovations in the marketplace, expected to accelerate artificial intelligence and significant technological benefits at the edge.”
Cisco has fixed three bugs in its Cisco Webex video conferencing offering that may allow attackers to:
- Join Webex meetings without appearing in the participant list (CVE-2020-3419)
- Covertly maintain an audio connection to a Webex meeting after being expelled from it (CVE-2020-3471)
- Gain access to information (name, email, IP address, device info) on meeting attendees without being admitted to the meeting (CVE-2020-3441)
About the Cisco Webex vulnerabilities
The three flaws were discovered by IBM researchers, after the company’s research department and the Office of the CISO decided to analyze their primary tool for remote meetings (i.e., Cisco Webex).
“These vulnerabilities work by exploiting the handshake process that Webex uses to establish a connection between meeting participants,” the researchers shared.
“These flaws affect both scheduled meetings with unique meeting URLs and Webex Personal Rooms. Personal rooms may be easier to exploit because they are often based on a predictable combination of the room owner’s name and organization name. These technical vulnerabilities could be further exploited with a combination of social engineering, open source intelligence (OSINT) and cognitive overloading techniques.”
The vulnerabilities can all be exploited by unauthenticated, remote attackers, either by sending crafted requests to a vulnerable Cisco Webex Meetings or Cisco Webex Meetings Server site or by browsing the Webex roster.
More details about the possible attacks are available in this blog post, though details about the flaws will be limited until more users are able to implement the provided updates/patches.
Patches and security updates
The bugs affect both Cisco Webex Meetings sites (cloud-based) and Cisco Webex Meetings Server (on-premises).
Cisco addressed them in Cisco Webex Meetings sites a few days ago and no user action is required.
Users of Cisco Webex Meetings Server are advised to upgrade to 3.0MR3 Security Patch 5 or 4.0MR3 Security Patch 4, which contain the needed fixes.
CVE-2020-3419 also affects all Cisco Webex Meetings apps releases 40.10.9 and earlier for iOS and Android, so users are urged to implement the provided updates.
New research into what happens after a new software vulnerability is discovered provides an unprecedented window into the outcomes and effectiveness of responsible vulnerability disclosure and exploit development.
The analysis of 473 publicly exploited vulnerabilities challenges a long-held assumption of the security space, showing that disclosure of exploits before a patch is available does not create a sense of urgency among companies to fix the problem.
The research was conducted by Kenna Security and the Cyentia Institute. It examines how the common practices among security researchers impact the overall security of corporate IT networks.
The importance of timing
The analysis found that when exploit code is made public prior to the release of a patch, cybercriminals get a critical head start. At the same time, when exploits are released before patches, it takes security teams more time to address the problem, even after the patch is released.
“The debate over responsible disclosure has existed for decades, but this data provides an objective correlation between vulnerability discovery, disclosure, and patch delivery for the first time ever,” said Ed Bellis, CTO of Kenna Security.
“However, the results raise several questions about responsible disclosure, demonstrating that the timing of exploit code release can shift the balance in favor of attackers or defenders.”
Whether exploit code is released first or a patch is released first, the research found that there are periods of time when attackers have the momentum and when defenders have momentum – a reflection of the fact that no matter when a patch is released, some companies simply don’t or can’t install it before attackers make their move.
For approximately nine of the 15 months studied in this analysis, attackers were able to exploit vulnerabilities at a higher rate than defenders were patching, while defenders had the upper hand for six months.
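The momentum comparison above can be sketched as a simple month-by-month rate comparison. The figures below are synthetic stand-ins, not the Kenna/Cyentia dataset:

```python
# Sketch: label each month by whether exploitation outpaced remediation.
# All rates below are synthetic examples, not the Kenna/Cyentia data.

def momentum(exploit_rates, patch_rates):
    """Label each month 'attackers' or 'defenders' by comparing rates."""
    return [
        "attackers" if e > p else "defenders"
        for e, p in zip(exploit_rates, patch_rates)
    ]

# Hypothetical monthly rates (fraction of affected orgs) over 15 months.
exploited = [0.05, 0.09, 0.12, 0.15, 0.18, 0.20, 0.21, 0.22,
             0.23, 0.23, 0.24, 0.24, 0.25, 0.25, 0.25]
patched   = [0.02, 0.05, 0.10, 0.14, 0.22, 0.19, 0.20, 0.21,
             0.25, 0.28, 0.22, 0.23, 0.27, 0.28, 0.29]

labels = momentum(exploited, patched)
print(labels.count("attackers"), "months of attacker momentum")   # 9
print(labels.count("defenders"), "months of defender momentum")   # 6
```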
The vulnerability disclosure practice
At the heart of the vulnerability disclosure practice is a mix of competing incentives for software publishers, IT teams, and the independent security researchers that find software vulnerabilities.
When a vulnerability is found, researchers disclose its existence and the relevant code they used to exploit the application. The publisher sets about creating a patch and pushing the patch to its user base. Occasionally, however, software publishers don’t engage, declining to create a patch or notify users of a vulnerability.
In these cases, researchers will publicly disclose the vulnerability to warn the larger community and spur the publisher to take action. Google, for example, tells software publishers that it will release details of the vulnerabilities it discovers within 90 days of notification, except in a few scenarios.
- When exploit code is publicly released before a patch, attackers get, on average, a 47 day head start
- Only 6% of those exploits were detected by more than 1 in 100 organizations
- Exploit code was already available for over 50% of the vulnerabilities in our sample by the time they were published to the CVE List
- In great news for defenders, over 80% of exploited vulnerabilities have a patch available prior to, or along with, CVE publication
- About one-third of vulnerabilities have exploit code published before a patch is made available
- About 7% of vulnerabilities are exploited before a CVE is published, a patch is available, and exploit code is released
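Several of these timing metrics fall out of a straightforward per-CVE comparison of exploit-release and patch-release dates. The records and field names below are hypothetical, purely to illustrate the calculation:

```python
# Sketch: compute the exploit-before-patch share and the average attacker
# head start from per-CVE dates. Records and field names are hypothetical.
from datetime import date

vulns = [
    {"exploit_code": date(2019, 1, 10), "patch": date(2019, 2, 26)},
    {"exploit_code": date(2019, 3, 5),  "patch": date(2019, 1, 20)},
    {"exploit_code": date(2019, 6, 1),  "patch": date(2019, 6, 1)},
]

# Vulnerabilities whose exploit code appeared before the patch.
before_patch = [v for v in vulns if v["exploit_code"] < v["patch"]]
share = len(before_patch) / len(vulns)
avg_head_start = sum(
    (v["patch"] - v["exploit_code"]).days for v in before_patch
) / len(before_patch)

print(f"exploit code preceded the patch for {share:.0%} of vulnerabilities")
print(f"average attacker head start: {avg_head_start:.0f} days")
```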
“For decision-makers and researchers across the cybersecurity community, this research provides a vital, never before seen window into the lifecycle of vulnerabilities and exploitations,” said Jay Jacobs, partner, Cyentia Institute.
“These findings offer prominent paths for future research that could ultimately make the IT infrastructure more secure.”
Despite the strong relationship between disclosure of exploitation code and weaponization, the research requires some caveats. It’s possible that release of exploit code doesn’t facilitate exploitation, but detection of exploits in the wild, because the release of the code enabled faster creation of anti-virus signatures.
“This new report reignites the conversation on responsible disclosure. More research will help draw more definitive conclusions, but for now, we can say that where there’s smoke, there’s fire,” said Wade Baker, partner and co-founder of Cyentia Institute. “Release of exploit code before a patch seems to have a negative effect on corporate security.”
Offensive Security has released Kali Linux 2020.4, the latest version of its popular open source penetration testing platform. You can download it or upgrade to it.

Kali Linux 2020.4 changes

The changes in this version include: ZSH is now Kali’s new default shell on desktop images and cloud; Bash remains the default shell for other platforms (ARM, containers, NetHunter, WSL) for the time being. Users can, of course, use whichever shell they prefer, but be …
The post Kali Linux 2020.4 released: New default shell, fresh tools, and more! appeared first on Help Net Security.
Today’s Internet is a hectic place. A lot of different web technologies and services are “glued together” and help users shop online, watch the newest movies, or stream the newest hits while jogging. But these (paid) services are also constantly threatened by attackers – and no company, no matter how big, is completely immune. Take the recent Twitter compromise as an example: the attackers hijacked a number of influential Twitter accounts, including those belonging to …
Researchers at the University of Birmingham have managed to break Intel SGX, a set of security functions used by Intel processors, by creating a $30 device to control CPU voltage.
Break Intel SGX
The work follows a 2019 project, in which an international team of researchers demonstrated how to break Intel’s security guarantees using software undervolting. This attack, called Plundervolt, used undervolting to induce faults and recover secrets from Intel’s secure enclaves.
Intel fixed this vulnerability in late 2019 by removing the ability to undervolt from software with microcode and BIOS updates.
Taking advantage of a separate voltage regulator chip
But now, a team in the University’s School of Computer Science has created a $30 device, called VoltPillager, to control the CPU’s voltage – thus side-stepping Intel’s fix. The attack requires physical access to the computer hardware – which is a relevant threat for SGX enclaves that are often assumed to protect against a malicious cloud operator.
The bill of materials for building VoltPillager is:
- Teensy 4.0 Development Board: $22
- Bus Driver/ Buffer * 2: $1
- SOT IC Adapter * 2: $13 for 6
How to build the VoltPillager board
This research takes advantage of the fact that a separate voltage regulator chip controls the CPU voltage. VoltPillager connects to this unprotected interface and precisely controls the voltage. The research shows that this hardware undervolting can achieve the same as Plundervolt, and more.
Zitai Chen, a PhD student in Computer Security at the University of Birmingham, says: “This weakness allows an attacker, if they have control of the hardware, to breach SGX security. Perhaps it might now be time to rethink the threat model of SGX. Can it really protect against malicious insiders or cloud providers?”
ESET researchers have discovered ModPipe, a modular backdoor that gives its operators access to sensitive information stored in devices running ORACLE MICROS Restaurant Enterprise Series (RES) 3700 POS (point-of-sale) – a management software suite used by hundreds of thousands of bars, restaurants, hotels and other hospitality establishments worldwide.
The majority of the identified targets were from the United States.
Containing a custom algorithm
What makes the backdoor distinctive are its downloadable modules and their capabilities, as it contains a custom algorithm designed to gather RES 3700 POS database passwords by decrypting them from Windows registry values.
This shows that the backdoor’s authors have deep knowledge of the targeted software and opted for this sophisticated method instead of collecting the data via a simpler yet “louder” approach, such as keylogging.
Exfiltrated credentials allow ModPipe’s operators to access database contents, including various definitions and configurations, status tables and information about POS transactions.
“However, based on the documentation of RES 3700 POS, the attackers should not be able to access some of the most sensitive information – such as credit card numbers and expiration dates – which is protected by encryption. The only customer data stored in the clear and thus available to the attackers should be cardholder names,” cautions ESET researcher Martin Smolár, who discovered ModPipe.
“Probably the most intriguing parts of ModPipe are its downloadable modules. We’ve been aware of their existence since the end of 2019, when we first found and analyzed its basic components,” explains Smolár.
- GetMicInfo targets data related to the MICROS POS, including passwords tied to two database usernames predefined by the manufacturer. This module can intercept and decrypt these database passwords, using a specifically designed algorithm.
- ModScan 2.20 collects additional information about the installed MICROS POS environment on the machines by scanning selected IP addresses.
- ProcList’s main purpose is to collect information about processes currently running on the machine.
“ModPipe’s architecture, modules and their capabilities also indicate that its writers have extensive knowledge of the targeted RES 3700 POS software. The proficiency of the operators could stem from multiple scenarios, including stealing and reverse engineering the proprietary software product, misusing its leaked parts or buying code from an underground market,” adds Smolár.
What can you do?
To keep the operators behind ModPipe at bay, potential victims in the hospitality sector as well as any other businesses using the RES 3700 POS are advised to:
- Use the latest version of the software.
- Use it on devices running up-to-date operating systems and software.
- Use reliable multilayered security software that can detect ModPipe and similar threats.
They are often the target of attackers, who hunt for them like gold. Some can be found easily, while others are more difficult to come by. Inevitably, though, they can be the weakest link in the security of your entire organization. What is this highly desirable, often stolen, and targeted resource? Passwords. Specifically, Active Directory passwords.
Most enterprise organizations use Microsoft Active Directory (AD) as their centralized identity and access management solution. The standard AD username and password give users access to any number of systems, including email, file shares, Windows desktops, terminal servers, SharePoint, and many other systems integrated with Active Directory.
End users often pick dangerous, easy-to-remember passwords for their accounts, even with Active Directory password policies in place. Finding risky passwords in your environment is more important than you might think. Why is that? And how can password security in your organization be bolstered?
Why finding risky passwords is important
Ransomware attacks and data breaches are continuously making news headlines. There is often a common thread among data breach events or ransomware attacks – stolen or weak credentials. Take note of the following:
- Kaspersky – “The vast majority of data breaches are caused by stolen or weak credentials. If malicious criminals have your username and password combination, they have an open door into your network.”
- Verizon 2020 DBIR – “Over 80% of breaches within Hacking involve Brute force or the Use of lost or stolen credentials.”
- Infosecurity Magazine – “A year ago, researchers found that 2.2 billion leaked records, known as Collection 1-5…With this treasure trove, hackers can simply test email and password combinations on different sites, hoping that a user has reused one. This popular technique is known as credential stuffing and is the culprit of many recent data breaches.”
Cybercriminals are after your organization’s passwords. Why are passwords such a target? Put simply, stealing credentials is the path of least resistance into your environment. If an attacker has your username and password combination, they have a “wide open door” to your network and business-critical systems. These may include email, websites, bank accounts, and other PII sources. Even worse, if an attacker can get their hands on administrator credentials, they have the “keys to the kingdom” and can do anything they want.
Attackers use any number of techniques to get their hands on stolen credentials. These may include brute force attacks, password spraying, and also, using databases of leaked passwords. Leaked passwords that result from prior data breaches are also known as pwned passwords.
Passwords are hashed in Active Directory and cannot be read, even by administrators. So, how can you effectively find weak, reused, and even breached passwords in your environment?
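One common way to check passwords against a breached-password corpus without ever transmitting them is the k-anonymity scheme popularized by Have I Been Pwned’s Pwned Passwords API: hash locally, send only a short hash prefix, and match the suffix client-side. A minimal sketch:

```python
# Sketch of the k-anonymity range-query scheme (as used by the Pwned
# Passwords API): only the first five hex chars of the SHA-1 hash leave
# the machine; matching happens locally against the returned suffixes.
import hashlib

def sha1_prefix_suffix(password: str):
    """Split the uppercase SHA-1 hex digest into (prefix, suffix)."""
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    return digest[:5], digest[5:]

def breach_count(suffix: str, range_response: str) -> int:
    """Parse a 'SUFFIX:COUNT' response body and return the match count."""
    for line in range_response.splitlines():
        sfx, _, count = line.partition(":")
        if sfx == suffix:
            return int(count)
    return 0

prefix, suffix = sha1_prefix_suffix("password")
print(prefix)   # 5BAA6 -- the only piece sent to the service
# A client would GET https://api.pwnedpasswords.com/range/5BAA6 and call
# breach_count(suffix, response_text) on the response body.
```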
Built-in tools are not enough
There is no built-in functionality in Active Directory that natively allows you to check for reused or breached passwords. The only real built-in tool administrators have at their disposal is the Active Directory password policy. Password policies are part of an Active Directory Group Policy Object, and they define the required characteristics for passwords, such as uppercase letters, lowercase letters, numbers, special characters, and minimum length. While this helps prevent weak password usage, certain passwords are still easily guessed despite letter and number substitutions. Additionally, most organizations only enforce the minimums for password length and complexity.
Below is an example of a default, unconfigured Active Directory password policy.
Active Directory password policy
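For illustration, the character-class rules such a policy enforces can be approximated in a few lines. This is a rough sketch of AD-style complexity checking, not Microsoft’s exact logic, and it shows why substitution-style passwords still slip through:

```python
# Rough approximation of AD-style complexity: minimum length, no account
# name inside the password, and at least three of four character classes.
# This mimics the general idea, not Microsoft's exact implementation.
import string

def meets_complexity(password: str, username: str, min_len: int = 8) -> bool:
    if len(password) < min_len or username.lower() in password.lower():
        return False
    classes = [
        any(c.islower() for c in password),
        any(c.isupper() for c in password),
        any(c.isdigit() for c in password),
        any(c in string.punctuation for c in password),
    ]
    return sum(classes) >= 3

# A notoriously breached password sails through the complexity check,
# which is exactly the gap described above:
print(meets_complexity("P@ssw0rd", "alice"))   # True
```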
Specops Password Auditor: Bolstering Active Directory password security
Native tools are not enough to protect your environment from weak, reused, and breached credentials. Hackers are quick to capitalize on these types of passwords to gain easy access to your business-critical data. Specops Password Auditor, a free tool, proactively scans for and finds weak, reused, and breached passwords in use in your Active Directory environment. The best part: it makes this process extremely easy.
After installation, define the domain, scan root, and the domain controller you would like to use for the scan process.
Defining the domain, scan root, and domain controller
The Password Auditor will:
- Search Active Directory users
- Read password policies
- Check for breached passwords
- Read user details
- Check password policy usage
- Read custom password expiration
Running the Specops Password Auditor scan

It scans various Active Directory user account attributes, including:

- Blank passwords
- Breached passwords
- Identical passwords
- Admin accounts
- Stale admin accounts
- Password not required
- Password never expires
- Expiring passwords
- Expired passwords
- Password policies
- Password policy usage
- Password policy compliance
After Password Auditor scans the environment, it presents you with an easy-to-read dashboard. The dashboard quickly displays relevant password information. Critical points of interest are noted with the red “bubble tips” with the number of findings for the particular password risk.
Scan results displaying password risks in the environment
When you click the password finding details, you will see the specific list of user accounts with the password risk displayed. Additionally, Specops Password Auditor shows the location, last logon, and associated password policy of the particular user account.
Displaying Active Directory user accounts with known breached passwords
Specops Password Auditor allows you to easily hand off official reports to management, internal or external auditors, and others with the Get PDF Report function.
Generating the Password Auditor report
The Specops Password Auditor executive summary report makes it quick to hand information over to business stakeholders in the environment. The report contains concise, easy-to-read information regarding the password audit and risk level.
The overview page of the Password Auditor report
Cybercriminals are capitalizing on weak, reused, and breached passwords in Active Directory environments. By stealing credentials, attackers gain easy access to business-critical data and systems. There are no native tools found in Active Directory to find reused or breached passwords.
Specops Password Auditor gives you quick visibility into weak, reused, and breached passwords in the environment, and lets you audit many other important AD components, such as password policies. You can also generate a concise, easy-to-read executive summary report for business stakeholders and auditors.
Learn more about Specops Password Auditor here.
The global number of industrial IoT connections will increase from 17.7 billion in 2020 to 36.8 billion in 2025, representing an overall growth rate of 107%, Juniper Research found.
The research identified smart manufacturing as a key growth sector of the industrial IoT market over the next five years, accounting for 22 billion connections by 2025.
The research predicted that 5G and LPWA (Low Power Wide Area) networks will play pivotal roles in creating attractive service offerings to the manufacturing industry, and enabling the realisation of the ‘smart factory’ concept, in which real-time data transmission and high connection densities allow highly-autonomous operations for manufacturers.
5G to maximise benefits of smart factories
The report identified private 5G services as crucial to maximising the value of a smart factory to service users, by leveraging the technology to enable superior levels of autonomy amongst operations.
It found that private 5G networks will prove most valuable when used for the transmission of large amounts of data in environments with a high density of connections, and where significant levels of data are generated. In turn, this will enable large-scale manufacturers to reduce operational spend through efficiency gains.
Software revenue to dominate industrial IoT market value
The research forecasts that over 80% of global industrial IoT market value will be attributable to software spend by 2025, reaching $216 billion. Software tools leveraging machine learning for enhanced data analysis and the identification of network vulnerabilities are now essential to connected manufacturing operations.
Research author Scarlett Woodford noted: “Manufacturers must exercise caution when implementing IoT technology, resisting the temptation to introduce connectivity to all aspects of operations. Instead, manufacturers must focus on the collection of data on the most valuable areas to drive efficiency gains.”
For the third time in two weeks, Google has patched Chrome zero-day vulnerabilities that are being actively exploited in the wild: CVE-2020-16009 is present in the desktop version of the browser, CVE-2020-16010 in the mobile (Android) version.

About the vulnerabilities (CVE-2020-16009, CVE-2020-16010)

As per usual, Google has refrained from sharing much detail about each of the patched vulnerabilities, so all we know is this: CVE-2020-16009 is an inappropriate implementation flaw in V8, Chrome’s open source …
Specops Password Policy is a powerful tool for overcoming the limitations of the default password policies present in Microsoft Active Directory environments. To be fair, Microsoft has revised and upgraded the default password policy and introduced additional, granular fine-tuning options over the years, but for some enterprise environments that’s still not enough, so Specops Password Policy comes to the rescue!
For the purpose of this review, the installation was done on a server containing all necessary services: Specops Sentinel – a password filter that is installed on all domain controllers, and Specops Password Policy admin tools. Keep in mind that this can be split onto different servers if needed. If you purchased Breached Password Protection, you’ll need to install Specops Arbiter as well.
The setup process is smooth, and you can expect to be up and running within the hour. As you can see from the image below, the standard requirements are modest and should not be a problem for any enterprise environment that requires such a solution.
Figure 1. Specops Password Policy minimum requirements
Password policy templates
When you start with Specops Password Policy Domain Administration, you’ll notice four predefined password policy templates you can choose from:
Figure 2. Specops Password Policy Domain Administration including default templates
These templates are convenient for a fast setup but, naturally, you can take them to another level by customizing them. If you’re working in an environment that needs to meet specific regulatory standards, the provided templates can be a lifesaver. Even if you can’t or don’t want to use these policies, you can use them as a base to strengthen your policy or create a policy compatible with your environment.
Let’s create a new, blank policy to see what the process looks like. Creating one will take you to the Group Policy editor:
Figure 3. Specops Password Policy inside the Group Policy editor
If it looks familiar, that’s because it is the same environment where you would change your default password policy inside Active Directory. The one key difference is that Specops Password Policy applies password settings to the user part of Group Policy rather than the computer part. This makes sense, as it is generally users who set bad passwords, not machines.
After testing the options and thinking how this would fit into my network, I have to commend Specops for not unnecessarily complicating things and choosing to go with a workflow most system administrators are familiar with.
When I opened Specops Password Policy inside the Group Policy editor, I was pleasantly surprised to see that it supports the use of passphrases. More importantly, it also offers assistance for handling them (something that Active Directory does not). You can use regular expressions to define what a passphrase means to your organization, e.g., three words with at least six characters each, no repeated words, and no patterns such as 111111 or 222222.
Figure 4, 5. Passphrase support and password options
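A passphrase rule along those lines (three words, at least six characters each, no repeats, no character runs) might look like the following. This is plain Python for illustration, not Specops’ actual regular-expression configuration:

```python
# Illustrative passphrase validation: three words of six-plus characters,
# no repeated words, no runs of six or more identical characters.
import re

STRUCTURE = re.compile(r"^\w{6,}(?: \w{6,}){2}$")   # exactly three words
RUN = re.compile(r"(.)\1{5,}")                      # e.g. 111111, 222222

def valid_passphrase(phrase: str) -> bool:
    if not STRUCTURE.match(phrase) or RUN.search(phrase):
        return False
    words = phrase.lower().split()
    return len(words) == len(set(words))            # no repeated words

print(valid_passphrase("purple monkey dishwasher"))   # True
print(valid_passphrase("111111 222222 333333"))       # False (patterns)
```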
The General Settings menu offers familiar settings for anyone that’s used to working with the Group Policy Editor in an Active Directory environment. A neat addition here is the “client message” option, which allows you to create a custom message to be shown on the Active Directory logon screen in case the password policy requirements are not met.
Figure 6. General Settings with options and client message notification
The Password Expiration tab offers a wealth of options, including the maximum password age, password expiration notifications, and so on. A key feature here is the length-based password aging rule: the longer the password, the longer the user gets to keep it. It can be a real incentive to encourage users to move to passphrases.
Figure 7. Options for password expiration rules and password expiration notifications
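The idea behind length-based aging can be sketched as a simple tier lookup; the tiers below are made-up examples, not Specops’ defaults:

```python
# Sketch of length-based password aging: lifetime grows with length.
# The tiers are invented for illustration, not Specops' defaults.
def password_lifetime_days(length: int) -> int:
    tiers = [(8, 90), (12, 180), (16, 365)]   # (minimum length, days valid)
    days = 0
    for min_len, valid_days in tiers:
        if length >= min_len:
            days = valid_days
    return days   # 0: shorter than the shortest allowed length

print(password_lifetime_days(10))   # 90  -- expires quarterly
print(password_lifetime_days(16))   # 365 -- a passphrase lasts a year
```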
The Password Rules menu brings additional granularity to password rules, which should allow for virtually any password policy scenario. Worth noting is that dictionaries of forbidden words can be used, either by creating a custom dictionary or by downloading dictionaries provided by Specops.
Figure 8. Regulating password rules requirements in one place
Figure 9. Additional protection from users trying to subvert the password policy
Breached Password Protection
A great set of options are found under Breached Password Protection. In a nutshell, it allows the system to compare an Active Directory password to a list of known breached passwords. As might be expected, passwords are hashed in the process.
If a password is discovered in the breached password list, the action triggers the delivery of notifications/alerts.
Figure 10. Breached Password Protection Complete API
Figure 11. Breached Password Protection Express List
With the API, Specops Password Policy supports both email and SMS notifications. When using the Express List (a downloadable passwords list) you can use only email notifications.
I realize there’s a narrow application for it, but I would like to see support for custom SMS gateways in future versions, as large enterprises might find this useful. Specops Software tells me that since there’s no extra cost involved for using the SMS notification feature they’ve never been asked to provide a custom SMS platform.
The latest version of Specops Password Policy comes with several powerful new features: PowerShell cmdlets and a security scanner.
Leaked password scanning
While PowerShell support is nothing new to Specops Password Policy, the latest version brings us powerful new cmdlets:
- Get-SppPasswordExpiration and Get-PasswordPolicyAffectingUser are user-related cmdlets enabling checks which until now could not be requested or scripted through PowerShell. I found them rather useful during troubleshooting while trying to discern why a certain policy was not working as intended. Using cmdlets with pretty self-explanatory names is much faster than going through the menus of a newly installed application.
- Get-SppPasswordExpiration checks for the password expiration date, returning the date and reliability of the password.
- Get-PasswordPolicyAffectingUser – if you have ever handled a multi-policy environment, you know that something as simple as knowing the exact policies applied to a user can be the difference between solving an issue and entering a virtually endless troubleshooting loop. You just need to provide the username in sAMAccountName or userPrincipalName format, and the cmdlet returns the GpoID, GpoName, and password policy name.
- Start-PasswordPolicyLeakedPasswordScanning – as is evident from the name, it starts scanning for leaked passwords in your Active Directory environment. Even though this feature is present in the Domain Admin tool, the cmdlet is useful because it can be scripted and delayed, which is ideal for administrators working in large environments. After running the cmdlet, all users who are non-compliant with the policy will be prompted at their next logon to change their password. Leaked password scanning requires the Specops Breached Password Protection license.
Figure 12. All available Specops Password Policy cmdlets
Looking after your passwords
Specops Software maintains a comprehensive list of leaked passwords based on numerous sources. It contains billions of passwords and is often updated.
Breached Password Protection can be configured with two settings: Breached Password Protection Complete and Breached Password Protection Express.
The Complete setting comes with a master list of leaked passwords that is stored in the cloud. If a user changes their password to one that can be found on the list, a notification is sent via email or SMS, and they are forced to change their password the next time they log in. For this, you’ll need .NET 4.7.1 and Windows Server 2012 R2 or later, with an installation of Specops Arbiter and an API key.
Breached Password Protection Express downloads a subset of the leaked password list, usually updated every six months. This also means administrators will need to manually check for updates and initiate a download of the updated list. Users are also immediately prevented from changing their password to one found in the leaked list.
Length-based password expiration
Specops has found a way to reward security-conscious users by extending the timeframe for mandated password change.
Figure 13. The longer the password, the later it expires
Users can be notified of their upcoming mandated password change. As the timeframe for the mandated change is dictated by password length, notifying users is of great importance, as it helps them prepare in advance. The notification can be shown to users through regular Active Directory resources, on the logon screen, or via email. For both methods you can define the number of days before a mandated password change that the notification is shown or sent.
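To make the length-based expiration idea concrete, here is a hypothetical Python sketch. The base period, per-character bonus, and minimum length below are invented numbers for illustration, not Specops defaults.

```python
from datetime import date, timedelta

# Hypothetical policy parameters (illustrative only, not Specops defaults).
BASE_DAYS = 90        # expiration window for a minimum-length password
EXTRA_PER_CHAR = 30   # extra days granted per character over the minimum
MIN_LENGTH = 8

def expiration_date(last_set: date, length: int) -> date:
    """Longer passwords earn a later expiration date."""
    bonus = max(0, length - MIN_LENGTH) * EXTRA_PER_CHAR
    return last_set + timedelta(days=BASE_DAYS + bonus)

changed = date(2021, 1, 1)
print(expiration_date(changed, 8))   # 2021-04-01: base 90 days only
print(expiration_date(changed, 12))  # four extra characters add 120 days
```

A notification scheduler would then simply compare today's date against each user's computed expiration date minus the configured lead time.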
Specops Password Policy also includes a security scanner for Active Directory – a simple yet invaluable tool that is also available as standalone freeware. It groups all possible password security issues found inside your Active Directory. This at-a-glance overview essentially points out all the things you need to worry about, and it’s the place to quickly discover problems you might not be aware of, like a password appearing on a leaked list.
Specops has chosen a smart way of aggregating important areas around password security and policies, showing the most relevant issues and offering quick insight into potential problems.
Figure 14. A closer look at expiring passwords
Once you’re aware of all the issues, you can quickly focus on what’s critical. I find this to be an easy way to audit your Active Directory environment for a variety of issues at the same time.
After testing Specops Password Policy for a week in a variety of scenarios, I can definitely say we’re talking about a formidable solution. Not only does it simplify the process of strengthening password policies, it can also detect and resolve issues you might not be aware of in the first place.
I can highly recommend Specops Password Policy for any Active Directory environment, and I would go as far as to say it’s a necessity for complex environments dealing with compliance regulations, as well as specific password policy requirements. This solution can raise the security level of any Active Directory environment, and you can’t argue with the benefits of better security, can you?
In the past few years, the use of automation in many spheres of cybersecurity has increased dramatically, but penetration testing has remained stubbornly immune to it.
While crowdsourced security has evolved as an alternative to penetration testing over the past 10 years, it’s based not on automation but on simply throwing more humans at a problem (and, in the process, creating its own set of weaknesses). Recently, though, tools that can automate penetration testing under certain conditions have surfaced – but can they replace human penetration testers?
How do automated penetration testing tools work?
To answer this question, we need to understand how they work, and crucially, what they can’t do. While I’ve spent a great deal of the past year testing these tools and comparing them in like-for-like tests against a human pentester, the big caveat here is that these automation tools are improving at a phenomenal rate, so depending on when you read this, it may already be out of date.
First of all, the “delivery” of the pen test is done by either an agent or a VM, which effectively simulates the pentester’s laptop and/or attack proxy plugging into your network. So far, so normal. The pentesting bot will then perform reconnaissance on its environment with the same scans a human would use – often a vulnerability scan with a tool of choice, or just a ports and services sweep with Nmap or Masscan. Once it has established where it sits within the environment, it will filter through what it has found, and this is where the similarities to vulnerability scanners end.
Vulnerability scanners will simply list a series of vulnerabilities and potential vulnerabilities with no context as to their exploitability, regurgitating CVE references and CVSS scores. They will sometimes paste “proof” that the system is vulnerable but don’t cater well for false positives.
Automated penetration testing tools will then choose from this list of targets the “best” system to take over, making decisions based on ease of exploit, noise and other such factors. So, for example, if presented with a Windows machine vulnerable to EternalBlue, the tool may favor it over brute forcing an open SSH port that authenticates with a password, as it’s a known quantity and much faster/easier to exploit.
Once it gains a foothold, it will propagate itself through the network, mimicking the way a pentester or attacker would, the only difference being that it actually installs a version of its own agent on the exploited machine and continues its pivot from there (there are variations in how different vendors do this).
It then starts the process again from scratch, but this time will also make sure it forensically investigates the machine it has landed on to give it more ammunition to continue its journey through your network. This is where it will dump password hashes if possible or look for hardcoded credentials or SSH keys. It will then add this to its repertoire for the next round of its expansion. So, while previously it may have just repeated the scan/exploit/pivot, this time it will try a pass-the-hash attack or try connecting to an SSH port using the key it just pilfered. Then, it pivots again from here and so on and so forth.
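The target-selection step described above can be pictured as a simple scoring exercise. The hosts, vectors, and weights in this Python sketch are invented for illustration; real tools use far richer decision models.

```python
# Toy model of how an automated pentesting tool might rank targets:
# prefer reliable, quiet, fast exploits over noisy brute forcing.
hosts = [
    {"host": "10.0.0.5", "vector": "EternalBlue (MS17-010)",
     "ease": 0.9, "noise": 0.3, "speed": 0.9},
    {"host": "10.0.0.9", "vector": "SSH password brute force",
     "ease": 0.4, "noise": 0.8, "speed": 0.2},
]

def score(h: dict) -> float:
    # Higher ease and speed are good; noise counts against the target.
    return h["ease"] + h["speed"] - h["noise"]

best = max(hosts, key=score)
print(best["vector"])  # the known-quantity exploit outranks brute forcing
```

After each successful pivot, newly harvested credentials and hashes would feed back into the candidate list, and the scoring loop would run again.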
If you notice a lot of similarities to how a human pentester behaves, you’re absolutely right: a lot of this is exactly how pentesters (and, to a lesser extent, attackers) behave. The toolsets are similar, and the techniques and vectors used to pivot are identical in many ways.
So, what’s different?
First of all, the act of automation gives a few advantages over the ageing pentesting methodology (and the equally chaotic crowdsourced methodology).
The speed of the test and reporting is many magnitudes faster, and the reports are actually surprisingly readable (after checking with some QSAs, they will also pass the various PCI DSS pentesting requirements).
No more waiting days or weeks for a report that has been drafted by human hands and gone through a few rounds of QA before being delivered. This is one of the primary weaknesses of human pen tests: the adoption of continuous delivery has caused many pen test reports to become out of date as soon as they are delivered, since the environment on which the test was performed has been updated multiple times since, and may therefore have had vulnerabilities and misconfigurations introduced that weren’t present at the time of the pen test. This is why traditional pentesting is more akin to a snapshot of your security posture at a particular point in time.
Automated penetration testing tools get around this limitation by being able to run tests daily, or twice daily, or on every change, and deliver a report almost instantly.
The second advantage is the entry point. A human pentester may be given a specific entry point into your network, while an automated pentesting tool can run the same pen test multiple times from different entry points to uncover vulnerable vectors within your network and monitor various impact scenarios depending on the entry point. While this is theoretically possible with a human, it would require a huge budgetary investment, since each test would have to be paid for separately.
What are the downsides?
1. Automated penetration testing tools don’t understand web applications – at all. While they will detect something like a web server at the ports/services level, they won’t understand that you have an IDOR vulnerability in your internal API or an SSRF in an internal web page that can be used to pivot further. This is because the web stack today is complex and, to be fair, even specialist tools like web application scanners have a hard time detecting vulnerabilities that aren’t low-hanging fruit (such as XSS or SQLi).
2. You can only use automated pentesting tools “inside” the network. As most exposed company infrastructure will be web-based, and automated pentesting tools don’t understand web applications, you’ll still need to stick with a good old-fashioned human pentester for testing from the outside.
To conclude, the technology shows a lot of promise, but it’s still early days. While these tools aren’t ready to make human pentesters redundant just yet, they do have a role in meeting today’s offensive security challenges that can’t be met without automation.
Positive Technologies performed instrumental scanning of the network perimeter of selected corporate information systems. A total of 3,514 hosts were scanned, including network devices, servers, and workstations.
The results show the presence of high-risk vulnerabilities at most companies. However, half of these vulnerabilities can be eliminated by installing the latest software updates.
The research shows high-risk vulnerabilities at 84% of companies across finance, manufacturing, IT, retail, government, telecoms and advertising. 58% of companies have at least one host with a high-risk vulnerability for which a publicly available exploit exists.
Publicly available exploits exist for 10% of the vulnerabilities found, which means attackers can exploit them even without professional programming skills or experience in reverse engineering.
The detected vulnerabilities are caused by the absence of recent software updates, outdated algorithms and protocols, configuration flaws, mistakes in web application code, and accounts with weak and default passwords.
Vulnerabilities can be fixed by installing the latest software versions
As part of the automated security assessment of the network perimeter, 47% of detected vulnerabilities can be fixed by installing the latest software versions.
All companies had problems with keeping software up to date. At 42% of them, Positive Technologies found software for which the developer had announced end of life and stopped releasing security updates. The oldest vulnerability found in the automated analysis was 16 years old.
Analysis revealed remote access and administration interfaces, such as Secure Shell (SSH), Remote Desktop Protocol (RDP), and Telnet. These interfaces allow any external attacker to conduct bruteforce attacks.
Attackers can bruteforce weak passwords in a matter of minutes and then obtain access to network equipment with the privileges of the corresponding user before proceeding to develop the attack further.
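A quick back-of-the-envelope calculation shows why "a matter of minutes" is realistic. The attempt rate and wordlist size below are assumptions chosen for illustration.

```python
# Rough estimate of how quickly a weak password falls to online brute forcing.
ATTEMPTS_PER_SECOND = 100      # a modest online rate against SSH/RDP/Telnet
COMMON_PASSWORDS = 50_000      # size of a typical attacker wordlist

worst_case_minutes = COMMON_PASSWORDS / ATTEMPTS_PER_SECOND / 60
print(f"Worst case: {worst_case_minutes:.1f} minutes")  # ~8.3 minutes
```

If the victim's password appears anywhere in the wordlist, the expected time is even shorter; only rate limiting, lockouts, or strong passwords change the arithmetic.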
Ekaterina Kilyusheva, Head of Information Security Analytics Research Group of Positive Technologies said: “Network perimeters of most tested corporate information systems remain extremely vulnerable to external attacks.
“Our automated security assessment proved that all companies have network services available for connection on their network perimeter, allowing hackers to exploit software vulnerabilities and bruteforce credentials to these services.
Minimizing the number of services on the network perimeter is recommended
Kilyusheva continued: “At most of the companies, experts found accessible web services, remote administration interfaces, and email and file services on the network perimeter. Most companies also had external-facing resources with arbitrary code execution or privilege escalation vulnerabilities.
“With maximum privileges, attackers can edit and delete any information on the host, which creates a risk of DoS attacks. On web servers, these vulnerabilities may also lead to defacement, unauthorized database access, and attacks on clients. In addition, attackers can pivot to target other hosts on the network.
“We recommend minimizing the number of services on the network perimeter and making sure that accessible interfaces truly need to be available from the Internet. If this is the case, it is recommended to ensure that they are configured securely, and businesses install updates to patch any known vulnerabilities.
“Vulnerability management is a complex task that requires proper instrumental solutions,” Kilyusheva added. “With modern security analysis tools, companies can automate resource inventories and vulnerability searches, and also assess security policy compliance across the entire infrastructure. Automated scanning is only the first step toward achieving an acceptable level of security. To get a complete picture, it is vital to combine automated scanning with penetration testing. Subsequent steps should include verification, triage, and remediation of risks and their causes.”
The majority of applications contain at least one security flaw and fixing those flaws typically takes months, a Veracode report reveals.
This year’s analysis of 130,000 applications found that it takes about six months for teams to close half the security flaws they find.
The report also uncovered some best practices that significantly improve these fix rates. Some factors teams have a lot of control over, while others they have very little control over; the report categorizes these as “nature vs. nurture”.
Within the “nature” side, factors such as the size of the application and organization as well as security debt were considered, while the “nurture” side accounts for actions such as scanning frequency, cadence, and scanning via APIs.
Fixing security flaws: Nature or nurture?
The report revealed that addressing issues with modern DevSecOps practices results in higher flaw remediation rates. For example, using multiple application security scan types, working within smaller or more modern apps, and embedding security testing into the pipeline via an API all make a difference in reducing time to fix security defects, even in apps with a less than ideal “nature.”
“The goal of software security isn’t to write applications perfectly the first time, but to find and fix the flaws in a comprehensive and timely manner,” said Chris Eng, Chief Research Officer at Veracode.
“Even when faced with the most challenging environments, developers can take specific actions to improve the overall security of the application with the right training and tools.”
Other key findings
Flawed applications are the norm: 76% of applications have at least one security flaw, but only 24% have high-severity flaws. This is a good sign that most applications do not have critical issues that pose serious risks to the application. Frequent scanning can reduce the time it takes to close half of observed findings by more than three weeks.
Open source flaws on the rise: while 70% of applications inherit at least one security flaw from their open source libraries, 30% of applications have more flaws in their open source libraries than in the code written in-house.
The key lesson is that software security comes from getting the whole picture, which includes identifying and tracking the third-party code used in applications.
Multiple scan types prove efficacy of DevSecOps: teams using a combination of scan types including static analysis (SAST), dynamic analysis (DAST), and software composition analysis (SCA) improve fix rates. Those using SAST and DAST together fix half of flaws 24 days faster.
Automation matters: those who automate security testing in the SDLC address half of the flaws 17.5 days faster than those that scan in a less automated fashion.
Paying down security debt is critical: the link between frequently scanning applications and faster remediation times has been established in prior research.
This year’s report also found that reducing security debt – fixing the backlog of known flaws – lowers overall risk. Older applications with high flaw density experience much slower remediation times, adding an average of 63 days to close half of flaws.
Mark Sangster, VP and Industry Security Strategist at eSentire, is a cybersecurity evangelist who has spent significant time researching and speaking to peripheral factors influencing the way that legal firms integrate cybersecurity into their day-to-day operations. In this interview, he discusses MDR services and the MDR market.
What are the essential building blocks of a robust MDR service?
Managed Detection and Response (MDR) must combine two elements. The first is an aperture that can collect the full spectrum of telemetry. This means not only monitoring the network through traditional logging and perimeter defenses but also collecting security telemetry from endpoints, cloud services and connected IoT devices.
The wider the aperture, the more light, or signal. This creates the need for rapid ingestion of a growing volume of data, while doing so in near real-time, to aid rapid detection.
The second element is the ability to respond beyond simple alerting. This means the ability to disrupt north-south traffic at the TCP/IP, DNS and geo-fencing levels, and to disrupt application layer traffic or at least block specific applications. It also encompasses the ability to perform endpoint forensics to determine the integrity of accessed data and systems, and to quarantine devices – from endpoints to industrial IoT devices and other operational systems, such as medical diagnosis and patient-management systems.
What makes an MDR service successful?
MDR services require hyper-vigilance and the ability to scale and rapidly adapt to secure emerging technology. This includes OT-based systems beyond the typical auspices of IT. It also requires an ecosystem of talent: working with universities to guide curricula, training programs, certification maintenance and career paths through the Security Operations Center (SOC) into threat intelligence and lab work.
The MDR market is becoming more competitive and the number of providers continues to grow. What is the best approach for choosing an MDR provider?
Like any vendor selection, it is more about determining your requirements than picking vendors based on boasts or comprehensive data sheets. It means testing vendor capabilities and carefully matching them to your requirements. For example, if you don’t have internal forensics capabilities, then a vendor that is good at detection but only provides alerts won’t solve your problem.
Find a vendor that provides full services and matches your internal capabilities.
How do you see the MDR market evolving in the near future? What are organizations looking for?
More and more, companies will move to outsourced SOC-like services. This means MDR firms need to up their game, and a tighter definition must come into play to weed out pretender firms. Too much rests on their capabilities.
MDR vendors also need to focus on emerging tech (5G, IIoT, etc.) and be prepared to defend against larger adversaries, like organized criminal elements and state-sponsored actors who now troll the midmarket space.
The COVID-19 pandemic has largely proven to be an accelerator of cloud adoption and extension and will continue to drive a faster conversion to cloud-centric IT.
Global spending on cloud services to rise
According to IDC, total global spending on cloud services, the hardware and software components underpinning cloud services, and the professional and managed services opportunities around cloud services will surpass $1 trillion in 2024 while sustaining a double-digit compound annual growth rate (CAGR) of 15.7%.
“Cloud in all its permutations – hardware/software/services/as a service as well as public/private/hybrid/multi/edge – will play ever greater, and even dominant, roles across the IT industry for the foreseeable future,” said Richard L. Villars, Group VP, Worldwide Research at IDC.
“By the end of 2021, based on lessons learned in the pandemic, most enterprises will put a mechanism in place to accelerate their shift to cloud-centric digital infrastructure and application services twice as fast as before the pandemic.”
Strongest growth in the as a service category
The strongest growth in cloud revenues will come in the as a service category – public (shared) cloud services and dedicated (private) cloud services. This category, which is also the largest category in terms of overall revenues, is forecast to deliver a five-year CAGR of 21.0%.
By 2024, the as a service category will account for more than 60% of all cloud revenues worldwide. The services category, which includes cloud-related professional services and cloud-related management services, will be the second largest category in terms of revenue but will experience the slowest growth with an 8.3% CAGR. This is due to a variety of factors, including greater use of automation in cloud migrations.
The smallest cloud category, infrastructure build, which includes hardware, software, and support for enterprise private clouds and service provider public clouds, will enjoy solid growth (11.1% CAGR) over the forecast period.
Factors driving the cloud market forward
While the impact of COVID-19 could have some negative effects on cloud adoption over the next several years, there are a number of factors that are driving the cloud market forward.
- The ecosystem of tech companies helping customers migrate to cloud environments, create new innovations in the cloud, and manage their expanding cloud environments will enable enterprises to meet their accelerated schedules for moving to cloud.
- Consumption-based IT offerings are aimed at leveraging public cloud-like capabilities in an on-premises environment, reducing the complexity and restructuring the cost for enterprises that want additional security, dedicated resources, and more granular management capabilities.
- The adoption of cloud services should enable organizations to shift IT from maintenance of legacy IT to new digital transformation initiatives, which can lead to new business revenue and competitiveness as well as create new opportunities for suppliers of professional services.
- Hybrid cloud has become central to successful digital transformation efforts by defining an IT architectural approach, an IT investment strategy, and an IT staffing model that ensures the enterprise can achieve the optimal balance across dimensions without sacrificing performance, reliability, or control.
Vulnerability scanners can be a very useful addition to any development or operations process. Since a typical vulnerability scanner needs to detect vulnerabilities in deployed software, they are (generally) not dependent on the language or technology used for the application they are scanning.
This means they are often not the top choice for detecting the largest number of vulnerabilities, or for catching subtle bugs and business logic issues, but it makes them great and very common tools for testing a large number of diverse applications – a scenario where dynamic application security testing tools are indispensable. This includes testing for security defects in software currently being developed as part of an SDLC process, reviewing third-party applications deployed inside one’s network (as part of a due diligence process) or – most commonly – finding issues in all kinds of internally developed applications.
We reviewed Netsparker Enterprise, which is one of the industry’s top choices for web application vulnerability scanning.
Netsparker Enterprise is primarily a cloud-based solution, which means it will focus on applications that are publicly available on the open internet, but it can also scan in-perimeter or isolated applications with the help of an agent, which is usually deployed in a pre-packaged Docker container or a Windows or Linux binary.
To test this product, we wanted to know how Netsparker handles a few things:
1. Scanning workflow
2. Scan customization options
3. Detection accuracy and results
4. CI/CD and issue tracking integrations
5. API and integration capabilities
6. Reporting and remediation efforts
To assess the tool’s detection capabilities, we needed a few targets to scan and assess.
After some thought, we decided on the following targets:
1. DVWA – Damn Vulnerable Web Application – An old-school extremely vulnerable application, written in PHP. The vulnerabilities in this application should be detected without an issue.
2. Vulnapi – a Python 3-based vulnerable REST API, written with the FastAPI framework running on Starlette ASGI, featuring a number of API-based vulnerabilities.
After logging in to Netsparker, you are greeted with a tutorial and a “hand-holding” wizard that helps you set everything up. If you have worked with a vulnerability scanner before, you might know what to do, but this feature is useful for people who don’t have that experience, e.g., software or DevOps engineers, who should definitely use such tools in their development processes.
Initial setup wizard
Scanning targets can be added manually or through a discovery feature that will try to find them by matching the domain from your email, websites, reverse IP lookups and other methods. This is a useful feature if other methods of asset management are not used in your organization and you can’t find your assets.
New websites or assets for scanning can be added directly or imported via a CSV or a TXT file. Sites can be organized in Groups, which helps with internal organization or per project / per department organization.
Adding websites for scanning
Scans can be defined per group or per specific host. Scans can be either defined as one-off scans or be regularly scheduled to facilitate the continuous vulnerability remediation process.
To better guide the scanning process, the classic scan scope features are supported. For example, you can define specific URLs as “out-of-scope” either by supplying a full path or a regex pattern – a useful option if you want to skip specific URLs (e.g., logout, user delete functions). Specific HTTP methods can also be marked as out-of-scope, which is useful if you are testing an API and want to skip DELETE methods on endpoints or objects.
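An out-of-scope filter of this kind is easy to picture in code. The following Python sketch uses our own example patterns, not Netsparker's actual scope syntax.

```python
import re

# Example out-of-scope patterns: skip destructive or session-ending URLs.
# These regexes are illustrative, not Netsparker's configuration format.
OUT_OF_SCOPE = [
    re.compile(r"/logout$"),
    re.compile(r"/users/\d+/delete$"),
]

def in_scope(path: str) -> bool:
    """A URL is crawled/attacked only if no out-of-scope pattern matches."""
    return not any(p.search(path) for p in OUT_OF_SCOPE)

print(in_scope("/account/settings"))   # True
print(in_scope("/logout"))             # False: skipped by the scanner
print(in_scope("/users/42/delete"))    # False
```

The same idea extends to HTTP methods: a scanner can consult an analogous deny-list before issuing, say, a DELETE request against an API endpoint.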
Initial scan configuration
Scan scope options
One feature we quite liked is the support for uploading the “sitemap” or specific request information into Netsparker before scanning. This feature can be used to import a Postman collection or an OpenAPI file to facilitate scanning and improve detection capabilities for complex applications or APIs. Other formats such as CSV, JSON, WADL, WSDL and others are also supported.
For the red team, loading links and information from Fiddler, Burp or ZAP session files is supported, which is useful if you want to expand your automated scanning toolbox. One limitation we encountered is the inability to point to a URL containing an OpenAPI definition – a capability that would be extremely useful for automated and scheduled scanning workflows for APIs that have Swagger web UIs.
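As a workaround for that limitation, you can retrieve the OpenAPI definition yourself and flatten it into a list of request URLs to import. The toy document below is our own invention; a real workflow would first fetch swagger.json over HTTP from the API's docs UI.

```python
import json

# Toy OpenAPI 3 document standing in for a fetched swagger.json.
openapi = json.loads("""{
  "openapi": "3.0.0",
  "servers": [{"url": "https://api.example.com"}],
  "paths": {
    "/users": {"get": {}, "post": {}},
    "/users/{id}": {"get": {}, "delete": {}}
  }
}""")

# Flatten (method, URL) pairs that could be fed to a scanner's import step.
base = openapi["servers"][0]["url"]
endpoints = [
    (method.upper(), base + path)
    for path, ops in openapi["paths"].items()
    for method in ops
]
for method, url in endpoints:
    print(method, url)
```

Scheduling this extraction before each scan keeps the imported sitemap in sync with an evolving API definition.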
Scan policies can be customized and tuned in a variety of ways, from the languages used in the application (ASP/ASP.NET, PHP, Ruby, Java, Perl, Python, Node.js and others), to database servers (Microsoft SQL Server, MySQL, Oracle, PostgreSQL, Microsoft Access and others), to the standard choice of Windows- or Linux-based OSes. Scan optimizations should improve the tool’s detection capability, shorten scanning times, and give us a glimpse of where the tool should perform best.
The next important question is: does it blend… or integrate? From an integration standpoint, sending emails and SMS messages about scan events is standard, but support for various issue tracking systems like Jira, Bitbucket, GitLab, PagerDuty and TFS is available, and so is support for Slack and CI/CD integration. For everything else, there is a raw API that can be used to tie Netsparker into other solutions, if you are willing to write a bit of integration scripting.
One really well-implemented feature is the support for logging into the tested application, as the inability to hold a session and scan from an authenticated context can severely degrade scanning results.
Netsparker supports classic form-based login, but 2FA-based login flows that require TOTP or HOTP are also supported. This is a great feature: you can add the OTP seed and define the time period in Netsparker, and you are all set to scan OTP-protected logins. No more shimming and adding code to bypass the 2FA method in order to scan the application.
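For context, the codes involved are generated by the standard RFC 6238 TOTP algorithm, which any tool holding the seed and period can reproduce. The sketch below is a generic Python implementation (checked against the RFC's published test vector), not Netsparker's internal code.

```python
import hashlib
import hmac
import struct

def totp(seed: bytes, timestamp: int, digits: int = 6, period: int = 30) -> str:
    """RFC 6238 TOTP: HMAC-SHA1 over the time-step counter, then truncation."""
    counter = struct.pack(">Q", timestamp // period)
    mac = hmac.new(seed, counter, hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                      # dynamic truncation offset
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 Appendix B test vector: ASCII seed, T = 59 -> "94287082"
print(totp(b"12345678901234567890", 59, digits=8))  # prints 94287082
```

This is why simply storing the seed is enough: the scanner recomputes the current code at login time, exactly as an authenticator app would.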
Custom scripting workflow for authentication
If we had to nitpick, we might point out that it would be great if Netsparker also supported U2F / FIDO2 implementations (by software emulating the CTAP1 / CTAP2 protocol), since that would cover the most secure 2FA implementations.
In addition to form-based authentication, Basic, NTLM/Kerberos, header-based (for JWTs), client certificate and OAuth2-based authentication are also supported, which makes it easy to authenticate to almost any enterprise application. The login/logout flow is also verified and supported through a custom dialog, where you can verify that the supplied credentials work and configure how to retain the session.
Login verification helper
And now for the core of this review: what Netsparker did and did not detect.
In short, everything from DVWA was detected, except broken client-side security, which by definition is almost impossible to detect with security scanning if custom rules aren’t written. So, from a “classic” application point of view, the coverage is excellent, even the out-of-date software versions were flagged correctly. Therefore, for normal, classic stateful applications, written in a relatively new language, it works great.
One interesting point for vulnerability detection is that Netsparker uses an engine that tries to verify if the vulnerability is exploitable and will try to create a “proof” of vulnerability, which reduces false positives.
On the negative side, no vulnerabilities in WebSocket-based communications were found, and neither was the API endpoint that implemented insecure YAML deserialization with pyYAML. By reviewing the Netsparker knowledge base, we also found that there is no support for websockets and deserialization vulnerabilities.
That’s certainly not a dealbreaker, but something that needs to be taken into account. It also reinforces the need to use a SAST-based scanner (even if just a free, open source one) in the application security scanning stack to improve test coverage, in addition to other, manual security review processes.
Multiple levels of detail (from extensive, executive summary, to PCI-DSS level) are supported, both in a PDF or HTML export option. One nice feature we found is the ability to create F5 and ModSecurity rules for virtual patching. Also, scanned and crawled URLs can be exported from the reporting section, so it’s easy to review if your scanner hit any specific endpoints.
Scan results dashboard
Scan result details
Instead of describing the reports, we decided to export a few and attach them to this review for your enjoyment and assessment. All of them have been submitted to VirusTotal for our more cautious readers.
Netsparker’s reporting capabilities satisfy our requirements: the reports contain everything a security or AppSec engineer or a developer needs.
Since Netsparker integrates with JIRA and other ticketing systems, the general vulnerability management workflow for most teams will be supported. For lone security teams, or where modern workflows aren’t integrated, Netsparker also has an internal issue tracking system that will let the user track the status of each found issue and run rescans against specific findings to see if mitigations were properly implemented. So even if you don’t have other methods of triage or processes set up as part of a SDLC, you can manage everything through Netsparker.
Netsparker is extremely easy to set up and use. The wide variety of integrations allow it to be integrated into any number of workflows or management scenarios, and the integrated features and reporting capabilities have everything you would want from a standalone tool. As far as features are concerned, we have no objections.
The login flow – from the simple interface and 2FA support all the way to the scripting interface – makes it easy to authenticate even in complex environments, and the option to report on scanned and crawled endpoints helps users verify their scanning coverage.
Taking into account that this is an automated scanner that relies on “black boxing” a deployed application, without any instrumentation of the deployed environment or source code scanning, we find it very accurate, though it could still be improved (e.g., by adding the capability to detect deserialization vulnerabilities). Following the review, Netsparker confirmed that deserialization detection is included in the product development plans.
Nevertheless, we can highly recommend Netsparker.
Earlier this week SonicWall patched 11 vulnerabilities affecting its Network Security Appliance (NSA). Among those is CVE-2020-5135, a critical stack-based buffer overflow vulnerability in the appliances’ VPN Portal that could be exploited to cause denial of service and possibly remote code execution.
The SonicWall NSAs are next-generation firewall appliances, with a sandbox, an intrusion prevention system, SSL/TLS decryption and inspection capabilities, network-based malware protection, and VPN capabilities.
CVE-2020-5135 was discovered by Nikita Abramov of Positive Technologies and Craig Young of Tripwire’s Vulnerability and Exposures Research Team (VERT), and has been confirmed to affect:
- SonicOS 6.5.4.7-79n and earlier
- SonicOS 6.5.1.11-4n and earlier
- SonicOS 6.0.5.3-93o and earlier
- SonicOSv 6.5.4.4-44v-21-794 and earlier
- SonicOS 7.0.0.0-1
“The flaw can be triggered by an unauthenticated HTTP request involving a custom protocol handler. The vulnerability exists within the HTTP/HTTPS service used for product management as well as SSL VPN remote access,” Tripwire VERT explained.
“This flaw exists pre-authentication and within a component (SSLVPN) which is typically exposed to the public Internet.”
By using Shodan, both Tripwire and Tenable researchers discovered nearly 800,000 SonicWall NSA devices with the affected HTTP server banner exposed on the internet. However, as the latter noted, the actual number of vulnerable devices is impossible to determine, because their firmware versions could not be established from the banner (i.e., some may already have been patched).
A persistent DoS condition is apparently easy for attackers to achieve, as it requires no prior authentication and can be triggered by sending a specially crafted request to the vulnerable service/SSL VPN portal.
VERT says that a code execution exploit is “likely feasible,” though it’s a bit more difficult to pull off.
Mitigation and remediation
There is currently no evidence that the flaw is being actively exploited, nor is public PoC exploit code available, so admins have a window of opportunity to upgrade affected devices.
Aside from implementing the offered update, they can alternatively disconnect the SSL VPN portal from the internet, though this action does not mitigate the risk of exploitation of some of the other flaws fixed by the latest updates.
Starting next week, Zoom users – both those who are on one of the paid plans and those who use it for free – will be able to try out the solution’s new end-to-end encryption (E2EE) option.
In this first rollout phase, all meeting participants:
- Must join from the Zoom desktop client, mobile app, or Zoom Rooms
- Must enable the E2EE option at the account level and then for each meeting they want to use E2EE for
How does Zoom E2EE work?
“Zoom’s E2EE uses the same powerful GCM encryption you get now in a Zoom meeting. The only difference is where those encryption keys live,” the company explained.
“In typical meetings, Zoom’s cloud generates encryption keys and distributes them to meeting participants using Zoom apps as they join. With Zoom’s E2EE, the meeting’s host generates encryption keys and uses public key cryptography to distribute these keys to the other meeting participants. Zoom’s servers become oblivious relays and never see the encryption keys required to decrypt the meeting contents.”
The option will be available as a technical preview and will work for meetings with up to 200 participants, all of whom must have the E2EE setting enabled in order to join.
For the moment, though, enabling E2EE for a meeting means giving up on certain features: “join before host”, cloud recording, streaming, live transcription, Breakout Rooms, polling, 1:1 private chat, and meeting reactions.
“Participants will also see the meeting leader’s security code that they can use to verify the secure connection. The host can read this code out loud, and all participants can check that their clients display the same code,” the company added.
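Such security codes are typically derived deterministically from the shared meeting key, so every client that holds the correct key displays the same short string. A hypothetical sketch of the idea (the derivation below is illustrative only, not Zoom’s actual scheme):

```python
import hashlib
import secrets

# The host generates the meeting key; with E2EE, the server never sees it.
meeting_key = secrets.token_bytes(32)

def security_code(key):
    """Derive a short, human-comparable code from a shared key.

    Illustrative only: hash the key and group the first 20 hex digits
    into four 5-character chunks for readability.
    """
    digest = hashlib.sha256(key).hexdigest()
    return " ".join(digest[i:i + 5] for i in range(0, 20, 5))

# The host reads the code aloud; each participant computes it locally
# from their copy of the key and checks that the strings match.
print(security_code(meeting_key))
```

If a man-in-the-middle substituted its own key for any participant, that participant’s locally computed code would no longer match the one the host reads out.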
E2EE for everybody
In June 2020, Zoom CEO Eric Yuan announced the company’s intention to offer E2EE only to paying customers, but after a public outcry they decided to extend its benefits to customers with free accounts as well.
“Free/Basic users seeking access to E2EE will participate in a one-time verification process that will prompt the user for additional pieces of information, such as verifying a phone number via text message. Many leading companies perform similar steps to reduce the mass creation of abusive accounts,” the company reiterated in this latest announcement.
COVID-19 has forced developer agility into overdrive, as the tech industry’s quick push to adapt to changing dynamics has accelerated digital transformation efforts and necessitated the rapid introduction of new software features, patches, and functionalities.
During this time, organizations across both the private and public sector have been turning to open source solutions as a means to tackle emerging challenges while retaining the rapidity and agility needed to respond to evolving needs and remain competitive.
Since well before the pandemic, software developers have leveraged open source code as a means to speed development cycles. The ability to use pre-made packages of code rather than build software from the ground up has enabled them to save valuable time. However, the rapid adoption of open source has brought its own security challenges, which developers and organizations must address.
Here are some best practices developers should follow when implementing open source code to promote security:
Know what and where open source code is in use
First and foremost, developers should create and maintain a record of where open source code is being used across the software they build. Applications today are usually designed using hundreds of unique open source components, which then reside in their software and workspaces for years.
As these open source packages age, there is an increasing likelihood of vulnerabilities being discovered in them and publicly disclosed. If the use of components is not closely tracked against the countless new vulnerabilities discovered every year, software leveraging these components becomes open to exploitation.
Attackers understand all too well how often teams fall short in this regard, and software intrusions via known open source vulnerabilities are a highly common source of breaches. Tracking open source code usage, along with vigilance around updates and vulnerabilities, will go a long way toward mitigating security risk.
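Such a record can start as simply as an inventory of pinned dependencies. A minimal sketch that parses a pip-style requirements file into a component inventory (illustrative only; real SBOM tooling also captures transitive dependencies, hashes, and licenses):

```python
import re

def parse_requirements(text):
    """Build a minimal {package: version} inventory from a pip
    requirements file. Only exact '==' pins are recorded."""
    inventory = {}
    for line in text.splitlines():
        line = line.split("#")[0].strip()  # drop comments and blanks
        if not line:
            continue
        m = re.match(r"([A-Za-z0-9._-]+)\s*==\s*([\w.]+)", line)
        if m:
            # Normalize names to lowercase, as package indexes do.
            inventory[m.group(1).lower()] = m.group(2)
    return inventory

reqs = """\
requests==2.24.0   # pinned
PyYAML==5.3.1
# dev tools
flask==1.1.2
"""
print(parse_requirements(reqs))
# {'requests': '2.24.0', 'pyyaml': '5.3.1', 'flask': '1.1.2'}
```

An inventory like this, refreshed on every build, is what makes it possible to answer “are we running the vulnerable version?” the day an advisory is published.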
Understand the risks before adopting open source
Aside from tracking vulnerabilities in the code that’s already in use, developers must do their research on open source components before adopting them to begin with. While an obvious first step is ensuring that there are no known vulnerabilities in the component in question, other factors should be considered focused on the longevity of the software being built.
Teams should carefully consider the level of support offered for a given component. It’s important to get satisfactory answers to questions such as:
- How often is the component patched?
- Are the patches of high quality and do they address the most pressing security issues when released?
- Once implemented, are they communicated effectively and efficiently to the user base?
- Is the group or individual who built the component a trustworthy source?
Leverage automation to mitigate risk
It’s no secret that COVID-19 has altered developers’ working conditions. In fact, 38% of developers are now releasing software monthly or faster, up from 27% in 2018. But this increased pace often comes paired with unwanted budget cuts and organizational changes. As a result, the imperative to “do more with less” has become a rallying cry for business leaders. In this context, it is indisputable that automation across the entire IT security portfolio has skyrocketed to the top of the list of initiatives designed to improve operational efficiency.
While already an important asset for achieving true DevSecOps agility, automated scanning technology has become near-essential for any organization attempting to stay secure while leveraging open source code. Manually tracking and updating open source vulnerabilities across an organization’s entire software suite is hard work that only increases in difficulty with the scale of an organization’s software deployments. And what was inefficient in normal times has become unfeasible in the current context.
Automated scanning technologies alleviate the burden of open source security by handling processes that would otherwise take up precious time and resources. These tools are able to detect and identify open source components within applications, provide detailed risk metrics regarding open source vulnerabilities, and flag outdated libraries for developers to address. Furthermore, they provide detailed insight into thousands of public open source vulnerabilities, security advisories and bugs, to ensure that when components are chosen they are secure and reputable.
Finally, these tools help developers prioritize and triage remediation efforts once vulnerabilities are identified. Equipped with the knowledge of which vulnerabilities present the greatest risk, developers are able to allocate resources most efficiently to ensure security does not get in the way of timely release cycles.
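At its core, the matching such tools automate is a comparison of each pinned component against an advisory feed. A toy sketch of that step — the `ADVISORIES` mapping and the naive dotted-version comparison are both hypothetical stand-ins for a real vulnerability database and a proper version parser:

```python
# Hypothetical advisory feed: package -> first fixed version.
ADVISORIES = {"pyyaml": "5.4", "flask": "1.1.4"}

def parse_version(v):
    # Naive dotted-version parse; real tools use PEP 440-aware parsers.
    return tuple(int(p) for p in v.split(".") if p.isdigit())

def flag_vulnerable(inventory):
    """Return (name, pinned, first_fixed) for every component pinned
    below the first fixed version in the advisory feed."""
    findings = []
    for name, version in inventory.items():
        fixed = ADVISORIES.get(name)
        if fixed and parse_version(version) < parse_version(fixed):
            findings.append((name, version, fixed))
    return findings

inventory = {"requests": "2.24.0", "pyyaml": "5.3.1", "flask": "1.1.2"}
for name, have, fixed in flag_vulnerable(inventory):
    print(f"{name} {have} is vulnerable; upgrade to >= {fixed}")
```

Commercial scanners add the hard parts — keeping the advisory feed current, resolving transitive dependencies, and scoring severity so remediation can be prioritized — but the core check is this simple.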
Confidence in a secure future
When it comes to open source security, vigilance is the name of the game. Organizations must be sure to reiterate the importance of basic best practices to developers as they push for greater speed in software delivery.
While speed has long been understood to come at the cost of software security, this type of outdated thinking cannot persist, especially when technological advancements in automation have made such large strides in eliminating this classically understood tradeoff. By following the above best practices, organizations can be more confident that their COVID-19 driven software rollouts will be secure against issues down the road.
As the move to the cloud is accelerated by digital transformation across industries, virtual appliance security has fallen behind, Orca Security reveals.
Virtual appliance security
The report illuminated major gaps in virtual appliance security, finding many are being distributed with known, exploitable and fixable vulnerabilities and on outdated or unsupported operating systems.
To help move the cloud security industry towards a safer future and reduce risks for customers, 2,218 virtual appliance images from 540 software vendors were analyzed for known vulnerabilities and other risks to provide an objective assessment score and ranking.
Virtual appliances are an inexpensive and relatively easy way for software vendors to distribute their wares for customers to deploy in public and private cloud environments.
“Customers assume virtual appliances are free from security risks, but we found a troubling combination of rampant vulnerabilities and unmaintained operating systems,” said Avi Shua, CEO, Orca Security.
“The Orca Security 2020 State of Virtual Appliance Security Report shows how organizations must be vigilant to test and close any vulnerability gaps, and that the software industry still has a long way to go in protecting its customers.”
Known vulnerabilities run rampant
Most software vendors are distributing virtual appliances with known vulnerabilities and exploitable and fixable security flaws.
- The research found that less than 8 percent of virtual appliances (177) were free of known vulnerabilities. In total, 401,571 vulnerabilities were discovered across the 2,218 virtual appliances from 540 software vendors.
- For this research, 17 critical vulnerabilities were identified, deemed to have serious implications if left unaddressed in a virtual appliance. Some of these well-known and easily exploitable vulnerabilities included: EternalBlue, DejaBlue, BlueKeep, DirtyCOW, and Heartbleed.
- Meanwhile, 15 percent of virtual appliances received an F rating, deemed to have failed the research test.
- More than half of tested virtual appliances were below an average grade, with 56 percent obtaining a C rating or below (15.1 percent F; 16.1 percent D; 25 percent C).
- However, after the 287 virtual appliances that software vendors updated in response to the findings were rescanned, their average grade increased from a B to an A.
Outdated appliances increase risk
Multiple virtual appliances were at security risk from age and lack of updates. The research found that most vendors are not updating or discontinuing their outdated or end-of-life (EOL) products.
- The research found that only 14 percent (312) of the virtual appliance images had been updated within the last three months.
- Meanwhile, 47 percent (1,049) had not been updated within the last year; 5 percent (110) had been neglected for at least three years, and 11 percent (243) were running on out-of-date or EOL operating systems.
- Some outdated virtual appliances were, however, updated after initial testing. For example, a Redis Labs product that scored an F due to an out-of-date operating system and many vulnerabilities now scores an A+ after updates.
The silver lining
Under the principle of Coordinated Vulnerability Disclosure, researchers emailed each vendor directly, giving them the opportunity to fix their security issues. Fortunately, the tests have started to move the cloud security industry forward.
As a direct result of this research, vendors reported that 36,259 out of 401,571 vulnerabilities have been removed by patching or discontinuing their virtual appliances from distribution. Some of these key corrections or updates included:
- Dell EMC issued a critical security advisory for its CloudBoost Virtual Edition
- Cisco published fixes to 15 security issues found in one of its virtual appliances scanned in the research
- IBM updated or removed three of its virtual appliances within a week
- Symantec removed three poorly scoring products
- Splunk, Oracle, IBM, Kaspersky Labs and Cloudflare also removed products
- Zoho updated half of its most vulnerable products
- Qualys updated a 26-month-old virtual appliance that included a user enumeration vulnerability that Qualys itself had discovered and reported in 2018
Maintaining virtual appliances
For customers and software vendors concerned about the issues illuminated in the report, there are corrective and preventive actions that can be taken. Software suppliers should ensure their virtual appliances are well maintained and that new patches are provided as vulnerabilities are identified.
When vulnerabilities are discovered, the product should be patched or discontinued for use. Meanwhile, vulnerability management tools can also discover virtual appliances and scan them for known issues. Finally, companies should also use these tools to scan all virtual appliances for vulnerabilities before use as supplied by any software vendor.