In today’s world, most external cyberattacks start with phishing. For attackers, it’s almost a no-brainer: phishing is cheap and humans are fallible, even after going through anti-phishing training.
Patrick Harr, CEO at SlashNext, says that while security awareness training is an important aspect of a multi-layered defense strategy, simulating attacks during computer-based training sessions is not an effective way to learn, because people don’t necessarily retain the information.
“Working from home, where there are more distractions, makes it even less likely that people really pay attention to these trainings. That’s why it’s not uncommon to see the same people who tune out training falling for scams again and again,” he noted.
That’s why defenders must preempt attacks, he says, and reinforce a lesson during a live attack. When something gets through and someone clicks on a malicious URL, defenders must be able to simultaneously block the attack and show the victim what the phisher was attempting to do.
Latest phishing trends
Harr, who has over 20 years of experience as a senior executive and GM at industry-leading security and storage companies and as a serial entrepreneur and CEO at multiple successful start-ups, is now leading SlashNext, a cybersecurity startup that uses AI to predict phishing threats and protect enterprise users from them.
He says that most CISOs assume phishing is a corporate email problem and their current line of defense is adequate, but they are wrong.
“We are detecting 21,000 new phishing attacks a day, many of which have moved beyond corporate email and simple credential stealing. These attacks can easily evade email phishing defenses that rely on static, reputation-based detection. That’s why we typically see 80-90% of attacks evading conventional lines of defense to compromise the network,” he told Help Net Security.
“Magnify this by 150,000 new zero-hour phishing threats a week, almost double the number of threats versus a year ago, and we can safely say, ‘Houston, we have a problem!’”
They are seeing:
- More text-based phishing, with no actual links, across SMS, gaming, search services, ad networks, and collaboration platforms like Zoom, Teams, Box, Dropbox, and Slack, as well as attacks on mobile devices
- A proliferation of phishing payloads beyond credential-stealing scams, which have been around for ages
- An increase in scareware, where phishers attempt to scare people into taking an action, such as sharing an email
- Rogue software attacks embedded in browser extensions and social engineering schemes like the massive Twitter bitcoin scam that happened in July
“Finally, we’re seeing cybercriminals trying out innovative ways to evade detection. For example, bad actors may register a domain that lays dormant for months before going live,” he added, and noted that they’ve witnessed a 3,000% increase in the number of phishing attacks since everyone began working and learning from home, and they expect this growth trend will continue.
Advice for CISOs
His main advice to CISOs is not to be complacent and to be diligent: near-term, mid-term, and long-term.
“You’ve got to take a comprehensive, multi-layer phishing defense approach outside the firewall, where your biggest user population is working remotely, and inside the firewall for your internal users. You need to protect mobile devices and PC/Mac endpoints, with end-to-end encryption (E2EE) deployed,” he opined.
“You also have to be mindful of corporate users’ personal side, as their personal and business lives have converged, and many people use the same devices and same credentials across personal and business accounts.”
Thirdly, this type of attack needs to be prevented from happening: “Use AI-enabled defenses to fight AI-enabled attacks. Fight machines with machines and adopt a preemptive security posture.”
Finally, some attacks will inevitably breach all defenses, so organizations must be prepared to quickly detect and respond to them, and to perform the necessary cleanup.
Microsoft’s Active Directory (AD) is by far the most widely used enterprise repository for digital identities. Microsoft Active Directory Certificate Services (ADCS) is an integrated, optional component of Windows Server designed to issue digital certificates.
Since it’s a built-in and “free” option, it’s no surprise that ADCS has been widely embraced by enterprises around the world. An on-premises or cloud/hybrid public key infrastructure (PKI) has proven to be more secure and easier to use than passwords, and is far easier to deal with when automated certificate management is integrated into AD.
For organizations operating an AD environment, the ability to leverage the certificate template information already included in Microsoft certificate services can make running a Microsoft CA extremely appealing. Since AD and Microsoft certificate services are connected, you can seamlessly register and provision certificates to all domain-connected objects based on group policies. Auto enrollment and silent installation make certificate enrollment and renewal easy for IT administrators and end users alike.
One of the greatest advantages of the Microsoft CA is automation, but that advantage does not extend to endpoints outside the Windows environment. Unfortunately, there are no free or open source Linux, UNIX or Mac tools available today that provide auto-enrollment or integrate with the Microsoft CA. The only “free” option is to manually create and renew certificates from a Microsoft CA using complicated and error-prone commands.
Within enterprise networks, Linux is often used for critical services that require X.509 trusted certificates. One typical need is for an SSL/TLS Server Authentication certificate, or web server certificate, on Red Hat Enterprise Linux (RHEL), Ubuntu server, or other Linux distributions. Certificates are also often needed on many other endpoints, including macOS-based systems, and to provide trusted security for enterprise applications.
The traditional process of manually creating an X.509 certificate on Linux, UNIX or macOS requires a working level of PKI knowledge and can take three to six hours to complete. You have to generate a key pair, create a Certificate Signing Request (CSR), submit it to a Certificate Authority (CA), wait for a PKI administrator to issue a certificate, download the certificate, configure and update the application using the service, and finally verify that it’s active. An example of what’s involved in creating a CSR using OpenSSL is shown in Figure 1.
Figure 1. Using OpenSSL to create a CSR requires knowledge of enterprise PKI policies for keys and certificates and filling in lots of details by hand.
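The manual workflow described above can be sketched as the sequence of OpenSSL commands an administrator would typically type by hand. The hostname, file names, and subject fields below are illustrative assumptions, not the commands from Figure 1:

```python
# Sketch of the manual certificate workflow: the OpenSSL commands an
# administrator would typically run by hand. All names are illustrative.

def build_csr_commands(host, country="US", org="Example Corp"):
    """Return the openssl commands that generate a key pair and a CSR."""
    key_file = f"{host}.key"
    csr_file = f"{host}.csr"
    subject = f"/C={country}/O={org}/CN={host}"
    return [
        # 1. Generate a 2048-bit RSA private key.
        ["openssl", "genrsa", "-out", key_file, "2048"],
        # 2. Create a CSR from that key; the subject must match
        #    enterprise PKI policy for the certificate's role.
        ["openssl", "req", "-new", "-key", key_file,
         "-out", csr_file, "-subj", subject],
    ]

for cmd in build_csr_commands("web01.example.com"):
    print(" ".join(cmd))
```

And this is only the request side: submitting the CSR, waiting for issuance, downloading, installing, and verifying the certificate all remain manual steps on top of it.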
A few years ago, when multi-year certificate lifespans were the norm and certificate volumes were lower, a few manually issued certificates weren’t seen as a big problem. When exceptions like a Linux or Mac system came up, they were handled on a one-off basis. This in turn led to the creation of manual processes. Such processes were easily justified by saying that the certificates could be easily tracked using a spreadsheet, and since the numbers were small and certificate renewals were years apart, it wasn’t worth the effort to get a product to solve the problem when the existing solution was sufficient.
Now, however, that has most definitely changed. Apple, Google and Mozilla have implemented policies that went into effect on Sept. 1, 2020 and that say that any new website certificate valid for more than 398 days will not be trusted by their respective browsers and will instead be rejected. Adding to the certificate management problem is the dramatic increase in the number of devices and applications that require certificates, often numbering into the thousands or more.
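The browser policy above amounts to a simple validity check, sketched here with illustrative dates (only the Sept. 1, 2020 cutoff and the 398-day limit come from the policy itself):

```python
from datetime import date

# Browser policy sketch: website certificates issued on or after
# Sept. 1, 2020 must not be valid for more than 398 days.

POLICY_START = date(2020, 9, 1)
MAX_VALIDITY_DAYS = 398

def is_browser_trusted(not_before: date, not_after: date) -> bool:
    """Return False if the certificate violates the 398-day limit."""
    if not_before < POLICY_START:
        return True  # certificates issued before the cutoff are unaffected
    return (not_after - not_before).days <= MAX_VALIDITY_DAYS

print(is_browser_trusted(date(2020, 10, 1), date(2021, 10, 1)))  # 365-day cert
print(is_browser_trusted(date(2020, 10, 1), date(2022, 10, 1)))  # ~2-year cert
```

A one-year certificate passes; a two-year certificate issued after the cutoff does not, which is what makes manual renewal processes so much more painful at today's volumes.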
Looking across the industry landscape, there are third-party certificate management systems (CMS) that can automate enterprise certificate processes more broadly, but such systems require a large-scale “rip and replace”. You have to deeply integrate each Linux system with Active Directory, including switching your system user authentication over as well.
This requires extensive changes to existing Linux and Microsoft infrastructure, staff re-training, and additional software licensing costs. The “rip and replace” approach also requires implementation time frames ranging from a few months to a few years for complex PKIs.
The advantage of such an approach – once you’ve worked through the implementation process – is end-to-end automation. When a CMS is used to create a certificate, it has all the data it needs to not only monitor the certificate for expiration but automatically provision a replacement certificate without human intervention.
While some CMS solutions can be incredibly complex, expensive, and time-consuming to install, there is a new generation of CMS offerings designed to simply extend the Microsoft-based PKI to encompass certificates on systems and applications outside the purview of Active Directory.
Such systems install into an existing Microsoft ADCS environment and mirror Windows certificate auto enrollment onto Linux, UNIX or macOS systems. Certificates can be automatically provisioned to registered computers from a central management console. Alternatively, as shown in Figure 2, the administrator of an end-node can request a certificate using one simple command, without having to create a keypair, generate a certificate request, or translate hard-to-understand enrollment templates, formats and attribute requirements.
Figure 2. Creating a CSR is greatly simplified through automation and requires no PKI expertise.
Once run, this command automatically creates a CSR, submits it to the enterprise Microsoft CA, and installs the certificate after it has been issued. This is done using pre-configured PKI policies stored on the enterprise CA. The policies are set up based on the role or purpose of the certificate. Because these policies are pre-defined, the system admin doesn’t need to know anything beyond the role or purpose of the system they are setting up.
Once the certificate has been installed, the admin can move on to other tasks with the knowledge that the enterprise CA will automatically renew the certificate without further intervention. And because the certificate was created by the enterprise CA and not self-signed, the certificate is automatically verifiable by any application in the enterprise.
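The automatic-renewal behavior described above can be sketched as a simple renewal-window check an enrollment agent might run. The 30-day window and the inventory entries are assumptions for illustration, not documented ADCS defaults:

```python
from datetime import date

# Sketch of auto-renewal logic: re-enroll any certificate that has
# entered its renewal window. Window size and inventory are assumptions.

RENEWAL_WINDOW_DAYS = 30

def needs_renewal(not_after: date, today: date) -> bool:
    """True if the certificate expires within the renewal window."""
    return (not_after - today).days <= RENEWAL_WINDOW_DAYS

def check_certificates(certs, today):
    """Yield the names of certificates that should be re-enrolled."""
    for name, not_after in certs.items():
        if needs_renewal(not_after, today):
            yield name

inventory = {"web01": date(2021, 1, 10), "db01": date(2021, 6, 1)}
print(list(check_certificates(inventory, date(2021, 1, 1))))
```

Because the CA drives this check rather than a spreadsheet and a human, shortening certificate lifetimes adds no extra work for the administrator.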
With certificate lifetimes already getting shorter, and likely to continue to shrink, the need for automated certificate management has never been greater. While the Microsoft CA addresses the automation challenge within the Active Directory environment, far too many enterprises are still relying on manual processes for non-Microsoft systems and applications. Rather than abandon the Microsoft CA at considerable expense and disruption, it may be time to consider a certificate management option that brings the benefits of ADCS to the entire enterprise.
Many companies tend to jump into the cloud before thinking about security. They may think they’ve thought about security, but when moving to the cloud, the whole concept of security changes. The security model must transform as well.
Moving to the cloud and staying secure
Most companies maintain a “castle, moat, and drawbridge” attitude to security. They put everything inside the “castle” (datacenter); establish a moat around it, with sharks and alligators, guns on turrets; and control access by raising the drawbridge. The access protocol involves a request for access, vetting through firewall rules where the access is granted or denied. That’s perimeter security.
When moving to the cloud, perimeter security is still important, but identity-based security is available to strengthen the security posture. That’s where a cloud partner skilled at explaining and operating a different security model is needed.
Anybody can grab a virtual machine, build the machine in the cloud, and be done, but establishing a VM and transforming the machine to a service with identity-based security is a different prospect. When identity is added to security, the model looks very different, resulting in cost savings and an increased security posture.
Advanced technology, cost of security, and lack of cybersecurity professionals place a strain on organizations. Cloud providers invest heavily in infrastructure, best-in-class tools, and a workforce uniquely focused on security. As a result, organizations win operationally, financially, and from a security perspective, when moving to the cloud. To be clear, moving applications and servers, as is, to the cloud does not make them secure.
Movement to the cloud should be a standardized process, guided by a Cloud Center of Excellence (CCoE) or Cloud Business Office (CBO); when that process is focused on security first, organizations can reap the security benefits.
Although security is marketed as a shared responsibility in the cloud, ultimately, the owner of the data (customer) is responsible and the responsibility is non-transferrable. In short, the customer must understand the responsibility matrix (RACI) involved to accomplish their end goals. Every cloud provider has a shared responsibility matrix, but organizations often misunderstand the responsibilities or the lines fall into a grey area. Regardless of responsibility models, the data owner has a responsibility to protect the information and systems. As a result, the enterprise must own an understanding of all stakeholders, their responsibilities, and their status.
When choosing a partner, it’s vital for companies to identify their exact needs, their weaknesses, and even their culture. No cloud vendor will cover it all from the beginning, so it’s essential that organizations take control and ask the right questions (see Cloud Security Alliance’s CAIQ), in order to place trust in any cloud provider. If it’s to be a managed service, for example, it’s crucial to ask detailed questions about how the cloud provider intends to execute the offering.
It’s important to develop a standard security questionnaire and probe multiple layers deep into the service model until the provider is unable to meet the need. Looking through a multilayer deep lens allows the customer and service provider to understand the exact lines of responsibility and the details around task accomplishment.
It might sound obvious, but it’s worth stressing: trust is a shared responsibility between the customer and cloud provider. Trust is also earned over time and is critical to the success of the customer-cloud provider relationship. That said, zero trust is a technical term that means, from a technology viewpoint, assume danger and breach. Organizations must trust their cloud provider but should avoid blind trust and validate. Trust as a Service (TaaS) is a newer acronym that refers to third-party endorsement of a provider’s security practices.
Key influencers of a customer’s trust in their cloud provider include:
- Data location
- Investigation status and location of data
- Data segregation (keeping cloud customers’ data separated from others)
- Privileged access
- Backup and recovery
- Regulatory compliance
- Long-term viability
A TaaS example: Google Cloud
Google has taken great strides to earn customer trust, designing the Google Cloud Platform with a keen eye on zero trust, implemented through its BeyondCorp model. For example, Google has implemented two core concepts:
- Delivery of services and data: ensuring that people with the correct identity and the right purpose can access the required data every time
- Prioritization and focus: access and innovation are placed ahead of threats and risks, meaning that as products are innovated, security is built into the environment
Transparency is very important to the trust relationship. Google has enabled transparency through strong visibility and control of data. When evaluating cloud providers, understanding their transparency related to access and service status is crucial. Google ensures transparency by using specific controls including:
- Limited data center access from a physical standpoint, adhering to strict access controls
- Disclosing how and why customer data is accessed
- Incorporating a process of access approvals
Multi-layered security for a trusted infrastructure
Finally, cloud services must provide customers with an understanding of how each layer of infrastructure works and build rules into each. This includes operational and device security, encryption of data at rest, multiple layers of identity, and finally storage services, all multi-layered and secured by default.
Cloud native companies have a security-first approach and naturally have a higher security understanding and posture. That said, when choosing a cloud provider, enterprises should always understand, identify, and ensure that their cloud solution addresses each one of their security needs, and who’s responsible for what.
Essentially, every business must find a cloud partner that can answer all the key questions, provide transparency, and establish a trusted relationship in the zero trust world where we operate.
Andrew Magnusson started his information security career 20 years ago, and with this book he offers the knowledge he has accumulated since, to help readers eliminate security weaknesses and threats within their systems.
As he points out in the introduction, bugs are everywhere, but there are actions and processes the reader can apply to eliminate or at least mitigate the associated risks.
The author starts off by explaining vulnerability management basics, the importance of knowing your network and the process of collecting and analyzing data.
He explains the importance of a vulnerability scanner and why it is essential to configure and deploy it correctly, since it provides valuable information for successfully completing the vulnerability management process.
The next step is to automate these processes, which prioritizes vulnerabilities and frees up time to work on more severe issues, consequently boosting the organization’s security posture.
Finally, it is time to decide what to do with the vulnerabilities you have detected, which means choosing the appropriate security measures, whether it’s patching, mitigation or systemic measures. When the risk has a low impact, there’s also the option of accepting it, but this still needs to be documented and agreed upon.
An important part of this process, and perhaps also the hardest, is building relationships within the organization. The reader needs to respect office politics and make sure all the decisions and changes they make are approved by their superiors.
The second part of the book is practical, with the author guiding the reader through the process of building their own vulnerability management system, with a detailed analysis of the open source tools they need to use, such as Nmap, OpenVAS, and cve-search, all supported by code examples.
The reader will learn how to build an asset and vulnerability database and how to keep it accurate and up to date. This is especially important when generating reports, as those need to be based on recent vulnerability findings.
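The kind of asset-and-vulnerability database the book walks through could be sketched along these lines. The schema and sample rows are illustrative assumptions; in practice the data would come from tools like Nmap, OpenVAS, and cve-search:

```python
import sqlite3

# Sketch of a minimal asset-and-vulnerability database.
# Schema and sample data are illustrative assumptions.

db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE assets (ip TEXT PRIMARY KEY, hostname TEXT, last_seen TEXT);
    CREATE TABLE vulns  (ip TEXT, cve TEXT, cvss REAL,
                         FOREIGN KEY (ip) REFERENCES assets (ip));
""")
db.execute("INSERT INTO assets VALUES ('10.0.0.5', 'web01', '2020-10-01')")
db.execute("INSERT INTO vulns VALUES ('10.0.0.5', 'CVE-2020-1472', 10.0)")
db.execute("INSERT INTO vulns VALUES ('10.0.0.5', 'CVE-2019-0708', 9.8)")

# Reports should be based on recent findings: join assets to their
# vulnerabilities, most severe first.
rows = db.execute("""
    SELECT a.hostname, v.cve, v.cvss FROM assets a
    JOIN vulns v ON v.ip = a.ip ORDER BY v.cvss DESC
""").fetchall()
for hostname, cve, cvss in rows:
    print(hostname, cve, cvss)
```

Keeping `last_seen` and re-scanning regularly is what keeps such a database accurate enough to report from.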
Who is it for?
Practical Vulnerability Management is aimed at security practitioners who are responsible for protecting their organization and tasked with boosting its security posture. It is assumed they are familiar with Linux and Python.
Despite the technical content, the book is an easy read and offers comprehensive solutions to keeping an organization secure and always prepared for possible attacks.
In one form or another, APIs have been around for years, bringing the benefits of ease of use, efficiency and flexibility to the developer community. The advantage of using APIs for mobile and web apps is that developers can build and deploy functionality and data integrations quickly.
API security posture
But there is a huge downside to this approach. Undermining the power of an API-driven development methodology are shadow, deprecated and non-conforming APIs that, when exposed to the public, introduce the risk of data loss, compromise or automated fraud.
The stateless nature of APIs and their ubiquity makes protecting them increasingly difficult, largely because malicious actors can leverage the same developer benefits – ease of use and flexibility – to easily execute account takeovers, credential stuffing, fake account creation or content scraping. It’s no wonder that Gartner identified API security as a top concern for 50% of businesses.
Thankfully, it’s never too late to get your API footprint in order to better protect your organization’s critical data. Here are a few easy steps you can follow to mitigate API security risks immediately:
1. Start an organization-wide conversation
If your company is having conversations around API security at all, it’s likely that they are happening in a fractured manner. If there’s no larger, cohesive conversation, then various development and operational teams could be taking conflicting approaches to mitigating API security risks.
For this reason, teams should discuss how they can best work together to support API security initiatives. As a basis for these meetings, teams should refer to the NIST Cybersecurity Framework, as it’s a great way to develop a shared understanding of organization-wide cybersecurity risks. The NIST CSF will help the collective team to gain a baseline awareness about the APIs used across the organization to pinpoint the potential gaps in the operational processes that support them, so that companies can work towards improving their API strategy immediately.
2. Ask (& answer) any outstanding questions as a team
To improve an organization’s API security posture, it’s critical that outstanding questions are asked and answered immediately so that gaps in security are reduced and closed. When posing these questions to the group, consider the API assets you have overall, the business environment, governance, risk assessment, risk management strategy, access control, awareness and training, anomalies and events, continuous security monitoring, detection processes, etc. Leave no stone unturned. Here are a few suggested questions to use as a starting point as you work on the next step in this process towards getting your API security house in order:
- How many APIs do we have?
- How were they developed? Which are open-source, custom built or third-party?
- Which APIs are subject to legal or regulatory compliance?
- How do we monitor for vulnerabilities in our APIs?
- How do we protect our APIs from malicious traffic?
- Are there APIs with vulnerabilities?
- What is the business impact if the APIs are compromised or abused?
- Is API security a part of our on-going developer training and security evangelism?
Once any security holes have been identified through a shared understanding, the team can then work together to fill those gaps.
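As a hypothetical illustration, the answers to the questions above can be captured in a simple inventory that surfaces those gaps. Field names and the sample entries are assumptions:

```python
from dataclasses import dataclass

# Sketch of an API inventory recording answers to the team's questions.
# Fields and sample entries are illustrative assumptions.

@dataclass
class APIRecord:
    name: str
    origin: str            # "custom", "open-source", or "third-party"
    regulated: bool        # subject to legal/regulatory compliance?
    authenticated: bool    # protected by access control?
    monitored: bool        # covered by vulnerability monitoring?

inventory = [
    APIRecord("payments", "third-party", True, True, True),
    APIRecord("legacy-export", "custom", True, False, False),
]

# Gaps to close first: regulated APIs lacking authentication or monitoring.
gaps = [r.name for r in inventory
        if r.regulated and not (r.authenticated and r.monitored)]
print(gaps)
```

Even a spreadsheet-level inventory like this makes the "how many APIs do we have, and which are unprotected?" conversation concrete.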
3. Strive for complete and continuous API security and visibility
Visibility is critical to immediate and continuous API security. By going through steps one and two, organizations are working towards more secure APIs today – but what about tomorrow and in the years to come, as your API footprint expands exponentially?
Consider implementing a visibility and monitoring solution to help you oversee this security program on an ongoing basis, so that your organization can feel confident in having a strong and comprehensive API security posture that grows and adapts as your number of APIs expand and shift. The key components to visibility and monitoring?
- Centralized visibility and inventory of all APIs
- A detailed view of API traffic patterns
- Discovery of APIs transmitting sensitive data
- Continuous API specification conformance assessment
- Validated authentication and access control programs
- Automatic risk analysis based on predefined criteria

Continuous, runtime visibility into how many APIs an organization has, who is accessing them, where they are located and how they are being used is key to API security.
As organizations continue to expand their use of APIs to drive their business, it’s crucial that companies consider every path malicious actors might take to attack their organization’s critical data.
Your brand is a valuable asset, but it’s also a great attack vector. Threat actors exploit the public’s trust of your brand when they phish under your name or when they counterfeit your products. The problem gets harder because you engage with the world across so many digital platforms – the web, social media, mobile apps. These engagements are obviously crucial to your business.
Something else should be obvious as well: guarding your digital trust – public confidence in your digital security – is make-or-break for your business, not just part of your compliance checklist.
COVID-19 has put a renewed spotlight on the importance of defending against cyberattacks and data breaches as more users are accessing data from remote or non-traditional locations. Crisis fuels cybercrime and we have seen that hacking has increased substantially as digital transformation initiatives have accelerated and many employees have been working from home without adequate firewalls and back-up protection.
The impact of cybersecurity breaches is no longer constrained to the IT department. The frequency and sophistication of ransomware, phishing schemes, and data breaches have the potential to destroy both brand health and financial viability. Organizations across industry verticals have seen their systems breached as cyber thieves have tried to take advantage of a crisis.
Good governance will be essential for handling the management of cyber issues. Strong cybersecurity will also be important to show customers that steps are being taken to avoid hackers and keep their data safe.
What will the new normal be like? The COVID crisis has not changed the cybersecurity fundamentals: while the pandemic has turned business and society upside down, well-established cybersecurity practices – some known for decades – remain the best way to protect yourself.
1. Data must be governed
Data governance is the capability within an organization to provide for and protect high-quality data throughout its lifecycle. This includes data integrity, data security, availability, and consistency. Data governance includes the people, processes, and technology that enable appropriate handling of data across the organization. Data governance program policies include:
- Delineating accountability for those responsible for data and data assets
- Assigning responsibility to appropriate levels in the organization for managing and protecting the data
- Determining who can take what actions, with what data, under what circumstances, using what methods
- Identifying safeguards to protect data
- Providing integrity controls to ensure the quality and accuracy of data
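The "who can take what actions, with what data, under what circumstances" policy above can be sketched as a simple allow-list check. The roles, actions, and datasets are illustrative assumptions:

```python
# Policy sketch: grant a (role, action, dataset) triple only if it is
# explicitly allowed. Roles and datasets are illustrative assumptions.

POLICY = {
    ("analyst", "read", "customer-data"),
    ("dba", "read", "customer-data"),
    ("dba", "modify", "customer-data"),
}

def is_allowed(role: str, action: str, dataset: str) -> bool:
    """Return True only if the triple appears in the policy allow-list."""
    return (role, action, dataset) in POLICY

print(is_allowed("analyst", "read", "customer-data"))    # granted
print(is_allowed("analyst", "modify", "customer-data"))  # denied
```

Real governance programs layer circumstances (time, location, method) on top of this, but the deny-by-default shape stays the same.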
2. Patch management and vulnerability management: Two sides of a coin
Address threats with vulnerability management. Bad actors look to take advantage of discovered vulnerabilities in an attempt to infect a workstation or server. Managing threats is a reactive process where the threat must be actively present, whereas vulnerability management is proactive, seeking to close the security gaps that exist before they are taken advantage of.
It’s more than just patching vulnerabilities. Formal vulnerability management doesn’t simply involve the act of patching and reconfiguring insecure settings. Vulnerability management is a disciplined practice that requires an organizational mindset within IT that new vulnerabilities are found daily requiring the need for continual discovery and remediation.
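One concrete piece of that discipline is a prioritization queue: rank open findings by severity and age so the riskiest get remediated first. The scoring rule and sample findings below are illustrative assumptions:

```python
# Sketch of vulnerability prioritization: sort open findings so higher
# CVSS scores, then older findings, come first. Data is illustrative.

def priority(vuln):
    """Sort key: higher CVSS first, then older findings first."""
    return (-vuln["cvss"], -vuln["age_days"])

findings = [
    {"cve": "CVE-2020-0601", "cvss": 8.1, "age_days": 12},
    {"cve": "CVE-2020-1472", "cvss": 10.0, "age_days": 3},
    {"cve": "CVE-2019-0708", "cvss": 9.8, "age_days": 90},
]

queue = sorted(findings, key=priority)
for v in queue:
    print(v["cve"], v["cvss"])
```

Because new vulnerabilities arrive daily, the value is in re-running this continually, not in any single sorted list.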
3. Not “if” but “when”: Assume you’re already hacked
If you build your operations and defenses with this premise in mind, your chances of detecting these types of attacks and preventing breaches are much greater than those of most organizations today.
The importance of incident response steps
A data breach should be viewed as a “when” not “if” occurrence, so be prepared for it. Under the pressure of a critical-level incident is no time to be figuring out your game plan. Your future self will thank you for the time and effort you invest on the front end.
Incident response can be stressful, and certainly is when a critical asset is involved and you realize there’s an actual threat. Incident response steps help in these stressful, high-pressure situations by guiding you more quickly to successful containment and recovery. Response time is critical to minimizing damage. With every second counting, having a plan already in place to follow is the key to success.
4. Your size does not mean security maturity
It does not matter how big you are or what resources your team can access. As defenders, we always think, “If I only had enough money or people, I could solve this problem.” We need to change our thinking. It’s not how much you spend but rather whether that spend is effective. Does it allow your team to disrupt attacks, or just to wait to (maybe) be alerted? No matter where an organization is on its journey toward security maturity, a risk assessment can prove invaluable in deciding where and when it needs most improvement.
For more mature organizations, the risk assessment process will focus less on discovering major controls gaps and more on finding subtler opportunities for continuously improving the program. An assessment of a less mature program is likely to find misalignments with business goals, inefficiencies in processes or architecture, and places where protections could be taken to another level of effectiveness.
5. Do more with less
Limited budgets, limited staff, limited time. Any security professional will have dealt with all of these repeatedly while trying to launch new initiatives or when completing day-to-day tasks. They are possibly the most severe and dangerous adversaries that many cybersecurity professionals will face. They affect every organization regardless of industry, size, or location and pose an existential threat to even the most prepared company. There is no easy way to contain them either, since no company has unlimited funding or time, and the lack of cybersecurity professionals makes filling roles incredibly tricky.
How can organizations cope with these natural limitations? The answer is resource prioritization, along with a healthy dose of operational improvements. By identifying areas where processes can be streamlined and understanding what the most significant risks are, organizations can begin to help protect their systems while staying within their constraints.
6. Rome wasn’t built in a day
An edict out of the IT department won’t get the job done. Building a security culture takes time and effort. What’s more, cybersecurity awareness training ought to be a regular occurrence — once a quarter at a minimum — where it’s an ongoing conversation with employees. One-and-done won’t suffice.
People have short memories, so repetition is altogether appropriate when it comes to a topic that’s so strategic to the organization. This also needs to be part of a broader top-down effort starting with senior management. Awareness training should be incorporated across all organizations, not just limited to governance, threat detection, and incident response plans. The campaign should involve more than serving up a dry set of rules, separate from the broader business reality.
The National Institute of Standards and Technology (NIST) has published a cybersecurity practice guide enterprises can use to recover from data integrity attacks, i.e., destructive malware and ransomware attacks, malicious insider activity or simply mistakes by employees that have resulted in the modification or destruction of company data (emails, employee records, financial records, and customer data).
About the guide
Ransomware is currently one of the most disruptive scourges affecting enterprises. While it would be ideal to detect the early warning signs of a ransomware attack to minimize its effects or prevent it altogether, there are still too many successful incursions that organizations must recover from.
Special Publication (SP) 1800-11, Data Integrity: Recovering from Ransomware and Other Destructive Events can help organizations to develop a strategy for recovering from an attack affecting data integrity (and to be able to trust that any recovered data is accurate, complete, and free of malware), recover from such an event while maintaining operations, and manage enterprise risk.
The goal is to monitor and detect data corruption in widely used as well as custom applications, and to identify what data was altered or corrupted, when, by whom, what the impact of the action was, and whether other events happened at the same time. Finally, organizations are advised on how to restore data to its last known good configuration and how to identify the correct backup version.
“Multiple systems need to work together to prevent, detect, notify, and recover from events that corrupt data. This project explores methods to effectively recover operating systems, databases, user files, applications, and software/system configurations. It also explores issues of auditing and reporting (user activity monitoring, file system monitoring, database monitoring, and rapid recovery solutions) to support recovery and investigations,” the authors added.
The National Cybersecurity Center of Excellence (NCCoE) at NIST used specific commercially available and open-source components when creating a solution to address this cybersecurity challenge, but noted that each organization’s IT security experts should choose products that will best work for them by taking into consideration how they will integrate with the IT system infrastructure and tools already in use.
The NCCoE tested the setup against several test cases (ransomware attack, malware attack, a user modifies a configuration file, an administrator modifies a user’s file, a database or database schema altered in error by an administrator or script). Additional materials can be found here.
As ransomware continues to prove how devastating it can be, one of the scariest things for security pros is how quickly it can paralyze an organization. Just look at Honda, which was forced to shut down all global operations in June, and Garmin, which had its services knocked offline for days in July.
Ransomware isn’t hard to detect, but identifying it once encryption and exfiltration are already rampant is too little, too late. However, there are several warning signs that organizations can catch before the real damage is done. In fact, FireEye found that there are usually three days of dwell time between these early warning signs and the detonation of ransomware.
So, how does a security team find these weak but important early warning signals? Somewhat surprisingly perhaps, the network provides a unique vantage point to spot the pre-encryption activity of ransomware actors such as those behind Maze.
Here’s a guide, broken down by MITRE category, of the many different warning signs organizations being attacked by Maze ransomware can see and act upon before it’s too late.
With Maze actors, there are several initial access vectors, such as phishing attachments and links, external-facing remote access such as Microsoft’s Remote Desktop Protocol (RDP), and access via valid accounts. All of these can be discovered while network threat hunting across traffic. Furthermore, given this represents the actor’s earliest foray into the environment, detecting this initial access is the organization’s best bet to significantly mitigate impact.
- T1193 Spear-phishing attachment
- T1133 External remote services
- T1078 Valid accounts
- T1190 Exploit public-facing application
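Hunting for the external remote services vector, for example, can start with something as simple as scanning flow records for RDP sessions initiated from outside the network. The sketch below assumes flow records have already been exported as dicts with `src`, `dst`, and `dport` fields (the field names and the internal/external split are illustrative, not from any particular tool):

```python
import ipaddress

def is_internal(ip: str) -> bool:
    """Treat RFC 1918 and loopback addresses as internal."""
    addr = ipaddress.ip_address(ip)
    return addr.is_private or addr.is_loopback

def find_external_rdp(flows):
    """Flag flows where an external source reaches TCP/3389 on an internal host.

    `flows` is assumed to be an iterable of dicts with 'src', 'dst', and
    'dport' keys, e.g., exported from a NetFlow or Zeek collector.
    """
    return [
        f for f in flows
        if f["dport"] == 3389 and not is_internal(f["src"]) and is_internal(f["dst"])
    ]
```

Flagged flows are a starting point for triage, not proof of compromise; legitimate remote access should be whitelisted before alerting on the rest.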
The execution phase is still early enough in an attack to shut it down and foil any attempts to detonate ransomware. Common early warning signs to watch for in execution include users being tricked into clicking a phishing link or attachment, or when certain tools such as PsExec have been used in the environment.
- T1204 User execution
- T1035 Service execution
- T1028 Windows remote management
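Hunting for service execution, for instance, can be as simple as sweeping exported Windows System event logs for service installs (Event ID 7045) whose names match known remote-execution tools such as PsExec. A minimal sketch, assuming events have already been parsed into dicts (field names and the tool list are illustrative):

```python
# Illustrative list of service names associated with remote-execution tools;
# PsExec installs a service named PSEXESVC on the target host.
SUSPECT_SERVICE_NAMES = {"psexesvc", "paexec", "csexecsvc"}

def flag_service_installs(events):
    """Return service-install events (ID 7045) matching suspect tool names.

    `events` is assumed to be an iterable of dicts with 'event_id' and
    'service_name' keys parsed from an exported Windows System event log.
    """
    return [
        e for e in events
        if e["event_id"] == 7045
        and e["service_name"].lower() in SUSPECT_SERVICE_NAMES
    ]
```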
Adversaries using Maze rely on several common techniques, such as a web shell on internet-facing systems and the use of valid accounts obtained within the environment. Once the adversary has secured a foothold, it starts to become increasingly difficult to mitigate impact.
- T1100 Web shell
- T1078 Valid accounts
As an adversary gains higher levels of access it becomes significantly more difficult to pick up additional signs of activity in the environment. For the actors of Maze, the techniques used for persistence are similar to those for privileged activity.
- T1100 Web shell
- T1078 Valid accounts
To hide files and their access to different systems, adversaries like the ones who use Maze will rename files, encode, archive, and use other mechanisms to hide their tracks. Attempts to hide their traces are in themselves indicators to hunt for.
- T1027 Obfuscated files or information
- T1078 Valid accounts
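One concrete check for this kind of masquerading is comparing a file's magic bytes against its claimed extension. A minimal sketch for Windows executables renamed to look benign (the extension list is illustrative):

```python
def looks_like_renamed_executable(name: str, data: bytes) -> bool:
    """Flag files whose content starts with the Windows PE magic bytes ('MZ')
    but whose extension claims something benign -- a common defense-evasion
    trick. The extension list here is an illustrative sample, not exhaustive.
    """
    benign_exts = (".txt", ".log", ".jpg", ".png", ".pdf", ".dat")
    return data[:2] == b"MZ" and name.lower().endswith(benign_exts)
```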
There are several defensive controls that can be put in place to help limit or restrict access to credentials. Threat hunters can enable this process by providing situational awareness of network hygiene including specific attack tool usage, credential misuse attempts and weak or insecure passwords.
- T1110 Brute force
- T1081 Credentials in files
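Brute-force attempts, in particular, stand out as a high count of failed logins from a single source. A minimal sketch, assuming failed logins have already been parsed out of auth logs into (source IP, username) tuples (the threshold is an illustrative tuning knob):

```python
from collections import Counter

def brute_force_sources(failed_logins, threshold=10):
    """Return source IPs with `threshold` or more failed logins in the window.

    `failed_logins` is assumed to be an iterable of (source_ip, username)
    tuples parsed from authentication logs for a given time window.
    """
    counts = Counter(ip for ip, _ in failed_logins)
    return {ip for ip, n in counts.items() if n >= threshold}
```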
Maze adversaries use a number of different methods for internal reconnaissance and discovery. For example, enumeration and data collection tools and methods leave their own trail of evidence that can be identified before the exfiltration and encryption occurs.
- T1201 Password policy discovery
- T1018 Remote system discovery
- T1087 Account discovery
- T1016 System network configuration discovery
- T1135 Network share discovery
- T1083 File and directory discovery
Ransomware actors use lateral movement to understand the environment, spread through the network and then to collect and prepare data for encryption / exfiltration.
- T1105 Remote file copy
- T1077 Windows admin shares
- T1076 Remote Desktop Protocol
- T1028 Windows remote management
- T1097 Pass the ticket
In this phase, Maze actors use tools and batch scripts to collect information and prepare for exfiltration. It is typical to find .bat files or archives using the .7z or .exe extension at this stage.
- T1039 Data from network shared drive
Command and control (C2)
Many adversaries will use common ports or remote access tools to try and obtain and maintain C2, and Maze actors are no different. In the research my team has done, we’ve also seen the use of ICMP tunnels to connect to the attacker infrastructure.
- T1043 Commonly used port
- T1071 Standard application layer protocol
- T1105 Remote file copy
- T1219 Remote access tools
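An ICMP tunnel tends to betray itself through payload sizes far beyond what a normal echo request carries. A minimal sketch of that heuristic, assuming packets have been parsed into dicts with `proto` and `payload` fields (the 64-byte cutoff is an illustrative threshold, not a standard):

```python
def suspicious_icmp(packets, max_payload=64):
    """Flag ICMP packets with unusually large payloads.

    A normal ping payload is small and repetitive; tunneling tools stuff data
    into the payload field. `packets` is assumed to be an iterable of dicts
    with 'proto' and 'payload' (bytes) keys parsed from captured traffic.
    """
    return [
        p for p in packets
        if p["proto"] == "icmp" and len(p["payload"]) > max_payload
    ]
```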
At this stage, the risk of exposure of sensitive data in the public realm is dire and it means an organization has missed many of the earlier warning signs—now it’s about minimizing impact.
- T1030 Data transfer size limits
- T1048 Exfiltration over alternative protocol
- T1002 Data compressed
Ransomware is never good news when it shows up at the doorstep. However, with disciplined network threat hunting and monitoring, it is possible to identify an attack early in the lifecycle. Many of the early warning signs are visible on the network and threat hunters would be well served to identify these and thus help mitigate impact.
In the digital age, staff expect employers to provide hardware, and companies need hardware that allows employees to work efficiently and securely. There are already a number of models to choose from to purchase and manage hardware, however, with remote work policies becoming more popular, enterprises have to prioritize cybersecurity when making their selection.
The COVID-19 pandemic and online shift has brought to light the need for robust cybersecurity strategies and technology that facilitates safe practices. Since the pandemic started, the FBI has reported a 300 percent increase in cybercrime. As more businesses are forced to operate at a distance, hackers are taking advantage of weak links in their networks. At the same time, the crisis has meant many enterprises have had to cut their budgets, and so risk compromising cybersecurity when opting for more cost-effective measures.
Currently, Device-as-a-Service (DaaS), Bring-Your-Own-Device (BYOD) and leasing/buying are some of the most popular hardware options. To determine which is most appropriate for your business cybersecurity needs, here are the pros and cons of each:
Device-as-a-Service (DaaS)

DaaS models are when an organization distributes hardware like computers, tablets, and phones to employees with preconfigured and customized services and software. For many enterprises, DaaS is attractive because it allows them to acquire technology without having to outright buy, set up, and manage it – therefore saving time and money in the long run. Because of DaaS’s growing popularity, 65 percent of major PC manufacturers now offer DaaS capabilities, including Apple and HP.
When it comes to cybersecurity, DaaS is favorable because providers are typically experts in the field. In the configuration phase, they are responsible for ensuring that all devices have the latest security protections installed as standard, and they are also responsible for maintaining such protections. Once the hardware is in use, DaaS models allow providers to monitor the company’s entire fleet – checking that all devices adhere to security policies, including protocols around passwords, approved apps, and accessing sensitive data.
Another bonus is that DaaS can offer analytical insights about hardware, such as device location and condition. With this information, enterprises can be alerted if tech is stolen, missing or outdated and a threat to overall cybersecurity. Not to mention, a smart way to boost the level of protection given by DaaS models is to integrate it with Unified Endpoint Management (UEM). UEM helps businesses organize and control internet-enabled devices from a single interface and uses mobile threat detection to identify and thwart vulnerabilities or attacks among devices.
Nonetheless, to effectively utilize DaaS, enterprises have to determine their own relevant security principles before adopting the model. They then need an in-depth understanding of how these principles are applied throughout DaaS services and what level of assurance the provider delivers. Assuming that DaaS completely removes enterprises from being involved in device cybersecurity would be unwise.
Bring-Your-Own-Device (BYOD)

BYOD is when employees use their own mobiles, laptops, PCs, and tablets for work. In this scenario, companies have greater flexibility and can make significant cost savings, but there are many more risks associated with personal devices than with corporate-issued devices. Although BYOD is popular with employees – who can use devices they are more familiar with – enterprises essentially lose control and visibility of how data is transmitted, stored, and processed.
Personal devices are dangerous because hackers can create a sense of trust via personal apps on the hardware and more easily coerce users into sharing business details or download malicious content. Plus, with BYOD, companies are dependent on employees keeping all their personal devices updated with the most current protective services. One employee forgetting to do so could negate the cybersecurity for the overall network.
Similar to DaaS, UEM can also help companies that have adopted BYOD take a more centralized approach to manage the risk of exposing their data to malicious actors. For example, UEM can block websites or content from personal devices, as well as implement passcodes, and device and disk encryption. Alternatively, VPNs are common to enhance cybersecurity in companies that allow BYOD. In the COVID-19 pandemic, 68 percent of employees claim their company has expanded VPN usage as a direct result of the crisis. It’s worthwhile noting though, that VPNs only encrypt data accessed via the internet and cloud-based services.
When moving forward with BYOD models, enterprises must host regular training and education sessions around safe practices on devices, including recognizing threats, avoiding harmful websites, and the importance of upgrading. They also need to have documented and tested computer security incident response plans, so if any attacks do occur, they are contained as soon as possible.
Leasing / buying
Leasing hardware is when enterprises obtain equipment on a rental basis, in order to retain working capital that can be invested in other areas. In the past, as many as 80 percent of businesses chose to lease their hardware. The trend is less popular today, as SaaS products have proven to be more tailored and scalable.
Still, leasing is beneficial because rather than jeopardizing cybersecurity to purchase large volumes of hardware, enterprises can rent fully covered devices. Likewise, because the latest software typically requires the latest hardware, companies can rent the most recent tech at a fraction of the retail cost.
Comparable to DaaS providers, leasing companies are responsible for device maintenance and have to ensure that every laptop, phone, and tablet has the appropriate security software. Again, however, this does not absolve enterprises from taking an active role in cybersecurity implementation and surveillance.
Unlike leasing, where there can be uncertainty over who owns the cybersecurity strategy, buying is more straightforward. Purchasing hardware outright means companies have complete control over devices and can cherry-pick cybersecurity features to include. It also means they can be more flexible with cybersecurity partners, running trials with different solutions to evaluate which is the best fit.
That said, buying hardware has a noticeable downside: equipment becomes obsolete once new versions are released. In fact, 73 percent of enterprise senior leaders agree that an abundance of outdated equipment leaves companies vulnerable to data security breaches. Considering that, on average, a product cycle takes only 12 to 24 months, and there are thousands of hardware manufacturers at work, devices can swiftly become outdated.
Additionally, because buying is a more permanent action, enterprises run the risk of being stuck with hardware that has been compromised. As opposed to software which can be relatively easily patched to fix, hardware often has to be sent off-site for repairs. This may result in enterprises with limited hardware continuing to use damaged or unprotected devices to avoid downtime in workflows.
If and when a company does decide to dispose of hardware, there are complications around guaranteeing that systems are totally blocked and databases or networks cannot be accessed afterwards. In contrast, providers from DaaS and leasing models expertly wipe devices at the end of contracts or when disposing of them, so enterprises don’t have to be concerned about unauthorized access.
Putting cybersecurity front-and-center
DaaS, BYOD, and leasing/buying all have their own unique benefits when it comes to cybersecurity. Despite all the perks, it has to be acknowledged that BYOD and leasing pose the biggest obstacles for enterprises because they take cybersecurity monitoring and control out of companies’ hands. Nevertheless, for all the options mentioned, UEM is a valuable way to bridge gaps and empower businesses to be in control of cybersecurity, while still being agile.
Ultimately, the most impactful cybersecurity measures are the ones that enterprises are firmly vested in, whatever hardware model they adopt. Businesses should never underestimate the power of a transparent, well-researched, and constantly evolving security framework – one which a hardware model complements, not solely creates.
Cyber threat intelligence (CTI) sharing is a critical tool for security analysts. It takes the learnings from a single organization and shares it across the industry to strengthen the security practices of all.
By sharing CTI, security teams can alert each other to new findings across the threat landscape and flag active cybercrime campaigns and indicators of compromise (IOCs) that the cybersecurity community should be immediately aware of. As this intel spreads, organizations can work together to build upon each other’s defenses to combat the latest threat. This creates a herd-like immunity for networks as defensive capabilities are collectively raised.
Blue teams need to act more like red teams
A recent survey by Exabeam showed that 62 percent of blue teams have difficulty stopping red teams during adversary simulation exercises. A blue team is charged with defending one network. They have the benefit of knowing the ins and outs of their network better than any red team or cybercriminal, so they are well-equipped to spot abnormalities and IOCs and act fast to mitigate threats.
But blue teams have a bigger disadvantage: they mostly work in silos consisting only of members of their immediate team. They typically don’t share their threat intelligence with other security teams, vendors, or industry groups. This means they see cyber threats from a single lens. They lack the broader view of the real threat landscape external to their organization.
This disadvantage is where red teams and cybercriminals thrive. Not only do they choose the rules of the game – the when, where, and how the attack will be executed – they share their successes and failures with each other to constantly adapt and evolve tactics. They thrive in a communications-rich environment, sharing frameworks, toolkits, guidelines, exploits, and even offering each other customer support-like help.
For blue teams to move from defense to prevention, they need to take defense to the attacker’s front door. This proactive approach can only work by having timely, accurate, and contextual threat intelligence. And that requires a community, not a company. But many companies are hesitant to join the CTI community. The SANS 2020 Cyber Threat Intelligence Survey shows that more than 40% of respondents both produce and consume intelligence, leaving much room for improvement over the next few years.
Common challenges for beginning a cyber threat intelligence sharing program
One of the biggest challenges to intelligence sharing is that businesses don’t understand how sharing some of their network data can actually strengthen their own security over time. Much like the early days of open-source software, there’s a fear that if you have anything open to exposure it makes you inherently more vulnerable. But as open source eventually proved, more people collaborating in the open can lead to many positive outcomes, including better security.
Another major challenge is that blue teams don’t have the lawless luxury of sharing threat intelligence with reckless abandon: we have legal teams. And legal teams aren’t thrilled with the notion of admitting to IOCs on their network. And there is a lot of business-sensitive information that shouldn’t be shared, and the legal team is right to protect this.
The opportunity is in finding an appropriate line to walk, where you can share intelligence that contributes to bolstering cyber defense in the larger community without doing harm to your organization.
If you’re new to CTI sharing and want to get involved here are a few pieces of advice.
Clear it with your manager
If you or your organization are new to CTI sharing the first thing to do is to get your manager’s blessing before you move forward. Being overconfident in your organization’s appetite to share their network data (especially if they don’t understand the benefits) can be a costly, yet avoidable mistake.
Start sharing small
Don’t start by asking permission to share details on a data exfiltration event that currently has your company in crisis mode. Instead, ask if it’s ok to share a range of IPs that have been brute forcing logins on your site. Or perhaps you’ve seen a recent surge of phishing emails originating from a new domain and want to share that. Make continuous, small asks and report back any useful findings.
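When you do share, even a minimal, consistent record format makes the intelligence easier for recipients to act on. The sketch below is a hypothetical example of packaging an IOC as JSON; the field names are illustrative and not taken from any particular standard such as STIX:

```python
import json
from datetime import datetime, timezone

def make_ioc_record(indicator, ioc_type, description):
    """Build a minimal shareable IOC record (not a full STIX bundle).

    Field names here are illustrative; real sharing groups typically agree
    on a schema (or use STIX/TAXII) before exchanging indicators.
    """
    return {
        "indicator": indicator,
        "type": ioc_type,  # e.g., "ipv4-range", "domain", "url"
        "description": description,
        "first_seen": datetime.now(timezone.utc).isoformat(timespec="seconds"),
    }

record = make_ioc_record(
    "198.51.100.0/24", "ipv4-range",
    "Brute-force login attempts against public portal",
)
print(json.dumps(record, indent=2))
```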
Share your experience when you can’t share intelligence
When you join a CTI group, you’re going to want to show that you’re an active, engaged member. But sometimes you just don’t have any useful intelligence to share. You can still add value to the group by lending your knowledge and experience. Your perspective might change someone’s mind on their process and make them a better practitioner, thus adding to the greater good.
Demonstrate value of sharing CTI
Tie your participation in CTI groups to any metrics that demonstrate your organization’s security posture has increased during that time. For example, show any time that participation in a CTI group has directly led to intelligence that helped decrease alerted events and helped your team get ahead of a new attack.
There’s a CTI group for everyone
From disinformation and dark web to medical devices and law enforcement, there’s a CTI segment for everything you ever wanted to be involved in. Some are invite-only, so the more active you are in public groups the more likely you’ll be asked to join groups that you’ve shown interest in or have provided useful intelligence about. These hyper-niche groups can provide big value to your organization as you can get expert consulting from top minds in the field.
The more data you have, the more points you can correlate faster. Joining a CTI sharing group gives you access to data you’d never even know about to inform better decision making when it comes to your defensive actions. More importantly, CTI sharing makes all organizations more secure and unites us under a common cause.
Sitting in the midst of an unstable economy, a continued public health emergency, and facing an uptick in successful cyber attacks, CISOs find themselves needing to enhance their cybersecurity posture while remaining within increasingly scrutinized budgets.
Senior leadership recognizes the value of cybersecurity but understanding how to best allocate financial resources poses an issue for IT professionals and executive teams. As part of justifying a 2021 cybersecurity budget, CISOs need to focus on quick wins, cost-effective SaaS solutions, and effective ROI predictions.
Finding the “quick wins” for your 2021 cybersecurity budget
Cybersecurity, particularly with organizations suffering from technology debt, can be time-consuming. Legacy technologies, including internally designed tools, create security challenges for organizations of all sizes.
The first step to determining the “quick wins” for 2021 lies in reviewing the current IT stack for areas that have become too costly to support. For example, as workforce members moved off-premises during the current public health crisis, many organizations found that their technology debt made this shift difficult. With workers no longer accessing resources from inside the organization’s network, organizations with rigid technology stacks struggled to pivot their work models.
Going forward, remote work appears to be one way through the current health and economic crises. Even major technology leaders who traditionally relied on in-person workforces have moved to remote models through mid-2021, with Salesforce the most recent to announce this decision.
Looking for gaps in security, therefore, should be the first step in any budget analysis. As part of this gap analysis, CISOs can look in the following areas:
- VPN and data encryption
- Data and user access
- Cloud infrastructure security
Each of these areas can provide quick wins if addressed correctly: as organizations accelerate their digital transformation strategies to match these new workplace situations, they can now leverage cloud-native security solutions.
Adopting SaaS security solutions for accelerating security and year-over-year value
The SaaS-delivered security solution market exploded over the last five to ten years. As organizations moved their mission-critical business operations to the cloud, cybercriminals focused their activities on these resources.
Interestingly, a CNBC article from July 14, 2020 noted that for the first half of 2020, the number of reported data breaches dropped by 33%. Meanwhile, another CNBC article from July 29, 2020 notes that during the first quarter, large scale data breaches increased by 273% compared to the same time period in 2019. Although the data appears conflicting, the Identity Theft Research Center research that informed the July 14th article specifically notes, “This is not expected to be a long-term trend as threat actors are likely to return to more traditional attack patterns to replace and update identity information needed to commit future identity and financial crimes.” In short, rapidly closing security gaps as part of a 2021 cybersecurity budget plan needs to include the fast wins that SaaS-delivered solutions provide.
SaaS security solutions offer two distinct budget wins for CISOs. First, they offer rapid integration into the organization’s IT stack. In some cases, CISOs can get a SaaS tool deployed within a few weeks, in other cases within a few months. Deployment time depends on the complexity of the problem being solved, the type of integrations necessary, and the enterprise’s size. However, in the same way that agile organizations leverage cloud-based business applications, security teams can leverage rapid deployment of cloud-based security solutions.
The second value that SaaS security solutions offer is YoY savings. Subscription models offer budget conscious organizations several distinct value propositions. First, the organization can reduce hardware maintenance costs, including operational costs, upgrade costs, software costs, and servicing costs. Second, SaaS solutions often enable companies to focus on their highest risk assets and then increase their usage in the future. Third, they allow organizations to pivot more effectively because the reduced up-front capital outlay reduces the commitment to the project.
Applying a dollar value to these during the budget justification process might feel difficult, but the right key performance indicators (KPIs) can help establish baseline cost savings estimates.
Choosing the KPIs for effective ROI predictions
During an economic downturn, justifying cybersecurity budget requests can be increasingly difficult. Most cybersecurity ROI predictions rely on risk evaluations, applying the probability of a data breach to its projected cost. As organizations look to reduce costs to remain financially viable, a “what if” approach may not be as appealing.
However, as part of budgeting, CISOs can look to several value propositions to bolster their spending. Cybersecurity initiatives focus on leveraging resources effectively so that they can ensure the most streamlined process possible while maintaining a robust security program. Aligning purchase KPIs with specific reduced operational costs can help gain buy-in for the solution.
A quick hypothetical can walk through the overarching value of SaaS-based security spending. Continuous monitoring for external facing vulnerabilities is time-consuming and often incorporates inefficiency. Hypothetical numbers based on research indicate:
- A poll of C-level security executives found that 37% receive more than 10,000 alerts each month, with 52% of those alerts identified as false positives.
- The average security analyst spends ten minutes responding to a single alert.
- The average security analyst makes approximately $91,000 per year.
Bringing this data together shows the value of SaaS-based solutions that reduce the number of false positives:
- Every month, enterprise security analysts spend 10 minutes on each of the 5,200 false positives.
- This equates to approximately 866 hours.
- 866 hours, assuming a 40-hour week, is 21.65 weeks.
- Assuming 4 weeks per month, the enterprise needs at least 5 security analysts to manage false positive responses.
- These 5 security analysts cost a total of $455,000 per year in salary, not including bonuses and other benefits.
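The arithmetic above is easy to reproduce, and to re-run with your own numbers, in a few lines; the defaults below mirror the article's hypothetical figures:

```python
def false_positive_cost(alerts=10000, fp_rate=0.52, minutes_per_alert=10,
                        hours_per_week=40, weeks_per_month=4, salary=91000):
    """Back-of-the-envelope cost of triaging false positives.

    Defaults reproduce the article's hypothetical figures: 10,000 alerts/month,
    52% false positives, 10 minutes per alert, $91,000 analyst salary.
    """
    fps = alerts * fp_rate                                 # 5,200 FPs/month
    hours = fps * minutes_per_alert / 60                   # ~866 hours/month
    analysts = hours / (hours_per_week * weeks_per_month)  # ~5.4 analysts
    return int(hours), int(analysts), int(analysts) * salary

hours, analysts, annual_cost = false_positive_cost()
# -> 866 hours/month, at least 5 analysts, $455,000/year in salary
```

Swapping in your own alert volume and false-positive rate turns this into a quick KPI baseline for budget discussions.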
Although CISOs may not want to reduce their number of team members, they may not want to add additional ones, or they may be seeking to optimize the team they have. Tracking KPIs such as the reduction in false positives per month can provide the type of long-term cost value necessary to convince other senior executives and the board of directors.
Securing a 2021 cybersecurity budget
While the number of attacks may have stalled during 2020, cybercriminals have not stopped targeting enterprise data. Phishing attacks and malware attacks have moved away from the enterprise network level and now look to infiltrate end-user devices. As organizations continue to pivot their operating models, they need to look for cost-effective ways to secure their sensitive resources and data. However, budget constrictions arising from 2020’s economic instability may make it difficult for CISOs to gain the requisite dollars to continue to apply best security practices.
As organizations start looking toward their 2021 roadmap, CISOs will increasingly need to be specific about not only the costs associated with purchases but also the cost savings that those purchases provide from both data incident risk and operational cost perspective.
Ransomware has been noted by many as the most threatening cybersecurity risk for organizations, and it’s easy to see why: in 2019, more than 50 percent of all businesses were hit by a ransomware attack – costing an estimated $11.5 billion. In the last month alone, major consumer corporations, including Canon, Garmin, Konica Minolta and Carnival, have fallen victim to major ransomware attacks, resulting in the payment of millions of dollars in exchange for file access.
While there is a lot of discussion about preventing ransomware from affecting your business, the best practices for recovering from an attack are a little harder to pin down.
While the monetary amounts may be smaller for your organization, the importance of regaining access to the information is just as high. What steps should you take for effective ransomware recovery? A few of our best tips are below.
1. Infection detection
Arguably the most challenging step for recovering from a ransomware attack is the initial awareness that something is wrong. It’s also one of the most crucial. The sooner you can detect the ransomware attack, the less data may be affected. This directly impacts how much time it will take to recover your environment.
Ransomware is designed to be very hard to detect. When you see the ransom note, it may have already inflicted damage across the entire environment. Having a cybersecurity solution that can identify unusual behavior, such as abnormal file sharing, can help quickly isolate a ransomware infection and stop it before it spreads further.
Abnormal file behavior detection is one of the most effective means of detecting a ransomware attack, and it produces the fewest false positives compared to signature-based or network traffic-based detection.
One additional method to detect a ransomware attack is to use a “signature-based” approach. The issue with this method is that it requires the ransomware to be known: if the code is available, software can be trained to look for it. This is not recommended, however, because sophisticated attacks use new, previously unknown forms of ransomware. Thus, an AI/ML-based approach is recommended, one that looks for behaviors such as rapid, successive encryption of files and determines that an attack is happening.
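A behavior-based check of the kind described often pairs write-burst detection with a content test, since encrypted data looks statistically random. A minimal sketch of the content side using Shannon entropy (the 7.5 bits-per-byte threshold is an illustrative tuning knob, not a fixed standard):

```python
import math
from collections import Counter

def shannon_entropy(data: bytes) -> float:
    """Entropy in bits per byte; encrypted or compressed data approaches 8.0."""
    if not data:
        return 0.0
    n = len(data)
    return -sum(c / n * math.log2(c / n) for c in Counter(data).values())

def looks_encrypted(data: bytes, threshold: float = 7.5) -> bool:
    """Illustrative heuristic: a file rewritten with near-random content is a
    candidate ransomware artifact, especially during a burst of file writes.
    """
    return shannon_entropy(data) > threshold
```

In practice this check would fire only when many files cross the threshold in quick succession, to avoid flagging legitimately compressed files one at a time.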
Effective cybersecurity also includes good defensive mechanisms that protect business-critical systems like email. Often ransomware affects organizations by means of a phishing email attack or an email that has a dangerous file attached or hyperlinked.
If organizations are ill-equipped to handle dangerous emails, this can be an easy way for ransomware to make its way inside the walls of your organization's on-premises environment or cloud SaaS environment. With cloud SaaS environments in particular, controlling which third-party applications have access to your cloud environment is extremely important.
2. Contain the damage
After you have detected an active infection, the ransomware process can be isolated and stopped from spreading further. If this is a cloud environment, these attacks often stem from a remote file sync or other process driven by a third-party application or browser plug-in running the ransomware encryption process. Digging in and isolating the source of the ransomware attack can contain the infection so that the damage to data is mitigated. To be effective, this process must be automated.
Many attacks happen after hours, when admins are not monitoring the environment, and the reaction must be rapid to stop the malware from spreading. Security policy rules and scripts must be put in place as part of proactive protection. Then, when an infection is identified, the automation kicks in to stop the attack by removing the executable file or extension and isolating the infected files from the rest of the environment.
Another way organizations can help protect themselves and contain the damage should an attack occur is by purchasing cyber liability insurance. Cyber liability insurance is a specialty insurance line intended to protect businesses (and the individuals providing services from those businesses) from internet-based risks (like ransomware attacks) and risks related to information technology infrastructure, information privacy, information governance liability, and other related activities. In this type of attack situation, cyber liability insurance can help relieve some of the financial burden of restoring your data.
3. Restore affected data
In most cases, even if the ransomware attack is detected and contained quickly, there will still be a subset of data that needs to be restored. This requires having good backups of your data to pull back to production. Following the 3-2-1 backup best practice, it’s imperative to have your backup data in a separate environment from production.
The 3-2-1 backup rule consists of the following guidelines:
- Keep 3 copies of any important file, one primary and two backups
- Keep the file on 2 different media types
- Maintain 1 copy offsite
If your backups are of cloud SaaS environments, storing these “offsite” using a cloud-to-cloud backup vendor aligns with this best practice. This will significantly minimize the chance that your backup data is affected along with your production data.
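The 3-2-1 rule lends itself to an automated compliance check. The sketch below assumes a simple inventory of backup copies (the field names are illustrative, not from any particular backup product) and verifies the three conditions directly.

```python
from dataclasses import dataclass

@dataclass
class BackupCopy:
    location: str     # e.g. "primary-nas", "tape-vault", "cloud-bucket"
    media_type: str   # e.g. "disk", "tape", "object-storage"
    offsite: bool

def satisfies_3_2_1(copies: list[BackupCopy]) -> bool:
    """True if the inventory meets the 3-2-1 rule: at least 3 copies,
    on at least 2 distinct media types, with at least 1 offsite."""
    return (
        len(copies) >= 3
        and len({c.media_type for c in copies}) >= 2
        and any(c.offsite for c in copies)
    )
```

A check like this, run against the backup inventory on every audit cycle, catches configuration drift (for example, an offsite copy that was quietly retired) before a recovery depends on it.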
The tried and true way to recover from a ransomware attack involves having good backups of your business-critical data. The importance of backups cannot be stressed enough when it comes to ransomware. Recovering from backup allows you to be in control of getting your business data back and not the attacker.
All too often, businesses incorrectly assume that the cloud service provider has "magically protected" their data. While cloud service providers do offer some protective mechanisms, ultimately the data is your responsibility under the shared responsibility model that most CSPs follow; Microsoft, for example, documents its position in its published shared responsibility model.
4. Notify the authorities
Many of the major compliance regulations that most organizations fall under today, such as PCI-DSS, HIPAA, GDPR, and others, require that organizations notify regulatory agencies of the breach. Notification of the breach should be immediate, and the FBI's Internet Crime Complaint Center (IC3) should be the first organization alerted. Local law enforcement should be informed next. If your organization is in a governed industry, there may be strict guidelines regarding who to inform and when.
5. Test your access
Once data has been restored, test access to the data and any affected business-critical systems to ensure the recovery of the data and services have been successful. This will allow any remaining issues to be remedied before turning the entire system back over to production.
If you’re experiencing slower than usual response times in the IT environment or larger-than-normal file sizes, it may be a sign that something sinister is still looming in the database or storage.
Ransomware prevention vs. recovery
Sometimes the best offense is a good defense. When it comes to ransomware and regaining access to critical files, there are only two options. You either restore your data from backup if you were forward-thinking enough to have such a system in place, or you have to pay the ransom. Beyond the obvious financial implications of acquiescing to the hacker’s demands, paying is risky because there is no way to ensure they will actually provide access to your files after the money is transferred.
There is no code of conduct or contract when negotiating with a criminal. A recent report found that some 42 percent of organizations who paid a ransom did not get their files decrypted.
Given the rising number of ransomware attacks targeting businesses, the consequences of not having a secure backup and detection system in place could be catastrophic to your business. Investing in a solution now helps ensure you won’t make a large donation to a nefarious organization later. Learning from the mistakes of other organizations can help protect yours from a similar fate.
The Internet Society has launched the first-ever regulatory assessment toolkit that defines the critical properties needed to protect and enhance the future of the Internet.
The Internet Impact Assessment Toolkit is a guide to help ensure regulation, technology trends and decisions don’t harm the infrastructure of the Internet. It describes the Internet at its optimal state – a network of networks that is universally accessible, decentralized and open; facilitating the free and efficient flow of knowledge, ideas and information.
Critical properties of the Internet Impact Assessment Toolkit
The five critical properties identified by the Internet Way of Networking (IWN) framework are:
- An accessible infrastructure with a common protocol – A ‘common language’ enabling global connectivity and unrestricted access to the Internet.
- An open architecture of interoperable and reusable building blocks – Open infrastructure with a set of standards enabling permission-free innovation.
- Decentralized management and a single distributed routing system – Distributed routing enabling local networks to grow, while maintaining worldwide connectivity.
- Common global identifiers – A single common identifier allowing computers and devices around the world to communicate with each other.
- A technology neutral, general-purpose network – A simple and adaptable dynamic environment cultivating infinite opportunities for innovation.
When combined, these properties form the unique foundation that underpins the Internet’s success and are essential for its healthy evolution. The closer the Internet aligns with the IWN, the more open and agile it is for future innovation and the broader benefits of collaboration, resiliency, global reach and economic growth.
“The Internet’s ability to support the world through a global pandemic is an example of the Internet Way of Networking at its finest,” explains Joseph Lorenzo Hall, Senior VP for a Strong Internet, Internet Society. “Governments didn’t need to do anything to facilitate this massive global pivot in how humanity works, learns and socializes. The Internet just works – and it works thanks to the principles that underpin its success.”
A resource for policymakers and technologists
The Internet Impact Assessment Toolkit will serve as an important resource to help policymakers and technologists ensure trends in regulatory and technical proposals don’t harm the unique architecture of the Internet. The toolkit explains why each property of the IWN is crucial to the Internet and the social and economic consequences that can arise when any of these properties are damaged.
For instance, the Toolkit shows how China’s restrictive networking model severely impacts its global reach and hinders collaboration with networks beyond its borders. It also highlights how the US administration’s Clean Network proposal challenges the Internet’s architecture by dictating how networks interconnect according to political considerations rather than technical considerations.
“We’re seeing a trend of governments encroaching on parts of the Internet’s infrastructure to try and solve social and political problems through technical means. Ill-informed regulation can drastically alter the Internet’s fundamental architecture and harm the ecosystem that supports it,” continues Hall. “We’re giving both policymakers and Internet users the information and tools to make sure they don’t break this resource that brings connectivity, innovation, and empowerment to everyone.”
With so many organizations switching to a work-from-home model, many are finding security to be increasingly more difficult to administer and maintain. There is an influx of vulnerable points distributed across more locations than ever before, as remote workers strive to maintain their productivity. The result? Security teams everywhere are being stretched.
The Third Global Threat Report from VMware Carbon Black also found little confidence among respondents that the rollout to remote working had been done securely. The study took a deep dive into the effects COVID-19 had on the security of remote working, with 91% of executives stating that working from home has led to a rise in attacks.
Are you making sure your security professionals are up to the task of remote working while security threats are on the rise?
1. Maintain consistency
One way to help mitigate risk is to have your developers and security professionals train at a consistent level so they are all on the same page. Knowing that there is some sort of security architecture at play in your organization and understanding the logistics of how to stress test aspects of that structure will make it easier to prepare for and block attacks.
2. Don’t overlook the details
Training needs to address all aspects of your structure, specifically: information security, data security, cybersecurity, computer security, physical security, IoT security, cloud security, and individual security. Each area of an architecture needs to be tested and hardened regularly for your organization to truly be shielded from security breaches. Be specific about your program: train your staff on how to defend your information around your HR records (SSNs, PII, etc.) and data that could be exposed (shopping cart, customer card numbers), as well as in cyber defense to provide tools against nefarious actors, breaches and threats.
3. Think about the individual
Staff must be trained to know how to lock down computers, so individual machines and network servers are safe. This training should also encompass how to ensure physical security, to protect your storage or physical assets. This comes into play more as the IoT plays a larger role in connecting our devices and BYOD policies allow for more connections to be made between personal and corporate assets. Individual security: each employee is entitled to be secure in their work for a company, and that includes privacy concerns and compliance issues.
4. Keep your head in the cloud
Today, most companies have some sort of cloud presence, and security professionals will need to be trained to constantly check the interfaces to the cloud and any hybrid on-prem and off-prem instances you have.
5. Invest in learning
With constantly changing layers of architecture and amplified room for breaches as a result of remote working, it’s hard to imagine how security professionals stay ahead of all the changes. One thing that keeps teams on top of their game is professional online learning.
During the COVID-19 shelter-in-place mandate, leading eLearning companies have witnessed a massive increase in hours of security content consumed. For some, security is one of the fastest-growing topic areas, which suggests security has become a higher priority this year. This is likely because of the number of workers who have gone remote and the challenges that brings to an organization, particularly for the security department.
6. Consider role-based training
While it’s important to equip teams with skills that apply across functions, there is a case to be made for investing in experts. Cybersecurity is not a field with a single linear growth path: individuals can take different journeys, for example transitioning from vulnerability analyst to security architect. By looking within the organization for individuals to upskill into new roles and responsibilities, you gain the unique benefit of helping them shape roles that fit the needs of the organization.
It’s not often that a business has a dedicated Remote Team Security Lead, because there was rarely a need for one. Considering the quick transition to remote work and possibility that this is the new normal, organizations can benefit by investing in specific training curated to meet the security needs of remote teams. If this role is cultivated within the organization, there is the added benefit of knowing that the lessons being taught provide direct relevancy to specific needs and increase the attractiveness of investing time and effort into skills training.
Training can be the key to preparing security professionals for the unexpected. But there is no one-size-fits-all lesson that can be delivered or an evergreen degree that can keep up with an industry that changes every day. Training needs to be always on the agenda and it needs to be developed in a way that offers different modalities of learning.
Regardless of how the individual best learns, criterion-based assessments can measure knowledge/skills and act as a guide to true, lasting learning. Developing a culture committed to agility and learning is the key to embracing change.
Recent research shows almost three quarters of large businesses believe remote working policies introduced to help stop the spread of COVID-19 are making their companies more vulnerable to cyberattacks. This has introduced new attack vectors for opportunistic cyber attackers, and new challenges for network administrators.
To select a suitable remote workforce protection solution for your business, you need to think about a variety of factors. We’ve talked to several cybersecurity professionals to get their insight on the topic.
Vince Berk, VP, Chief Architect Security, Riverbed
A business needs to meet three main realizations or criteria for a remote workforce protection solution to be effective:
Use of SaaS, where access to the traffic in traditional ways becomes challenging: understanding where data lives, and who accesses it, and controlling this access, is the minimum bar to pass in an environment where packets are not available or the connection cannot be intercepted.
Recognition that users use a multitude of devices (laptops, iPads, phones), many of which are not owned or controlled by the enterprise: can identity be established definitively, can data access be controlled effectively and forensically accurately monitored for compromise at the cloud/datacenter end?
When security becomes ‘too invasive’, workers create out-of-band business processes and “shadow IT,” which are a major blind spot as well as a potential risk surface as company private information ends up outside of the control of the organization: does the solution provide a way to discover and potentially control the use of this modern shadow IT?
A comprehensive security solution for remote work must acknowledge the novel problems these new trends bring and succeed in resolving them across all three criteria.
Kate Bolseth, CEO, HelpSystems
One thing must be clear: your entire management team needs to assist in establishing the right infrastructure in order to facilitate a successful remote workforce environment.
Before looking at any solutions, answer the following questions:
- How are my employees accessing data?
- How are they working?
- How can we minimize the risk of data breaches or inadvertent exposure of sensitive data?
- How do we discern what data is sensitive and needs to be protected?
The answers will inform organizational planning and facilitate employee engagement while removing potential security roadblocks that might thwart workforce productivity. These guidelines must be as fluid as the extraordinary circumstances we are facing without creating unforeseen exposure to risk.
When examining solutions, any option worth considering must be able to identify and classify sensitive personal data and critical corporate information assets. The deployment of enterprise-grade security is essential to protecting the virtual workforce from security breaches via personal computers as well as at-home Wi-Fi networks and routers.
Ultimately, it’s the flow of email that remains the biggest vulnerability for most organizations, so make sure your solution examines emails and files at the point of creation to identify personal data and apply proper protection while providing the link to broader data classification.
Carolyn Crandall, Chief Deception Officer, Attivo Networks
When selecting a remote workforce protection solution, CISOs need to consider three key areas: exposed endpoints, security for Active Directory (AD) and preventing malware from spreading.
Exposed endpoints: standard anti-virus software and VPNs are no match for advanced signature-less or file-less attack techniques. EDR tools enhance detection but still leave gaps. Therefore, pick an endpoint solution capable of quickly detecting endpoint lateral movement, discovery, and privilege escalation.
Security for Active Directory (AD): cloud services and identity access management need protection against credential theft, privilege escalation and AD takeover. In a remote workforce context AD is often over provisioned or misconfigured. A good answer is denial technology which detects discovery behaviors and attempts at privilege escalation.
Preventing spread of malware: it is almost impossible to prevent malware passing from workforce machines reconnecting to the network. It is vital therefore to choose a resolution that uncovers lateral movement, APTs, ransomware and insider threats. Popular options include EPP/EDR, Intrusion Detection/Prevention Systems (IDS/IPS) and deception technology. When selecting, take account of native integrations and automation as well as how well the tools combine to share data and automate incident response.
In short, the answer to remote workforce protection lies in a robust, layered defence. If attackers get through one, there must be additional controls to stop them from progressing.
Daniel Döring, Technical Director Security and Strategic Alliances, Matrix42
Endpoint security requires a bundle of measures, and only companies that take all aspects into account can ensure a high level of security.
Automated malware protection: automated detection in case of anomalies and deviations is a fundamental driver for IT to be able to react quickly in case of an incident. In this way, it is often possible to fend off attacks before they even cause damage.
Device control: all devices that have access to corporate IT must be registered and secured in advance. This includes both corporate devices and private employee devices such as smartphones, tablets, or laptops. If, for example, a smartphone is lost, access to the system can be withdrawn at the click of a mouse.
App control: if, in addition to devices, all applications are centrally controlled by IT, IT risks can be further minimized. The IT department can thus control access at any time.
Encryption: the encryption of all existing data protects against the consequences of data loss.
Data protection at the technological and manual levels: automated and manual measures are combined for greater data protection. Employees must continue to be trained so that they are aware of risks. However, the secure management of data stocks can be simplified with the help of technology in such a way that error tolerance is significantly increased.
Greg Foss, Senior Cybersecurity Strategist, VMware Carbon Black
The most important aspect for any security solution is how this product is going to complement your current environment and compensate for gaps within your existing controls.
Whether you’re looking to upgrade your endpoint protections or add always-on VPN capability for the now predominately remote workforce, there are a few key considerations when it comes to deploying security software for protecting distributed assets:
- Will the solution require infrastructure to deploy, or will this be a remote cloud hosted solution? Both options come with their unique benefits and drawbacks, with cloud being optimal for disparate systems and offloading the burden of securing internet-facing services to the vendor.
- What is the footprint of the agent and are multiple agents required for the solution to be effective? Compute is expensive, agents should be as non-impactful to the system as possible.
- How will this solution improve your security team’s visibility and ability to either prevent or respond to a breach? What key gaps in coverage will this tool help rectify as cost effectively as possible.
- Will this meet the organization’s future needs, as things begin to shift back to the office?
- Lastly, ensure that you allow for the team to operationalize and integrate the platform. This takes time. Don’t bring on too many tools at once.
Matt Lock, Technical Director, Varonis
With more remote working comes more cyberattacks. When selecting a remote workforce solution, CISOs must ask the following questions:
Am I able to provide comprehensive visibility of cloud apps? Microsoft Teams usage exploded by 500% during the pandemic, however given its immediate enforcement, deployments were rushed with misconfigured permissions. It’s paramount to pick a solution that allows security teams to see where sensitive data is overexposed and provide visibility into how each user can access Office 365 data.
Can I confidently monitor insider threat activity? The shift to remote working has seen a spike in insider threat activity and highlighted the importance of understanding where sensitive data is, who has access to it, who’s leveraging that access, and any unusual access patterns. Best practices such as implementing the principle of least privilege to confine user access to the data should also be considered.
Do I have real-time insight into anomalous behavior? Having real-time awareness of unusual VPN, DNS and web activity mustn’t be overlooked. Gaining visibility of this activity helps security teams track and trend progress as they mitigate critical security gaps.
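"Unusual" VPN or DNS activity is typically defined relative to a baseline. A minimal version of that idea is a z-score check against recent history; the threshold of 3 standard deviations here is an illustrative default, and real platforms use far richer models.

```python
import statistics

def is_anomalous(history: list[int], current: int, z_threshold: float = 3.0) -> bool:
    """Flag the current count (e.g., DNS queries per minute for one host)
    if it deviates from the historical baseline by more than z_threshold
    standard deviations."""
    if len(history) < 2:
        return False  # not enough data to form a baseline
    mean = statistics.mean(history)
    stdev = statistics.stdev(history)
    if stdev == 0:
        return current != mean
    return abs(current - mean) / stdev > z_threshold
```

Fed with per-host counters, even this crude check surfaces the sudden query bursts typical of DNS tunneling or mass exfiltration, while staying quiet on normal daily variation.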
Selecting the right workforce protection solution will vary for different organizations depending on their priorities but the top priority of any solution must be to provide clear visibility of data across all cloud and remote environments.
Druce MacFarlane, Head of Products – Security, Threat Intelligence and Analytics, Infoblox
Enterprises investing in remote workforce security tools should consider shoring up their foundational security in a way that:
Secures corporate assets wherever they are located: backhauling traffic to a data center—for example with a VPN—can introduce latency and connectivity issues, especially when accessing cloud-based applications and services that are now essential for business operations. Look for solutions that extend the reach of your existing security stack, and leverage infrastructure you already rely on for connectivity to extend security, visibility, and control to the edge.
Optimizes your existing security stack: find a solution that works with your entire security ecosystem to cross-share threat intelligence, spot and flag suspicious activities, and automate threat response.
Offers flexible deployment: to get the most value for your spend, make sure the solution you choose can be deployed on-premises and in the cloud to offer security that cuts across your hybrid infrastructure, protecting your on-premises assets as well as your remote workforce, while allowing IT to manage the solution from anywhere.
The right solution to secure remote work should ideally enable you to scale quickly to optimize remote connections and secure corporate assets wherever they are located.
Faiz Shuja, CEO, SIRP Labs
In all the discussion around making remote working safer for employees, relatively little has been said about mechanisms governing distributed security monitoring and incident response teams working from home.
Normally, security analysts work within a SOC complete with advanced defences and tools. New special measures are needed to protect them while monitoring threats and responding to attacks from home.
Such measures include hardened machines with secure connectivity through VPNs, 2FA and jump machines. SOC teams also need to update security monitoring plans remotely.
Our advice to CISOs is to optimize security operations and monitoring platforms so that all essential cybersecurity information needed for accurate decision-making is contextualized and visible at-a-glance to a remote security analyst.
Practical measures include:
- Unify the view for distributed security analysts to monitor and respond to threats
- Ensure proper communication and escalation between security teams and across the organization through defined workflows
- Use security orchestration and automation playbooks for repetitive investigation and incident response tasks for consistency across all distributed security analysts
- Align risk matrix with evolving threat landscape
- Enhance security monitoring use cases for remote access services and remotely connected devices
One notable essential is the capacity to constantly tweak risk-levels to quickly realign priorities to optimise the detection and response effectiveness of individual security team members.
Todd Weber, CTO, Americas, Optiv Security
Selecting a remote workforce protection solution is more about scale these days than technology. Companies have been providing work-from-home solutions for several years, but not necessarily for all applications.
How granular can you get on access to applications based on certain conditions?
Credentials alone (even with multi-factor authentication) are no longer enough to judge trusted access to critical applications. Factors such as what device I am on, how trusted that device is, and where in the world it is located all play a role, and remote access solutions need to accommodate granular access to applications based on these criteria.
Can I provide enhanced transport and access to applications with the solution?
The concept of SD-WAN is not new, but it has become more important as SaaS applications and distributed workforce have become more prevalent. Providing optimal network transport as well as a visibility point for user and data controls has become vitally important.
Does the solution provide protections for cloud SaaS applications?
Many applications are no longer hosted by companies and aren’t in the direct path of many controls. Can you deploy very granular controls within the solution that provide both visibility into and access restrictions on IaaS and SaaS applications?
Today’s organizations desire the accessibility and flexibility of the cloud, yet these benefits ultimately mean little if you’re not operating securely. One misconfigured server and your company may be looking at financial or reputational damage that takes years to overcome.
Fortunately, there’s no reason why cloud computing can’t be done securely. You need to recognize the most critical cloud security challenges and develop a strategy for minimizing these risks. By doing so, you can get ahead of problems before they start, and help ensure that your security posture is strong enough to keep your core assets safe in any environment.
With that in mind, let’s dive into the five most pressing cloud security challenges faced by modern organizations.
1. The perils of cloud migration
According to Gartner, the shift to cloud computing will generate roughly $1.3 trillion in IT spending by 2022. The vast majority of enterprise workloads are now run on public, private or hybrid cloud environments.
Yet if organizations heedlessly race to migrate without making security a primary consideration, critical assets can be left unprotected and exposed to potential compromise. To ensure that migration does not create unnecessary risks, it’s important to:
- Migrate in stages, beginning with non-critical or redundant data. Mistakes are often more likely to occur earlier in the process. So, begin moving data that won’t lead to damaging consequences to the enterprise in case it gets corrupted or erased.
- Fully understand your cloud provider’s security practices. Go beyond “trust by reputation” and really dig into how your data is stored and protected.
- Maintain operational continuity and data integrity. Once migration occurs, it’s important to ensure that controls are still functioning and there is no disruption to business operations.
- Manage risk associated with the lack of visibility and control during migration. One effective way to manage risk during transition is to use breach and attack simulation software. These automated solutions launch continuous, simulated attacks to view your environment through the eyes of an adversary by identifying hidden vulnerabilities, misconfigurations and user activity that can be leveraged for malicious gain. This continuous monitoring provides a significant advantage during migration – a time when IT staff are often stretched thin, learning new concepts and operating with less visibility into key assets.
2. The need to master identity and access management (IAM)
Effectively managing and defining the roles, privileges and responsibilities of various network users is a critical objective for maintaining robust security. This means giving the right users the right access to the right assets in the appropriate context.
As workers come and go and roles change, this mandate can be quite a challenge, especially in the context of the cloud, where data can be accessed from anywhere. Fortunately, technology has improved our ability to track activities, adjust roles and enforce policies in a way that minimizes risk.
Today’s organizations have no shortage of end-to-end solutions for identity governance and management. Yet it’s important to understand that these tools alone are not the answer. No governance or management product can provide perfect protection as organizations are eternally at the mercy of human error. To help support smart identity and access management, it’s critical to have a layered and active approach to managing and mitigating security vulnerabilities that will inevitably arise.
Taking steps like practicing the principle of least privilege by permitting only the minimal amount of access necessary to perform tasks will greatly enhance your security posture.
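Least-privilege reviews reduce to a simple set comparison: what a user holds versus what their tasks require. The sketch below uses hypothetical permission names and role data, not a real IAM API, to show the shape of such an audit.

```python
def excess_privileges(granted: set[str], required: set[str]) -> set[str]:
    """Permissions a user holds but does not need for their tasks;
    these are the candidates for revocation in a least-privilege review."""
    return granted - required

# Example review with made-up permission names:
granted = {"s3:Read", "s3:Write", "iam:PassRole", "ec2:TerminateInstances"}
required = {"s3:Read", "s3:Write"}
to_revoke = excess_privileges(granted, required)
```

Running this comparison whenever roles change keeps access from silently accumulating as workers come and go.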
3. The risks posed by vendor relationships
The explosive growth of cloud computing has highlighted new and deeper relationships between businesses and vendors, as organizations seek to maximize efficiencies through outsourcing and vendors assume more important roles in business operations. Effectively managing vendor relations within the context of the cloud is a core challenge for businesses moving forward.
Why? Because integrating third-party vendors often substantially raises cybersecurity risk. A 2018 Ponemon Institute study noted that nearly 60% of companies surveyed had encountered a breach due to a third party. APT groups have adopted a strategy of targeting large enterprises via such smaller partners, where security is often weaker. Adversaries know you’re only as strong as your weakest link and take the path of least resistance to compromise assets. It is therefore incumbent upon today’s organizations to vigorously and securely manage third-party vendor relations in the cloud. This means developing appropriate guidance for SaaS operations (including sourcing and procurement solutions) and undertaking periodic vendor security evaluations.
4. The problem of insecure APIs
APIs are the key to successful cloud integration and interoperability. Yet insecure APIs are also one of the most significant threats to cloud security. Adversaries can exploit an open line of communication and steal valuable private data by compromising APIs. How often does this really occur? Consider this: By 2022, Gartner predicts insecure APIs will be the vector most commonly used to target enterprise application data.
With APIs growing ever more critical, attackers will continue to use tactics such as exploiting inadequate authentications or planting vulnerabilities within open source code, creating the possibility of devastating supply chain attacks. To minimize the odds of this occurring, developers should design APIs with proper authentication and access control in mind and seek to maintain as much visibility as possible into the enterprise security environment. This will allow for the quick identification and remediation of such API risks.
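As a minimal sketch of the authentication step (the service names and tokens below are hypothetical placeholders), every API request should be rejected unless it carries a valid credential, compared in constant time to avoid timing side channels:

```python
import hmac

# Hypothetical token store. In practice, tokens would come from an
# identity provider or secrets manager, never a hard-coded dict.
VALID_TOKENS = {"svc-reporting": "s3cr3t-example-token"}

def authenticate(service: str, presented_token: str) -> bool:
    """Validate a bearer token before serving a request.

    hmac.compare_digest compares secrets in constant time, so an
    attacker cannot learn the token one character at a time.
    """
    expected = VALID_TOKENS.get(service)
    if expected is None:
        return False
    return hmac.compare_digest(expected, presented_token)

def handle_request(service: str, token: str) -> str:
    """Deny by default: no valid token, no data."""
    if not authenticate(service, token):
        return "401 Unauthorized"
    return "200 OK"
```

The same deny-by-default shape applies whatever the credential format (API keys, OAuth access tokens, mutual TLS): the check happens before any application logic runs.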
5. Dealing with limited user visibility
We’ve mentioned visibility on multiple occasions in this article – and for good reason. It is one of the keys to operating securely in the cloud. The ability to tell friend from foe (or authorized user from unauthorized user) is a prerequisite for protecting the cloud. Unfortunately, that’s a challenging task as cloud environments grow larger, busier and more complex.
Controlling shadow IT and maintaining better user visibility via behavior analytics and other tools should be a top priority for organizations. Given the lack of visibility across many contexts within cloud environments, it’s a smart play to develop a security posture that is dedicated to continuous improvement and supported by continuous testing and monitoring.
Critical cloud security challenges: The takeaway
Cloud security is achievable as long as you understand, anticipate and address the most significant challenges posed by migration and operation. By following the ideas outlined above, your organization will be in a much stronger position to prevent and defeat even the most determined adversaries.
91 percent of people know that using the same password on multiple accounts is a security risk, yet 66 percent continue to use the same password anyway. IT security practitioners are aware of good habits when it comes to strong authentication and password management, yet often fail to implement them due to poor usability or inconvenience.
To select a suitable password management solution for your business, you need to think about a variety of factors. We’ve talked to several cybersecurity professionals to get their insight on the topic.
Simran Anand, Head of B2B Growth, Dashlane
An organization’s security chain is only as strong as its weakest link – so selecting a password manager should be a top priority among IT leaders. While most look to the obvious: security (high grade encryption, 2FA, etc.), support, and price, it’s critical to also consider the end-user experience. Why? Because user adoption remains by far IT’s biggest challenge. Only 17 percent of IT leaders incorporate the end-UX when evaluating password management tools.
It’s not surprising, then, that those who have deployed a password manager in their company report only 23 percent adoption by employees. The end-UX has to be a priority for IT leaders who aim to guarantee secure processes for their companies.
Password management is too important a link in the security chain to be compromised by a lack of adoption (and simply telling employees to follow good password practices isn’t enough to ensure it actually happens). For organizations to leverage the benefits of next-generation password security, they need to ensure their password management solution is easy to use – and subsequently adopted by all employees.
Gerald Beuchelt, CISO, LogMeIn
As the world continues to navigate a long-term future of remote work, cybercriminals will continue to target users with poor security behaviors, given the increased time spent online due to COVID-19. Although organizations and people understand that passwords play a huge role in one’s overall security, many continue to neglect best password practices. For this reason, businesses should implement a password management solution.
It is essential to look for a password management solution that:
- Monitors poor password hygiene and provides visibility to the improvements that could be made to encourage better password management.
- Standardizes and enforces policies across the organization to support proper password protection.
- Provides a secure password management portal for employees to access all account passwords conveniently.
- Reports IT insights to provide a detailed security report of potential threats.
- Equips IT to audit the access controls users have with the ability to change permissions and encourage the use of new passwords.
- Integrates with previous and existing infrastructure to automate and accelerate workflows.
- Oversees when users share accounts to maintain a sense of security and accountability.
Using a password management solution that is effective is crucial to protecting business information. Finding the right solution will not only help to improve employee password behaviors but also increase your organization’s overall online security.
Michael Crandell, CEO, Bitwarden
Employees, like many others, face the daily challenge of remembering passwords to securely work online. A password manager simplifies generating, storing, and sharing unique and complex passwords – a must-have for security.
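Under the hood, generating such passwords is straightforward given a cryptographically secure random source. A minimal Python sketch (not any vendor's actual implementation):

```python
import secrets
import string

def generate_password(length: int = 20) -> str:
    """Generate a random password from letters, digits and punctuation.

    The `secrets` module uses the OS's CSPRNG, unlike `random`, which
    is predictable and unsuitable for credentials.
    """
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))
```

A 20-character password drawn from this 94-symbol alphabet has well over 100 bits of entropy, far beyond what users produce (or remember) on their own, which is exactly why storage and autofill matter as much as generation.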
There are a number of reputable password managers out there. Businesses should prioritize those that work cross-platform and offer affordable plans. They should consider whether the solution can be deployed in the cloud or on-premises. Some organizations prefer a self-hosting option for security and internal compliance reasons.
Password managers need to be easy-to-use for every level of user – from beginner to advanced. Any employee should be able to get up and running in minutes on the devices they use.
As of late, many businesses have shifted to a remote work model, which has highlighted the importance of online collaboration and the need to share work resources online. With this in mind, businesses should prioritize options that provide a secure way to share passwords across teams. Doing so keeps everyone’s access secure even when they’re spread out across many locations.
Finally, look for password managers built around an open source approach. Being open source means the source code can be vetted by experienced developers and security researchers who can identify potential security issues, and even contribute to resolving them.
Matt Davey, COO, 1Password
65% of people reuse passwords for some or all of their accounts. Often, this is because they don’t have the right tools to easily create and use strong passwords, which is why you need a password manager.
Opt for a password manager that gives you oversight over the things that matter most to your business: from who’s signed in from where, who last accessed certain items, or which email addresses on your domain have been included in a breach.
To keep the admin burden low, look for a password manager that allows you to manage access by groups, delegate admin powers, and manage users at scale. Depending on the structure of your business, it can be useful to grant access to information by project, location, or team.
You’ll also want to think about how a password manager will fit with your existing IAM/security stack. Some password managers integrate with identity providers, streamlining provisioning and administration.
Above all, if you want your employees to adopt your password manager of choice, make sure it’s easy to use: a password manager will only keep you secure if your employees actually use it.
As COVID-19 forced organizations to re-imagine how the workplace operates just to maintain basic operations, HR departments and their processes became key players in the game of keeping our economy afloat while keeping people alive.
Without a doubt, people form the core of any organization. The HR department must strike an increasingly delicate balance while fulfilling the myriad of needs of workers in this “new normal” and supporting organizational efficiency. As the tentative first steps of re-opening are being taken, many organizations remain remote, while others are transitioning back into the office environment.
Navigating the untested waters of managing HR through this shift to remote and back again is complex enough without taking cybercrime and data security into account, yet it is crucial that HR do exactly that. The data stored by HR is the easy payday cybercriminals are looking for and a nightmare keeping CISOs awake at night.
Why securing HR data is essential
If compromised, the data stored by HR can do a devastating amount of damage to both the company and the personal lives of its employees. HR data is one of the highest risk types of information stored by an organization given that it contains everything from basic contractor details and employee demographics to social security numbers and medical information.
Many state and federal laws and regulations govern the storage, transmission and use of this high-value data. The sudden shift to a more distributed workforce due to COVID-19 has increased the risks: with a large portion of the HR workforce remote, more people need broader and higher levels of access across cloud, VPN, and personal networks.
Steps to security
Any decent security practitioner will tell you that no security setup is foolproof, but there are steps that can be taken to significantly reduce risk in an ever-evolving environment. A multi-layer approach to security offers better protection than any single solution. Multiple layers of protection might seem redundant, but if one layer fails, the other layers fill in the gaps.
Securing HR-related data needs to be approached from both a technical and end user perspective. This includes controls designed to protect the end user or force them into making appropriate choices, and at the same time providing education and awareness so they understand how to be good stewards of their data.
Secure the identity
The first step to securing HR data is making sure that the ways in which users access data are both secure and easy to use. Each system housing HR data should be protected by a federated login of some variety. Federated logins use a primary source of identity for managing usernames and passwords such as Active Directory.
When a user logs in with a federated login, the software uses a system like LDAP, SAML, or OAuth to query the primary source of identity, validating the username and password and confirming that the user has appropriate access rights. Users only have to remember one username and password, and the organization can ensure that the password complies with its mandated complexity policies.
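The flow can be sketched as follows, with the directory replaced by an in-memory stand-in. A real deployment would query Active Directory or another identity provider over LDAP, SAML, or OAuth rather than storing password hashes locally; the username, password, and group names here are hypothetical:

```python
import hashlib
import hmac
import os

def _hash(password: str, salt: bytes) -> bytes:
    """Salted PBKDF2 hash; directories never store plaintext passwords."""
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)

# Stand-in for the primary source of identity (e.g., Active Directory).
_SALT = os.urandom(16)
DIRECTORY = {
    "jdoe": {
        "pw": _hash("correct horse battery staple", _SALT),
        "groups": {"hr-staff"},
    },
}

def federated_login(username: str, password: str, required_group: str) -> bool:
    """Validate credentials against the central directory, then check
    that the user is entitled to access the requesting application."""
    entry = DIRECTORY.get(username)
    if entry is None:
        return False
    if not hmac.compare_digest(entry["pw"], _hash(password, _SALT)):
        return False
    return required_group in entry["groups"]
```

The key property is that each application delegates the credential check to one central source, so password policy, lockout, and deprovisioning happen in exactly one place.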
The next step to credential security is to add a second factor of authentication on every system storing HR data. This is referred to as Multi-factor Authentication (MFA) and is a vital preventative measure when used well. The primary rule of MFA says that the second factor should be something “the user is or has” to be most effective.
This second factor of authentication can be anything from a PIN generated on a mobile device to a biometric check to ensure the person entering the password is, in fact, the actual owner. Both of these systems are easy for end users to use and add very little additional friction to the authentication effort, while significantly reducing the risk of credential theft, as it’s difficult for someone to compromise users’ credentials and steal their mobile device or a copy of their fingerprints.
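The PIN-generating variety of second factor typically follows the time-based one-time password (TOTP) construction standardized in RFC 6238. A minimal standard-library implementation, checked against the RFC's published test vectors:

```python
import hashlib
import hmac
import struct
import time

def totp(secret: bytes, timestamp=None, digits: int = 6, step: int = 30) -> str:
    """Time-based one-time password (RFC 6238, HMAC-SHA1 variant).

    The server and the user's device share `secret`; both derive the
    same short-lived code from the current 30-second time window.
    """
    if timestamp is None:
        timestamp = int(time.time())
    counter = struct.pack(">Q", timestamp // step)       # 8-byte big-endian
    mac = hmac.new(secret, counter, hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                              # dynamic truncation
    code = int.from_bytes(mac[offset:offset + 4], "big") & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test vector: secret "12345678901234567890", time 59 -> "94287082"
```

Because the code changes every 30 seconds and derives from a secret the attacker does not hold, a phished password alone is no longer sufficient to log in.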
In today’s world, HR users working from somewhere other than the office is not unusual. With this freedom comes the need to secure the means by which they access data, regardless of the network they are using. The best way to accomplish this is to set up a VPN and ensure that all HR systems are only accessible either from inside of the corporate network or from IPs that are connected to the VPN.
A VPN creates an encrypted tunnel between the end user’s device and the internal network. The use of a VPN protects the user against snooping even if they are using an unsecured network like a public Wi-Fi at a coffee shop. Additionally, VPNs require authentication and, if that includes MFA, there are three layers of security to ensure that the person connecting in is a trusted user.
Next, you have to ensure that access is being used appropriately or that no anomalous use is taking place. This is done through a combination of good logging and good analytics software. Solutions that leverage AI or ML to review how access is being utilized and identify usage trends further increase security. The logging solution verifies appropriate usage while the analysis portion helps to identify any questionable activity taking place. This functions as an early warning system in case of compromised accounts and insider threats.
Comprehensive analytics solutions will notice trends in behavior and flag an account if the user changes their normal routine. If odd activity occurs (e.g., going through every HR record), the system alerts an administrator to delve deeper into why this user is viewing so many files. If it notices access occurring from IP ranges coming in through the VPN from outside of the expected geographical areas, accounts can be automatically disabled while alerts are sent out and a deeper investigation takes place. These are ways to shrink the scope of an incident and reduce the damage should an attack occur.
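A toy version of such an early-warning system might look like the sketch below. The IP ranges and access threshold are hypothetical policy values, and real analytics platforms learn baselines per user rather than using fixed limits:

```python
import ipaddress

# Hypothetical policy: VPN logins are expected only from these ranges.
EXPECTED_RANGES = [
    ipaddress.ip_network("10.8.0.0/16"),    # corporate VPN pool
    ipaddress.ip_network("192.0.2.0/24"),   # office egress (example range)
]
RECORDS_PER_HOUR_LIMIT = 200  # illustrative bulk-access threshold

def flag_login(source_ip: str):
    """Return an alert string for logins from unexpected ranges."""
    addr = ipaddress.ip_address(source_ip)
    if not any(addr in net for net in EXPECTED_RANGES):
        return "alert: login from unexpected range; disable account pending review"
    return None

def flag_access_volume(records_accessed: int):
    """Return an alert string when HR-record access looks like bulk export."""
    if records_accessed > RECORDS_PER_HOUR_LIMIT:
        return "alert: bulk HR-record access; notify administrator"
    return None
```

Even this crude rule pair captures the two signals described above: geography that does not match expectations, and access volume that does not match a normal workday.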
Secure the user
Security awareness training for end users is one of the most essential components of infrastructure security. The end user is a highly valuable target because they already have access to internal resources. The human element is often considered a high-risk factor because humans are easier to “hack” than passwords or automatic security controls.
Social engineering attacks succeed when people aren’t educated to spot red flags indicating an attack is being attempted. Social engineering attacks are the easiest and least costly option for an attacker because any charismatic criminal with good social skills and a mediocre acting ability can be successful. The fact that this type of cyberattack requires no specialized technical skill expands the potential number of attackers.
The most important step of a solid layered security model is the one that prevents these attacks through education and awareness. By providing end users with engaging, thorough, and relevant training about types of attacks such as phishing and social engineering, organizations arm their staff with the tools they need to avoid malicious links, prevent malware or rootkit installation, and dodge credential theft.
No perfect security
No matter where the job gets done, HR needs to deliver effective services to employees while still taking steps to keep employee data safe. Even though an organization cannot control every aspect of how work is getting done, these steps will help keep sensitive HR data safe.
Control over accounts, how they are monitored, and what they are accessing are important steps. Arming the end user directly, with the awareness needed to prevent having their good intentions weaponized, requires a combination of training and controls that create a pro-active system of prevention, early warnings, and swift remediation. There is no perfect security solution for protecting HR data, but multiple, overlapping security layers can protect valuable HR assets without making it impossible for HR employees to do their work.
Internal investigations in corporations are typically conducted by the human resources (HR) department, internal compliance teams, and/or the IT department. Some cases may also require the involvement of outside third parties like forensic experts, consultants, law or accounting firms, or security experts.
These are often complex matters from a legal, process and technical perspective. Depending on the nature and extent of the potential misconduct, the stakes can be very high, with risks that include legal jeopardy, large fines or damages, negative publicity, and damage to company culture and morale. Speed and efficiency are vital: organizations need to understand the extent of the problem and act immediately to prevent further damage.
Key phases of an internal investigation
An internal investigation typically follows five key phases: a trigger event; a legal hold and custodian interviews; requests for data and data collection; processing, review and analysis of files; and the recommendation of next steps. COVID-19 and work-at-home requirements are most relevant to the second and third phases, in which interviews take place and data is requested and collected.
A trigger event kicks off an action from a legal, compliance, or investigative standpoint. While complaints to HR alleging discrimination or harassment based on race or gender are among the most common triggers of an internal investigation, other triggers include leaked or stolen intellectual property, whistle-blower complaints alleging fraud or compliance violations, the loss or theft of physical assets, or leaked or stolen data containing sensitive or personally identifiable information (PII).
In the next phase, legal hold and custodian interviews, the legal department must quickly perform an assessment of the veracity of the allegation(s) and the degree of risk involved, and then determine whether further investigative action is required. If a decision to continue is made, a legal hold is immediately put in place.
While some companies may be able to preserve information by working with their IT department without notifying the person(s) being investigated, in other cases the organization may need to send an official notification to the person(s) and ask for their cooperation in preserving information. The latter option is more common, especially in the age of COVID-19.
Initial interviews will often expand the scope of the investigation. A custodian may say, “I only worked on that project for a week; X was the driving force behind it,” or “I’ve only been with the company for a month, but Y and Z have been working on this since last year.” As the number of custodians grows, so does the number of devices to collect data from. Data locations and data types also have a tendency to multiply, with sources ranging from corporate email, text messages, file shares, and “loose” files stored on local devices or thumb drives to cloud storage like Office 365, Dropbox, Google Vault and even, in some cases, surveillance video.
After custodian interviews, it's time for the request for and collection of data. The complexity at this stage depends to a large extent on the company's information infrastructure. Especially during the pandemic, cloud-based data or work product saved in a virtual environment will be more straightforward to collect than on-premises data or data stored locally on a mobile device. Collection can become especially challenging with work-at-home requirements. A custodian may need to allow a forensics professional to access their device(s) at home. In other cases a device—which the custodian presumably needs to do their job—may need to be shipped.
Before COVID-19, an employee under investigation could be surprised with an on-the-spot collection at the office under the guise of an in-person meeting or “routine” request to bring in a device for an IT upgrade or a mandatory security update. Such strategies are much less likely to be practical or successful in a remote work environment. “At home” collection may also become impossible if the employee has opted to work from a second home or another location in a different region.
Employees using their own devices for remote work present a further complication. Devices like personal phones or tablets usually lack many of the security protections embedded in a company-provided mobile device and are therefore more vulnerable to malware, spyware, and co-mingled (personal and work-related) data. The data is also much more likely to be accessible by family and friends, increasing the potential for vulnerability as well as foul play. Upon collection, such data will often need to go through more extensive screening, and custodians may be more reluctant to cooperate when personal information is stored on a device targeted for collection. It is also possible they may use the virus as a pretext and refuse to allow a forensic professional into their home.
Increasing numbers of companies are turning to remote assisted collection kits (RACKs), which allow a forensic investigator to gain access to a device online and gather data directly from it. While RACK collections are forensically sound and legally defensible, some RACKs are designed to create a forensic image of a device and can consume large amounts of Internet bandwidth in the process. With less robust home connections, this can result in the disruption of ordinary work, or perhaps open the door to delaying tactics or data erasure on the part of custodians who have something to hide.
Once the data collection phase is complete, COVID-19-related constraints on the investigation recede from the picture. Processing, reviewing and analyzing files can proceed as normal—although review teams will be dispersed and have to be managed via a virtual collaborative workspace. The last phase in the investigation, recommending a next step, involves either closing the investigation, expanding it or possibly bringing in third parties such as a managed document review company and/or outside counsel.
Given the complexity of many internal investigations and the risks involved, it’s surprising how many organizations conduct them in an ad hoc manner. This is asking for trouble, especially in the age of COVID-19. Careful planning, clear policies and a consistent, formal process are essential. Each matter should begin with the development of a step-by-step plan based on the type of event and the trigger.
Detailed documentation is crucial every step of the way, so stakeholders can continually monitor progress while assessing scope and risk, and to be certain information is gathered in a legally defensible way. Documentation should address:
1. The investigation plan, processes and updates.
2. The data chain of custody.
3. The scope of the investigation, which needs to be legally “reasonable.”
In addition to working closely with the IT department, the investigation team should also consider engaging a company that specializes in forensic collections and solicit the input of the organization’s trusted eDiscovery provider. While some companies do not routinely use eDiscovery tools in internal investigations, these tools can save significant time and money in the culling, analysis, and review of data, particularly when they have a built-in cloud collections capability. AI technologies can dramatically speed up the process while minimizing human error and increasing accuracy, especially in investigations involving large volumes of data.
AI tools also have tremendous potential for companies seeking to apply more proactive controls over information governance and record management, identify potential security vulnerabilities before they become serious liabilities, and perform regular compliance audits. For example, these tools can perform privacy audits and assess an organization's vulnerability to violations of regulations like the General Data Protection Regulation (GDPR) or the California Consumer Privacy Act (CCPA). AI technologies can also be deployed to look for data anomalies that may indicate security breaches or suspicious behavior.
While the age of COVID-19 presents new challenges for internal investigations, companies should be able to weather the storm by identifying which processes in their investigation workflows will need to change, carefully following best practices, and ensuring they have appropriate, scalable technologies that can be deployed quickly when a new matter emerges.
New research from Trend Micro, conducted jointly with Politecnico di Milano, highlights design flaws in legacy programming languages and presents new secure coding guidelines. These are designed to help Industry 4.0 developers greatly reduce the software attack surface, and therefore decrease business disruption in OT environments. The research details how design flaws in legacy programming languages can translate into critical flaws in the automation task programs built with them, and how vulnerabilities at each layer of the software stack ripple outward.
The volume of business data worldwide is growing at an astounding pace, with some estimates showing the figure doubling every year. Over time, every company generates and accumulates a massive trove of data, files and content – some inconsequential and some highly sensitive and confidential in nature.
Throughout the data lifecycle there are a variety of risks and considerations to manage. The more data you create, the more you must find a way to track, store and protect against theft, leaks, noncompliance and more.
Faced with massive data growth, most organizations can no longer rely on manual processes for managing these risks. Many have instead adopted a vast web of tracking, endpoint detection, encryption, access control and data policy tools to maintain security, privacy and compliance. But, deploying and managing so many disparate solutions creates a tremendous amount of complexity and friction for IT and security teams as well as end users. The problem with this approach is that it comes up short in terms of the level of integration and intelligence needed to manage enterprise files and content at scale.
Let’s explore several of the most common data lifecycle challenges and risks businesses are facing today and how to overcome them:
Maintaining security – As companies continue to build up an ocean of sensitive files and content, the risk of data breaches grows exponentially. Smart data governance means applying security across the points at which the risk is greatest. In just about every case, this includes both ensuring the integrity of company data and content, as well as any user with access to it. Every layer of enterprise file sharing, collaboration and storage must be protected by controls such as automated user behavior monitoring to deter insider threats and compromised accounts, multi-factor authentication, secure storage in certified data centers, and end-to-end encryption, as well as signature-based and zero-day malware detection.
Classification and compliance – Gone are the days when organizations could require users to label, categorize or tag company files and content, or task IT to manage and manually enforce data policies. Not only is manual data classification and management impractical, it’s far too risky. You might house millions of files that are accessible by thousands of users – there’s simply too much, spread out too broadly. Moreover, regulations like GDPR, CCPA and HIPAA add further complexity to the mix, with intricate (and sometimes conflicting) requirements. The definition of PII (personally identifiable information) under GDPR alone encompasses potentially hundreds of pieces of information, and one mistake could result in hefty financial penalties.
Incorrect categorization can lead to a variety of issues including data theft and regulatory penalties. Fortunately, machines can do in seconds (and often with better accuracy) what it might take years for a human to do. AI and ML technologies are helping companies quickly scan files across data repositories to identify sensitive information such as credit card numbers, addresses, dates of birth, social security numbers, and health-related data, and to apply automatic classifications. They can also track files across popular data sources such as OneDrive, Windows File Server, SharePoint, Amazon S3, Google Cloud, GSuite, Box, Microsoft Azure Blob, and generic CIFS/SMB repositories to better visualize and control your data.
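The scanning step can be illustrated with a few pattern matchers. This is a deliberately simplified sketch: the regular expressions below are illustrative placeholders, whereas production classifiers validate formats (for example, Luhn checksums on card numbers) and use ML to weigh surrounding context:

```python
import re

# Simplified, illustrative detection patterns; real scanners are far
# stricter about format validation and false-positive suppression.
PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d{4}[ -]?){3}\d{4}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def classify(text: str):
    """Return the set of sensitive-data labels detected in a document."""
    return {label for label, rx in PATTERNS.items() if rx.search(text)}
```

Run across a repository, even a scanner this simple lets files be tagged and routed into the right retention and access policies automatically instead of relying on users to label them.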
Retention – As data storage costs have plummeted over the past 10 years, many organizations have fallen into the trap of simply “keeping everything” because it’s (deceptively) cheap to do so. This approach carries many security and regulatory risks, as well as potential costs. Our research shows that exposure of just a single terabyte of data could cost you $129,324; now think about how many terabytes of data your organization stores today. The longer you retain sensitive files, the greater the opportunity for them to be compromised or stolen.
Certain types of data must be stored for a specific period of time in order to adhere to various customer contracts and regulatory criteria. For example, HIPAA regulations require organizations to retain documentation for six years from the date of its creation. GDPR is less specific, stating that data shall be kept for no longer than is necessary for the purposes for which it is being processed.
Keeping data any longer than absolutely necessary is not only risky, but those "affordable" costs can add up quickly. AI-enabled governance can track these set retention periods and minimize risk by automatically securing or eliminating old or redundant files that are no longer required (or allowed). With streamlined data retention processes, you can decrease storage costs, reduce security and noncompliance exposure and optimize data processing performance.
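A retention sweep of this kind can be sketched as a policy table plus an age check. The categories and day counts below are illustrative placeholders (the six-year HIPAA documentation period comes from the text above; actual obligations vary by jurisdiction and contract):

```python
from datetime import date

# Hypothetical retention policy, in days. HIPAA requires retaining
# documentation for six years from creation; other categories vary.
RETENTION_DAYS = {
    "hipaa-record": 6 * 365,
    "routine-file": 365,
}

def due_for_deletion(records, today):
    """Return the names of records whose retention window has elapsed.

    `records` is an iterable of (name, category, created_date) tuples;
    unknown categories default to 0 days, i.e., immediately deletable.
    """
    return [
        name
        for name, category, created in records
        if (today - created).days > RETENTION_DAYS.get(category, 0)
    ]
```

In practice such a sweep would feed a review queue rather than delete outright, but the core logic (policy lookup plus age comparison) is this simple, which is why it automates so well.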
Ongoing monitoring and management – Strong governance gets easier with good data hygiene practices over the long term, but with so many files to manage across a variety of different repositories and storage platforms, it can be challenging to track risks and suspicious activities at all times. Defining dedicated policies for which data types can be stored in which locations, which users can access them, and the parties with which they can be shared will help you focus your attention on further minimizing risk. AI can multiply these efforts by eliminating manual monitoring processes, providing better visibility into how data is being used and alerting you when sensitive content might have been shared externally or with unapproved users. This makes it far easier to identify and respond to threats and risky behavior, enabling you to take immediate action on compromised accounts, move or delete sensitive content that is being shared too broadly or stored in unauthorized locations, etc.
The key to data lifecycle management
The sheer volume of data, files and content businesses are now generating and managing creates massive amounts of complexity and risk. You have to know what assets exist, where they're stored, which specific users have access to them, when they're being shared, which files can be deleted, which need to be stored in accordance with regulatory requirements, and so on. Falling short in any one of these areas can lead to major operational, financial and reputational consequences.
Fortunately, recent advances in AI and ML are enabling companies to streamline data governance to find and secure sensitive data at its source, sense and respond to potentially malicious behaviors, maintain compliance and adapt to changing regulatory criteria, and more. As manual processes and piecemeal point solutions fall short, AI-enabled data governance will continue to dramatically reduce complexity both for users and administrators, and deliver the level of visibility and control that businesses need in today's data-centric world.