5 tips to reduce the risk of email impersonation attacks

Email attacks have moved past standard phishing and become more targeted over the years. In this article, I will focus on email impersonation attacks, outline why they are dangerous, and provide some tips to help individuals and organizations reduce their risk exposure to impersonation attacks.

What are email impersonation attacks?

Email impersonation attacks are malicious emails where scammers pretend to be a trusted entity to steal money and sensitive information from victims. The trusted entity being impersonated could be anyone – your boss, your colleague, a vendor, or a consumer brand you get automated emails from.

Email impersonation attacks are tough to catch and worryingly effective because we tend to take quick action on emails from known entities. Scammers use impersonation in concert with other techniques to defraud organizations and steal account credentials, sometimes without victims realizing what has happened until days after the fraud.

Fortunately, we can all follow some security hygiene best practices to reduce the risk of email impersonation attacks.

Tip #1 – Look out for social engineering cues

Email impersonation attacks are often crafted with language that induces a sense of urgency or fear in victims, coercing them into taking the action the email wants them to take. Not every email that makes us feel these emotions will be an impersonation attack, of course, but it’s an important factor to keep an eye out for, nonetheless.

Here are some common phrases and situations you should look out for in impersonation emails:

  • Tight deadlines imposed at short notice for processes involving the transfer of money or sensitive information.
  • Unusual purchase requests (e.g., iTunes gift cards).
  • Employees requesting sudden changes to direct deposit information.
  • Vendors sharing new bank account details just before an invoice payment is due.

This email impersonation attack exploits the COVID-19 pandemic to make an urgent request for gift card purchases.

Tip #2 – Always do a context check on emails

Targeted email attacks bank on victims being too busy and “doing before thinking” instead of stopping and engaging with the email rationally. While it may take a few extra seconds, always ask yourself if the email you’re reading – and what the email is asking for – make sense.

  • Why would your CEO really ask you to purchase iTunes gift cards at two hours’ notice? Have they done it before?
  • Why would Netflix emails come to your business email address?
  • Why would the IRS ask for your SSN and other sensitive personal information over email?

To sum up this tip, I’d say: be a little paranoid while reading emails, even if they’re from trusted entities.

Tip #3 – Check for email address and sender name deviations

To stop email impersonation, many organizations have deployed keyword-based protection that catches emails where the email addresses or sender names match those of key executives (or other related keywords). To get past these security controls, impersonation attacks use email addresses and sender names with slight deviations from those of the entity the attacks are impersonating (a simple script, like the sketch after this list, can help flag such lookalikes). Some common deviations to look out for are:

  • Changes to the spelling, especially ones that are missed at first glance (e.g., “ei” instead of “ie” in a name).
  • Changes based on visual similarities to trick victims (e.g., replacing “rn” with “m” because they look alike).
  • Business emails sent from personal accounts like Gmail or Yahoo without advance notice. It’s advisable to validate the identity of the sender through secondary channels (text, Slack, or phone call) if they’re emailing you with requests from their personal account for the first time.
  • Descriptive changes to the name, even if the changes fit in context. For example, attackers impersonating a Chief Technology Officer named Ryan Fraser may send emails with the sender name as “Ryan Fraser, Chief Technology Officer”.
  • Changes to the components of the sender name (e.g., adding or removing a middle initial, abbreviating Mary Jane to MJ).
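
The deviations above lend themselves to simple automated checks. Below is a minimal sketch of the idea using only Python’s standard library; the trusted addresses, the folding rule, and the 0.85 threshold are all made-up examples, not a production detector:

    from difflib import SequenceMatcher

    # Known-good sender addresses (illustrative examples).
    TRUSTED = {"ryan.fraser@example.com", "mary.jane@example.com"}

    def similarity(a: str, b: str) -> float:
        # Fold the classic "rn" vs "m" visual trick before comparing.
        fold = lambda s: s.lower().replace("rn", "m")
        return SequenceMatcher(None, fold(a), fold(b)).ratio()

    def flag_sender(addr: str, threshold: float = 0.85) -> None:
        for good in TRUSTED:
            if addr == good:
                return  # exact match: genuinely the trusted sender
            score = similarity(addr, good)
            if score >= threshold:
                print(f"suspicious: {addr!r} resembles {good!r} ({score:.2f})")

    flag_sender("ryan.frasier@example.com")  # subtle misspelling
    flag_sender("mary.jane@exarnple.com")    # "rn" masquerading as "m"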

Tip #4 – Learn the “greatest hits” of impersonation phrases

Email impersonation has been around for long enough that there are well-known phrases and tactics we need to be aware of. The emails don’t always have to be directly related to money or data – the first email is sometimes a simple request, just to see who bites and buys into the email’s faux legitimacy. Be aware of the following phrases/context:

  • “Are you free now?”, “Are you at your desk?” and related questions are frequent opening lines in impersonation emails. Because they seem like harmless emails with simple requests, they get past email security controls and lay the bait.
  • “I need an urgent favor”, “Can you do something for me within the next 15 minutes?”, and other phrases implying the email is of a time-sensitive nature. If you get this email from your “CEO”, your instinct might be to respond quickly and be duped by the impersonation in the process.
  • “Can you share your personal cell phone number?”, “I need your personal email”, and other out-of-context requests for personal information. The objective of these requests is to harvest information and build out a profile of the victim; once adversaries have enough information, they have another entity to impersonate.

Tip #5 – Use secondary channels of authentication

Enterprise adoption of two-factor authentication (2FA) has grown considerably over the years, helping safeguard employee accounts and reduce the impact of account compromise.

Individuals should try to replicate this best practice for any email that makes unusual requests related to money or data. For example:
  • Has a vendor emailed you with a sudden change in their bank account details, right when an invoice is due? Call or text the vendor and confirm that they sent the email.
  • Did your manager email you asking for gift card purchases? Send them a Slack message (or whatever productivity app you use) to confirm the request.
  • Did your HR representative email you a COVID resource document that needs email account credentials to be viewed? Check the veracity of the email with the HR rep.

Even if you’re reaching out to very busy people for this additional authentication, they will understand and appreciate your caution.

These tips are meant as starting points for individuals and organizations to better understand email impersonation and start addressing its risk factors. But effective protection against email impersonation can’t be down to eye tests alone. Enterprise security teams should conduct a thorough audit of their email security stack and explore augments to native email security that offer specific protection against impersonation.

With email more important to our digital lives than ever, it’s vital that we are able to believe people are who their email says they are. Email impersonation attacks exploit this sometimes-misplaced belief. Stopping email impersonation attacks will require a combination of security hygiene, email security solutions that provide specific impersonation protection, and some healthy paranoia while reading emails – even if they seem to be from people you trust.

Preventing cybersecurity’s perfect storm

Zerologon might have been cybersecurity’s perfect storm: that moment when multiple conditions collide to create a devastating disaster. Thanks to Secura and Microsoft’s rapid response, it wasn’t.

Zerologon scored a perfect 10 CVSS score. Threats rating a perfect 10 are easy to execute and have deep-reaching impact. Fortunately, they aren’t frequent, especially in prominent software brands such as Windows. Still, organizations that perpetually lag when it comes to patching become prime targets for cybercriminals. Flaws like Zerologon are rare, but there’s no reason to assume that the next attack won’t use another perfect-10 CVSS vulnerability – this time, a zero-day.

Zerologon: Unexpected squall

Zerologon escalates a domain user beyond their current role and permissions to a Windows Domain Administrator. This vulnerability is trivially easy to exploit. While it seems that the most obvious threat is a disgruntled insider, attackers may target any average user. The most significant risk comes from a user with an already compromised system.

In this scenario, a bad actor has already taken over an end user’s system but is constrained only to their current level of access. By executing this exploit, the bad actor can break out of their existing permissions box. This attack grants them the proverbial keys to the kingdom in a Windows domain to access whatever Windows-based devices they wish.

Part of why Zerologon is problematic is that many organizations rely on Windows as an authoritative identity for a domain. To save time, they promote their Windows Domain Administrators to an Administrator role throughout the organizational IT ecosystem and assign bulk permissions, rather than adding them individually. This method eases administration by removing the need to update the access permissions frequently as these users change jobs. This practice violates the principle of least privilege, leaving an opening for anyone with a Windows Domain Administrator role to exercise broad-reaching access rights beyond what they require to fulfill the role.

Beware of sharks

Advanced preparation for attacks like these requires a fundamental paradigm shift in organizational boundary definitions away from a legacy mentality to a more modern cybersecurity mindset. The traditional castle model assumes all threats remain outside the firewall boundary and trusts everything either natively internal or connected via VPN to some degree.

Modern cybersecurity professionals understand the advantage of controls like zero standing privilege (ZSP), which authorizes no one by default and requires each user to request access and be evaluated before privileged access is granted. Think of it much like the security check at an airport: to get in, everyone – passenger, pilot, even store staff – needs to be inspected, prove they belong, and have nothing questionable in their possession.

This continual re-certification prevents users from gaining access once they’ve experienced an event that alters their eligibility, such as leaving the organization or changing positions. Checking permissions before approving them ensures only those who currently require a resource can access it.

My hero zero (standing privilege)

Implementing the design concept of zero standing privilege is crucial to hardening against privilege escalation attacks, as it removes the administrator’s vast amounts of standing power and access. Users acquire these rights for a limited period and only on an as-needed basis. This Just-In-Time (JIT) method of provisioning creates a better access review process. Requests are either granted time-bound access or flagged for escalation to a human approver, ensuring human oversight of the automation.
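
As a rough sketch of that request-and-approve flow (the role names, risk rule, and one-hour window are assumptions for illustration, not a reference design):

    from dataclasses import dataclass
    from datetime import datetime, timedelta, timezone
    from typing import Optional

    @dataclass
    class Grant:
        user: str
        role: str
        expires: datetime

    HIGH_RISK_ROLES = {"domain_admin"}  # example policy

    def request_access(user: str, role: str) -> Optional[Grant]:
        """Issue a time-bound grant, or escalate high-risk roles to a human."""
        if role in HIGH_RISK_ROLES:
            print(f"escalating {user} -> {role} to a human approver")
            return None
        # The privilege evaporates after one hour; nothing is standing.
        return Grant(user, role, datetime.now(timezone.utc) + timedelta(hours=1))

    def is_valid(grant: Grant) -> bool:
        # Re-checked on every use: an expired grant confers nothing.
        return datetime.now(timezone.utc) < grant.expires

    g = request_access("alice", "backup_operator")
    if g and is_valid(g):
        print(f"{g.user} may act as {g.role} until {g.expires.isoformat()}")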

An essential component of zero standing privilege is avoiding super-user roles and access. Old-school practitioners may find it odd and question the impact on daily administrative tasks that keep the ecosystem running. Users manage these tasks through heavily logged, time-limited permission assignments. Reliable user behavior analytics, combined with risk-based privileged access management (PAM) and machine-learning-supported log analysis, offers organizations better contextual identity information. Understanding how their privileged access is leveraged and identifying access misuse before it takes root is vital to preventing a breach.

Peering into the depths

To even start with zero standing privilege, an organization must understand what assets they consider privileged. The categorization of digital assets begins the process. The next step is assigning ownership of these resources. Doing this allows organizations to configure the PAM software to accommodate the policies and access rules defined organizationally, ensuring access rules meet governance and compliance requirements.

The PAM solution requires in-depth visibility of each individual’s full access across all cloud and SaaS environments, as well as throughout the internal IT infrastructure. This information improves the identification of toxic combinations, where granted permissions create compliance issues such as segregation of duties (SoD) violations.

AI & UEBA to the rescue

Zero standing privilege generates a large number of user logs and behavioral information over time. Manual log review becomes unsustainable very quickly. Leveraging the power of AI and machine learning to derive intelligent analytics allows organizations to identify risky behaviors and locate potential breaches far faster than human analysts can.

Integrating user and entity behavior analytics (UEBA) software establishes baselines of behavior, triggering alerts when deviations occur. UEBA systems detect insider threats and advanced persistent threats (APTs) while generating contextual identity information.

UEBA systems track all behavior linked back to an entity and identify anomalous behaviors such as spikes in access requests, requests for data that would typically not be allowed for that user’s roles, or systematic access to numerous items. Contextual information helps organizations identify situations that might indicate a breach or point to unauthorized exfiltration of data.
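
A toy version of that baselining logic (the data and the three-sigma threshold are invented for illustration) shows how a per-user spike stands out:

    from statistics import mean, stdev

    def is_anomalous(history: list, today: int, z_threshold: float = 3.0) -> bool:
        """Flag today's access-request count if it sits far outside the baseline."""
        mu, sigma = mean(history), stdev(history)
        if sigma == 0:
            return today != mu
        return abs((today - mu) / sigma) > z_threshold

    # A user who normally makes 10-14 access requests per day...
    baseline = [12, 11, 13, 10, 12, 14, 11]
    print(is_anomalous(baseline, 12))  # False - within normal behavior
    print(is_anomalous(baseline, 90))  # True - a spike worth investigating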

Your compass points to ZTA

Protecting against privilege escalation threats requires more than merely staying up to date on patches. Part of stopping attacks like Zerologon is to re-imagine how security is architected in an organization. Centering identity as the new security perimeter and implementing zero standing privilege are essential to the foundation of a security model known as zero trust architecture (ZTA).

Zero trust architecture has existed for a while in the corporate world. It has been gaining attention from the public sector since NIST’s recent publication of SP 800-207, which outlines ZTA and how government agencies can leverage it. NIST’s sanctification of ZTA opened the doors for government entities and civilian contractors to incorporate it into their security model. Taking this route helps to close the privilege escalation pathway, providing your organization with a safe harbor in the event of another cybersecurity perfect storm.

Review: Netsparker Enterprise web application scanner

Vulnerability scanners can be a very useful addition to any development or operations process. Since a typical vulnerability scanner needs to detect vulnerabilities in deployed software, it is (generally) not dependent on the language or technology used in the application it is scanning.

This means they are often not the top choice for detecting the largest number of vulnerabilities, or for catching fickle bugs and business logic issues, but it makes them great and very common tools for testing a large number of diverse applications, where such dynamic application security testing tools are indispensable. This includes testing for security defects in software that is currently being developed as part of an SDLC process, reviewing third-party applications deployed inside one’s network (as part of a due diligence process) or – most commonly – finding issues in all kinds of internally developed applications.

We reviewed Netsparker Enterprise, which is one of the industry’s top choices for web application vulnerability scanning.

Netsparker Enterprise is primarily a cloud-based solution, which means it will focus on applications that are publicly available on the open internet, but it can also scan in-perimeter or isolated applications with the help of an agent, usually deployed as a pre-packaged Docker container or as a Windows or Linux binary.

To test this product, we wanted to know how Netsparker handles a few things:

1. Scanning workflow
2. Scan customization options
3. Detection accuracy and results
4. CI/CD and issue tracking integrations
5. API and integration capabilities
6. Reporting and remediation efforts

To assess the tool’s detection capabilities, we needed a few targets to scan and assess.

After some thought, we decided on the following targets:

1. DVWA – Damn Vulnerable Web Application – An old-school, extremely vulnerable application written in PHP. The vulnerabilities in this application should be detected without an issue.
2. OWASP Juice Shop – Simulates a modern single-page web application with a REST API backend. It has a JavaScript-heavy interface, websockets, a REST API in the backend, and many interesting points and vulnerabilities for testing.
3. Vulnapi – A Python 3-based vulnerable REST API, written with the FastAPI framework running on Starlette ASGI, featuring a number of API-based vulnerabilities.

Workflow

After logging in to Netsparker, you are greeted with a tutorial and a “hand-holding” wizard that helps you set everything up. If you’ve worked with a vulnerability scanner before, you’ll know what to do, but this feature is useful for people who don’t have that experience, e.g., software or DevOps engineers, who should definitely use such tools in their development processes.

Initial setup wizard

Scanning targets can be added manually or through a discovery feature that will try to find them by matching the domain from your email, websites, reverse IP lookups and other methods. This is a useful feature if other methods of asset management are not used in your organization and you can’t find your assets.

New websites or assets for scanning can be added directly or imported via a CSV or a TXT file. Sites can be organized in Groups, which helps with internal organization or per project / per department organization.

Adding websites for scanning

Scans can be defined per group or per specific host. Scans can be either defined as one-off scans or be regularly scheduled to facilitate the continuous vulnerability remediation process.

To better guide the scanning process, the classic scan scope features are supported. For example, you can define specific URLs as “out-of-scope” either by supplying a full path or a regex pattern – a useful option if you want to skip specific URLs (e.g., logout, user delete functions). Specific HTTP methods can also be marked as out-of-scope, which is useful if you are testing an API and want to skip DELETE methods on endpoints or objects.
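
For instance (a generic illustration, not Netsparker’s exact pattern syntax), an exclusion regex for session-destroying endpoints might look like this:

    import re

    # Example exclusion: skip anything under /logout or /users/<id>/delete.
    OUT_OF_SCOPE = re.compile(r"/(logout|users/\d+/delete)(/|$)")

    for url in ("/account/logout", "/users/42/delete", "/products/7"):
        print(url, "->", "skipped" if OUT_OF_SCOPE.search(url) else "scanned")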

Initial scan configuration

Scan scope options

One feature we quite liked is the support for uploading the “sitemap” or specific request information into Netsparker before scanning. This feature can be used to import a Postman collection or an OpenAPI file to facilitate scanning and improve detection capabilities for complex applications or APIs. Other formats such as CSV, JSON, WADL and WSDL are also supported.

For the red team, loading links and information from Fiddler, Burp or ZAP session files is supported, which is useful if you want to expand your automated scanning toolbox. One limitation we encountered is the inability to point to a URL containing an OpenAPI definition – a capability that would be extremely useful for automated and scheduled scanning workflows for APIs that have Swagger web UIs.

Scan policies can be customized and tuned in a variety of ways, from the languages that are used in the application (ASP/ASP.NET, PHP, Ruby, Java, Perl, Python, Node.js and Other), to database servers (Microsoft SQL Server, MySQL, Oracle, PostgreSQL, Microsoft Access and Others), to the standard choice of Windows- or Linux-based OSes. Scan optimizations should improve the detection capability of the tool, shorten scanning times, and give us a glimpse of where the tool should perform best.

Integrating Netsparker

The next important question is, does it blend… or integrate? From an integration standpoint, sending email and SMSes about the scan events is standard, but support for various issue tracking systems like Jira, Bitbucket, Gitlab, Pagerduty, TFS is available, and so is support for Slack and CI/CD integration. For everything else, there is a raw API that can be used to tie in Netsparker to other solutions if you are willing to write a bit of integration scripting.
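
To give a feel for such integration scripting, here is a hedged sketch of what a “trigger a scan” call might look like. The base URL, endpoint path, field names and authentication header are placeholders, not Netsparker’s actual API surface – consult the product’s API documentation for the real calls:

    import requests

    API_BASE = "https://scanner.example.com/api"  # placeholder base URL
    API_TOKEN = "YOUR-API-TOKEN"                  # placeholder credential

    def trigger_scan(target_url: str) -> dict:
        """Kick off a scan and return the raw JSON response (illustrative only)."""
        resp = requests.post(
            f"{API_BASE}/scans/new",                           # hypothetical endpoint
            json={"TargetUri": target_url},                    # hypothetical field name
            headers={"Authorization": f"Bearer {API_TOKEN}"},  # auth scheme may differ
            timeout=30,
        )
        resp.raise_for_status()
        return resp.json()

    print(trigger_scan("https://staging.example.com"))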

Integration options

One really well-implemented feature is the support for logging into the tested application, as the inability to hold a session and scan from an authenticated context can lead to poor scanning results.

Netsparker supports classic form-based login, but 2FA-based login flows that require TOTP or HOTP are also supported. This is a great feature: you can add the OTP seed and define the period in Netsparker, and you are all set to scan OTP-protected logins. No more shimming or adding code to bypass the 2FA method in order to scan the application.
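
For context on what the scanner does with that seed: given the shared Base32 secret and the period, anyone can deterministically generate the current one-time code, which is what lets an automated tool log in unattended. A minimal sketch using the pyotp library (the seed is a made-up example, not a real credential):

    import pyotp

    # The same Base32 seed you would paste into the scanner's 2FA settings.
    SEED = "JBSWY3DPEHPK3PXP"  # example value only

    # TOTP codes rotate every `interval` seconds; 30 is the common default.
    totp = pyotp.TOTP(SEED, interval=30)
    print(totp.now())  # the 6-digit code valid right now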

Authentication methods

What’s more, Netsparker enables you to create a custom script for complex login flows or JavaScript/CSS-heavy login pages. I was pleasantly surprised that instead of reading complex documentation, I just needed to right-click the DOM elements, add them to the script, and press Next.

Custom scripting workflow for authentication

If we had to nitpick, we might point out that it would be great if Netsparker also supported U2F / FIDO2 implementations (by software emulating the CTAP1 / CTAP2 protocol), since that would cover the most secure 2FA implementations.

In addition to form-based authentication, Basic, NTLM/Kerberos, header-based (for JWTs), client certificate and OAuth2-based authentication are also supported, which makes it easy to authenticate to almost any enterprise application. The login/logout flow is also verified and supported through a custom dialog, where you can verify that the supplied credentials work and configure how to retain the session.

Login verification helper

Scanning accuracy

And now for the core of this review: what Netsparker did and did not detect.

In short, everything from DVWA was detected, except broken client-side security, which by definition is almost impossible to detect with security scanning if custom rules aren’t written. So, from a “classic” application point of view, the coverage is excellent, even the out-of-date software versions were flagged correctly. Therefore, for normal, classic stateful applications, written in a relatively new language, it works great.

From a modern JavaScript-heavy single-page application point of view, Netsparker correctly discovered the backend API interface from the user interface, and detected a decently complex SQL injection vulnerability, where it was not enough to trigger a ‘ or 1=1 type of vector – the payload had to be adjusted to properly escape the initial query.
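
To make that concrete, here is an invented illustration (not Juice Shop’s actual backend query) of why the naive vector fails when the injection point sits inside a more complex expression, and why a scanner has to adapt its payload to the surrounding syntax:

    # Illustrative only - an imagined backend that nests the term in parentheses.
    def build_query(term: str) -> str:
        return f"SELECT * FROM products WHERE ((name LIKE '%{term}%'))"

    naive = "' OR 1=1--"      # leaves the parentheses unbalanced -> syntax error
    escaped = "')) OR 1=1--"  # closes the original expression, then injects

    print(build_query(naive))
    print(build_query(escaped))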

Netsparker correctly detected a stored XSS vulnerability in the reviews section of the Juice Shop product screen. The vulnerable application section is a JavaScript-heavy frontend, with a RESTful API in the backend that facilitates the vulnerability. Even the DOM-based XSS vulnerability was detected, although the specific vulnerable endpoint was marked as the search API and not the sink that is the entry point for DOM XSS. On the positive side, the vulnerability was marked as “Possible” and a manual security review would find the vulnerable sink.

One interesting point for vulnerability detection is that Netsparker uses an engine that tries to verify if the vulnerability is exploitable and will try to create a “proof” of vulnerability, which reduces false positives.

On the negative side, no vulnerabilities in WebSocket-based communications were found, and neither was the API endpoint that implemented insecure YAML deserialization with pyYAML. By reviewing the Netsparker knowledge base, we also found that there is no support for WebSocket or deserialization vulnerabilities.
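
For readers less familiar with that bug class, the pyYAML pitfall boils down to which loader parses untrusted input; a generic sketch (not the test target’s actual code):

    import yaml

    untrusted = "!!python/object/apply:os.system ['echo pwned']"

    # Unsafe: the unsafe/legacy loaders can construct arbitrary Python objects,
    # so a crafted document executes code during parsing:
    #   yaml.load(untrusted, Loader=yaml.UnsafeLoader)  # do NOT do this

    # Safe: safe_load only builds plain scalars, lists and dicts,
    # and rejects the malicious tag outright.
    try:
        yaml.safe_load(untrusted)
    except yaml.constructor.ConstructorError as err:
        print("rejected:", err)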

That’s certainly not a dealbreaker, but it is something that needs to be taken into account. It also reinforces the need for a SAST-based scanner (even a free, open source one) in the application security scanning stack to improve test coverage, in addition to other, manual security review processes.

Reporting capability

Multiple levels of detail (from extensive technical reports, through executive summaries, to PCI DSS-level reports) are supported, in both PDF and HTML export options. One nice feature we found is the ability to create F5 and ModSecurity rules for virtual patching. Also, scanned and crawled URLs can be exported from the reporting section, so it’s easy to review whether your scanner hit any specific endpoints.

Scan results dashboard

Scan result details

Instead of describing the reports, we decided to export a few and attach them to this review for your enjoyment and assessment. All of them have been submitted to VirusTotal for our more cautious readers.

Netsparker’s reporting capabilities satisfy our requirements: the reports contain everything a security or AppSec engineer or a developer needs.

Since Netsparker integrates with JIRA and other ticketing systems, the general vulnerability management workflow for most teams will be supported. For lone security teams, or where modern workflows aren’t integrated, Netsparker also has an internal issue tracking system that will let the user track the status of each found issue and run rescans against specific findings to see if mitigations were properly implemented. So even if you don’t have other methods of triage or processes set up as part of a SDLC, you can manage everything through Netsparker.

Verdict

Netsparker is extremely easy to set up and use. The wide variety of integrations allows it to fit into any number of workflows or management scenarios, and the integrated features and reporting capabilities have everything you would want from a standalone tool. As far as features are concerned, we have no objections.

The login flow support – from the simple interface and the 2FA support, all the way to the scripting interface that makes it easy to authenticate even in more complex environments – together with the option to report on scanned and crawled endpoints, helps users understand their scanning coverage.

Taking into account the fact that this is an automated scanner that relies on “black boxing” a deployed application, without any instrumentation of the deployed environment or source code scanning, we think it is very accurate, though it could be improved (e.g., by adding the capability of detecting deserialization vulnerabilities). Following the review, Netsparker confirmed that detection of deserialization vulnerabilities is included in the product development plans.

Nevertheless, we can highly recommend Netsparker.

The anatomy of an endpoint attack

Cyberattacks are becoming increasingly sophisticated as tools and services on the dark web – and even the surface web – enable low-skill threat actors to create highly evasive threats. Unfortunately, most of today’s modern malware evades traditional signature-based anti-malware services, arriving at endpoints with ease. As a result, organizations lacking a layered security approach often find themselves in a precarious situation. Furthermore, threat actors have also become extremely successful at phishing users out of their credentials or simply brute forcing credentials thanks to the widespread reuse of passwords.

A lot has changed across the cybersecurity threat landscape in the last decade, but one thing has remained the same: the endpoint is under siege. What has changed is how attackers compromise endpoints. Threat actors have learned to be more patient after gaining an initial foothold within a system (and essentially scope out their victim).

Take the massive Norsk Hydro ransomware attack as an example: the initial infection occurred three months prior to the attacker executing the ransomware and locking down much of the manufacturer’s computer systems. That was more than enough time for Norsk Hydro to detect the breach before the damage could be done, but the reality is most organizations simply don’t have a sophisticated layered security strategy in place.

In fact, the most recent IBM Cost of a Data Breach Report found it took organizations an average of 280 days to identify and contain a breach. That’s more than 9 months that an attacker could be sitting on your network planning their coup de grâce.

So, what exactly are attackers doing with that time? How do they make their way onto the endpoint undetected?

It usually starts with a phish. No matter what report you choose to reference, most point out that around 90% of cyberattacks start with a phish. There are several different outcomes associated with a successful phish, ranging from compromised credentials to a remote access trojan running on the computer. For credential phishes, threat actors have most recently been leveraging customizable subdomains of well-known cloud services to host legitimate-looking authentication forms.

One recent phish encountered by the WatchGuard Threat Lab shows how convincing these can be: the link within the email was customized to the individual recipient, allowing the attacker to populate the victim’s email address into the fake form to increase credibility. The phish was even hosted on a Microsoft-owned domain, albeit on a subdomain (servicemanager00) under the attacker’s control, so you can see how an untrained user might fall for something like this.

In the case of malware phishes, attackers (or at least the successful ones) have largely stopped attaching malware executables to emails. Most people these days recognize that launching an executable email attachment is a bad idea, and most email services and clients have technical protections in place to stop the few that still click. Instead, attackers leverage dropper files, usually in the form of a macro-laced Office document or a JavaScript file.

The document method works best when recipients have not updated their Microsoft Office installations or haven’t been trained to avoid macro-enabled documents. The JavaScript method is a more recently popular method that leverages Windows’ built-in scripting engine to initiate the attack. In either case, the dropper file’s only job is to identify the operating system and then call home and grab a secondary payload.

That secondary payload is usually a remote-access trojan or botnet of some form that includes a suite of tools like keyloggers, shell script-injectors, and the ability to download additional modules. The infection isn’t usually limited to the single endpoint for long after this. Attackers can use their foothold to identify other targets on the victim’s network and rope them in as well.

It’s even easier if the attackers manage to get hold of a valid set of credentials and the organization hasn’t deployed multi-factor authentication. It allows the threat actor to essentially walk right in through the digital front door. They can then use the victim’s own services – like built-in Windows scripting engines and software deployment services – in a living-off-the-land attack to carry out malicious actions. We commonly see threat actors leverage PowerShell to deploy fileless malware in preparation to encrypt and/or exfiltrate critical data.

The WatchGuard Threat Lab recently identified an ongoing infection while onboarding a new customer. By the time we arrived, the threat actor had already been on the victim’s network for some time thanks to compromising at least one local account and one domain account with administrative permissions. Our team was not able to identify how exactly the threat actor obtained the credentials, or how long they had been present on the network, but as soon as our threat hunting services were turned on, indicators immediately lit up identifying the breach.

In this attack, the threat actors used a combination of Visual Basic scripts and two popular PowerShell toolkits – PowerSploit and Cobalt Strike – to map out the victim’s network and launch malware. One behavior we saw, driven by Cobalt Strike’s shellcode decoder, enabled the threat actors to download malicious commands, load them into memory, and execute them directly from there, without the code ever touching the victim’s hard drive. These fileless malware attacks can range from difficult to impossible to detect with traditional endpoint anti-malware engines that rely on scanning files to identify threats.

Elsewhere on the network, our team saw the threat actors using PsExec, a Microsoft Sysinternals tool, to launch a remote access trojan with SYSTEM-level privileges thanks to the compromised domain admin credentials. The team also identified the threat actors’ attempts to exfiltrate sensitive data to a Dropbox account using a command-line cloud storage management tool.

Fortunately, our team was able to identify and clean up the malware quickly. However, without the victim changing the stolen credentials, the attacker could likely have re-initiated their attack at will. Had the victim deployed an advanced Endpoint Detection and Response (EDR) engine as part of their layered security strategy, they could have stopped or slowed the damage created by those stolen credentials.

Attackers are targeting businesses indiscriminately, even small organizations. Relying on a single layer of protection simply no longer works to keep a business secure. No matter the size of an organization, it’s important to adopt a layered security approach that can detect and stop modern endpoint attacks. This means protection from the perimeter down to the endpoint, including user training in the middle. And don’t forget about the role of multi-factor authentication (MFA) – it could be the difference between stopping an attack and becoming another breach statistic.

Three common mistakes in ransomware security planning

As the frequency and intensity of ransomware attacks increase, one thing is becoming abundantly clear: organizations can do more to protect themselves. Unfortunately, most organizations are dropping the ball. Most victims receive adequate warning of potential vulnerabilities yet are woefully unprepared to recover when they are hit. Here are just a few recent examples of both prevention and incident response failures:

  • Two months before the city of Atlanta was hit by ransomware in 2018, an audit identified over 1,500 severe security vulnerabilities.
  • Before the city of Baltimore suffered multiple weeks of downtime due to a ransomware attack in 2019, a risk assessment identified a severe vulnerability due to servers running an outdated operating system (and therefore lacking the latest security patches) and insufficient backups to restore those servers, if necessary.
  • Honda was attacked this past June, and public access to Remote Desktop Protocol (RDP) for some machines may have been the attack vector leveraged by hackers. Complicating matters further, there was a lack of adequate network segmentation.

Other notable recent victims include Travelex, Blackbaud, and Garmin. In all these examples, these are large organizations that should have very mature security profiles. So, what’s the problem?

Three common mistakes lead to inadequate prevention and ineffective response, and they are committed by organizations of all sizes:

1. Failing to present risk in business terms, which is crucial to persuading business leaders to provide appropriate funding and policy buy-in.

2. Not going deep enough in how you test your ransomware readiness.

3. Insufficient DR planning that fails to account for a ransomware threat that could infect your backups.

Common mistake #1 – Failing to present security risk in business terms to get funding and policies

I was reminded of just how easy it is to bypass basic security (e.g., firewalls, email security gateways, and anti-malware solutions) when I recently watched a ransomware attack simulation: the initial pre-ransomware payload got a foot in the door, followed by techniques such as reverse DNS lookups to identify an Active Directory service that has the necessary privileges to really do some damage, and finally Kerberoasting to take control.

No organization is bulletproof, but attacks take time – which means early detection with more advanced intrusion detection and a series of roadblocks for hackers to overcome (e.g., greater network segmentation, appropriate end-user restrictions, etc.) are crucial for prevention.

This is not news to security practitioners. But convincing business leaders to make additional security investments is another challenge altogether.

How to fix mistake #1: Quantify business impact to enable an informed cost-based business decision

Tighter security controls (via technology and policy) are friction for the business. And while no business leader wants to fall victim to ransomware, budgets are not unlimited.

To justify the extra cost and tighter controls, you need to make a business case that presents not just risk, but also quantifiable business impact. Help senior leadership compare apples to apples – in this case, the cost of improved security (capex and potential productivity friction) vs. the cost of a security breach (the direct cost of extended downtime and long-term cost of reputational damage).

Assessing business impact need not be onerous. Fairly accurate impact assessments of critical systems can easily be completed in one day. Focus on a few key applications and datasets to get a representative sample, and get a rough estimate of cost, goodwill, compliance, and/or health & safety impacts, depending on your organization. You don’t have to be exact at this stage to recognize the potential for a critical impact.

Below is an example of the resulting key talking points for a presentation to senior leadership:

Source: Info-Tech Research Group, Business Impact Analysis Summary Example, 2020

Common mistake #2 – Not going deep enough in testing ransomware readiness

If you aren’t already engaged in penetration testing to validate security technology and configuration, start now. If you can’t secure funding, re-read How to fix mistake #1.

Where organizations go wrong is stopping at penetration testing and not validating the end-to-end incident response. This is especially critical for larger organizations that need to quickly coordinate assessment, containment, and recovery with multiple teams.

How to fix mistake #2: Run in-depth tabletop planning exercises to cover a range of what-if scenarios

The problem with most tabletop planning exercises is that they are designed to simply validate your existing incident response plans (IRPs).

Instead, dive into how an attack might take place and what actions you would take to detect, contain, and recover from the attack. For example, where your IRP says check for other infected systems, get specific about how that would happen. What tools would you use? What data would you review? What patterns would you look for?

It is always an eye-opener when we run tabletop planning with our clients. I’ve yet to come across a client that had a perfect incident response plan. Even organizations with a mature security profile and documented response plans often find gaps such as:

  • Poor coordination between security and infrastructure staff, with unclear handoffs during the assessment phase.
  • Existing tools aren’t fully leveraged (e.g., configuring auto-contain features).
  • Limited visibility into some systems (e.g., IoT devices and legacy systems).

Common mistake #3 – Backup strategies and DR plans don’t account for ransomware scenarios

Snapshots and standard daily backups on rewritable media, as in the example below, just aren’t good enough:

Source: Info-Tech Research Group, Outdated Backup Strategy Example, 2020

The ultimate safety net if hackers get in is your ability to restore from backup or failover to a clean standby site/system.

However, a key goal of a ransomware attack is to disable your ability to recover – that means targeting backups and standby systems, not just the primary data. If you’re not explicitly guarding against ransomware all the time, the money you’ve invested to minimize data loss due to traditional IT outages – from drive failures to hurricanes – becomes meaningless.

How to fix mistake #3: Apply defense-in-depth to your backup and DR strategy

The philosophy of defense-in-depth is best practice for security, and the same philosophy needs to be applied to your backups and recovery capabilities.

Defense-in-depth is not just about trying to catch what slips through the cracks of your perimeter security. It’s also about buying time to detect, contain, and recover from the attack.

Achieve defense-in-depth with the following approach:

1. Create multiple restore points so that you can be more granular with your rollback and not lose as much data.

2. Reduce the risk of infected backups by using different storage media (e.g., disk, NAS, and/or tape) and backup locations (i.e., offsite backups). If you can make the attackers jump through more hoops, it gives you a greater chance of detecting the attack before it infects all of your backups. Start with your most critical data and design a solution that considers business need, cost, and risk tolerance.

3. Invest in solutions that generate immutable backups. Most leading backup solutions offer options to ensure backups are immutable (i.e., can’t be altered after they’re written). Expect the cost to be higher, of course, so again consider business impact when deciding what needs higher levels of protection (see the sketch below for what immutability looks like in practice).
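
As a hedged sketch of point 3 (the bucket name, key, and 30-day window are assumptions, and the bucket must have been created with S3 Object Lock enabled), writing an immutable backup with boto3 looks roughly like this:

    from datetime import datetime, timedelta, timezone
    import boto3

    s3 = boto3.client("s3")

    with open("backup-2020-10-05.tar.gz", "rb") as f:
        s3.put_object(
            Bucket="example-backup-bucket",  # illustrative bucket name
            Key="daily/backup-2020-10-05.tar.gz",
            Body=f,
            # COMPLIANCE mode: this version cannot be deleted or overwritten
            # by anyone until the retention date passes.
            ObjectLockMode="COMPLIANCE",
            ObjectLockRetainUntilDate=datetime.now(timezone.utc) + timedelta(days=30),
        )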

Summary: Ransomware security planning

Ransomware attacks can be sophisticated, and basic security practices just aren’t good enough. Get buy-in from senior leadership on what it takes to be ransomware-ready by presenting not just the risk of attack, but the potential for extensive business impact. Assume you’ll get hit, be ready to respond quickly, test realistically, and update your DR strategy to encompass this fast-evolving threat.

Working together to secure our expanding connected health future

Securing medical devices is not a new challenge. Former Vice President Cheney, for example, had the wireless capabilities of a defibrillator disabled when implanted near his heart in 2007, and hospital IT departments and health providers have for years secured medical devices to protect patient data and meet HIPAA requirements.

With the expansion of security perimeters, the surge in telehealth usage (particularly during COVID-19), and proliferation in the number and types of connected technologies, healthcare cybersecurity has evolved into a more complex and urgent effort.

Today, larger hospital systems have more than 350,000 medical devices running simultaneously. On top of this, millions of additional connected devices are maintained by the patients themselves. Over the next 10 years, it’s estimated the number of connected medical devices could increase to roughly 50 billion, driven by innovations such as 5G, edge computing, and more. This rise in connectivity has increased the threat of cyberattacks not just to patient data, but also to patient safety. Vulnerabilities in healthcare technology (e.g., an MRI machine or pacemaker) can lead to patient harm if diagnoses are delayed or the right treatments don’t get to the right people.

What can the healthcare industry do to strengthen their defenses today? How can they lay the groundwork for more secure devices and networks tomorrow?

The challenges are interconnected. The solutions cannot be siloed, and collaboration between manufacturers, doctors, healthcare delivery organizations and regulators is more critical now than ever before.

Device manufacturers: Integrating security into product design

Many organizations view medical device cybersecurity as protecting technology while it is deployed as part of a local network. Yet medical devices also need to be designed and developed with mobile and cloud security in mind, with thoughtful consideration of the patient experience. It is especially important we take this step as medical technology moves beyond the four walls of the hospital and into the homes of patients. The connected device itself needs to be secure, rather than relying solely on the security of the network surrounding it.

We also need greater visibility and transparency across the medical device supply chain—a “software bill of materials.” The multicomponent nature of many medical products, such as insulin pumps or pacemakers, makes the final product feel like a black box: hospitals and users know what it’s intended to do, but they don’t have much understanding of the individual components that make everything work. That makes it difficult to solve cybersecurity problems as they arise.

According to the 2019 HIMSS Cybersecurity Survey, just over 15% of significant security issues originated with medical devices in hospitals or with vendor medical devices. Some of these issues led to ransomware attacks that exposed vulnerabilities, leaving healthcare providers and device makers scrambling to figure out which of their products were at risk while their systems were under threat. A software bill of materials would have helped them respond quickly to security, license, and operational risks.

Healthcare delivery organizations: Prioritizing preparedness and patient education

Healthcare providers, for their part, need to strengthen their threat awareness and preparedness, thinking about security from device procurement all the way to the sunsetting of legacy devices, which can extend over years and decades.

It’s currently not uncommon for healthcare facilities to use legacy technology that is 15 to 20 years old. Many of these devices are no longer supported and their security doesn’t meet the baseline of today’s evolving threats. However, as there is no replacement technology that serves the same functions, we need to provide heightened monitoring of these devices.

Threat modeling can help hospitals and providers understand their risks and increase resilience. Training and preparedness exercises are imperative in another critical area of cybersecurity: the humans operating the devices. Such exercises can put doctors, for instance, in an emergency treatment scenario with a malfunctioning device, and the discussions that follow provide valuable opportunities to educate, build awareness of, and proactively prepare for cyber threats.

Providers might consider “cybersecurity informed consent” to educate patients. When a patient signs a form before a procedure that acknowledges potential risks like infection or side effects, cyber-informed consent could include risks related to data breaches, denial of service attacks, ransomware, and more. It’s an opportunity to both manage risk and engage patients in conversations about cybersecurity, increasing trust in the technology that is essential for their health.

Regulators: Connecting a complex marketplace

The healthcare industry in the US is tremendously complex, composed of hundreds of large healthcare systems, thousands of physician practice groups, public and private payers, medical device manufacturers, software companies, and so on.

This expanding healthcare ecosystem can make it difficult to coordinate. Groups like the Food & Drug Administration (FDA) and the Healthcare Sector Coordinating Council have been rising to the challenge.

They’ve assembled subgroups and task forces in areas like device development and the treatment of legacy technologies. They’ve been reaching out to hospitals, patients, medical device manufacturers, and others to strengthen information-sharing and preparedness, to move toward a more open, collaborative cybersecurity environment.

Last year, the FDA issued a safety communication to alert health care providers and patients about cybersecurity vulnerabilities identified in a wireless telemetry technology used for communication that impacted more than 20 types of implantable cardiac devices, programmers, and home monitors. Later in 2019, the same device maker recalled thousands of insulin pumps due to unpatchable cyber vulnerabilities.

These are but two examples of many that demonstrate not only the impact of cybersecurity to patient health but to device makers and the healthcare system at large. Connected health should give patients access to approved technologies that can save lives without introducing risks to patient safety.

As the world continues to realize the promise of connected technologies, we must monitor threats, manage risks, and increase our network resilience. Working together to incorporate cybersecurity into device design, industry regulations, provider resilience, and patient education are where we should start.

Contributing author: Shannon Lantzy, Chief Scientist, Booz Allen Hamilton.

Why developing cybersecurity education is key for a more secure future

Cybersecurity threats are growing every day, whether they are aimed at consumers, businesses or governments. The pandemic has shown us just how critical cybersecurity is to the successful operation of our respective economies and our individual lifestyles.

The rapid digital transformation it has forced upon us has seen us rely almost totally on the internet, ecommerce and digital communications to do everything from shopping to working and learning. It has brought into stark focus the threats we all face and the importance of cybersecurity skills at every level of society.

European Cybersecurity Month is a timely reminder that we must not become complacent and must redouble our efforts to stay safe online and bolster the cybersecurity skills base in society. This is imperative not only to manage the challenges we face today, but to ensure we can rise to the next wave of unknown, sophisticated cybersecurity threats that await us tomorrow.

Developing cybersecurity education at all levels, encouraging more of our students to embrace STEM subjects at an early age, and educating consumers and the elderly on how to spot and avoid scams are all critical to managing the challenge we face. The urgency and need to build our professional cybersecurity workforce is paramount to a safe and secure cyber world.

With a global skills gap of over four million people, the cybersecurity professional base must grow substantially now in the UK and across mainland Europe to meet the challenge facing organisations, at the same time as we lay the groundwork to welcome the next generation into cybersecurity careers. That means a stronger focus on adult education, professional workplace training and industry-recognised certification.

At this key moment in the evolution of digital business and the changes in the way society functions day-to-day, certification plays an essential role in providing trust and confidence in knowledge and skills. Employers, government, law enforcement – whatever the function, these organisations need assurance that cybersecurity professionals have the skills, expertise and situational fluency needed to deal with current and future needs.

Certifications provide cybersecurity professionals with this important verification and validation of their training and education, ensuring organisations can be confident that current and future employees holding a given certification have an assured and consistent skillset wherever in the world they are.

The digital skills focus of European Cybersecurity Month is a reminder that there is a myriad of evolving issues that cybersecurity professionals need to be proficient in, including data protection, privacy and cyber hygiene, to name just a few.

However, certifications are much more than a recognised and trusted mark of achievement. They are a gateway to ensuring continuous learning and development. Maintaining a cybersecurity certification, combined with professional membership, is evidence that professionals are constantly improving, developing new skills to add value to the profession, and taking ownership of their careers. This new knowledge and understanding can be shared throughout an organisation to support security best practice, as well as ensuring cyber safety in our homes and communities.

Ultimately, we must remember that cybersecurity skills, education and best practice are not just a European issue, nor a political one. Rather, they are a global challenge that impacts every corner of society. Cybersecurity mindfulness needs to be woven into the DNA of everything we do, and it starts with everything we learn.

Three immediate steps to take to protect your APIs from security risks

In one form or another, APIs have been around for years, bringing the benefits of ease of use, efficiency and flexibility to the developer community. The advantage of using APIs for mobile and web apps is that developers can build and deploy functionality and data integrations quickly.

But there is a huge downside to this approach. Undermining the power of an API-driven development methodology are shadow, deprecated and non-conforming APIs that, when exposed to the public, introduce the risk of data loss, compromise or automated fraud.

The stateless nature of APIs and their ubiquity makes protecting them increasingly difficult, largely because malicious actors can leverage the same developer benefits – ease of use and flexibility – to easily execute account takeovers, credential stuffing, fake account creation or content scraping. It’s no wonder that Gartner identified API security as a top concern for 50% of businesses.

Thankfully, it’s never too late to get your API footprint in order to better protect your organization’s critical data. Here are a few easy steps you can follow to mitigate API security risks immediately:

1. Start an organization-wide conversation

If your company is having conversations around API security at all, it’s likely that they are happening in a fractured manner. If there’s no larger, cohesive conversation, then various development and operational teams could be taking conflicting approaches to mitigating API security risks.

For this reason, teams should discuss how they can best work together to support API security initiatives. As a basis for these meetings, teams should refer to the NIST Cybersecurity Framework, as it’s a great way to develop a shared understanding of organization-wide cybersecurity risks. The NIST CSF will help the collective team to gain a baseline awareness about the APIs used across the organization to pinpoint the potential gaps in the operational processes that support them, so that companies can work towards improving their API strategy immediately.

2. Ask (& answer) any outstanding questions as a team

To improve an organization’s API security posture, it’s critical that outstanding questions are asked and answered immediately so that gaps in security are reduced and closed. When posing these questions to the group, consider the API assets you have overall, the business environment, governance, risk assessment, risk management strategy, access control, awareness and training, anomalies and events, continuous security monitoring, detection processes, etc. Leave no stone unturned. Here are a few suggested questions to use as a starting point as you work on the next step in this process towards getting your API security house in order:

  • How many APIs do we have?
  • How were they developed? Which are open-source, custom built or third-party?
  • Which APIs are subject to legal or regulatory compliance?
  • How do we monitor for vulnerabilities in our APIs?
  • How do we protect our APIs from malicious traffic?
  • Are there APIs with vulnerabilities?
  • What is the business impact if the APIs are compromised or abused?
  • Is API security a part of our on-going developer training and security evangelism?

Once any security holes have been identified through a shared understanding, the team then can collectively work together to fill those gaps.

3. Strive for complete and continuous API security and visibility

Visibility is critical to immediate and continuous API security. By going through steps one and two, organizations are working towards more secure APIs today – but what about tomorrow and the years to come, as your API footprint expands exponentially?

Consider implementing a visibility and monitoring solution to help you oversee this security program on an ongoing basis, so that your organization can feel confident in having a strong and comprehensive API security posture that grows and adapts as your API footprint expands and shifts. The key components of visibility and monitoring are:

  • Centralized visibility and inventory of all APIs (a sketch of this step follows below)
  • A detailed view of API traffic patterns
  • Discovery of APIs transmitting sensitive data
  • Continuous API specification conformance assessment
  • Validated authentication and access control programs
  • Automatic risk analysis based on predefined criteria

Continuous, runtime visibility into how many APIs an organization has, who is accessing them, where they are located and how they are being used is key to API security.
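As an illustration of the inventory component, here is a toy sketch (not a full discovery tool) that walks an OpenAPI specification to enumerate endpoints and flag operations that appear to touch sensitive data. The file name and keyword list are assumptions, and real discovery tooling also observes live traffic to surface shadow APIs that no specification describes:

    # Toy API inventory built from an OpenAPI spec (requires PyYAML).
    import yaml  # pip install pyyaml

    SENSITIVE_KEYWORDS = {"ssn", "password", "card", "token", "email"}
    HTTP_METHODS = {"get", "post", "put", "patch", "delete"}

    with open("openapi.yaml") as f:  # placeholder spec file
        spec = yaml.safe_load(f)

    for path, operations in spec.get("paths", {}).items():
        for method, op in operations.items():
            if method.lower() not in HTTP_METHODS:
                continue  # skip non-operation keys such as "parameters"
            text = yaml.dump(op).lower()
            flagged = sorted(k for k in SENSITIVE_KEYWORDS if k in text)
            note = f"  <-- possible sensitive data: {flagged}" if flagged else ""
            print(f"{method.upper():6} {path}{note}")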

As organizations continue to expand their use of APIs to drive their business, it’s crucial that companies consider every path malicious actors might take to attack their organization’s critical data.

MITRE Shield shows why deception is security’s next big thing

Seasoned cybersecurity pros will be familiar with MITRE. Known for its MITRE ATT&CK framework, MITRE helps develop threat models and defensive methodologies for both the private and public sector cybersecurity communities.

MITRE Shield

MITRE recently added to their portfolio and released MITRE Shield, an active defense knowledge base that captures and organizes security techniques in a way that is complementary to the mitigations featured in MITRE ATT&CK.

The MITRE Shield framework focuses on active defense and adversary engagement, which takes the passivity out of network defense. MITRE defines active defense as ranging from “basic cyber defensive capabilities to cyber deception and adversary engagement operations,” which “allow an organization to not only counter current attacks, but also learn more about that adversary and better prepare for new attacks in the future.”

This is the first time that deception has been proactively referenced in a framework from MITRE, and yes, it’s a big deal.

As the saying goes, the best defense is a good offense. Cybercriminals continue to evolve their tactics, and as a result, traditional security and endpoint protections are proving insufficient to defend against today’s sophisticated attackers. Companies can no longer sit back and hope that firewalls or mandatory security training will be enough to protect critical systems and information. Instead, they should consider the “active defense” tactics called for in MITRE Shield to help level the playing field.

Why deception?

The key to deception technology – and why it’s so relevant now – is that it goes beyond simple detection to identify and prevent lateral movement, notoriously one of the most difficult aspects of network defense. The last several months have been especially challenging for security teams, with the pandemic and the sudden shift to remote work leaving many organizations more vulnerable than before. Cybercriminals are acutely aware of this and have been capitalizing on the disruption to launch more attacks.

In fact, the number of data breaches in 2020 has almost doubled (compared to the year before), with more than 3,950 incidents as of August. But what this number doesn’t account for are the breaches that may still be undetected, in which attackers gained access to a company’s network and are performing reconnaissance weeks, or potentially months, before they actually launch an attack.

As they move through a network laterally, cybercriminals stealthily gather information about a company and its assets, allowing them to develop a plan for a more sophisticated and damaging attack down the line. This is where deception and active defense converge – hiding real assets (servers, applications, routers, printers, controllers and more) in a crowd of imposters that look and feel exactly like the real thing. In a deceptive environment, the attacker must be 100% right, otherwise they will waste time and effort collecting bad data in exchange for revealing their tradecraft to the defender.

Deception exists in a shadow network. Traps don’t touch real assets, making it a highly valued solution for even the most diverse environments, including IT, OT and Internet of Things devices. And because traps are not visible to legitimate users or systems and serve only to deceive attackers, they deliver high fidelity alerts and virtually no false positives.

How can companies embrace MITRE Shield using deception?

MITRE Shield currently contains 34 deception-based tactics, all mapped to one of MITRE’s eight active defense categories: Channel, Collect, Contain, Detect, Disrupt, Facilitate, Legitimize and Test. Approximately one third of suggested tactics in the framework are related to deception, which not only shows the power of deception as an active defense strategy, but also provides a roadmap for companies to develop a successful deception posture of their own.

There are three tiers of deceptive assets that companies should consider, depending on the level of forensics desired:

1. Low interaction, which consists of simple fake assets designed to divert cybercriminals away from the real thing, using up their time and resources (a minimal sketch of such a decoy follows this list).

2. Medium interaction, which offers greater insights into the techniques used by cybercriminals, allowing security teams to identify attackers and respond to the attack.

3. High interaction, which provides the most insight into attacker activity, leveraging extended interaction to collect information.
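To make the low interaction tier concrete, here is a toy sketch rather than a production decoy: a TCP listener on an arbitrary port (2222 in this example) that serves no real function, so any connection to it is suspect by definition. Commercial deception platforms deploy fleets of far more convincing assets, but the alerting principle is the same:

    # Minimal low-interaction decoy: log every connection attempt.
    import datetime
    import socket

    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as listener:
        listener.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        listener.bind(("0.0.0.0", 2222))  # no legitimate service lives here
        listener.listen()
        while True:
            conn, (ip, port) = listener.accept()
            # Every touch is a high-fidelity alert: no real user should arrive.
            print(f"{datetime.datetime.now().isoformat()} decoy hit from {ip}:{port}")
            conn.close()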

While a company doesn’t have to use all of the deception-based tactics outlined in MITRE Shield to prevent attacks, low interaction decoys are a good place to start, and can be deployed in a matter of minutes. Going forward, CISOs should consider whether it’s time to rethink their security strategy to include more active defense tactics, including deception.

The lifecycle of a eureka moment in cybersecurity

It takes more than a single eureka moment to attract investor backing, especially in a notoriously high-stakes and competitive industry like cybersecurity.

eureka moment cybersecurity

While every seed-stage investor has their respective strategies for vetting raw ideas, my experience of the investment due diligence process involves a veritable wringer of rapid-fire, back-to-back meetings with cybersecurity specialists and potential customers, as well as rigorous market scoping by analysts and researchers.

As the CTO of a seed-stage venture capital firm entirely dedicated to cybersecurity, I spend a good portion of my time ideating alongside early-stage entrepreneurs and working through this process with them. To do this well, I’ve had to develop an internal seismometer for industry pain points and potential competitors, play matchmaker between tech geniuses and industry decision-makers, and peer down complex roadmaps to find the optimal point of convergence for good tech and good business.

Along the way, I’ve gained a unique perspective on the set of necessary qualities for a good idea to turn into a successful startup with significant market traction.

Just as a good idea doesn’t necessarily translate into a great product, the qualities of a great product don’t add up to a magic formula for guaranteed success. However, how well an idea performs in the categories I set out below can directly impact the confidence of investors and potential customers you’re pitching to. Therefore, it’s vital that entrepreneurs ask themselves the following before a pitch:

Do I have a strong core value proposition?

The cybersecurity industry is saturated with features passing themselves off as platforms. While the accumulated value of a solution’s features may be high, its core value must resonate with customers above all else. More pitches than I wish to count have left me scratching my head over a proposed solution’s ultimate purpose. Product pitches must lead with and focus on the solution’s core value proposition, and this proposition must be able to hold its own and sell itself.

Consider a browser security plugin with extensive features that include XSS mitigation, malicious website blocking, employee activity logging and download inspections. This product proposition may be built on many nice-to-have features, but, without a strong core feature, it doesn’t add up to a strong product that customers will be willing to buy. Add-on features, should they need to be discussed, ought to be mentioned as secondary or additional points of value.

What is my solution’s path to scalability?

Solutions must be scalable in order to reach as many customers as possible and avoid price hikes or reduced margins. Moreover, it’s critical to factor in the maintenance cost and “tech debt” of solutions that are environment-dependent on account of integrations with other tools or difficult deployments.

I’ve come across many pitches that fail to do this, and entrepreneurs who forget that such an omission can both limit their customer pool and eventually incur tremendous costs for integrations that are destined to lose value over time.

What is my product experience like for customers?

A solution’s viability and success lie in so much more than its outcome. Both investors and customers require complete transparency over the ease of use of a product in order for it to move forward in the pipeline. Frictionless and resource-light deployments are absolutely key and should always mind the realities of inter-departmental politics. Remember, the requirement of additional hires for a company to use your product is a hidden cost that will ultimately reduce your margins.

Moreover, it can be very difficult for companies to rope in the necessary stakeholders across their organization to help your solution succeed. Finally, requiring hard-to-come-by resources for a POC, such as sensitive data, may set up your solution for failure if customers are reluctant to relinquish the necessary assets.

What is my solution’s time-to-value?

Successfully discussing a core value must eventually give way to achieving it. Satisfaction with a solution will always ultimately boil down to deliverables. From the moment your idea raises funds, your solution will be running against the clock to provide its promised value, successfully interact with the market and adapt itself where necessary.

The ability to demonstrate strong initial performance will draw in sought-after design partners and allow you to begin selling earlier. Not only are these sales necessary bolsters to your follow-on rounds, they also pave the way for future upsells to customers.

Where POCs are involved, it’s critical that the beta content installed by early customers delivers well, in order to drive conversions and complete the sales process. It’s equally important to create a roadmap for achieving this type of deliverability that can be clearly articulated to your stakeholders.

When will my solution deliver value?

It’s all too common for entrepreneurs to focus on “the ultimate solution”. This usually amounts to what they hope their solution will achieve some three years into development while neglecting the market value it can provide along the way. While investors are keen to embrace the big picture, this kind of entrepreneurial tunnel vision hurts product sales and future fundraising.

Early-stage startups must build their way up to solving big problems and reconcile with the fact that they are typically only equipped to resolve small ones until they reach maturity. This must be communicated transparently to avoid creating a false image of success in your market validation. Avoid asking “do you need a product that solves your [high-level problem]?” and ask instead “would you pay for a product that solves this key element of your [high-level problem]?”.

Unless an idea breaks completely new ground or looks to secure new tech, it’s likely to be an improvement to an already existing solution. In order to succeed at this, however, it’s critical to understand the failures and drawbacks of existing solutions before embarking on building your own.

Cybersecurity buyers are often open to switching over to a product that works as well as one they already use without its disadvantages. However, it’s incumbent on vendors to avoid making false promises and follow through on improving their output.

The cybersecurity industry is full of entrepreneurial genius poised to disrupt the current market. However, that potential can only manifest when products are designed to address much more than mere security gaps.

The lifecycle of a good cybersecurity idea may start with tech, but it requires a powerful infusion of foresight and listening to make it through investor and customer pipelines. This requires an extraordinary amount of research in some very unexpected places, and one of the biggest obstacles ideating entrepreneurs face is determining precisely what questions to ask and gaining access to those they need to understand.

Working with well-connected investors dedicated to fostering those relationships and ironing out roadmap kinks in the ideation process is one of the surest ways to secure success. We must focus on building good ideas sustainably and remember that immediate partial value delivery is a small compromise towards building out the next great cybersecurity disruptor.

Measuring impact beyond a single incident

Determining the true impact of a cyber attack has always been, and will likely remain, one of the most challenging aspects of this technological age.

true impact

In an environment where very limited transparency into the root cause and the true impact is afforded, we are left with isolated examples to point to the direct cost of a security incident. For example, the 2010 attack on the Natanz nuclear facilities was, and in certain cases still is, used as the reference case study for why cybersecurity is imperative within an ICS environment (quite possibly now substituted with BlackEnergy).

For ransomware, the reference point was the impact WannaCry had on healthcare, and it will likely be replaced by the awful story in which a patient sadly lost their life because of a ransomware attack.

What these cases clearly provide is a degree of insight into impact. That insight may be limited in certain scenarios, but this approach sadly all but excludes the multitude of attacks that successfully occurred before them, in which the impact was either unavailable or did not make the headline story.

It can of course be argued that such case studies are a useful vehicle to influence change, but there is equally the risk that they are such outliers that decision makers do not recognise their own vulnerabilities within the broader problem statement.

If we truly want to influence change, then a wider body of work is required to establish the broader economic and societal impact of the multitude of incidents. Whilst this is likely to be hugely subjective, it is imperative if we are to understand the true impact of cybersecurity incidents. I recall a conversation a friend of mine had with someone who claimed they “are not concerned with malware because all it does is slow down their computer”. This of course is the wider challenge: articulating the impact in a manner which will resonate.

Ask anybody the impact of car theft and this will be understood, ask the same question about any number of digital incidents and the reply will likely be less clear.

It can be argued that studies measuring the macro cost of such incidents do indeed exist, but a problem statement of billions lost is so enormous that none of us can truly relate to it. A small business owner hearing how another small business had its records locked by ransomware, and what that meant for the business, is likely to find that more influential than an economic model explaining the financial cost of cybercrime (which remains imperative for policy makers, for example).

If such case studies are so imperative, and there exists a stigma around being open about breaches, what can be done? This of course is the largest challenge, with potential litigation governing every communication. To be entirely honest, as I sit here and try to conclude with concrete proposals, I am somewhat at a loss as to how to change the status quo.

The question is an open one: what can be done? Can we leave fault at the door when we comment on security incidents? Perhaps encourage those who are victims to be more open? Of course this is only a start, and an area that deserves a wider discussion.

Credential stuffing is just the tip of the iceberg

Credential stuffing attacks are taking up a lot of the oxygen in cybersecurity rooms these days. A steady blitz of large-scale cybersecurity breaches in recent years has flooded the dark web with passwords and other credentials that are used in subsequent attacks, such as those on Reddit and State Farm, as well as in widespread efforts to exploit the remote work and online get-togethers resulting from the COVID-19 pandemic.

credential stuffing

But while enterprises are rightly worried about weathering a hurricane of credential-stuffing attacks, they also need to be concerned about more subtle, but equally dangerous, threats to APIs that can slip in under the radar.

Attacks that exploit APIs, beyond credential stuffing, can start small with targeted probing of unique API logic, and lead to exploits such as the theft of personal information, wholesale data exfiltration or full account takeovers.

Unlike automated flood-the-zone, volume-based credential attacks, other API attacks are conducted almost one-to-one and carried out in elusive ways, targeting the distinct vulnerabilities of each API, making them even harder to detect than attacks happening on a large scale. Yet, they’re capable of causing as much, if not more, damage. And they’re becoming more and more prevalent, with APIs being the foundation of modern applications.

Beyond credential stuffing

Credential stuffing attacks are a key concern for good reason. High profile breaches—such as those of Equifax and LinkedIn, to name two of many—have resulted in billions of compromised credentials floating around on the dark web, feeding an underground industry of malicious activity. For several years now, about 80% of breaches that have resulted from hacking have involved stolen and/or weak passwords, according to Verizon’s annual Data Breach Investigations Report.

Additionally, research by Akamai determined that three-quarters of credential abuse attacks against the financial services industry in 2019 were aimed at APIs. Many of those attacks are conducted on a large scale to overwhelm organizations with millions of automated login attempts.

Most threats to APIs move beyond credential stuffing, which is only one of many API threats defined in the 2019 OWASP API Security Top 10. In many instances they are not automated, are much more subtle, and come from authenticated users.

APIs, which are essential to an increasing number of applications, are specialized entities performing particular functions for specific organizations. Someone exploiting a vulnerability in an API used by a bank, retailer or other institution could, with a couple of subtle calls, dump the database, drain an account, cause an outage or do all kinds of other damage to impact revenue and brand reputation.

An attacker doesn’t even have to necessarily sneak in. For instance, they could sign on to Disney+ as a legitimate user and then poke around the API looking for opportunities to exploit. In one example of a front-door approach, a researcher came across an API vulnerability on the Steam developer site that would allow the theft of game license keys. (Luckily for the company, he reported it—and was rewarded with $20,000.)

Most API attacks are very difficult to detect and defend against since they’re carried out in such a clandestine manner. Because APIs are mostly unique, their vulnerabilities don’t conform to any pattern or signature that would allow common security controls to be enforced at scale. And the damage can be considerable, even coming from a single source. For example, an attacker exploiting a weakness in an API could launch a successful DoS attack with a single request.

API DoS

Rather than the more common DDoS attack, which floods a target with requests from many sources via a botnet, an API DoS can happen when the attacker manipulates the logic of the API, causing the application to overwork itself. If an API is designed to return, say, 10 items per request, an attacker could change that value to 10 million, using up all of an application’s resources and crashing it—with a single request.
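The corresponding defense is equally simple in principle: never trust a client-supplied page size. Below is a minimal sketch, assuming a Flask-style endpoint and a hypothetical limit parameter, of the server-side clamp that closes this particular hole:

    # Clamp client-supplied page sizes to defuse logic-based API DoS.
    from flask import Flask, jsonify, request

    app = Flask(__name__)
    MAX_PAGE_SIZE = 100

    @app.route("/items")
    def list_items():
        # A request like /items?limit=10000000 is clamped to 100 instead
        # of being allowed to exhaust memory or database resources.
        requested = request.args.get("limit", default=10, type=int)
        limit = max(1, min(requested, MAX_PAGE_SIZE))
        items = [{"id": i} for i in range(limit)]  # stand-in for a DB query
        return jsonify(items)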

Credential stuffing attacks present security challenges of their own. With easy access to evasion tools, and with their own sophistication improving dramatically, it’s not difficult for attackers to disguise their activity behind a mesh of thousands of IP addresses and devices. But credential stuffing nevertheless is an established problem with established solutions.

How enterprises can improve

Enterprises can scale infrastructure to mitigate credential stuffing attacks or buy a solution capable of identifying and stopping the attacks. The trick is to evaluate large volumes of activity and block malicious login attempts without impacting legitimate users, and to do it quickly, identifying successful malicious logins and alerting users in time to protect them from fraud.
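One illustrative heuristic for that evaluation problem, offered as a sketch rather than a production control: credential stuffing tends to show a single source attempting many distinct usernames, unlike a legitimate user retrying one account. The log format and threshold below are assumptions:

    # Flag sources that attempt logins against many distinct accounts.
    from collections import defaultdict

    FAILED_LOGINS = [  # stand-in for an authentication log: (source_ip, username)
        ("203.0.113.7", "alice"), ("203.0.113.7", "bob"),
        ("203.0.113.7", "carol"), ("203.0.113.7", "dave"),
        ("198.51.100.2", "erin"), ("198.51.100.2", "erin"),
    ]
    DISTINCT_USER_THRESHOLD = 3

    users_per_ip = defaultdict(set)
    for ip, user in FAILED_LOGINS:
        users_per_ip[ip].add(user)

    for ip, users in users_per_ip.items():
        if len(users) >= DISTINCT_USER_THRESHOLD:
            print(f"possible credential stuffing from {ip}: {len(users)} accounts tried")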

Enterprises can improve API security first and foremost by identifying all of their APIs, including their data exposure and usage, and even APIs they didn’t know existed. When APIs fly under security operators’ radar, otherwise secure infrastructure has a hole in the fence. Once full visibility is attained, enterprises can more tightly control API access and use, and thus enable better security.

Your best defense against ransomware: Find the early warning signs

As ransomware continues to prove how devastating it can be, one of the scariest things for security pros is how quickly it can paralyze an organization. Just look at Honda, which was forced to shut down all global operations in June, and Garmin, which had its services knocked offline for days in July.

Ransomware isn’t hard to detect, but identifying it once encryption and exfiltration are rampant is too little, too late. However, there are several warning signs that organizations can catch before the real damage is done. In fact, FireEye found that there are usually three days of dwell time between these early warning signs and the detonation of ransomware.

So, how does a security team find these weak but important early warning signals? Somewhat surprisingly perhaps, the network provides a unique vantage point to spot the pre-encryption activity of ransomware actors such as those behind Maze.

Here’s a guide, broken down by MITRE category, of the many different warning signs organizations being attacked by Maze ransomware can see and act upon before it’s too late.

Initial access

With Maze actors, there are several initial access vectors, such as phishing attachments and links, external-facing remote access such as Microsoft’s Remote Desktop Protocol (RDP), and access via valid accounts. All of these can be discovered while network threat hunting across traffic. Furthermore, given this represents the actor’s earliest foray into the environment, detecting this initial access is the organization’s best bet to significantly mitigate impact.

ATT&CK techniques

Hunt for…

T1193 Spear-phishing attachment
T1192 Spear-phishing link

  • Previously unseen or newly registered domains, unique registrars
  • Doppelgangers of your organization / partner’s domains or Alexa top 500
T1133 External Remote Services
  • Inbound RDP from external devices (a hunting sketch follows this list)
T1078 Valid accounts
  • Exposed passwords across SMB, FTP, HTTP, and other clear text usage
T1190 Exploit public-facing application
  • Exposure and exploitation of known vulnerabilities
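As an example of how one of these hunts can be automated, the sketch below flags inbound RDP from external devices. It assumes connection records have already been exported from your network sensor as (source IP, destination IP, destination port) tuples:

    # Hunt for inbound RDP (3389/TCP) originating outside internal ranges.
    import ipaddress

    INTERNAL_NETS = [ipaddress.ip_network(n) for n in
                     ("10.0.0.0/8", "172.16.0.0/12", "192.168.0.0/16")]

    def is_internal(ip: str) -> bool:
        addr = ipaddress.ip_address(ip)
        return any(addr in net for net in INTERNAL_NETS)

    CONNECTIONS = [  # stand-in for exported sensor records
        ("10.0.0.5", "10.0.0.9", 443),
        ("198.51.100.23", "10.0.0.14", 3389),
        ("10.0.0.7", "10.0.0.14", 3389),
    ]

    for src, dst, port in CONNECTIONS:
        if port == 3389 and not is_internal(src):
            print(f"ALERT: inbound RDP from external device {src} -> {dst}")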

Execution

The execution phase is still early enough in an attack to shut it down and foil any attempts to detonate ransomware. Common early warning signs to watch for in execution include users being tricked into clicking a phishing link or attachment, or when certain tools such as PsExec have been used in the environment.

ATT&CK techniques

Hunt for…

T1204 User execution

  • Suspicious email behaviors from users and associated downloads
T1035 Service execution
  • File IO over SMB using PsExec, extracting contents on one system and then later on another system
T1028 Windows remote management
  • Remote management connections excluding known good devices

Persistence

Adversaries using Maze rely on several common techniques, such as a web shell on internet-facing systems and the use of valid accounts obtained within the environment. Once the adversary has secured a foothold, it starts to become increasingly difficult to mitigate impact.

ATT&CK techniques

Hunt for…

T1100 Web shell

  • Unusual connection activity (e.g. atypical ports and user agents) from external sources
T1078 Valid accounts
  • Remote copy of KeePass file stores across SMB or HTTP

Privilege escalation

As an adversary gains higher levels of access it becomes significantly more difficult to pick up additional signs of activity in the environment. For the actors of Maze, the techniques used for persistence are similar to those for privileged activity.

ATT&CK techniques

Hunt for…

T1100 Web shell

  • Web shells on external-facing web and gateway systems
T1078 Valid accounts
  • Remote copy of password files across SMB (e.g. files with “passw”)

Defense evasion

To hide files and their access to different systems, adversaries like the ones who use Maze will rename files, encode, archive, and use other mechanisms to hide their tracks. Attempts to hide their traces are in themselves indicators to hunt for.

ATT&CK techniques

Hunt for…

T1027 Obfuscated files or information

  • Adversary tools by port usage, certificate issuer name, or unknown protocol communications
T1078 Valid accounts
  • New account creation from workstations and other non-admin used devices

Credential access

There are several defensive controls that can be put in place to help limit or restrict access to credentials. Threat hunters can enable this process by providing situational awareness of network hygiene including specific attack tool usage, credential misuse attempts and weak or insecure passwords.

ATT&CK techniques

Hunt for…

T1110 Brute force

  • RDP brute force attempts against known username accounts (a detection sketch follows this list)
T1081 Credentials in files
  • Unencrypted passwords and password files in the environment
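The brute force hunt lends itself to a simple sliding-window count of failed authentications per source. The event format and the five-failures-in-sixty-seconds threshold below are assumptions to tune for your environment:

    # Sliding-window detector for RDP brute force attempts.
    from collections import defaultdict, deque

    WINDOW_SECONDS = 60
    THRESHOLD = 5
    recent_failures = defaultdict(deque)  # source_ip -> failure timestamps

    def record_failure(src_ip: str, ts: float) -> bool:
        """Return True when src_ip crosses the brute force threshold."""
        window = recent_failures[src_ip]
        window.append(ts)
        while window and ts - window[0] > WINDOW_SECONDS:
            window.popleft()
        return len(window) >= THRESHOLD

    # Stand-in event stream of failed RDP logins: (timestamp, source_ip)
    events = [(t, "203.0.113.9") for t in range(0, 50, 10)]
    for ts, ip in events:
        if record_failure(ip, ts):
            print(f"ALERT: possible RDP brute force from {ip} at t={ts}")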

Discovery

Maze adversaries use a number of different methods for internal reconnaissance and discovery. For example, enumeration and data collection tools and methods leave their own trail of evidence that can be identified before the exfiltration and encryption occurs.

ATT&CK techniques

Hunt for…

T1201 Password policy discovery

  • Traffic of devices copying the password policy off file shares
  • Enumeration of password policy
T1018 Remote system discovery

T1087 Account discovery

T1016 System network configuration discovery

T1135 Network share discovery

T1083 File and directory discovery

  • Enumeration for computer names, accounts, network connections, network configurations, or files

Lateral movement

Ransomware actors use lateral movement to understand the environment, spread through the network and then to collect and prepare data for encryption / exfiltration.

ATT&CK techniques

Hunt for…

T1105 Remote file copy

T1077 Windows admin shares

  • Suspicious SMB file write activity
  • PsExec usage to copy attack tools or access other systems
  • Attack tools copied across SMB
T1076 Remote Desktop Protocol

T1028 Windows remote management

T1097 Pass the ticket

  • HTTP POST with the use of WinRM user agent
  • Enumeration of remote management capabilities
  • Non-admin devices with RDP activity

Collection

In this phase, Maze actors use tools and batch scripts to collect information and prepare for exfiltration. It is typical to find .bat files or archives using the .7z or .exe extension at this stage.

ATT&CK techniques

Hunt for…

T1039 Data from network shared drive

  • Suspicious or uncommon remote system data collection activity

Command and control (C2)

Many adversaries will use common ports or remote access tools to try and obtain and maintain C2, and Maze actors are no different. In the research my team has done, we’ve also seen the use of ICMP tunnels to connect to the attacker infrastructure.

ATT&CK techniques

Hunt for…

T1043 Commonly used port

T1071 Standard application layer protocol

  • ICMP callouts to IP addresses
  • Non-browser originating HTTP traffic (a detection sketch follows this list)
  • Unique device HTTP script-like requests
T1105 Remote file copy
  • Downloads of remote access tools through string searches
T1219 Remote access tools
  • Cobalt Strike BEACON and FTP to directories with “cobalt” in the name
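The non-browser HTTP hunt can be sketched as a simple pass over proxy logs, assuming entries of (source host, user agent). The browser prefix list is a deliberate simplification; a real hunt would also baseline which agents are normal for each device:

    # Surface HTTP traffic whose user agent does not look like a browser.
    KNOWN_BROWSER_PREFIXES = ("Mozilla/", "Opera/")

    PROXY_LOG = [  # stand-in for proxy log entries
        ("ws-042", "Mozilla/5.0 (Windows NT 10.0; Win64; x64)"),
        ("ws-017", "python-requests/2.24.0"),
        ("ws-017", "curl/7.68.0"),
    ]

    for host, user_agent in PROXY_LOG:
        if not user_agent.startswith(KNOWN_BROWSER_PREFIXES):
            print(f"review: non-browser HTTP from {host}: {user_agent}")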

Exfiltration

At this stage, the risk of exposure of sensitive data in the public realm is dire and it means an organization has missed many of the earlier warning signs—now it’s about minimizing impact.

ATT&CK techniques

Hunt for…

T1030 Data transfer size limits

  • External device traffic to uncommon destinations
T1048 Exfiltration over alternative protocol
  • Unknown FTP outbound
T1002 Data compressed
  • Archive file extraction

Summary

Ransomware is never good news when it shows up at the doorstep. However, with disciplined network threat hunting and monitoring, it is possible to identify an attack early in the lifecycle. Many of the early warning signs are visible on the network and threat hunters would be well served to identify these and thus help mitigate impact.

Secure data sharing in a world concerned with privacy

The ongoing debate surrounding privacy protection in the global data economy reached a fever pitch with July’s “Schrems II” ruling at the European Court of Justice, which struck down the Privacy Shield – a legal mechanism enabling companies to transfer personal data from the EU to the US for processing – potentially disrupting the business of thousands of companies.

The plaintiff, Austrian privacy advocate Max Schrems, claimed that US privacy legislation was insufficiently robust to prevent national security and intelligence authorities from acquiring – and misusing – Europeans’ personal data. The EU’s top court agreed, abolishing the Privacy Shield and requiring American companies that exchange data with European partners to comply with the standards set out by the GDPR, the EU’s data privacy law.

Following this landmark ruling, ensuring the secure flow of data from one jurisdiction to another will be a significant challenge, given the lack of an international regulatory framework for data transfers and emerging conflicts between competing data privacy regulations.

This comes at a time when the COVID-19 crisis has further underscored the urgent need for collaborative international research involving the exchange of personal data – in this case, sensitive health data.

Will data protection regulations stand in the way of this and other vital data sharing?

The Privacy Shield was a stopgap measure to facilitate data-sharing between the US and the EU which ultimately did not withstand legal scrutiny. Robust, compliant-by-design tools beyond contractual frameworks will be required in order to protect individual privacy while allowing data-driven research on regulated data and business collaboration across jurisdictions.

Fortunately, innovative privacy-enhancing technologies (PETs) can be the stable bridge connecting differing – and sometimes conflicting – privacy frameworks. Here’s why policy alone will not suffice to resolve existing data privacy challenges – and how PETs can deliver the best of both worlds:

A new paradigm for ethical and secure data sharing

The abolition of the Privacy Shield poses major challenges for over 5,000 American and European companies which previously relied on its existence and must now confront a murky legal landscape. While big players like Google and Zoom have the resources to update their compliance protocols and negotiate legal contracts between transfer partners, smaller innovators lack these means and may see their activities slowed or even permanently halted. Privacy legislation has already impeded vital cross-border research collaborations – one prominent example is the joint American-Finnish study regarding the genomic causes of diabetes, which “slowed to a crawl” due to regulations, according to the head of the US National Institutes of Health (NIH).

One response to the Schrems II ruling might be expediting moves towards a federal data privacy law in the US. But this would take time: in Europe, over two years passed between the adoption of GDPR and its enforcement. Given that smaller companies are facing an immediate legal threat to their regular operations, a federal privacy law might not come quickly enough.

Even if such legislation were to be approved in Washington, it is unlikely to be fully compatible with GDPR – not to mention widening privacy regulations in other countries. The CCPA, the major statewide data protection initiative, is generally considered less stringent than GDPR, meaning that even CCPA-compliant businesses would still have to adapt to European standards.

In short, the existing legislative toolbox is insufficient to protect the operations of thousands of businesses in the US and around the world, which is why it’s time for a new paradigm for privacy-preserving data sharing based on Privacy-Enhancing Technologies.

The advantages of privacy-enhancing technologies

Compliance costs and legal risks are prompting companies to consider an innovative data sharing method based on PETs: a new genre of technologies which can help them bridge competing privacy frameworks. PETs are a category of technologies that protect data along its lifecycle while maintaining its utility, even for advanced AI and machine learning processes. PETs allow their users to harness the benefits of big data while protecting personally identifiable information (PII) and other sensitive information, thus maintaining stringent privacy standards.

One such PET playing a growing role in privacy-preserving information sharing is Homomorphic Encryption (HE), a technique regarded by many as the holy grail of data protection. HE enables multiple parties to securely collaborate on encrypted data by conducting analysis on data which remains encrypted throughout the process, never exposing personal or confidential information. Through HE, companies can derive the necessary insights from big data while protecting individuals’ personal details – and, crucially, while remaining compliant with privacy legislation because the data is never exposed.
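To make this concrete, here is a minimal sketch using the open-source python-paillier (phe) library. Paillier is only additively homomorphic, a much simpler cousin of the fully homomorphic schemes described above, but it demonstrates the core idea: a third party can compute on data it can never read:

    # Additively homomorphic encryption with the `phe` library.
    from phe import paillier  # pip install phe

    public_key, private_key = paillier.generate_paillier_keypair()

    # The data owner encrypts sensitive values before sharing them.
    salaries = [52_000, 61_500, 48_250]
    encrypted = [public_key.encrypt(s) for s in salaries]

    # An untrusted analyst can sum the ciphertexts without seeing any value.
    encrypted_total = encrypted[0] + encrypted[1] + encrypted[2]

    # Only the data owner, holding the private key, can decrypt the result.
    print(private_key.decrypt(encrypted_total))  # 161750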

Jim Halpert, a data regulation lawyer who helped draft the CCPA and is Global Co-Chair of the Data Protection, Privacy and Security practice at DLA Piper, views certain solutions based on HE as effective compliance tools.

“Homomorphic Encryption encrypts data elements in such a way that they cannot identify, describe or in any way relate to a person or household. As a result, homomorphically encrypted data cannot be considered ‘personal information’ and is thus exempt from CCPA requirements,” Halpert says. “Companies which encrypt data through HE minimize the risk of legal threats, avoid CCPA obligations, and eliminate the possibility that a third party could mistakenly be exposed to personal data.”

The same principle applies to GDPR, which requires any personally identifiable information to be protected.

HE is applicable to any industry and activity which requires sensitive data to be analyzed by third parties; for example, research such as genomic investigations into individuals’ susceptibility to COVID-19 and other health conditions, and secure data analysis in the financial services industry, including financial crime investigations across borders and institutions. In these cases, HE enables users to legally collaborate across different jurisdictions and regulatory frameworks, maximizing data value while minimizing privacy and compliance risk.

PETs will be crucial in allowing data to flow securely even after the Privacy Shield has been lowered. The EU and the US have already entered negotiations aimed at replacing the Privacy Shield, but while a palliative solution might satisfy business interests in the short term, it won’t remedy the underlying problems inherent to competing privacy frameworks. Any replacement would face immediate legal challenges in a potential “Schrems III” case. Tech is in large part responsible for the growing data privacy quandary. The onus, then, is on tech itself to help facilitate the free flow of data without undermining data protection.

How security theater misses critical gaps in attack surface and what to do about it

Bruce Schneier coined the phrase security theater to describe “security measures that make people feel more secure without doing anything to actually improve their security.” That’s the situation we still face today when it comes to defending against cyber security risks.

The insurance industry employs actuaries to help quantify and manage the risks insurance underwriters take. The organizations and individuals that in turn purchase insurance policies also look at their own biggest risks and the likelihood they will occur, and opt accordingly for various deductibles and riders.

Things do not work the same way when it comes to cyber security. For example: Gartner observed that most breaches are the result of a vulnerability being exploited. Furthermore, they estimate that 99% of vulnerabilities exploited are already known by the industry and not net-new zero-day vulnerabilities.

How is it possible that well-known vulnerabilities are a significant conduit for attackers when organizations collectively spend at least $1B on vulnerability scanning annually? Among other things, it’s because organizations are practicing a form of security theater: they are focusing those vulnerability scanners on what they know and what is familiar; sometimes they are simply attempting to fulfill a compliance requirement.

While there has been a strong industry movement towards security effectiveness and productivity, with approaches favoring prioritizing alerts, investigations and activities, there are still a good number of security theatrics carried out in many organizations. Many simply continue conducting various security processes and maintaining security solutions that may have been valuable at one time, but now don’t address the right concerns.

Broaching a concern such as security theater with security professionals can result in defensiveness or ire from disturbing a well-established process, or worse, practitioners assuming there is some implied level of foolishness or ineptitude. Rather than lambasting security theater practices outright, a better approach is to systematically consider what gaps may exist in your organization’s security posture. Part of this exercise requires asking yourself what you don’t know. That might seem like an oxymoron: how does one know what one does not know?

The idea of not knowing what you don’t know is a topic that frequently turns up on CISOs’ list of reasons that “keep them up at night.” The challenge with this type of security issue is less about swiftly applying software patches or assessing vulnerabilities of identified infrastructure. Here the main concern is to identify what might be completely unaddressed: is there some aspect of the IT ecosystem that is unprotected or could serve as an effective conduit to other resources? The question is basically, “What have we overlooked?” or “What asset or business system might be completely unknown, forgotten or not under our control?” The issue is not about the weakness of the known attack surface. It’s about the unknown attack surface that is not protected.

Sophisticated attackers are adept at developing a complete picture of an organization’s entire attack surface. There are numerous tools, techniques and even hacking services that can help attackers with this task. Most attackers are pragmatic and even business-oriented, and their goal is to find the path of least resistance that will provide the greatest payoff. Often this means focusing on the least monitored and least protected part of an organization’s attack surface.

Attackers are adept at finding internet-exposed, unprotected assets or systems. Often these are forgotten or unknown assets that are both an easy entrance to a company’s network as well as valuable in their own right. The irony is that attackers therefore often have a truer picture of an attack surface than the security team charged with defending it.

Interestingly, a security organization’s effectiveness is often diminished by its own constraints, because the team will focus on what they know they need to protect along with the established processes for doing that. Attackers have no such constraints. Rather than following prescribed rules or management by tradition, attackers will first perform reconnaissance and pursue intelligence to find the places of greatest weakness. Attackers look for these unprotected spots and favor them over resources that are actively monitored and defended.

Security organizations, on the other hand, typically start and end their assessments with their known assets. Security theater has them devoting too much focus to the known and not enough on the unknown.

Even well-established practices such as penetration testing, vulnerability assessment and security ratings can amount to security theater because they revolve around what is known. To move beyond theatrics into real effectiveness, security teams need to develop new processes to uncover the unknowns that are part of their IT ecosystem, which is exactly what attackers target. Few organizations are able to do this type of discovery and detection today, whether because of the existing workload or the level of expertise needed to do a complete assessment. In addition, it is common for bias based on pre-existing perceptions of the organization’s security posture to influence the search for the previously unknown.

The process of discovering previously unknown, exposed assets should be done on a regular basis. Automating this process—particularly due to the range of cloud, partner and subsidiary IT that must be considered—makes it more viable. While automation is necessary, it is still important for fully trained researchers to be involved to tune the process, interpret results and ensure its proper scope.
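A toy sketch of what the automated portion of that discovery can look like is below, assuming a placeholder domain and wordlist; real attack surface management also mines certificate transparency logs, cloud inventories, registrar records and much more:

    # Probe candidate hostnames to surface forgotten, exposed assets.
    import socket

    DOMAIN = "example.com"  # placeholder for your organization's domain
    CANDIDATES = ["www", "vpn", "staging", "old", "test", "jenkins", "backup"]

    for name in CANDIDATES:
        host = f"{name}.{DOMAIN}"
        try:
            addr = socket.gethostbyname(host)
        except socket.gaierror:
            continue  # name does not resolve; nothing exposed here
        print(f"found live asset: {host} -> {addr}")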

Adding a continuous process of identifying unknown, uncontrolled or abandoned assets and systems not only helps close gaps, but it expands the purview of security professionals to focus on not just what they know, but to also start considering what they do not know.

Four ways network traffic analysis benefits security teams

The march towards digital transformation and the increasing volume of cyberattacks are finally driving IT security and network teams towards better collaboration. This idea isn’t new, but it’s finally being put into practice at many major enterprises.

network traffic analysis security

Network traffic analysis and security

The reasons are fairly straightforward: all those new transformation initiatives – moving workloads to the cloud, pursuing virtualization or SD-WAN projects, etc. – create network traffic blind spots that can’t easily be monitored using the security tools and processes designed for simpler, on-premises traditional architectures. The result is a series of data and system islands, tool sprawl and a lack of correlation. Basically, there is lots of data, but little information. As the organization grows, the problems compound.

For a company hit with a cyber attack, the final cost can be astronomical as it includes investigation and mitigation costs, costs tied to legal exposure, insurance hikes, the acquisition of new tools, the implementation of new policies and procedures, and the hit to revenues and reputation.

Size doesn’t matter – all companies are vulnerable to an attack. To improve organizational security postures in this new hybrid network environment, Security Operations (SecOps) and Network Operations (NetOps) teams are becoming fast friends. In fact, Gartner has recently changed the name of one of their market segments from “Network Traffic Analysis” to “Network Detection and Response” to reflect the shift in demand for more security-focused network analysis solutions.

Here are four ways that network data in general and network traffic analysis in particular can benefit the SecOps team at the Security Operations Center (SOC) level:

1. Enabling behavioral-based threat detection

Signature-based threat detection, as found in most antivirus and firewall solutions, is reactive. Vendors create signatures for malware as they appear in the wild or license them from third-party sources like Google’s VirusTotal, and update their products to recognize and protect against the threats.

While this is a useful way to quickly screen out all known dangerous files from entering a network, the approach has limits. The most obvious is that signature-based detection can’t catch new threats for which no signature exists. But more importantly, a growing percentage of malware is obfuscated to avoid signature-based detection. Research by network security company WatchGuard Technologies found that a third of all malware in 2019 could evade signature-based antivirus, and that number spiked to two-thirds in Q4 2019. These threats require a different detection method.

Network traffic analysis (also known as network detection and response, or NDR) uses a combination of advanced analytics, machine learning (ML) and rule-based detection to identify suspicious activities throughout the network. NDR tools consume and analyze raw traffic, such as packet data, to build models that reflect normal network behavior, then raise alerts when they detect abnormal patterns.
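The sketch below illustrates that modeling idea generically (it is not any vendor’s implementation) using scikit-learn’s IsolationForest. The flow features, bytes sent, bytes received, duration and distinct destination ports, are assumptions:

    # Learn "normal" flow behavior, then flag outliers as alerts.
    import numpy as np
    from sklearn.ensemble import IsolationForest

    rng = np.random.default_rng(0)

    # Stand-in for historical per-flow features defining normal behavior:
    # [bytes_sent, bytes_received, duration_s, distinct_dst_ports]
    baseline = rng.normal(loc=[5e4, 2e5, 30.0, 3.0],
                          scale=[1e4, 5e4, 10.0, 1.0],
                          size=(1000, 4))

    model = IsolationForest(contamination=0.01, random_state=0)
    model.fit(baseline)

    # New traffic: one typical flow, and one that moves far more data to
    # far more ports than the baseline (e.g., staging for exfiltration).
    new_flows = np.array([
        [5.2e4, 1.9e5, 28.0, 3.0],
        [9.0e5, 4.0e6, 600.0, 40.0],
    ])
    print(model.predict(new_flows))  # 1 = normal, -1 = anomalous alert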

Unlike signature-based solutions, which typically focus on keeping malware out of the network, most NDR solutions can go beyond north-south traffic to also monitor east-west traffic, as well as cloud-native traffic. These capabilities are becoming increasingly important as businesses go virtual and cloud-first. NDR solutions thus help SecOps detect and prevent attacks that can evade signature-based detection. To function, these NDR solutions require access to high-quality network data.

2. Providing data for security analytics, compliance and forensics

The SecOps team will often need the network data and behavior insights for security analytics or compliance audits. This will usually require network metadata and packet data from physical, virtual and cloud-native elements of the network deployed across the data center, branch offices and multi-cloud environments.

The easier it is to access, index and make sense out of this data (preferably in a “single pane of glass” solution), the more value it will provide. Obtaining this insight is entirely feasible but will require a mix of physical and virtual network probes and packet brokers to gather and consolidate data from the various corners of the network to process and deliver it to the security tool stack.

NDR solutions can also offer the SecOps team the ability to capture and retain network data associated with indicators of compromise (IOCs) for fast forensics search and analysis in case of an incident. This ability to capture, save, sort and correlate metadata and packets allows SecOps to investigate breaches and incidents after the fact and determine what went wrong, and how the attack can be better recognized and prevented in the future.

3. Delivering better network visibility for better security automation

Qualified security professionals are rare, and their time is extremely valuable. Automating security tasks can help businesses resolve incidents more quickly and free up time for the SecOps team to focus on more important tasks. Unfortunately, visibility and automation only work as well as the quality and granularity of the data – and both too little and too much can be a problem.

Too little data, and the automated solutions are just as blind as the SecOps team. Too much data, in the form of a threat detection system that throws out too many alerts, can result in a “boy who cries wolf” scenario with the automated responses shutting down accounts or workloads and doing more harm than good.

Missing data, too many alerts or inherent blind spots can mean that the machine learning and analytical models that NDR relies on will not work correctly, producing false positives while missing actual threats. In the long run, this means more work for the SOC team.

The key to successful automation is to have high-quality network data to enable accurate security alerts, so responses can be automated.

4. Decreasing malware dwell time

NDR solutions typically have little to no blocking ability because they are generally not deployed inline (although that choice is up to the IT teams). But even so, they are effective in shortening the incident response window and reducing the dwell time of malware by quickly identifying suspect behavior or traffic. Results from NDR tools can be fed into downstream security tools that can verify and remediate the threats.

Malware dwell time has been steadily decreasing across the industry; the 2019 Verizon Data Breach Investigation Report (DBIR) found that 56% of breaches took months or longer to detect, but the 2020 DBIR found that 81% of data breaches were contained in days or less. This is an encouraging statistic and hopefully SecOps teams will continue partnering with NetOps to reduce it even further.

The benefits of network detection and response or network traffic analysis go far beyond the traditional realm of NetOps. By cooperating, NetOps and SecOps teams can create a solid visibility architecture and practice that strengthens their security posture, leaving organizations well prepared if an attack takes place.

Full network visibility allows security teams to see all the relevant information through a security delivery layer, use behavioral-based or automated threat detection methods, and be able to capture and store relevant data for deep forensics to investigate and respond to any incident.

How can the C-suite support CISOs in improving cybersecurity?

Among the individuals charged with protecting and improving a company’s cybersecurity, the CISO is typically seen as the executive for the job. That said, the shift to widespread remote work has made a compelling case for the need to bring security within the remit of other departments.

improving cybersecurity

The pandemic has torn down physical office barriers, opening businesses up to countless vulnerabilities as the number of attack vectors increased. The reality is that every employee is a potential vulnerability and, with the security habits of workers remaining questionable even amid a rising number of data breaches, it’s never been more important to foster a culture of security throughout an organization.

Improving security with culture

We continue to see different data breaches in the news, with hundreds of millions of users on Instagram, TikTok and YouTube having their accounts compromised in the latest breach. These instances, and countless others, are a testament to the critical importance of strong security behaviors – both at work and home – and the training and attentiveness they require.

The shared responsibility in security is closely tied to how employees at all levels perceive the importance of security. If this is ingrained within the culture, they will have the abilities and tools to protect themselves. This is, of course, easier said than done.

Creating and maintaining a security culture is a never-ending and constantly evolving mission, and influencing people’s behavior is often the most challenging part of the effort. People have become numb to the security threats they face, and although they understand the potential risks, they don’t do anything about it. For example, recent research revealed that 92 percent of UK workers know that using the same password over and over is risky, but 64 percent of the respondents do it anyway. So, how do we get through that dissonance and get people engaged in security?

Encouraging cyber-secure practices from the top

As security continues to grow in importance, organizations absolutely need an executive at the top to vocally and adamantly advocate for security.

CISOs typically lead this charge. They are often tasked with leading a security team and a program responsible for protecting all information assets, as well as ensuring disaster recovery, business continuity and incident response plans are in place and regularly tested. In addition, CISOs and their teams are usually responsible for evaluating new technologies, staying updated on compliance regulations, overseeing identity and access management, communicating risks and security strategies to the C-suite and providing training.

Today, CISOs are also focused on protecting a highly distributed workforce and customers – in offices, at home or a mix of both – and meeting the new security challenges and threats that come along with this hybrid environment. That’s why it’s more important than ever for other C-suite executives to help promote and drive the organization’s security culture – especially through communications, training and enforcement of best practices.

While CISOs continue to spearhead the development of the organization’s security program and define the security mission and culture, other C-suite executives can vocally support these programs to ensure their integrity throughout the whole process, from vision and development to implementation and ongoing enforcement. The participation of the C-suite can also help CISOs focus on the most important security issues and adjust the program to ensure it is aligned with broader business plans and strategies, thereby helping to get broader support without compromising security.

One likely companion for this type of cross-department alignment is the Chief Operating Officer (COO). As this role typically reports directly to the CEO and is considered to be second in the chain of command, the COO will be able to provide the authority needed to advocate for security and how it can impact employees, customers, products and ultimately the business. This means a good COO today needs to encourage a business culture that supports security efforts thoroughly, while also ensuring security is prioritized at a tactical level.

However, the COO is not the only one that needs to serve as a security advocate. All C-level executives have a critical role to play in establishing a strong security culture. Because of their connections to different stakeholders, they will be able to share diverse insights.

For example, the COO can better incorporate input from the board, which is vital to ensuring the CISO understands the company’s risk tolerance which will directly impact innovation and revenue. The Chief Financial Officer (CFO) could share insights into the spending priorities and various obligations needed to protect financial systems and the Chief Human Resources Manager (CHRM) could get valuable data from employees. The CHRM is instrumental when driving the development of the security culture; their level of engagement often determines the overall success of developing a successful security-conscious culture.

Security-conscious C-suite executives will be able to step in to support the CISO’s mission that security needs to be a top priority.

Think security-first

Having model behavior fed from the very top will help to underline an organization’s collective commitment to cybersecurity. In doing so, employees are empowered by a sense of shared responsibility around their role in keeping a company’s corporate data secure. To this end, it’s crucial that the C-suite of modern companies are trailblazers of security, particularly in the current landscape.

The techniques employed by cybercriminals are becoming more and more sophisticated, and the risk of data breaches and stolen information being offered for sale on the dark web has never been higher. As the pandemic continues to influence developments in information security, senior leadership, middle management and junior staff members must all work together towards a collective aim of securing their workplace.

Fostering a culture of security awareness is by no means an easy feat, but the long-term gains outweigh any teething issues and will serve to make businesses watertight in the midst of a growing threat landscape.

Most compliance requirements are completely absurd

Compliance is probably one of the dullest topics in cybersecurity. Let’s be honest, there’s nothing to get excited about because most people view it as a tick-box exercise. It doesn’t matter which compliance regulation you talk about – they all draw a collective groan from companies whenever the subject comes up.

The thing is, compliance requirements are often poorly written, vague and confusing. In my opinion, the confusion around compliance comes from the writing, so it’s no surprise companies are struggling, especially when they have to comply with multiple requirements simultaneously.

Poor writing is smothering compliance regulations

Take ISO 27001 as an example. Its goal is to improve a business’s information security management, and its six-part process includes directives like “conduct a risk assessment”, “define a security policy” and “manage identified risks”. The requirements behind each of these directives are extremely vague and needlessly subjective.

The Sarbanes-Oxley Act (SOX), which applies to publicly traded companies in the United States, is no better. Section 404 vaguely says that all publicly traded organizations have to demonstrate “due diligence” in the disclosure of financial information, but it does not explain what “due diligence” means.

The Gramm-Leach-Bliley Act (GLBA) requires US financial institutions to explain information-sharing practices to their customers. It says financial organizations have to “develop a written information security plan”, but then doesn’t offer any advice on how to achieve that.

Even Lexcel (an accreditation indicating quality in relation to legal practice management standards) in the United Kingdom, which is written by lawyers for lawyers, is not clear: “Practices must have an information management policy with procedures for the protection and security of the information assets.”

For a profession that prides itself on being able to maintain absolute clarity, I’m surprised Lexcel allows this type of subjectivity in its compliance requirements.

It’s not easy to write for such a wide audience

Look, I understand. It’s a pretty tricky job to write compliance requirements. They need to be applicable to all organizations within a particular field, each of which differs in how it conducts business and how it has set up its technological infrastructure.

Furthermore, writers are working against the clock with compliance requirements. IT regulations are changing at such a quick pace that the requirements they write today might be out of date tomorrow.

However, I think those who write requirements should take the Payment Card Industry Data Security Standard (PCI DSS) as an example. The PCI DSS applies to all organizations that store cardholder data and the requirements are clear, regularly updated, and you can find everything you need in one place.

The way PCI DSS compliance is structured (in terms of requirements, testing procedures and guidance) is a lot clearer than anything else I’ve seen. It leaves very little room for subjectivity, and you know exactly where you stand with it.

The GDPR is also pretty well written and detailed. The many articles referring to data protection are specific, understandable and implementable.

For example, when it comes to data access, this sentence is perfectly clear: “Unauthorized access also includes accidental or unlawful destruction, loss, alteration, or unauthorized disclosure of personal data transmitted, stored or otherwise processed” (Articles 4, 5, 23 and 32).

It’s also very clear when it comes to auditing processes: “You need to maintain a record of data processing activities, including information on ‘recipients to whom the personal data have been or will be disclosed’, i.e. who has access to data” (Articles 5, 28, 30, 39, 47).

So, when you’re faced with many compliance requirements, you need a good strategy in place, and things get complex when you’re trying to comply with multiple mandates at once. If I can give you one tip, it’s this: find the commonalities between them before coming up with a solution.
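
To make that tip concrete, here is a minimal sketch in Python of the idea: map each shared security control to the mandates that touch it, so overlapping work is identified up front and done once. The control names and mappings below are my own rough simplification for illustration, not an official crosswalk between these regulations.

# Illustrative sketch: map shared security controls to the mandates that
# reference them, so overlapping compliance work only gets done once.
# These mappings are a rough simplification, not an official crosswalk.
CONTROL_TO_MANDATES = {
    "written information security policy": ["ISO 27001", "GLBA", "Lexcel"],
    "risk assessment": ["ISO 27001", "PCI DSS", "GDPR"],
    "access management": ["PCI DSS", "GDPR", "SOX"],
    "audit trail of data access": ["GDPR", "SOX", "PCI DSS"],
}

def shared_controls(mandates_in_scope):
    """Print the controls that satisfy more than one in-scope mandate."""
    for control, mandates in CONTROL_TO_MANDATES.items():
        overlap = sorted(set(mandates) & set(mandates_in_scope))
        if len(overlap) > 1:
            print(f"{control}: shared by {', '.join(overlap)}")

shared_controls(["GDPR", "PCI DSS", "SOX"])

Running this with GDPR, PCI DSS and SOX in scope immediately shows which controls pay off across several mandates at once – which is exactly where a compliance strategy should start.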

You need to do the basics right

In my opinion, the confusing nature of compliance only fuels the relentless bombardment of marketing material from vendors on “how you can be compliant with X” or the “top five things you need to know about Y”.

You have to understand that at the core of any compliance mandate is the desire to keep protected data secure, only allowing access to those who need it for business reasons. That’s why the best way to approach compliance is to start with the basics: data storage, file auditing and access management. Get those right, and you’re well on your way to demonstrating your willingness to comply.
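
As a loose illustration of those basics, here is a minimal Python sketch that combines access management with an audit trail: every attempt to read a protected data set is checked against a role list and logged. The roles, data set names and log format are hypothetical placeholders, not a prescribed implementation.

# Minimal sketch of the "basics": access management plus an audit trail.
# Roles, data set names and the log format are hypothetical examples.
import logging
from datetime import datetime, timezone

logging.basicConfig(filename="data_access_audit.log", level=logging.INFO)

# Hypothetical access-control list: which roles may read which data set.
ACL = {
    "payroll_records": {"hr_manager", "finance"},
    "customer_pii": {"support_lead"},
}

def read_data(user: str, role: str, dataset: str) -> bool:
    """Allow access only to authorized roles, and audit every attempt."""
    allowed = role in ACL.get(dataset, set())
    logging.info(
        "%s user=%s role=%s dataset=%s allowed=%s",
        datetime.now(timezone.utc).isoformat(), user, role, dataset, allowed,
    )
    return allowed

print(read_data("alice", "finance", "payroll_records"))   # True
print(read_data("bob", "intern", "customer_pii"))         # False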

Mapping the motives of insider threats

Insider threats can take many forms, from the absent-minded employee failing to follow basic security protocols, to the malicious insider, intentionally seeking to harm your organization.

Some threats may stem from a simple mistake, others from a personal vendetta. Some insiders will work alone, others at the behest of a competitor or nation-state.

Whatever the method and the motives, the results can be devastating. The average cost of a single negligent insider incident exceeds $300k. That figure increases to over $755k for a criminal or malicious attack and up to $871k for one involving credential theft.

Unlike many other common attacks, insider attacks are rarely a smash-and-grab. The longer a threat goes undetected, the more damage it can do to your organization. The better you understand your people – their motivations, and their relationship with your data and networks – the earlier you can detect and contain potential threats.

Insiders’ drivers

Insider threats can be loosely split into two categories – negligent and malicious. Within those categories are a range of potential drivers.

As the mechanics of an attack can differ significantly depending on its motives, a thorough understanding of these drivers can be the difference between containing a potential threat and suffering a successful breach.

Financial gain

Financial gain is perhaps the most common driver for the malicious insider. Employees across all levels are aware that corporate data and sensitive information have value.

To an employee with access to your data, allowing it to fall into the wrong hands can seem like minimal risk for significant reward.

This threat is likely heightened in the current environment. The coronavirus pandemic has placed millions of people under financial pressure, with many furloughed or facing job insecurity. What once seemed an unimaginable decision may now feel like a quick solution.

Negligence

Negligence is the most common cause of insider threats, costing organizations an average of $4.58 million per year.

Such a threat usually results from poor security hygiene – a failure to properly log in/out of corporate systems, writing down or reusing passwords, using unauthorized devices or applications, and a failure to protect company data.

Negligent insiders are often repeat offenders who may skirt around security controls for greater speed, increased productivity or simple convenience.

Distraction

A distracted employee could fall into the “negligent” category. However, it is worth highlighting separately as this type of threat can be harder to spot.

Where negligent employees may raise red flags by regularly ignoring security best practices, the distracted insider may be a model employee until the moment they make a mistake.

The risk of distraction is potentially higher right now, with most employees working remotely, many for the first time, and often switching between work and personal applications. Outside the formal office environment and distracted by home life, they may keep different work patterns, be more relaxed, and be more inclined to click on malicious links or bypass formal security conventions.

Organizational damage

Some malicious insiders have no interest in personal gain. Their sole driver is harming your organization.

The headlines are full of stories about the devastating impact of data breaches. For anyone wishing to damage an organization’s reputation or revenues, there is no better way in the digital world than by leaking sensitive customer data.

Insiders with this motivation will usually have a grievance against your business. They may have been passed over for a pay rise or promotion, or recently subjected to disciplinary action.

Espionage and sabotage

Malicious insiders do not always work alone. In some cases, they may be passing information to a third party, such as a competitor or a nation-state.

Such cases tend to fall under espionage or sabotage. This could mean a competitor recruiting a plant in your organization to syphon out intellectual property, R&D, or customer information to gain an edge, or a nation-state looking for government secrets or classified information to destabilize another.

Cases like these have been on the increase in recent years. Hackers and plants from Russia, China, and North Korea are regularly implicated in corporate and state-sponsored insider attacks against Western organizations.

Defending from within

Just as they affect method, motives also dictate the appropriate response. An effective deterrent against negligence is unlikely to deter a committed and sophisticated insider intent on causing harm to your organization.

That said, the foundation for any defense is comprehensive controls. You must have total visibility of your networks – who is using them and what data they are accessing. These controls should be leveraged to limit sensitive information to only the most privileged users and to strictly limit the transfer of data from company systems.

With this broad base in place, you can now add further layers to counter specific threats. To protect against disgruntled employees, for example, additional protections could include filters on company communications to flag high-risk vocabulary, and specific controls applied to high-risk individuals, such as those who have been disciplined or are soon to be leaving the company.
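
As a rough illustration of such a communications filter, the Python sketch below flags messages containing high-risk vocabulary so they can be routed for review. The term list and matching rules are placeholders an organization would tune to its own risk profile; a production system would be considerably more nuanced.

# Rough sketch of a communications filter that flags high-risk vocabulary.
# The term list and scoring are placeholders to be tuned per organization.
import re

HIGH_RISK_TERMS = [
    r"\bresign\w*\b", r"\bexport\b", r"\bcustomer list\b",
    r"\bconfidential\b", r"\bpersonal email\b",
]
PATTERN = re.compile("|".join(HIGH_RISK_TERMS), re.IGNORECASE)

def flag_message(sender: str, body: str):
    """Return any high-risk terms found so the message can be reviewed."""
    hits = PATTERN.findall(body)
    if hits:
        print(f"review: message from {sender} matched {sorted(set(hits))}")
    return hits

flag_message("j.doe", "Sending the customer list to my personal email.")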

Finally, any successful defense against insider threats should have your people at its heart.

You must create a strong security culture. This means all users must be aware of how their behavior can unintentionally put your organization at risk. All must know how to spot early signs of potential threats, whatever the cause. And all must be aware of the severe consequences of intentionally putting your organization in harm’s way.

Cybersecurity after COVID-19: Securing orgs against the new threat landscape

Picture this: An email comes through, offering new COVID-19 workplace safety protocols, and an employee, worn down by the events of the day or feeling anxious about their safety, clicks through. In a matter of seconds, the attacker enters the network. Factor in a sea of newly remote workers and overloaded security teams, and it’s easy to see how COVID-19 has been a boon for cybercriminals.

Cracks in cyber defenses

The global pandemic has exposed new cracks in organizations’ cyber defenses, with a recent Tenable report finding just under half of businesses have experienced at least one “business impacting cyber-attack” related to COVID-19 since April 2020. For the most part, COVID-19 has exacerbated pre-existing cyberthreats, from counter incident response and island hopping to lateral movement and destructive attacks. Making matters worse, today’s security teams are struggling to keep up.

A survey of incident response (IR) professionals found that 53% encountered or observed a surge in cyberattacks exploiting COVID-19, specifically pointing to remote access inefficiencies (52%), VPN vulnerabilities (45%) and staff shortages (36%) as the most daunting endpoint security challenges.

VPNs, which many organizations rely on for protection, have become increasingly vulnerable, and it is cause for concern that patch cycles for them tend to run weekly, with very few organizations updating daily. While weekly updates might seem frequent, they may not be enough to protect your information, primarily due to the explosion of both traditional and fileless malware.

As for vulnerabilities, IR professionals point to the use of IoT technologies, personal devices like iPhones and iPads, and web conferencing applications, all of which are becoming increasingly popular with employees working from home. Last holiday season, the number one consumer purchase was smart devices. Now they’re in homes that have become office spaces.

Cybercriminals can use those family environments as a launchpad to compromise organizations. In other words, attackers are still island hopping, but instead of starting from one organization’s network and moving along the supply chain, the attack may now originate in home infrastructures.

Emerging attacks on the horizon

Looking ahead, we’ll continue to see burgeoning geopolitical tensions, particularly as we near the 2020 presidential election. These tensions will lead to a rise in destructive attacks.

Moreover, organizations should prepare for other emerging attack types. For instance, 42% of IR professionals agree that cloud jacking will “very likely” become more common in the next 12 months, while 34% said the same of access mining. Mobile rootkits, virtual home invasions of well-known public figures and Bluetooth Low Energy attacks are among the other attack types to prepare for in the next year.

These new methods, in tandem with a surge in counter IR, destructive attacks, lateral movement and island hopping, make for a perilous threat landscape. But with the right tools, strategies, collaboration and staff, security teams can handle the threat.

Best practices for a better defense of data

As the initial shock of COVID-19 subsides, we should expect organizations to firm up their defenses against new vulnerabilities, whether that means addressing staff shortages, integrating endpoint technologies, aligning IT and security teams or adapting networks and employees to remote work. The following five steps are critical to fighting back against the next generation of cyberattacks:

  • Gain better visibility into your system’s endpoints – This is increasingly important in today’s landscape, with more attackers seeking to linger for long periods on a network and more vulnerable endpoints online via remote access.
  • Establish digital distancing practices – People working from home should have two routers, segmenting traffic from work and home devices.
  • Enable real-time updates, policies and configurations across the network – This may include updates to VPNs, audits or fixes to configurations across remote endpoints, and other security updates (see the sketch after this list).
  • Remember to communicate – about new risk factors (spear phishing, smart devices, file-sharing applications, etc.), protocols and security resources.
  • Enhance collaboration between IT and security teams – This is especially true under the added stress of the pandemic. Alignment should also help elevate IT personnel to become experts on their own systems.
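
To illustrate the update-hygiene step above, here is a minimal Python sketch that flags endpoints whose last successful update is older than a set threshold. The inventory records and the seven-day threshold are hypothetical; in practice this data would come from your endpoint management platform.

# Minimal sketch: flag endpoints that have not applied updates recently.
# The inventory records and the seven-day threshold are hypothetical.
from datetime import datetime, timedelta, timezone

STALE_AFTER = timedelta(days=7)

# Hypothetical inventory: hostname -> last successful update time.
inventory = {
    "laptop-042": datetime(2020, 9, 1, tzinfo=timezone.utc),
    "laptop-107": datetime(2020, 8, 2, tzinfo=timezone.utc),
}

def stale_endpoints(now=None):
    """Return hostnames whose last update is older than STALE_AFTER."""
    now = now or datetime.now(timezone.utc)
    return [host for host, last in inventory.items()
            if now - last > STALE_AFTER]

for host in stale_endpoints(datetime(2020, 9, 8, tzinfo=timezone.utc)):
    print(f"{host}: update overdue - push patches and re-check VPN config")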

Hackers continue to exploit vulnerable situations, and the global disruption brought on by COVID-19 is no different. Organizations must now refocus their defenses to better protect against evolving threats as workforces continue to shift to the next “normal” and the threat landscape evolves.

September 2020 Patch Tuesday forecast: Back to school?

Another month has passed working from home and September Patch Tuesday is upon us. For most of us here in the US, September usually signals back to school for our children and with that comes a huge increase in traffic on our highways. But I suspect with the big push for remote learning from home, those of us in IT may be more worried about the increase in network traffic. So, should we expect a large number of updates this Patch Tuesday that will bog down our networks?

The good news is that I expect a more limited release of updates from Microsoft and third-party vendors this month. In August, we saw a HUGE set of updates for Office and also an unexpected .NET release after just having one in July.

Also looking back to last month, there were some reported issues with the Windows 10 version 1903, 1909, and 2004 updates. For many users on automatic updates, applying KB 4565351 or KB 4566782 failed with return codes and explanations that were not very helpful. Let’s hope the updates are more stable this month, without the need to re-apply or, worse, redistribute these large updates across our networks using even more bandwidth.
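
If you want to verify where a machine stands with those updates, one quick check is to shell out to PowerShell’s Get-HotFix cmdlet, as in the hedged Python sketch below. It inspects only the local Windows machine, and it relies on Get-HotFix raising an error (and a non-zero exit code with -ErrorAction Stop) when a hotfix is absent.

# Minimal sketch: check whether specific KB updates are installed on the
# local Windows machine via PowerShell's Get-HotFix cmdlet.
import subprocess

KBS_TO_VERIFY = ["KB4565351", "KB4566782"]  # the updates discussed above

def kb_installed(kb_id: str) -> bool:
    """Return True if Get-HotFix finds the given update on this machine."""
    result = subprocess.run(
        ["powershell", "-NoProfile", "-Command",
         f"Get-HotFix -Id {kb_id} -ErrorAction Stop"],
        capture_output=True, text=True,
    )
    # Get-HotFix errors out (non-zero exit code) when the hotfix is
    # not present, so the exit code doubles as our answer.
    return result.returncode == 0

for kb in KBS_TO_VERIFY:
    print(f"{kb}: {'installed' if kb_installed(kb) else 'MISSING'}")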

Last month I talked about software end-of-life (EOL) and making sure you had a plan in place to properly protect your systems in advance. Just as an early reminder we have the EOL of Windows Embedded Standard 7 coming up on October Patch Tuesday. Microsoft will offer continued Extended Security Updates (ESUs) for critical and important security updates just like they did for Windows 7 and Server 2008.

These updates will be available for three years through October 2023. Microsoft also provided an update on the ‘sunset’ of the legacy Edge browser in March 2021 along with the announcement that Microsoft 365 apps and services will no longer support IE 11 starting in August 2021. They made it clear IE 11 is not going away anytime soon, but the new Edge is required for a modern browser experience. These changes are all still a few months out but plan accordingly.

September 2020 Patch Tuesday forecast

  • We’ll see the standard operating system updates but, as I mentioned earlier, with the large Office and individual application updates released last month, expect a smaller and more limited set this time.
  • Servicing stack updates (SSUs) are hit or miss each month. The last required update was released in May. Expect to see a few in the mix once again.
  • A security update for Acrobat and Reader came out last Patch Tuesday. There are no pre-announcements on Adobe’s website, so we may see a small update, if any.
  • Apple released security updates last month for iTunes and iCloud, so we should get a break this month if they maintain their quarterly schedule.
  • Google Chrome 85 was released earlier this week, but we may see a security release if they have any last-minute fixes for us.
  • We’re due for a Mozilla security update for Firefox and Thunderbird. The last security release was back on August 25.

Remote security management of both company-provided and user-attached systems presents many challenges. With a projected light set of updates this month, hopefully tying up valuable bandwidth won’t be one of them.