Four ways to prevent data breaches

When it comes to breaches, there are no big fish, small fish, or hiding spots. Almost every type of organization – including yours – has critical personally identifiable information (PII) stored. Storing PII makes you a target regardless of size, industry, or other variables, and all it takes is one employee thinking a phishing attempt is legitimate. That means everyone’s at risk.

Statistics show that data breaches are on the rise and can bring devastating, long-term financial and reputational repercussions to your organization. The 2019 Cost of a Data Breach Report, conducted by Ponemon Institute, estimates the average total cost of a data breach in the United States to be close to $4 million. And the average price for each lost data record, says the report, is around $150.

Breaches happen in so many ways, a one-size-fits-all solution doesn’t exist. Security requires a multifaceted approach to be successful. Here are four ways (plus one) your organization can beef up its data security barriers and prevent data breaches.

1. Train employees

Put all new employees through data security training and require all employees to take a refresher course at the start of every year, so the latest security guidelines are fresh in their minds.

While this type of training can be dull, it only takes a few minutes to cover the essential details. For example, employees should:

  • Treat all devices (e.g., desktops, laptops, tablets, phones) as being capable of accessing the organization’s systems
  • Never write down or leave a record of passwords where others can easily find them
  • Be extra suspicious of emails or phone calls from unverified people requesting passwords or other sensitive information (There’s more on that last one below.)

Incorporate some up-to-date breach statistics to help convey the seriousness and pervasiveness of threats and the possible financial ramifications.

2. Simulate phishing attacks

Many security issues are the result of human error, such as clicking on a link in a malicious email.

Spear phishing attempts – i.e., highly targeted and customized phishing efforts – tend to lead to more breaches because they target specific personnel. The messages may reference a department or regular job function and can appear similar to other relevant messages in the target’s inbox on any given day.

Free and paid phishing simulators let you test your employees’ ability to detect phishing emails by sending simulated phishing messages yourself. These tools provide alerts and reports when someone responds to one of the messages.

Using one of these simulators, you can put your employees through active training to help them become more secure.
Remember to remind staff to double-check anytime they aren’t 100% positive that an email is legitimate. If an employee receives something that looks even a little off or out of the ordinary from a sender they know or can contact, they should run it by the IT team.
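
For teams that want to make that double-checking concrete, here is a minimal sketch of an automated link sanity check. The trusted-domain allowlist is a hypothetical stand-in for your organization’s real domain list, and a production tool would parse full MIME messages rather than bare HTML:

```python
import re
from html.parser import HTMLParser

# Hypothetical allowlist - replace with your organization's real domains.
TRUSTED_DOMAINS = {"example.com", "intranet.example.com"}

class LinkExtractor(HTMLParser):
    """Collect the href of every <a> tag in an HTML email body."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self.links.append(dict(attrs).get("href", ""))

def suspicious_links(html_body):
    parser = LinkExtractor()
    parser.feed(html_body)
    flagged = []
    for href in parser.links:
        match = re.match(r"https?://([^/:]+)", href)
        host = match.group(1).lower() if match else ""
        # Flag any host that is not an allowlisted domain or a subdomain of one.
        if host and not any(host == d or host.endswith("." + d) for d in TRUSTED_DOMAINS):
            flagged.append(href)
    return flagged

print(suspicious_links('<a href="http://examp1e.com/reset">Reset your password</a>'))
```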

3. Evaluate accounts

How often does your IT team evaluate existing accounts? It can undoubtedly be a complicated process, but evaluating all of the activated accounts within your organization can go a long way in shoring up security and minimizing digital bloat.

Are there orphaned accounts floating around within your organization that former employees can still access? Are there review processes for determining and updating what different users should be able to access as their position within the organization changes?
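
One lightweight way to answer the orphaned-accounts question is to cross-reference a directory export against the HR roster. A minimal sketch, assuming two hypothetical CSV exports that each contain an email column:

```python
import csv

def load_emails(path, column="email"):
    """Return the set of normalized emails found in a CSV export."""
    with open(path, newline="") as f:
        return {row[column].strip().lower() for row in csv.DictReader(f)}

# Hypothetical files: one exported from HR, one from the directory service.
active_staff = load_emails("hr_roster.csv")
all_accounts = load_emails("directory_accounts.csv")

for email in sorted(all_accounts - active_staff):
    print("Orphaned account:", email)
```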

The best time of year to evaluate accounts may be when you update everyone’s accounts from the previous year. If the time to sit down and evaluate accounts continually eludes your IT team, have them chip away at it between other processes, or have them schedule it as a larger project during less demanding months.

4. Review your user account lifecycle processes

What is the standard process for deactivating accounts when employees leave your organization or outside consultants are no longer providing services? These types of departures – whether they involve immediate security concerns or not – are the most significant contributors to the orphaned accounts plaguing your systems.

Whether it’s handled manually or automated, account deactivation is crucial. Review and optimize your organization’s deactivation processes to determine how quickly and comprehensively they restrict access for departing users.

Rapid responses can prove invaluable, providing the peace of mind that comes from knowing your account review process cleans everything up.
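
If your directory is LDAP-based, the deactivation step itself can be scripted. Below is a sketch using the ldap3 Python library; the server, service account, and DN are hypothetical placeholders, and 514 is Active Directory’s userAccountControl value for a disabled normal account:

```python
import getpass
from ldap3 import Server, Connection, MODIFY_REPLACE

# Hypothetical connection details - substitute your domain controller and
# a service account authorized to manage user objects.
server = Server("ldaps://dc.example.com")
conn = Connection(server, user="EXAMPLE\\svc_deprovision",
                  password=getpass.getpass(), auto_bind=True)

# 514 = NORMAL_ACCOUNT (0x200) + ACCOUNTDISABLE (0x2) in Active Directory.
conn.modify("CN=Departed User,OU=Staff,DC=example,DC=com",
            {"userAccountControl": [(MODIFY_REPLACE, [514])]})
print(conn.result["description"])  # "success" when the account is disabled
```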

Side note: Consider implementing a secure SSO solution

Having a single point of entry for the majority of your systems and applications can make things easier for all employees. Users will only need to remember one set of credentials and administrators can protect resources behind more restrictions without reducing easy access. By limiting the point of entry to one single spot, you can protect against potential data breaches. Configurable security settings, like date and time restrictions, allow administrators to control their environment even as systems and applications are extended to the cloud.

Applications and systems containing certain sensitive information can be made inaccessible from anywhere other than specific physical locations to help prevent risks, and secure portals can maintain logs of user activity, including when and how information is accessed.

Your organization’s data is one of its most valuable resources. Protecting it doesn’t have to be complicated or expensive, but it must be done right. Strengthen your organization’s data security practices today by starting to implement some or all of these practices.

Python backdoor attacks and how to prevent them

Python backdoor attacks are increasingly common. Iran, for example, used a MechaFlounder Python backdoor attack against Turkey last year. Scripting attacks are nearly as common as malware-based attacks in the United States and, according to the most recent Crowdstrike Global Threat Report, scripting is the most common attack vector in the EMEA region.

Python’s growing popularity among attackers shouldn’t come as a surprise. Python is a simple but powerful programming language. With very little effort, a hacker can write a script of fewer than 100 lines that establishes persistence (so that even if you kill the process, it starts itself back up), opens a backdoor, obfuscates communications both internally and with external servers, and sets up command and control links. And if an attacker doesn’t want to write the code, that’s no problem either. Python backdoor scripts are easy to find – a simple GitHub search turns up more than 200.

Scripting attacks are favored by cybercriminals and nation states because they are hard to detect by endpoint detection and response (EDR) systems. Python is heavily used by admins, so malicious Python traffic looks exactly like the traffic produced by day-to-day network management tools.

It’s also fairly easy to get these malevolent scripts onto targeted networks. Simply include a malicious script in a commonly used library, change the file name by a single character and, undoubtedly, someone will use it by mistake or include it as a dependency in some other library. That’s particularly insidious, given how enormous the list of dependencies can be in many libraries.
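
A defensive counter to that trick is to screen any new dependency name against the packages your team already trusts before installing it. A minimal standard-library sketch; the allowlist is a hypothetical stand-in for your real dependency inventory:

```python
import difflib

# Hypothetical allowlist of packages your team actually depends on.
KNOWN_GOOD = {"requests", "numpy", "pandas", "cryptography", "flask"}

def check_package(candidate):
    if candidate in KNOWN_GOOD:
        return "ok"
    # A name one edit away from a trusted package is a classic typosquat.
    near = difflib.get_close_matches(candidate, KNOWN_GOOD, n=1, cutoff=0.85)
    if near:
        return f"suspicious: '{candidate}' closely resembles '{near[0]}'"
    return "unknown package - review before installing"

print(check_package("requets"))   # suspicious: closely resembles 'requests'
```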

By adding a bit of social engineering, attackers can successfully compromise specific targets. If an attacker knows the StackOverflow usernames of some of the admins at their targeted organization, he or she can respond to a question with ready-to-copy Python code that looks completely benign. This works because many of us have been “trained” by software companies to copy and paste code to deploy their software. Everyone knows it isn’t safe, but admins are often pressed for time and do it anyway.

Anatomy of a Python backdoor attack

Now, let’s imagine a Python backdoor has established itself on your network. How will the attack play out?

First, it will probably try to establish persistence. There are many ways to do this, but one of the easiest is to establish a crontab that restarts the script, even if it’s killed. To stop the process permanently, you’ll need to kill it and the crontab in the right sequence at the right time. Then it will make a connection to an external server to establish command and control, obfuscating communications so they look normal, which is relatively easy to do since its traffic already resembles that of ordinary day-to-day operations.
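
On the defensive side, a simple habit is auditing scheduled tasks for entries that relaunch an interpreter or pull code from the network. A rough sketch for the current user’s crontab; the keyword list is illustrative, not exhaustive:

```python
import re
import subprocess

# List the current user's crontab (empty if none exists or cron is absent).
try:
    crontab = subprocess.run(["crontab", "-l"], capture_output=True,
                             text=True).stdout
except FileNotFoundError:
    crontab = ""

for line in crontab.splitlines():
    if not line.strip() or line.startswith("#"):
        continue
    # Flag entries invoking interpreters or download tools for manual review.
    if re.search(r"\b(python[\d.]*|perl|curl|wget|nc)\b", line):
        print("Review this cron entry:", line)
```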

At this point, the script can do pretty much anything an admin can do. Scripting attacks are often used as the tip of the spear for multi-layered attacks, in which the script downloads malware and installs it throughout the environment.

Fighting back against Python backdoors

Scripting attacks often bypass traditional perimeter and EDR defenses. Firewalls, for example, use approved network addresses to determine whether traffic is “safe,” but they can’t verify exactly what is communicating on either end. As a result, scripts can easily piggyback on approved firewall rules. As for EDR, traffic from malicious scripts is very similar to that produced by common admin tools. There’s no clear signature for EDR defenses to detect.

The most efficient way to protect against scripting attacks is to adopt an identity-based zero trust approach. In a software identity-based approach, policies are not based on network addresses, but rather on a unique identity for each workload. These identities are based on dozens of immutable properties of the device, software or script, such as a SHA-256 hash of the binary, the UUID of the BIOS or a cryptographic hash of a script.
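
As a toy illustration of the idea, a workload can be identified by a cryptographic hash of its contents rather than by the address it connects from. The allowlist entry below is a hypothetical placeholder you would populate from a trusted build:

```python
import hashlib

# Hypothetical allowlist mapping script paths to known-good SHA-256 digests.
ALLOWLIST = {
    "/opt/tools/backup.py": "put-the-trusted-sha256-digest-here",
}

def file_sha256(path):
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

def is_trusted(path):
    """True only if the file's current hash matches its registered identity."""
    expected = ALLOWLIST.get(path)
    return expected is not None and file_sha256(path) == expected
```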

Any approach that’s based on network addresses cannot adequately protect the environment. Network addresses change frequently, especially in autoscaling environments such as the cloud or containers, and as mentioned earlier, attackers can piggyback on approved policies to move laterally.

With a software and machine identity-based approach, IT can create policies that explicitly state which devices, software and scripts are allowed to communicate with one another — all other traffic is blocked by default. As a result, malicious scripts would be automatically blocked from establishing backdoors, deploying malware or communicating with sensitive assets.

Scripts are rapidly becoming the primary vector for bad actors to compromise enterprise networks. By establishing and enforcing zero trust based on identity, enterprises can shut them down before they have a chance to establish themselves in the environment.

Overcoming crypto assessment challenges to improve quantum readiness

Large enterprises have a major problem when it comes to preparing for the advent of quantum computing: few, if any, have a working knowledge of all the locations where cryptographic keys are being stored and used across applications, browsers, platforms, files and modules, as well as being shared with third parties and vendors.

Enterprises with tens or hundreds of thousands of employees require a massive technology base including computers, mobile devices, operating systems, applications, data, and network resources to keep operations running smoothly. Cryptography in all of its various forms is broadly used to encrypt and protect sensitive information as it moves across this vast landscape of systems and devices. Exactly which algorithms and cryptography methods are being used is virtually unknowable without a concerted effort to track down and compile a comprehensive inventory of the literally hundreds of crypto assets in use across an enterprise.

Most enterprise IT managers and chief security officers are well-acquainted with tracking software assets as a way to improve security. A good understanding of software versions can help with ensuring that updates and patches are applied before the next big vulnerability is discovered and systems get compromised. There’s a sense of urgency around patching software as new flaws and data breaches get discovered on a nearly daily basis.

Crypto systems, in contrast, are often perceived to already be hardened and less vulnerable than software applications. Changes to cryptography systems tend to happen slowly, so there is less immediacy. Organizations often take years to upgrade their cryptography, as with migrations from SHA-1 to SHA-2 or Triple Data Encryption Standard (TDES) to Advanced Encryption Standard (AES).

Quantum uncertainty

The lack of urgency concerning cryptography is one of the most significant problems facing most enterprises as they consider what steps they should be taking to survive in a post-quantum world. With Y2K, for instance, the deadline to revamp systems with two-digit date codes was obvious. That’s not the case here – the timeline is anything but certain. It could happen in two or three years, or in 10-15 years, or it might never happen. At the current rate of advancement, most experts expect that functional quantum computers capable of breaking current-grade cryptography such as RSA will arrive within the next 10 years. Maybe. Or maybe not.

Uncertainty is a deal breaker for driving urgency. When there are 50,000 fires to fight on a daily basis, enterprises don’t have time to think about a fire that someone tells them is going to happen sometime in the future. It’s a matter of human nature. We all continue living our lives knowing that someday the sun will explode and life on Earth as we know it will be over. We tell ourselves, “Sure, someday quantum computers will arrive on the scene and I’ll deal with it when the time comes, but I’m too busy to think about it right now.”

If guarding against the threat of quantum were a simple matter of using different algorithms, a wait-and-see attitude might be sufficient. In reality, in the 40 years that asymmetric encryption technology has been in use, there has never been a threat to cryptography of this scale. There will be massive upheaval and disruption.

A sweeping crypto transition like this will happen at Internet scale. Making the move to quantum-resistant algorithms will be a complex process for the entire industry, involving countless systems and devices, and will require intense engagement with partners and third-party vendors. It will take time and patience.

Every enterprise is different and the only way to know how your organization will fare in a post-quantum world is to gain an understanding of what systems are doing cryptographic signing or encryption. The ultimate goal is a listing of all the applications, systems and devices across the organization and its subsidiaries, detailing the type of cryptography and algorithms in use. You’ll also want to evaluate exposure to attack, the sensitivity of the information being protected, and whether there’s support for crypto agility, to determine if the system will need to be replaced by something more agile. Such information is often not immediately obvious and may require special tools, expert-level sleuthing and discussions with vendors to figure out. Given a general lack of urgency toward quantum, few enterprises are likely to invest the necessary resources for a comprehensive cryptography audit.

Quantum readiness: Focus on business-critical systems first

A much more practical approach – and one that business leaders will more likely find acceptable – is to focus on understanding the exposure of your most important, business-critical set of applications. For example, if you’re a bank, what systems do you have that allow you to operate daily as a bank? You’re not going to care about an employee website that sells Disneyland tickets. If it were turned off tomorrow, it wouldn’t be a problem. By focusing on business-critical systems, you’ve just overcome a major obstacle to getting started toward quantum readiness.

Once you have the ball rolling and business-critical systems identified, the next task is tracking down where and how those systems are using signing or encryption. Is that SQL database sitting on the network using certificates? How do I know? There’s no magical tool that can run in an environment and tell you everything. You’ll need to look at network ports and look for certificates, and even then, you’ll only find a small portion.
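
As a starting point, even Python’s standard library can retrieve a server’s TLS certificate so the key type and signature algorithm can be recorded in an inventory. A sketch using the ssl module plus the third-party cryptography package; the hostname is a placeholder:

```python
import ssl
from cryptography import x509
from cryptography.hazmat.primitives.asymmetric import ec, rsa

def inspect_endpoint(host, port=443):
    """Print the public-key type and signature hash of a TLS endpoint."""
    pem = ssl.get_server_certificate((host, port))
    cert = x509.load_pem_x509_certificate(pem.encode())
    key = cert.public_key()
    if isinstance(key, rsa.RSAPublicKey):
        kind = f"RSA-{key.key_size}"        # breakable by Shor's algorithm
    elif isinstance(key, ec.EllipticCurvePublicKey):
        kind = f"EC-{key.curve.name}"       # likewise quantum-vulnerable
    else:
        kind = type(key).__name__
    print(host, kind, cert.signature_hash_algorithm.name)

inspect_endpoint("example.com")   # hypothetical host to inventory
```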

If your company makes widgets, you’ll likely decide that the systems you use to make widgets are business critical. Is encryption or signing enabled? If so, what type of cryptographic keys are in use, and can they be upgraded? Is there something in the documentation, or will I need to have a conversation with the vendor? It’s also important not to overlook systems that may not be business critical per se but could expose the organization to considerable risk. The video conferencing system used to discuss quarterly earnings could be a prime target, for instance.

Improving crypto agility

Even if you aren’t sure about post-quantum impact, having a list of all the systems and algorithms is important for other security controls and standards, as well as for knowing where your risks are. So even if quantum supremacy is never realized, it’s still a good process to go through – it’s not wasted, nor is it only for the doomsayers.

What’s more, cryptographic algorithms are constantly evolving. Having a list of the type of cryptography in use makes it relatively simple to move to stronger algorithms as needed. Researchers are constantly looking for ways to crack encryption algorithms and sometimes they are successful, such as the discovery of a significant flaw that caused all major browser vendors to flag SHA-1 certificates as unsafe, finally putting that outdated algorithm to bed.

A good understanding of cryptography also puts you in a better position with vendors. As quantum-safe algorithms and methods are developed, you can put pressure on vendors to implement them within a reasonable time frame, or if they refuse, you can move to different vendors. And to some degree, time is of the essence. Even before a quantum computer capable of breaking encryption arrives, malicious actors are already starting to harvest encrypted data hoping they can one day unlock a veritable treasure trove.

Despite the uncertainty surrounding the arrival of quantum computing, sitting back and waiting for the sky to fall is a sure recipe for disaster. Avoid the worst-case scenario by at least documenting how your organization uses cryptography across all business-critical systems.

Why a risk-based approach to application security can bolster your defenses

Like it or not, cybercrime is big business these days. A casual glance at the news at any given time will typically reveal several new breaches, usually involving eye-watering amounts of personal or sensitive information stolen. As such, any executive board worth its salt should have long realized the importance of robust cyber defenses.

Sadly, even in the face of mounting evidence, this isn’t always the case. Often business priorities are given precedence over security priorities, particularly when optimal security practices risk interfering with business efficiency or overall productivity. Underfunding is another common concern for many CSOs and CISOs, with the board simply not prepared to give them the budget and/or resources they truly need to keep the business safe.

Businesses need to think long term

Underfunding security in order to boost other areas of the business may seem like a good idea in the short term, but it’s a big risk that can come back to bite senior executives pretty spectacularly if they aren’t careful. For example, while an additional £500,000 towards new security resources may not seem viable during annual budgeting cycles, it pales in comparison to the millions of pounds worth of fines, legal costs and mitigation expenses many organizations are faced with in the aftermath of a breach.

Just ask British Airways, which has been hit with a record £183 million fine from the Information Commissioner’s Office (ICO), following what it described as a “sophisticated, malicious criminal attack” on its website, during which details of about 500,000 customers were harvested.

Examples like this highlight just how important it is to ensure long term security and compliance by implementing cybersecurity practices that prevent such data breaches from happening in the first place. A more proactive approach to integrating cybersecurity practices into the wider business strategy can go a long way towards protecting against data loss, as well as empowering security teams with the ability to respond much more swiftly and precisely to any threats that do present themselves.

With more and more organizations now relying on software applications to grow their business, properly securing these applications is becoming absolutely essential. A great way to do this is by adopting a systematic, risk-based approach to evaluating and addressing cybersecurity vulnerabilities earlier in the software development life cycle (SDLC), rather than trying to do it after the fact.

Business and security objectives must be aligned

The most effective security approaches are the ones that have been properly aligned with those of the wider organization. But all too often, the idea of building security into the SDLC is reconsidered the moment it’s deemed to be having a detrimental impact on development times or release windows.

When the time needed to remediate a vulnerability threatens to delay the release of an important application, pressure quickly starts building on the security team. If it can’t make a compelling business case to delay release in order to fix the issue, it can quickly find itself on the outside looking in.

The role of risk in effective security decision making

In situations like the one above, security teams need to be able to quickly make senior decision makers recognize the stakes involved and the potential consequences of not fixing the vulnerability. This requires both a solid understanding of the app’s intended business purpose and an ability to frame the argument in a way decision-makers will understand, rather than drowning them in security jargon. One of the best ways to do this is with a risk-based approach, which has two main stages.

Stage one involves taking a comprehensive inventory of all web applications currently in development and putting a stringent monitoring process in place to quickly identify vulnerabilities. It’s critical to be thorough during this stage because if just one application is missed, or one system left unsecured, it creates a new potential access point for cybercriminals.

With stage one completed, stage two can begin: incorporating business impact into the strategic planning process. By properly defining the potential losses that could occur from a specific vulnerability and helping senior executives understand them in plain terms, you not only drive home the need for effective security, you also allow for much finer tuning of activities based on the level of risk they present to the overall organization.
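
Even a crude likelihood-times-impact score can make those conversations concrete. A minimal sketch with hypothetical applications rated on 1-5 scales:

```python
# Hypothetical inventory: likelihood and business impact scored 1 (low) to 5 (high).
apps = [
    {"name": "payments-api", "likelihood": 4, "impact": 5},
    {"name": "marketing-blog", "likelihood": 3, "impact": 1},
    {"name": "hr-portal", "likelihood": 2, "impact": 4},
]

# Rank remediation work by simple risk score = likelihood x impact.
for app in sorted(apps, key=lambda a: a["likelihood"] * a["impact"], reverse=True):
    print(f"{app['name']}: risk {app['likelihood'] * app['impact']}/25")
```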

Taking a SaaS-based approach to application scanning

Adopting a SaaS-based approach to application scanning throughout the SDLC allows security teams to continuously assess risk during the production process, rather than just at a handful of milestones. As a result, when combined with proper prioritization of activities, a much more accurate risk profile can be created than would otherwise be possible, which all levels of the company can buy into.

When it comes to effective security, it’s important for security teams to speak a language the whole organization understands. Taking a risk-based approach does this, translating often complex vulnerabilities and analysis into terms that are meaningful to all, and particularly to the senior executives. This allows proper discussions to take place, leading to mutual decisions that benefit the company as a whole and keep it protected from the plethora of cyber threats out there.

Maximizing customer engagement when fraud prevention is top of mind

With the number of data records breached in 2019 surpassing four billion, fraud prevention and regulatory compliance are, inevitably, top priorities for financial institutions (FIs).

A recent report from Javelin, for example, found that FIs are significantly more focused on investing in digital fraud mitigation than companies in other industries. According to the report, 52% of consumer banks plan on implementing additional security solutions to keep customers’ accounts secure, and 46% want to invest in better identity verification measures.

But with attention – and budget – devoted almost exclusively to security and compliance, it’s easy for areas like innovation, customer engagement, and user experience to fall by the wayside. In the report cited above, only 28% of banks indicated an interest in adding support for new channels.

The situation is more complex than simply devoting a larger share of the budget and focus to fraud prevention and security: as companies find new ways to engage with their customers through new features and touchpoints, criminals find new vulnerabilities to exploit.

It’s no surprise, therefore, that more than a third of companies in the study report that “fraud is a significant impediment to digital innovation efforts, forcing them to slow the expansion of their features and functionality as they seek ways to mitigate the new risks these innovations attract”.

Fraud prevention on the spot

Research and experience have shown that fraud mitigation and cutting-edge security strategies can go hand-in-hand with – and even drive – innovation, customer engagement and a great user experience.

Consumers have indicated that they want more information about their transactions and more control over authenticating them. Today, digital channels enable financial institutions to give their customers the insights and control they demand, while making it easy to check all the necessary security and compliance boxes. With the right approach in place, there need be no trade-off between fraud mitigation and customer engagement.

Imagine, for example, a state-of-the-art in-app messaging solution that combines instant communication with banking-grade security and on-the-go self-service functionality. A customer can be alerted when suspicious activity occurs on their account, with the option of responding immediately by approving or rejecting the transaction before it’s processed. This eliminates frustration and other effects caused by false declines, while putting the customer in control of fraud prevention.
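
Stripped to its essence, that flow is: alert the customer, wait briefly for an approve/reject decision, and fail safe if no answer arrives. A toy sketch, with a queue standing in for the bank’s real push-notification channel:

```python
import queue

# Stand-in for the in-app messaging channel; a real system would use
# the bank's push infrastructure and signed customer responses.
responses = queue.Queue()

def handle_flagged_transaction(txn, timeout_seconds=120):
    print(f"Alert: did you authorize {txn['amount']} at {txn['merchant']}?")
    try:
        decision = responses.get(timeout=timeout_seconds)
    except queue.Empty:
        decision = "reject"   # no answer: fail safe and block the payment
    return "processed" if decision == "approve" else "declined"

responses.put("approve")      # simulate the customer tapping Approve
print(handle_flagged_transaction({"amount": "$420.00", "merchant": "Acme"}))
```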

Turn insights into relevant engagements

Many FIs are starting to realize that there’s a missed opportunity when it comes to making the most of insights they already have on their customers. Even though the use of consumer data is a matter of increasing global concern – as regulations like Europe’s GDPR and California’s Consumer Privacy Act illustrate – much can be gained from using insights for good. And in the case of banking, what’s good for the customer is also good for the bank.

Customers demand relevant, personal experiences from their banks. If they don’t get it, they’re not afraid to look elsewhere – a recent report conducted by Capgemini indicated that 63% of consumers are currently using a financial product from a big tech company. But banks that are willing to invest in personalization and tailor advice, loyalty offers, and relevant products to customers based on their profile, will reap rewards. BCG reports that one bank that reinvented its personalization strategy saw a 20% increase in revenues over three years.

Use engagements to build trust

Apart from gaining revenue, banks can also use relevant, meaningful engagements with their customers to build trust and foster lasting relationships. In the U.S. today, the most-used mobile banking functionalities are checking account balances, managing card controls, and depositing checks.

With peer-to-peer payments becoming an increasingly popular and familiar function in banks’ mobile apps, banks have introduced another touchpoint through which they can engage with their customers, increase loyalty and provide an alternative source of revenue.

While introducing faster payments services ticks a big box when it comes to addressing customers’ needs, fraud and security remain crucial considerations – and potential roadblocks to adoption. Traditionally, banks have used the lapse in payment completion as time to examine transactions and respond to suspicious activity.

Now, the pressure for speed has impacted the time available to ensure accuracy. But by implementing a truly customer-focused omnichannel authentication strategy, FIs can offer customers a one-touch in-app authentication experience that engages them in real time, all while eliminating fraud and providing a great user experience. The bank can rest assured that it has digitally signed proof of consent of the transaction, while the customer feels secure, in control, and on the way to transacting more.

Opportunities moving forward

It is more important than ever for banks to remain competitive and innovative, but it should not come at the cost of customers’ security and increased fraud rates. Preventing fraud and delivering the best in digital security comes down to identifying the customer and engaging with them securely, when and where it matters. Keeping them engaged and building loyalty is a matter of trust, built by offering consistent, relevant experiences regardless of when and where a customer chooses to interact with their bank.

Europe’s Gaia-X cloud service faces a difficult future

In January, Microsoft reported its fiscal 2020 second quarter results. Among the company’s many impressive accomplishments is 62% year-over-year growth of its Azure cloud service. This secures the company’s spot as a dominant player in the cloud space for yet another quarter.

The leaderboard in the cloud wars saga has remained stagnant, with a few powerhouses dominating market share: US-based companies AWS (47.8%), Azure (15.5%) and Google Cloud (4%), as well as China’s Alibaba (7.7%). In the absence of European cloud companies emerging as top contenders, concerns arise around the lack of data sovereignty and self-determination as European companies increasingly rely on foreign cloud services.

In response, the European Commission, France, Germany and hundreds of companies have announced their own cloud initiative – Gaia-X. Gaia-X will ostensibly help European providers not only compete with the US and Chinese tech giants, but also ensure they have more control over their own data.

The announcement of the Gaia-X project has already caused backlash throughout the cloud space, with some US companies warning that this move will impose unnecessary national restrictions on a global economy.

Regardless of any company’s perspective on the project, the creation of Gaia-X has broad implications for the cloud space on an international scale. But given the domination of these established tech giants, will Gaia-X succeed in its goals?

European companies want their data back

To answer that question, it’s critical to understand the primary motivation behind the project. Gaia-X is not necessarily aiming to birth the next hyperscale cloud provider — it’s looking to retain more European control over European data.

As of now, European companies using US cloud providers are subject to US legal restrictions. Under the conditions of the Cloud Act, which went into effect last year, US authorities can order US providers to turn over a company’s data stored on their servers, regardless of where that company is based. Similar compliance laws exist in China.

It’s easy to see why European companies are dissatisfied with this arrangement. Factors like growing geopolitical concerns, trade disputes, political uncertainties and broad suspicion of near-monopolies like Amazon Web Services contribute to the drive to bring European data back home. As the majority of enterprises shift to the cloud, European brands balk at the idea that an increasing volume of sensitive data (such as intellectual property, research findings, public health information and more) is subject to the whims of foreign authorities.

Not to mention, European companies are rightfully concerned about the competitive advantages they lose without control over this data. As AI-powered strategies like machine learning become key differentiators for companies across verticals, US legal restrictions could hamper brands’ ability to access and leverage the data at the core of these initiatives.

That said, an ambitious project like Gaia-X is not easy to execute.

Gaia-X aims to shake up a field of few players

There’s a reason US providers (especially Amazon, which boasts close to half the total cloud market share) excel — and why many aren’t worried about a threat from a project like Gaia-X. For one, big players make financial sense. Hyperscalers like AWS and Alibaba offer highly competitive prices that emerging competitors can’t easily match. As companies scale, these price differences become even more dramatic.

Price considerations play an even more important role given the nature of most contractual agreements between brands and cloud providers. With attractive benefits like discounts and rebates tied to long-term agreements, companies risk huge penalty costs if they try to migrate to a new cloud provider.

Additionally, hyperscalers offer a high level of personalization and customer-specific service that is difficult to match. These big players have the resources to create and iterate features based around their clients’ hyper-specific needs. Many companies don’t have much wiggle room to even consider smaller providers that can’t keep up with these functionalities.

On top of that, security concerns strengthen the case to use a big cloud provider for many companies. With the minimum technical requirements of Gaia-X’s infrastructure still undefined, it’s hard to say whether leaders behind the push can convince small- to medium-sized enterprises that it offers the same level of security as its industry-leading competitors.

The presence of Gaia-X illustrates a changing cloud future

The bottom line is that Gaia-X faces major challenges. For Gaia-X to become a viable player, it needs to compete on a practical scale. At this point, most European companies on the cloud can’t afford to switch from a hyperscaler if their options aren’t as functional or financially feasible. The success of Gaia-X requires cooperation from players across Europe, as well as both public and private funding.

On a broader note, the emergence of Gaia-X highlights the fact that, though only few names have powered the cloud computing boom, there’s still room for innovation. If Europe wants a greater share of the cloud market, leaders will need to invest in more cloud computing projects like Gaia-X, drive efforts to improve security and develop the right regulations to gain ground in a new cloud-powered future.

Automate manual security, risk, and compliance processes in software development

The future of business relies on being digital – but all software deployed needs to be secure and protect privacy. Yet, responsible cybersecurity gets in the way of what any company really wants to do: innovate fast, stay ahead of the competition, and wow customers!

In this podcast recorded at RSA Conference 2020, we’re joined by Ehsan Foroughi, Vice President of Products from Security Compass, an application security expert with 13+ years of management and technical experience in security research. He talks about a way of building software so that cybersecurity issues all but disappear, letting companies focus on what they do best.

Good morning. Today we have with us Ehsan Foroughi, Vice President of Products from Security Compass. We’ll be focusing on what Security Compass calls the Development Devil’s Choice and what’s being done about it. Ehsan, tell me a little about yourself.

A brief introduction: I started my career in cybersecurity around 15 years ago as a researcher doing malware analysis and reverse engineering. Around eight years ago I joined an up-and-coming company named Security Compass. Security Compass has been around for 14 years or so, and it started as a boutique consulting firm focused on helping developers code securely and push out their products.

When I joined, SD Elements – the software platform that is now the company’s flagship product – was under development. I’ve worn many hats during that time. I’ve been a product manager, I’ve been a researcher, and now I own the R&D umbrella effort for the company.

Thank you. Can you tell me a little bit about Security Compass’ mission and vision?

The company’s vision is a world where people can trust technology and the way to get there is to help companies develop secure software without slowing down the business.

Here’s our first big question. The primary goals of most companies are to innovate fast, stay ahead of the competition and wow customers. Does responsible cybersecurity get in the way of that?

It certainly feels that way. Every industry nowadays relies on software to be competitive and generate revenue. Software is becoming a competitive advantage and it drives the enterprise value. As digital products are becoming critical, you’re seeing a lot of companies consider security as a first-class citizen in their DevOps effort, and they are calling it DevSecOps these days.

The problem is that when you dig into the detail, they’re mostly relying on reactive processes such as scanning and testing, which find the problems too late. By that time, they face a hard choice of whether to stop everything and go back to fix, or accept a lot of risk and move forward. We call this fast and risky development. It gets the software out to production fast, by eliminating the upfront processes, but it’s a ticking time bomb for the company and the brand. I wouldn’t want to be sitting on that.

Most companies know that they need proactive security like threat modeling, risk assessments, security training. That’s a responsible thing to do, but it’s slow and it gets in the way of the speed to the market. We call this slow and safe development. It might be safe by the way of security compliance, but it opens up to competitive risk. This is what we call the Development Devil’s Choice. Every company that relies on it has two bad choices, fast and risky or slow and safe.

Interesting. Do you believe the situation will improve over time as companies get more experienced in dealing with this dilemma?

I think it’s going to get worse over time. There are more regulations coming. A couple of years ago GDPR came up, then the California Consumer Privacy Act, and then the new PCI regulations.

The technology is also getting more complex every day. We have Docker and Kubernetes, there’s cloud identity management, and the shelf life of technology is shrinking. We no longer have the 10-year end-of-life Linux systems that we could rely on.

So, how are companies dealing with this problem in the age of agile development?

I’m tempted to say that rather than dealing with it, they’re struggling with it. Most agile teams define their work by way of user stories. On rare occasions, teams take the time to compile security requirements and bake them into their stories. But in the majority of cases, the security requirements are unknown and implicit. This means that they rely on people’s good judgment, and they rely on expertise. This expertise is hard to find, and we have a skill shortage in the security space. When you do find the experts, they’re also very expensive.

How do these teams integrate security compliance into their workflow?

In our experience, most agile teams have been relying on testing and scanning to find the issues, and then that means that they have a challenge. When they uncover the issue, they have to figure out if they should go back and fix or they take the risk and move forward. Either way, it’s a lot of patchwork. When the software gets shipped, everybody crosses their fingers and hopes that everything went well. This usually leads to a lot of silos. Security becomes oppositional to development.

What happens when the silos occur? Are teams wasting their effort? Reworking software?

It adds a lot of time and anxiety. The work ends up being manual, expensive and painfully deliberate. The security compliance side of the business gets frustrated with the development, they find inconsistencies against each other and it just becomes a challenge.

No matter how companies develop software, their steps for security and compliance are likely not very accurate. That means that management also has no visibility into what’s going on. There are lots of tools and processes today to check on the software that is being built, but usually they don’t help make it secure from the start. They usually point out the problems and show how it was built wrong.

Finding that out is a challenge because it exacerbates this dilemma of development versus security. It’s like being told that you didn’t need heart surgery if you ate healthy food for the past 10 years. It’s a bit too late and not particularly helpful.

I’m hearing you describe a serious problem that’s haunting company leaders. It seems they have two pretty bad options for development, fast and risky or slow and safe. Is that it? Are companies doomed to choose between these two?

Well, there’s hope. There is a third option emerging. You don’t need to be fast and risky or slow and safe. The option is to be nearly as fast, without slowing down and being secure at the same time. We call it the balanced development. It’s similar to how the Waze app knows where you’re driving and tells you specifically at each step where you should be going and where you should be turning.

The key is to shift security left in the cycle, iterate rapidly alongside development and make sure it’s done in tandem. Testing and scanning should not find anything by the end of the cycle if this is done right. These systems mostly leverage automation to balance the development effort between the fast and risky and the slow and safe.

Ehsan, can you tell us more about these systems? How do they work and how do they support the jobs of security teams?

Well, automation is the key. It starts by capturing the knowledge of the experts into a knowledge base, and automating so that the system understands what you’re working on, what you’re doing, and delivering the actions that you need to take to bake security in right at the time you need it.

It constantly also updates the knowledge base to stay on top of the regulation changes, technology changes, and during development the teams are advised of the latest changes. When the project is finished, the system is almost done with the security and compliance actions and activities, and all of it is also documented so that the management can see what risk they are taking on.

Thank you very much for the insight and for the thoughtful discussion. What advice would you give company leaders as they start to tackle these issues?

Well, I have a few pieces of advice, mostly based on the companies we have been working with. I would say, stay pragmatic and balanced. Focus on getting 80% fast and 80% secure. Don’t get bogged down. Number two, I would say educate your organization, especially the executives. Executive buy-in is very important. Without that you can’t change the process, and you can’t do it in silos from within one small team. You have to get people’s buy-in and support.

The next one is investing in automating the balanced approach. This investment is sometimes hard, but the earlier you do it, the better. I see a lot of companies bogged down by investing in smaller, easier projects like updating and refreshing their scanning practice. It usually pays off to go to the heart of the problem and invest there, because all of your future investments are more optimized.

I also find it useful, when working with developers, to always start with “why”: Why are you doing this? Why are you asking them to follow a certain process? If they understand the business value, they’ll be more cooperative with you.

And finally, try our system. We have a platform called SD Elements that enables you to automate your balanced development.

If anyone’s listening and interested in connecting with you or Security Compass, how can they find you?

Well, you should check out our website at www.securitycompass.com. We’d love to prove our motto to you: Go fast and stay safe. Thanks for joining us.

March 2020 Patch Tuesday forecast: Let’s put the madness behind us

Did you survive the madness of February 2020 Patch Tuesday and its aftermath? We saw Windows 7 and Server 2008 finally move into extended security support and then Microsoft pulled a rare, standalone Windows 10 security patch following some unexpected results.

For some of us, these two events caused a bit of chaos until they were sorted out. Let’s take a quick look in the rearview mirror, before jumping ahead to what looks like an easy drive for March.

Microsoft did a great job providing information and testing tools in advance of the Windows 7 and Server 2008 end-of-life, but that doesn’t mean everyone was ready when it happened. The extended security updates (ESUs) are supplied through the update catalog, but installation on the endpoint fails without first installing and activating a subscription key. Other prerequisites include the appropriate SHA-2 code signing update and the latest servicing stack updates (SSUs) which, if you have been patching regularly, you will already have installed.

So, last Patch Tuesday, as you can imagine, getting systems to the proper state with all three components in place – activated key, SHA-2 update, and latest SSU – and then applying the new ESU patches was disruptive for some. But now that everyone has been through the procedure, applying the March updates should be much smoother.

The release and subsequent removal of KBs 4524244 and 4502496 created a lot of discussion and confusion. Woody Leonhard provided a detailed chronology and technical breakdown in his article. This is a complicated situation involving the Unified Extensible Firmware Interface (UEFI) boot loader.

In summary, Microsoft released this security update to fix an issue where a third-party UEFI boot manager could allow a reboot that bypasses secure boot entirely, letting an attacker compromise the system by launching a hostile operating system. Keep in mind this requires physical access to the system. Unfortunately, the fix had unexpected side effects, including breaking other boot routines, most notably on HP PCs with Ryzen processors. The updates were pulled, and we are waiting to see if Microsoft re-releases a more comprehensive fix this Patch Tuesday.

I mentioned in the forecast last month that the Microsoft Security Advisory 190023 contained more detail on the upcoming security features for the Lightweight Directory Access Protocol (LDAP). This advisory was again updated on February 28, with recommendations on using the new options to harden this protocol.

The advisory specifically stated, “The March 10, 2020 and updates in the foreseeable future will not make changes to LDAP signing or LDAP channel binding policies or their registry equivalent on new or existing domain controllers.” These features will be included in the March Patch Tuesday updates, so take advantage and enable them. Also follow best practices and experiment on your test systems before rolling out to production.

March 2020 Patch Tuesday forecast

  • Microsoft addressed the highest number of CVEs in recent memory last month, so expect a lighter set of updates next week. The ESUs should again track the CVEs addressed with the other standard support operating systems. Office updates were light last month, so there may be a few more coming.
  • Mozilla had some major updates for all products last month but expect a minor update next week. Vulnerabilities continue to pop up in browser-related products.
  • Google just released their security update for Chrome this week, so I don’t expect to see anything on Patch Tuesday.
  • Apple released their first major updates in January, so we may see a minor update.
  • Adobe issued major updates for Reader and Acrobat last month, so we should only see a minor update this month, if any. I’ll go out on a limb and say we won’t see a Flash update this month.

The forecast for updates looks light this month, so breathe a sigh of relief as we leave the February madness behind.

How adaptive trust makes security efficient

Zero trust is a comprehensive security framework that requires everyone—and every service account—to authenticate identity before entering the corporate network. Every app and every device, as well as all the data they contain, must also be verified for each session.

Considering the multitude of people, devices, and apps it takes to make today’s businesses hum, you might think zero trust requires extensive management.

And you would be right. But what makes this Herculean undertaking not only possible but easy to manage is the next evolution, which I like to refer to as adaptive trust.

Making sense of big data

Organizations have been collecting data for years; many collect so much that they don’t know what to do with it. Analyzing behavioral data through the lens of artificial intelligence enables companies to put it to good use.

Adaptive trust begins by collecting data across the enterprise about user activities – who does what and when, and which apps and data they use to accomplish their tasks. Then algorithms are trained on the information to discern typical patterns, creating alerts when an activity is outside of what has been established as a normal baseline.

For example, data patterns may show that an employee uses their laptop in Chicago during business hours. But one day they log in from Kyiv at 1 a.m. Noticing the anomaly, the adaptive system follows a pre-set company rule, requiring the employee to do a facial recognition scan. It turns out the employee is indeed in Kyiv at a business meeting, so they pass the check and continue to work without further disruption.

Other companies may have different pre-set rules, perhaps requesting verification of the user’s status from their manager or alerting the security team and shutting off access until the situation is sorted out. The point is, the adaptive trust system recognizes anomalies and takes action in accordance with company policy—with little or no human intervention involved.
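
Stripped to its essence, such a rule is a comparison against a learned baseline followed by a policy action. A toy sketch, with a hypothetical baseline standing in for what the system would learn from historical logins:

```python
# Hypothetical baseline distilled from months of login history.
BASELINE = {"country": "US", "usual_hours": range(7, 20)}

def evaluate_login(country, hour):
    """Return the policy action for a login attempt."""
    if country != BASELINE["country"]:
        return "step-up"   # e.g., require a facial-recognition scan
    if hour not in BASELINE["usual_hours"]:
        return "step-up"
    return "allow"

print(evaluate_login("UA", 1))    # Kyiv at 1 a.m. -> "step-up"
print(evaluate_login("US", 10))   # normal pattern -> "allow"
```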

Harnessing machine power

Automation provides a critical advantage in today’s fast-moving IT world, where companies struggle to find workers with the skills they need. Eighty-one percent of North American IT departments are experiencing a skills gap, a study by IT company Global Knowledge found. And every year, the gap gets wider.

As threats grow more sophisticated and cloud-based apps expand the attack surface – often offering scant protection – the demand for cybersecurity skills is particularly acute.

By leveraging AI and machine learning algorithms to discover and respond to security threats, companies can fill the cybersecurity skills gap without hiring an army of highly skilled, hard-to-find human experts.

Fast learners

An automated, AI-based adaptive trust system can scan millions of data points at a time, and it doesn’t sleep, get tired, or charge overtime. It notices not only that the above employee works from 8 a.m. to 5 p.m. in Chicago, but that they open an app every day around 10 a.m. and download about the same amount of information when they use it.

Biometric authentication factors add even more to the knowledge base, recognizing voice, fingerprints, and device characteristics. If any of the ID or work pattern metrics look abnormal, an alert is triggered in accordance with the company’s security policy.

Adaptive trust doesn’t confine itself to people – it can monitor apps, devices, and data, too. By tracking patterns of data transfers between applications, it creates user profiles that can help stop a breach.

If a hacker is engaged in a spoofing campaign – redirecting users to a scam website – the system immediately spots a difference in the metadata that is generated and alerts the security team to the problem.

If an attacker inserts malware into a site to harvest personal data during online transactions, the system notices a slight delay after users click “Submit” – a subtle change human workers likely wouldn’t catch, even if they had time to monitor for it.
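
Detecting that kind of delay is a textbook baseline-deviation check. A minimal sketch using hypothetical latency history and a three-sigma threshold:

```python
import statistics

# Hypothetical history of submit-handler latencies, in milliseconds.
baseline_ms = [212, 205, 220, 208, 215, 210, 218, 207]
mean = statistics.mean(baseline_ms)
stdev = statistics.stdev(baseline_ms)

def is_anomalous(latency_ms, sigmas=3):
    """Flag a latency more than `sigmas` standard deviations off the baseline."""
    return abs(latency_ms - mean) > sigmas * stdev

print(is_anomalous(213))   # False: within normal variation
print(is_anomalous(260))   # True: the subtle delay skimming malware can add
```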

Whether it’s analyzing human behavior or mechanical processes, an adaptive AI system finds problems faster, stopping breaches in their tracks or limiting the harm they can cause. Organizations that don’t have a security system incorporating AI, analytics, and automated incident response experience data breach costs 95 percent higher than those that do, according to the 2019 Ponemon Institute Cost of a Data Breach study.

In addition to saving organizations time and money and preventing critical data loss, adaptive trust allows employees to be more productive. Once it understands their work habits, it doesn’t have to bug them as much for additional authorizations. The more it learns, the smoother the process becomes.

As more people, apps, and devices connect to the enterprise, outpacing IT’s ability to keep up, organizations need to look beyond traditional security platforms. For optimal protection, minimal intrusion, and maximum efficiency, the best solution is adaptive trust.

Soon, your password will expire permanently

Passwords have been around since ancient times and they now serve as the primary method for authenticating a user during the login process. Individuals are expected to use unique username and password combinations to access dozens of protected resources every day – their social media accounts, banking profile, government portals and business resources.

Yet, to save time during the login process and reduce the difficulty of having to recall multiple sets of login credentials, individuals have succumbed to the malpractice of recycling usernames and passwords across their accounts. While this may be a time-saving practice, they are opening themselves up to monumental risk: the unfortunate truth is that cybercriminals with access to one user’s set of breached credentials can reuse that password and username combination in order to obtain unauthorized access to accounts with much more sensitive data, including healthcare portals with critical protected health information (PHI).

On top of this, attackers can use breached personal information for highly targeted, and highly effective phishing attacks. To combat this epidemic and protect users, enterprises will need to rethink their current approach to the user-login journey.

The reality of password malpractices

Despite increased investments in global information security spending, companies still continue to get breached, and the majority of the time this is due to poor password practices. The reality is that once data breaches occur, cybercriminals will sometimes opt to sell the stolen information on the dark web to other wrong-doers. In fact, there are a plethora of sites on the dark web where a threat actor can buy pilfered credentials.

For example, last year 617 million account details from 16 hacked websites were found for sale on the dark web – for less than $20,000 in Bitcoin. These lists of credentials are typically aimed at credential stuffers, which explains why the price point was just a few thousandths of a cent per account.

Even the fact that some of the passwords in this instance were hashed did not deter attackers. Account details from 500px, a photo sharing service, were found for sale in this specific data dump, and some of the account details included were hashed with the message digest algorithm 5 (MD5). However, MD5 is infamous for no longer being cryptographically secure, as the 128-bit hashes it generates can be broken in mere seconds. This helps explain why four out of five global data breaches are caused by weak and stolen passwords.
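
To see why MD5-hashed leaks offer so little protection, consider how quickly a plain dictionary check defeats a weak password. In this sketch the “leaked” hash is self-generated for illustration:

```python
import hashlib

# Stand-in for a hash pulled from a breached database dump.
leaked_hash = hashlib.md5(b"qwerty").hexdigest()

# A real attacker iterates billions of guesses; three suffice here.
for guess in ["123456", "password", "qwerty"]:
    if hashlib.md5(guess.encode()).hexdigest() == leaked_hash:
        print("Cracked:", guess)
```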

What are organizations currently doing to augment password security?

Some enterprises try to improve password security by tightening their policies, requiring longer passwords with a greater diversity of character types. Unfortunately, some users still choose passwords such as “123456”, “qwerty”, and “password” when creating logins for new accounts. More complex requirements may also push end users to write their password down or reuse one from elsewhere. A password discovered on a “sticky note” or in a hacked database is no longer secure, regardless of its complexity.

Companies may also schedule more frequent password resets, but this practice can be costly: the average large company spends over $1 million on password resets annually. And even though users are prompted to choose a new password for the specific account undergoing a reset, they can still reuse a password from another profile, further feeding the password-reuse epidemic.

What should organizations be doing to secure the user login experience?

To eliminate weak-password mishaps, password-free authentication methods will become more widely adopted. These include out-of-band steps on mobile devices, a form of two-factor authentication (2FA): logins require an additional layer of identity verification through a separate channel, typically a smartphone or even a hardware-based token.

Gartner even estimates that 90% of midsize businesses, as well as 60% of large and global enterprises, will implement password-free authentication processes in more than half of all use cases by 2022. With Apple’s announcement that they will join the FIDO Alliance, an open industry association with a goal to “reduce the world’s over-reliance on passwords,” the end of passwords may come even faster.

Passwordless authentication practices will not only improve security posture, but they will also save organizations money by not requiring password resets. Additionally, password-free methods will improve the overall user experience by reducing friction in the login process.

In order to prosper in today’s digital economy, businesses will quickly learn that creating personalized and secure experiences that exceed customer expectations will pay off tenfold down the road. Passwordless authentication is just one step an organization can take in order to grow its business and obtain a competitive advantage through superior customer experiences.

Looking at the future of identity access management (IAM)

Here we are: at the beginning of a new year and the start of another decade. In many ways, technology is exceeding what we expected by 2020, and in other ways, well, it is lacking.

Back to the Future made us think we would all be using hoverboards, wearing self-drying, self-fitting jackets, and getting to and from the grocery store in flying cars by Oct. 21, 2015. Hanna-Barbera promised us a cutting-edge, underwater research lab in its 1972 cartoon, Sealab 2020.

While some of the wildest technology expectations from the big and small screen may not have come to fruition, the last decade of identity and access management development didn’t let us down.

And, I believe identity access management (IAM) cloud capabilities and integrations will continue their rapid spread – as well as their transformation of enterprise technology and the way we do business – in this new decade and beyond.

Here are three IAM predictions for 2020.

1. Single sign-on (SSO) protocols steadily decrease the need for unique accounts and credentials for every resource, so Active Directory (AD) is put on notice.

SAML, OAuth 2.0, OpenID, and other protocols mean people will see a drastic reduction in the number of unique accounts and credentials necessary to log in to certain websites. Do you need to log in to manage a site or do some online shopping? Likely, you can just use your Google or Facebook account to verify your identity.

This trend will continue to dominate throughout business-to-consumer efforts. I believe it will also take hold of business-to-business and internal business operations, thanks to the SSO developments made by Okta, Tools4ever, and other industry leaders.

The rise of SSO and the maturation of cloud platforms, such as G Suite, will likely erode Microsoft’s market hold with on-premise AD. As more enterprises move through hybrid infrastructures toward the cloud, flexibility means relying less on systems and applications that must pair with AD to authorize user access.

Google Chromebook and other devices prove that the AD divorce is possible. Because of this, expect to see directory battles between Davids and Goliaths like Microsoft.

2. Downstream resources benefit from improved integration.

Along with the increasing use of protocols connecting IT resources, expect downstream systems, applications, and other resources to utilize identity data better. We’ll see how information transferred within the protocols mentioned above can be leveraged.

Provisioning will be far more rapid, since transferred identity data will help create accounts and configure access levels immediately. Continually improving integrations will give administrators and managers far more granular control during initial setup, active management, and deactivation.

Increasing connectivity also allows identity data to be managed centrally at its authoritative source and pushed easily from there. At the same time, systems and applications will better incorporate identity data to enforce a given user’s permissions within that resource.

3. Multi-factor authentication (MFA) pervades our login attempts and increases the security of delivery to stay a step ahead.

MFA is already popular among some enterprise technologies and consumer applications handling sensitive, personal data (e.g., financial, healthcare), and will continue to transform authentication attempts. A lot has been said about increasing password complexity, but human error persists.

The addition of MFA immediately strengthens authentication attempts by having the user enter a temporarily valid PIN code or verify their identity by other methods.

An area to watch within MFA is the delivery method. SMS notifications were the first stand-out, though they forced some organizations to weigh the added messaging costs on their mobile phone plans. SMS remains prevalent, but all things adapt, and hackers’ increased ability to hijack these messages has made SMS delivery less secure.

Universal one-time password (OTP) clients, such as Google Authenticator, have both increased security and made the adoption of MFA policies much easier through time-sensitive PIN codes. Universal OTPs also do away with the requirement for every unique resource to support its own MFA method.
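
For illustration, the time-sensitive codes these clients generate follow the TOTP scheme (RFC 6238). A minimal sketch, assuming a made-up Base32 secret shared between the resource and the authenticator app:

```python
import base64, hashlib, hmac, struct, time

def totp(secret_b32: str, step: int = 30, digits: int = 6) -> str:
    """Compute the current time-based one-time password (RFC 6238)."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = struct.pack(">Q", int(time.time()) // step)  # 30-second window
    digest = hmac.new(key, counter, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                              # dynamic truncation
    number = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(number % 10 ** digits).zfill(digits)

print(totp("JBSWY3DPEHPK3PXP"))  # example secret; prints a 6-digit code
```

Because both sides derive the code from the shared secret and the current time, the code expires on its own and never travels over SMS.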

PIN codes are now being replaced by “push notifications,” which send a simple, secure “yes” or “no” verification prompt that allows access. After the client app is downloaded and your user account is registered, a single screen tap is all that is needed to add security to your logins.

Gartner has been praising push notifications as the way of the future for a couple of years. Gartner predicted that 50% of enterprises using mobile authentication would adopt it as their primary verification method by the end of 2019.

The cloud will undoubtedly shape IAM’s potential for the foreseeable future.

The top four Office 365 security pain points

Many novice Office 365 (O365) shops do not know where platform-specific security vulnerabilities lie, or even that they exist. The threats that you are unaware exist do not cause pain until they rise up and bite – then the agony is fierce.


Companies get themselves into trouble when they do not fully understand how data moves through O365, or when they apply on-premise security practices to their cloud strategy. While the O365 platform comes with some security features and configuration options – which all customers should take advantage of – native or built-in tools do not address many vulnerabilities or other security issues.

Below you will find four common areas that enterprises neglect when they adopt O365.

1. Impossible to implement zero trust with native tools

Enterprises are increasingly relying on zero trust cybersecurity strategies to mitigate risk and prevent data breaches. With the zero trust model, an organization only allows access between IT entities that have to communicate with each other. IT and security teams secure every communication channel and remove generic access to prevent malicious parties from eavesdropping or obtaining critical data or personally identifiable information (PII).

One problem with a zero trust strategy is that implementing it in Azure Active Directory (Azure AD) is highly complicated. For instance, IT and security teams can label an employee an “Application Administrator,” which gives them and anyone else with that label the ability to perform or change 71 different attributes. The problem with these cookie-cutter roles is that organizations do not know precisely what all of the corresponding admin-controlled attributes mean, nor what functionality they are granting.

2. Difficult to manage privileged permissions

Under the O365 centralized admin model, all administrators have global credentials, which means they can access and see each and every user. Not only is this deeply inefficient, it also creates huge security problems. Did you know that 80% of SaaS breaches involve privileged permissions? And that admins have the most privileges of all? In O365, user identity must be treated as the security perimeter.

The native O365 admin center focuses on providing global admin rights, giving admins who tend to work locally far more power and privilege than they need. This centralized management model relies entirely on granting “global admin rights” – even to regional, local, or business unit administrators.

The native O365 Admin Center does not enable you to easily set up rights based on business unit or country, or for remote or satellite offices. In addition, you cannot easily limit an admin’s rights granularly so that they can perform only specific functions, such as changing passwords when requested.

So, how do you mitigate the risk related to O365’s operator rights? Some IT veterans may answer with role-based access control (RBAC) as it allows organizations to partition permissions based on job roles, resulting in far fewer, truly trusted global administrators. These global admins are augmented by a set of local, or business unit focused admins with no global access, all leading to far better protection for your O365 environment.
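
A minimal sketch of the idea, assuming a simple scope-plus-action model (our own illustration, not the O365 admin API):

```python
# Each role grants a set of actions within a scope; only "global_admin"
# spans the whole tenant. Roles and scopes here are invented examples.
ROLES = {
    "global_admin":  {"scope": "*",    "actions": {"*"}},
    "helpdesk_emea": {"scope": "emea", "actions": {"reset_password"}},
    "unit_admin_hr": {"scope": "hr",   "actions": {"reset_password", "disable_account"}},
}

def is_allowed(role: str, action: str, target_scope: str) -> bool:
    grant = ROLES.get(role)
    if grant is None:
        return False
    scope_ok = grant["scope"] in ("*", target_scope)
    action_ok = "*" in grant["actions"] or action in grant["actions"]
    return scope_ok and action_ok

print(is_allowed("helpdesk_emea", "reset_password", "emea"))   # True
print(is_allowed("helpdesk_emea", "disable_account", "emea"))  # False
```

With a model like this, the handful of remaining global admins become the exception rather than the rule.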

3. Difficult to set up log and audit functions

O365 collects millions of data points on even the smallest implementation. Unfortunately, from a security standpoint, these data points do not exist for long, and far too few are ever used for protection or forensics. Microsoft historically offers logs for only the last 30 days (that is soon being increased to a year, but only for high-end E5 licenses), so businesses must ask themselves:

  • Why do they need to collect data logs?
  • How do logs impact regulatory compliance?
  • What happens if the logs aren’t saved or otherwise mined and audited?
  • What business value do these logs offer?

When used strategically, logs provide valuable forensics that not only help detect a breach, but also identify cybercriminals who may still reside on the network. Before businesses can even think about leveraging audits, IT and security teams have to turn on logging and implement a process to save log data far longer than Microsoft’s standard 30 days. It’s also important to know that even when logging is set up, event tracking is not an O365 default setting, so businesses must turn that on as well.
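
A minimal sketch of the retention half of that process; the export step is a hypothetical placeholder for whatever connector or API you use, not a specific O365 call:

```python
import datetime
import json
import pathlib

def archive_events(events, archive_dir: str = "audit-archive") -> None:
    """Append audit events to date-stamped JSON Lines files that you
    retain on your own schedule, well beyond the 30-day platform window."""
    path = pathlib.Path(archive_dir)
    path.mkdir(exist_ok=True)
    day = datetime.date.today().isoformat()
    with open(path / f"{day}.jsonl", "a", encoding="utf-8") as fh:
        for event in events:
            fh.write(json.dumps(event) + "\n")

# events = fetch_audit_events()  # hypothetical export step (API/connector)
# archive_events(events)
```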

Real-time monitoring and alerting for security compliance issues is the engine that drives much of the data that forms these logs. Smart IT shops now enable both in their O365 environment.

4. The “right to be forgotten” challenge

Compliance is a big security and economic issue, with fines under GDPR and other privacy regulations like CCPA now an almost daily occurrence. There is a lot involved in being GDPR compliant; foremost among its statutes is the right to be forgotten, which states that individuals have the right to ask organizations to delete their personal data. However, as many businesses have learned, it is difficult to fulfill this requirement if the IT or security team cannot locate personal information or know how it was used.

Organizations must be able to track and audit individual user accounts to make sure that they not only comply with this request but have processes in place to differentiate between users with similar (or even identical) usernames, even if one of them exercises their right to be forgotten.

At its core, each of these challenges centers on a general lack of visibility into the O365 infrastructure. Microsoft’s SaaS platform introduces a number of important business benefits and capabilities, but requires enterprises to take proactive measures to account for their data and how it is accessed and shared externally. Organizations need to fulfill their end of the shared responsibility model to maintain a solid organizational security posture.

Take your SOC to the next level of effectiveness

Enterprise security infrastructures average 80 security products, creating security sprawl and a big management challenge for SOC teams. With high volumes of data generated from security controls across the infrastructure, SOC teams often rely on Security Information and Event Management (SIEM) solutions to aggregate data and deliver insight into events and alerts. Similarly, Security Orchestration, Automation and Response (SOAR) platforms can take the results and automate them into action.

However, the business needs to know that it’s safe—now. That’s why organizations are turning to Breach and Attack Simulation (BAS) integration with the SOC. BAS integration with SIEM and SOAR solutions enables SOC teams to continually evaluate the effectiveness of their security controls and improve the company’s security posture with real-time, accurate metrics.

SIEM integration

BAS validates that your SIEM is effectively picking up events and alerts. You can:

  • Validate SIEM integrations with other security controls across the infrastructure.
  • Refine SIEM rules using forensic artifacts—such as hash values, domain names, host artifacts, etc.—provided in attack simulation analyses.
  • Evaluate effectiveness of preventative controls, such as EPP, web gateways, email gateways, firewalls, and IPS.
  • Assess effectiveness of behavior-based detection controls, such as EDR, EUBA, deceptions, and honeypots.

The best BAS solutions deliver specific details about myriad controls’ ability to detect suspicious activity. A SOC team can launch an Immediate Threats Intelligence assessment to simulate the latest threats seen in the wild. Data from lateral movement, data exfiltration, and other attack vector simulations can be pulled into the SIEM for parsing, creating alerts, and remediation purposes.
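
As a simplified illustration of how such artifacts can feed SIEM rules, this sketch matches log lines against a small IoC set (the hash is the well-known EICAR test-file MD5; the domain is invented):

```python
# IoC values of the kind an attack simulation analysis might supply.
IOC_HASHES = {"44d88612fea8a8f36de82e1278abb02f"}   # EICAR test-file MD5
IOC_DOMAINS = {"malicious-update.example.com"}       # invented example

def matches_ioc(log_line: str) -> bool:
    """Flag any log line that references a known-bad hash or domain."""
    line = log_line.lower()
    return any(h in line for h in IOC_HASHES) or any(d in line for d in IOC_DOMAINS)

for line in [
    "GET http://malicious-update.example.com/payload.bin",
    "GET http://intranet.local/report.pdf",
]:
    print(matches_ioc(line), line)
```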

SOAR integration

BAS can run daily, hourly, or continuously with results pulled into the SOAR. Team members can prioritize remediation and take corrective steps right from the SOAR dashboard. Use BAS-generated data to:

  • Refine SOAR incident-response playbooks.
  • Assess effectiveness of post-breach controls.
  • Determine effectiveness of monitoring and response workflows.
  • Prioritize mitigation efforts according to heuristic cyber exposure scores.

Integration with GRC systems

Besides compliance risk, companies need to manage and report on risk associated with digital transformation efforts and supply-chain relationships. When BAS is integrated with Governance, Risk, and Compliance (GRC) tools, such as RSA Archer, organizations gain granular data to:

  • Proactively identify and preempt potential adverse impacts of IT configuration changes, software updates, and new technology deployments.
  • Measure control effectiveness at specific points in time and over time.
  • Reduce supply chain risk by continuously challenging security controls that defend portals, email and web gateways, and endpoints.

Power up vulnerability management tools

BAS data powers up vulnerability scanning, giving SOC teams visibility into common vulnerability and exposure (CVE) data combined with attack simulation results. Teams can prioritize and accelerate remediation according to various parameters, such as asset type, user privileges, and proximity to critical digital assets.
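
As a toy illustration of that kind of prioritization, this sketch combines a CVSS score with asset context into a single ranking value; the weights and findings are invented examples, not a vendor’s scoring model:

```python
# Hypothetical findings enriched with asset context.
findings = [
    {"cve": "CVE-A", "cvss": 9.8, "asset_critical": True,  "privileged_user": True},
    {"cve": "CVE-B", "cvss": 7.5, "asset_critical": False, "privileged_user": True},
    {"cve": "CVE-C", "cvss": 7.5, "asset_critical": True,  "privileged_user": False},
]

def exposure_score(f: dict) -> float:
    score = f["cvss"]
    score += 2.0 if f["asset_critical"] else 0.0    # proximity to critical assets
    score += 1.0 if f["privileged_user"] else 0.0   # privileged users raise impact
    return score

for f in sorted(findings, key=exposure_score, reverse=True):
    print(f["cve"], round(exposure_score(f), 1))
```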

Integration with EDR tools

BAS enables teams to verify that EDR solutions are effectively detecting IoCs and attack techniques of the latest simulated threats. Teams can simulate specific threat behaviors on their endpoints and verify that response tools work as expected.

API integration

BAS integration via API enables SOC teams to retrieve all assessment results from simulated attacks – including IoCs, TTPs, payload names, mitigations, and other data – and move them into their own environments. This gives them:

  • Immediate insights: BAS data is always available for incorporation with other SOC tools.
  • Latest threat intelligence: Detailed attacker TTP and daily threat data gives SOC teams the latest insight without needing a team of experts.
  • Unified visibility: Combining BAS results with SOC tools maximizes team productivity for decision-making and prioritization.
  • Mitigation guidelines: Teams receive specific guidance mapped to the MITRE ATT&CK™ framework for accelerating remediation.
  • Comprehensive coverage: BAS challenges controls across all vectors and the entire kill chain.
  • Continuous automated testing: SOC teams can continuously challenge controls and immediately identify infrastructure changes or security gaps before they are exploited.
  • Control optimization: Gain consistent assessment across the kill chain, ensuring that mitigation efforts deliver the expected benefit.

With just a few clicks, SOC teams can initiate thousands of attack simulations and see exactly where they’re exposed and how to fix it. Now, it’s possible to surface new threats daily, defend against advanced stealth techniques, preempt adverse effects of continuous IT change, and ensure that security controls maximize protection against state-sponsored threat actors and complex supply-chain attacks.

For more information visit Cymulate and sign up for a free trial.

Emotet: Crimeware you need to be aware of

According to the U.S. Department of Homeland Security, Emotet continues to be among the most costly and destructive malware threats affecting state, local, and territorial governments, and its impact is felt across both the private and public sectors.

First identified as a banking Trojan in 2014 by Trend Micro, Emotet is often downplayed by network defenders as “commodity malware” or “crimeware”. The evolution of both the malware and the criminal network behind it continues to make it an elusive and impactful threat, one that continually evades antivirus and other security technologies to infect victims.

Simply put, Emotet is not run-of-the-mill crimeware and should not be underestimated. It is defined by the constant investment its operators put into continually developing technologies to defeat defensive measures.

In recent weeks we’ve seen massive changes roll out, from the malware packer to internal crypto to new network command-and-control designs. In addition to growing technical sophistication, Emotet plays a large and growing role in the criminal ecosystem. The better organizations understand Emotet’s evolution and role, the better equipped they’ll be to protect themselves from becoming the next victim (or to respond if they have already fallen prey).

The criminal ecosystem

In the current criminal ecosystem, the best way to think about Emotet is as a primary provider of access to victims for multiple other large criminal actors and organizations. As such, an Emotet infection can’t be treated as a single event or incident, because Emotet is often the tip of the spear where access is provided or re-sold to other criminal groups, to conduct data theft and/or ransomware attacks against victims.

In the last month, the cybercriminals behind Emotet have changed how they launch these types of attacks and how they evade detection. One technique is simply swapping a .doc file for a .docx, which can make static detection of maldocs more difficult due to the compression native to the .docx format. This high-level change, combined with smaller ongoing changes to their binary packer, command and control, embedded OLE objects, process names, and PowerShell obfuscation, makes tracking them and writing defensive signatures an ongoing challenge.
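
The .docx trick is easy to see at the file level: a legacy .doc is an OLE2 container whose strings sit in the clear, while a .docx is really a ZIP archive, so naive string matching sees only compressed bytes. A minimal triage sketch:

```python
OLE2_MAGIC = bytes.fromhex("D0CF11E0A1B11AE1")  # legacy .doc container header
ZIP_MAGIC = b"PK\x03\x04"                        # .docx is a ZIP archive

def doc_container_type(path: str) -> str:
    """Classify an Office document by its magic bytes."""
    with open(path, "rb") as fh:
        header = fh.read(8)
    if header.startswith(OLE2_MAGIC):
        return "ole2-doc"   # content scannable with simple signatures
    if header.startswith(ZIP_MAGIC):
        return "zip-docx"   # must be unzipped before content inspection
    return "unknown"
```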

Additionally, they have become a part of the news cycle around the global health crisis: the Wuhan coronavirus. Through semi-targeted phishing campaigns, the attackers impersonate the CDC by sending fake emails with links and dangerous documents that then infect those who fall for the scam with the Emotet Trojan.

It is crucial that individuals and organizations of all sizes take this threat seriously and understand that no one is immune to falling victim. These threat actors don’t discriminate and will continue to look for new ways to profit off ransoms and theft by targeting anyone in their path. And because they keep varying their techniques, Emotet is hard to track and detect unless companies have the right tools in multiple places.

Emotet malware: Layered defense

As their strategies become more apparent, and as new technologies are implemented to fend off Emotet, we can expect to see new techniques arise from threat actors (and in fact we have seen a round of back and forth in the last week between Emotet and JPCERT). Because of this, the best game plan for organizations to reduce their risk is by taking a proactive and layered approach.

Should an organization fall victim, the best response is a quick one. In these types of intrusion events, it is common to see the attackers gain initial access and seamlessly pass it off to other criminal organizations, which expands the reach, depth, and impact of an infection. This ends up a lot like having a house guest who won’t leave and then invites their awful, more destructive friends to the party.

The worst-case scenario for larger organizations in a successful Emotet infection is one that denies access to the environment and demands a ransom. Ransomware is estimated to have cost organizations billions globally in 2019, so the sooner an organization detects the infection, the better off it will be.

While antivirus remains critical, it is not bulletproof. A strong multi-layered defense therefore includes endpoint detection, a backup and response strategy, and a network monitoring capability. If an employee falls victim to a phishing scam, anti-malware protections are necessary to help contain the initial infection. If and when these fail, more advanced behavioral technologies can help detect these types of malware.

In addition to multiple layers of endpoint security, network visibility and network security monitoring allow organizations to have a deeper understanding of what and – more importantly – who has access to the network. Network visibility provides organizations with the ability to keep a constant eye on network traffic from endpoints to the internet and between each other. The more in-depth, proactive and extensive an organization’s network visibility capabilities are, the more likely they are to detect an infection early or catch lateral movement (the infection spreading between computers) before it turns into a widespread ransom event.

A communal approach

Beyond commercial technologies and companies, there is a communal approach that should be pointed out and which deserves credit as the main line of defense against Emotet.

Research groups and organizations are working together to disrupt Emotet’s activities and we want to name a few of them here:

  • Abuse.ch, a non-profit that works to help internet service providers and network operators protect their infrastructure from malware (and hosts one of the best threat intelligence feeds with current Emotet indicators)
  • Hatching.io, an up-and-coming sandbox provider
  • CAPE sandbox
  • Cryptolaemus (and those who support them), a research group specifically focused on fighting Emotet.

The individuals and researchers in these groups spend countless (and often unpaid) hours fighting this particular criminal enterprise, helping protect the millions of people it targets.

As the industry looks to predict the future of the Emotet botnet, organizations must continue to educate themselves about sophisticated threats. When we understand how threat actors are using the malware and what their technical capabilities are, we will all be in a better position to protect our organizations, and the sensitive data that lives within the infrastructure.

Cybersecurity is a board level issue: 3 CISOs tell why

As a venture capital investor who was previously a Chief Information Security Officer, I have noticed an interesting phenomenon: although cybersecurity makes the news often and is top of mind for consumers and business customers, it doesn’t always get the attention it deserves by the board of directors.


Misconceptions and knowledge gaps increase this distance between security and oversight. How can boards dive deeper into the world of security and overcome the entry barriers to collaboration? Seeking advice, I reached out to prominent security leaders: Joel Fulton, the former CISO of Splunk; Jeff Trudeau, the CSO of Credit Karma; and Yassir Abousselham, the former CSO of Okta and the newly appointed CISO of Splunk. Here are their tips for board members.

Recognize security as both a business risk and an opportunity

First and foremost, it is imperative for the board to appreciate the impact that information security can have on the business. Boards should treat security as a top business risk as well as a top business opportunity. Major security events can significantly damage revenue and brand, and can even lead to catastrophic results.

Abousselham elaborates: “In an era where organizations are handling large amounts of sensitive information and governments are actively pushing more stringent privacy laws, data breaches have serious ramifications for the organization, its customers, and partners.”

Bridge the technical gaps

Contrary to popular belief, security leaders believe that domain expertise is not a prerequisite to making smart security decisions. Instead of focusing on every technical bit and byte, Trudeau suggests the conversation should concentrate on understanding the risks and ensuring they are properly addressed.

Yet, even on a macro level, security concepts might be difficult to fully understand, so a short and dedicated security training for the board can come in handy. It’s also key to remember that it’s not only the board members who may feel like fish out of water. The CISO, too, can get intimidated and might over-rely on the comfort and familiarity of technical details.

To bridge the differences, Abousselham suggests fostering a synergistic discussion by framing risks and mitigations in business terms. Fulton proposes focusing on the Venn overlap of the security program’s weaknesses and the board’s strengths (like governance and strategy). This enables the board to interact with security as they do with other domains, empowering the CISO with wise counsel and letting both view clearly the current situation and the paths to success.

Ask the right questions

The board should operate on the notion that absolute security does not exist. The best way to assess your security program is often by focusing on and drilling down into the economic trade-offs.

Fulton’s suggested economic questions include: Are you applying your scarce resources, people, and time to the correct problems? Next, drill deep to understand the security leader’s rationale and thinking: How do you know you’re right? What evidence would indicate you’re wrong? And how can we find that evidence?

The board’s questions should also serve as a vehicle for both the CISO and directors to think more strategically about security. As the technological environment has evolved tremendously in recent years, it is important to step outside the traditional realm of compliance and assess the potential catastrophic consequences of security deficiencies. For example, Trudeau proposes questions like: Could what happened at this other company happen to us? What would be the damages if such a threat materialized in our company?

Evaluate the effectiveness of the security program

The group offers structured approaches to synthesizing information and reaching conclusions about the security program. Abousselham recommends a top-down method: “Confirm that the CISO has a good grasp of security and compliance risks. Then validate that the CISO’s vision and strategies support the direction of the company and desired risk posture. Further, get comfortable with the CISO’s ability to execute, including the adequacy of the organizational structure, technical capabilities, funding, and ability to hire and retain talent. Lastly, because incidents are bound to happen, evaluate the ability to detect and respond to security compromises”.

Fulton advocates that the board seek to help the CISO with possible blind spots, looking to validate the security strategy and initiatives with questions like: Where are you intentionally reducing focus? Why is that decision the best decision in this company, environment, and vertical? In your areas of highest investment, what does “secure enough” look like?

Certainly, no evaluation will be complete without metrics that measure the progress and maturity of the security program. Fulton suggests boards inquire how the program is measured and how the CISO knows the measures are valid and reliable. Abousselham suggests focusing on objective risk measures, with metrics that show progress against a baseline such as NIST CSF, and adopting no more than ten key metrics that summarize the state of the security program and its business influence.

When measuring the security program’s effectiveness, it is crucial to consider that it is tied to the CISO’s ability to influence the organization. The security leader’s ability to execute is very much dependent on the reporting structure.

According to Trudeau, reporting to the wrong executive could pose challenges for the security program and hinder its effectiveness. In addition, it is important to validate the CISO’s cross-functional operation. Most security practices and controls are implemented, operated, and maintained by employees without “security” in their title. Consequently, a CISO must be respected and influential outside her own organization.

Communicate in the right format and cadence

A good rule of thumb is for boards to meet the CISO at least once a year. Abousselham explains that some companies adopt a cadence of two updates per year, to the board and the audit committee. Boards might also ask the CISO for more frequent or ad hoc updates if the perceived risk is higher than the acceptable threshold.

Additionally, informal and off-schedule meetings improve relationships and information sharing simply by the reduction in formality. Fulton believes these keep strategy aligned and could be invaluable during actual or tabletop incident walk-throughs. However, boards should be careful to not overdo it as too frequent meetings can be inefficient, Trudeau warns.

With security becoming increasingly important, some organizations have created security committees to ensure independent oversight of security risk. The security leaders don’t believe this is necessary in most cases, since it can be distracting. If a company does form a security committee, Abousselham explains, its members should be independent and have the proper domain expertise to formulate and report an accurate opinion of the security risk posture to the board.

Conclusion

Fostering collaboration between the board and the CISO benefits both groups and the company as a whole. However, it’s not always easy and growing pains are to be expected. While everyone may share the same objective of seeing the company succeed, they often differ in their agendas and approaches.

The good news is that asking the right questions, conquering communication gaps, measuring progress and treating security as a business risk will set the board up for success in improving the company’s security standing.

February 2020 Patch Tuesday forecast: A lot of love coming our way

The January 2020 Patch Tuesday was a light one as predicted; everyone was still catching up from the end-of-year holidays. As we gain momentum into February and move towards Valentine’s Day, I anticipate Microsoft, and at least Mozilla, will give plenty of love and attention to their applications and operating systems.


LDAP

Microsoft announced back in August, in Advisory 190023, that it was planning several updates to its implementation of the Lightweight Directory Access Protocol (LDAP). That advisory explained the need for LDAP channel binding and LDAP signing to increase security. Originally planned for Q4 2019, the first part of this update has been pushed out to March 2020.

The company is planning a two-part rollout, with the March release paving the way for major change and enforcement later in the year. As explained in the advisory, the “Windows Updates in March 2020 add new audit events, additional logging, and a remapping of Group Policy values that will enable hardening LDAP Channel Binding and LDAP Signing.”

Microsoft delayed this until March so administrators can properly test the LDAP configuration changes. There’s been a lot of discussion on the various security forums concerning this, so factor in some extra test time next month.

Windows 7 and Server 2008/2008 R2 patches

Getting back to February Patch Tuesday, the big change will be the lack of Windows 7 and Server 2008/2008 R2 patches this month. I say that tongue-in-cheek because they will still be publicly available but require a special key to install on the endpoint; this key is issued as part of the Microsoft Extended Security Update (ESU) program.

Microsoft has made this as painless as possible to accommodate the large, remaining installed base of these systems. However, with the end of any operating system there is always some confusion and panic as reality sets in.

If you have systems you can’t yet migrate or upgrade to Windows 10 and you don’t have an ESU program in place, you should consider some additional options to mitigate their security risk. Consider virtualizing some of the workload and locking down the system to run only the specific applications you need. Application control can help with this lockdown and often provides some privilege management protection as well.

You can also consider a segmentation approach, i.e. remove them from direct internet connectivity or move them to more protected parts of your network.

Finally, add on some next-gen anti-virus (AV) or endpoint detection and response (EDR) solutions for added protection. You know these systems will become targets, so due diligence is important to their protection until you can migrate them.

February 2020 Patch Tuesday forecast

  • Microsoft is overdue to release some major updates, so expect them this month. We should see updates across the board with a large number of CVEs addressed in all of them. In addition to the usual OS and Office updates, we should see server updates for SharePoint, Exchange, and SQL. I don’t expect another .NET update since one was released in January, but you never know.
  • Mozilla is also overdue for a set of major updates across their product lines.
  • Google released major updates for Chrome this week, so we should only see a minor update, if any, on patch Tuesday.
  • Apple released their first major updates of the year last week, so similar to Google, we expect only minor updates, if any at all.
  • Adobe is a bit unpredictable this month. Their last major security update for Acrobat and Reader was back in early December, so the pressure is mounting for another one. Keep an eye out for their pre-announcement bulletins and plan accordingly.

Even if we have a heavy patch release next Tuesday, make sure you set some time aside to spend with your significant other or a close friend the following Friday – Happy Valentine’s Day!

Data breach: Why it’s time to adopt a risk-based approach to cybersecurity

The recent high-profile ransomware attack on foreign currency exchange specialist Travelex highlights the devastating results of a targeted cyber-attack. In the weeks following the initial attack, Travelex struggled to bring its customer-facing systems back online. Worse still, despite Travelex’s assurances that no customer data had been compromised, hackers were demanding $6 million for 5GB of sensitive customer information they claim to have downloaded.

Travelex provides services to some of the world’s largest banking corporations, including HSBC, Lloyds, Barclays and RBS, so the attack will clearly have a significant long-term impact on the company’s reputation and revenues. Travelex also potentially faces a catastrophic fine if customer data is found to have been accessed illegally.

The escalating costs and consequences of a data breach

In the EU, the financial repercussions of a data breach can be significant. Falling foul of GDPR gives supervisory authorities the power to issue fines of up to €20 million or 4% of an organization’s annual global turnover, whichever is higher. Meanwhile, from a reputational standpoint, a data breach has major ramifications for customer confidence and loyalty.

With cybercriminals representing a persistent risk to enterprise wellbeing, it’s little wonder that CEOs, CFOs, CISOs and CIOs now view cybersecurity as a top priority.

From lost business and falling share prices to regulatory fines and remediation costs, data breaches can have far-reaching and devastating financial consequences. According to the 2019 Cost of a Data Breach study conducted by the Ponemon Institute, the average cost of a data breach in the UK was $4.88 million – up 10.5% on the previous year.

The same report also found that UK companies took an average of 171 days to identify a breach and an average of 72 days to contain it, and highlighted that the accumulated costs in the second and third years post-breach were highest for organizations operating in highly regulated environments such as healthcare, financial services and pharmaceuticals.

Building a cybersafe business requires enterprise-wide leadership collaboration

Research confirms that organizations with a well-informed and involved CEO and board of directors are most likely to be successful at creating a strong security posture. Compliance with external and internal regulations and governance programs cascaded from above, together with effective oversight and management from leadership, helps the entire organization view data security as a strategic rather than tactical activity.

Similarly, closely aligning the priorities of the IT operations and IT security functions will help ensure that the resolution and remediation of security problems can be completed successfully, and that a strong security posture can be accomplished without impacting enterprise productivity.

Strong accountability models, in which decision-making on risk rests with those that have the authority and overview to address these issues, can go a long way to ensuring that systemic security problems are not ignored or brushed under the carpet. At the end of the day, data security should not be viewed as simply a technical problem that’s handled by technical personnel working in IT.

Best practices for minimizing cyber risk

Knowing there’s a need to address cybersecurity is one thing; making the right decisions about how much money to invest, and on what, is one of the top challenges today’s enterprise leaders face. With the threat landscape constantly evolving, the following practices can help organizations shift to a more proactive, risk-based approach.

1. Understand your organization’s threat profile – Undertaking a detailed risk evaluation adapted to your business activities and infrastructure is the starting point. Profiling and scoring typical attacker types and the likely sophistication of their endeavors will help inform the strategies of your security analysts and provide insight into what cybersecurity products should top the investment list.

Unfortunately, research shows that all too often organizations throw money at the latest and most highly publicized security exploits rather than the most persistent and likely vectors for attack. For example, web application vulnerabilities have been the top cybersecurity risk for several years, yet only 3% of IT spend is currently directed at web application security.

2. Get outside help – Bringing in external expertise to evaluate and benchmark the organization’s security posture against similar organizations operating in the same market will help verify whether information security policies and plans are appropriate to the identified enterprise risk profile. Utilize independent consultants to undertake security and risk management reviews to boost security resilience and help leaders define an appropriate investment strategy for cybersecurity tools.

3. Consider cyber liability insurance – Utilizing experts to conduct a detailed evaluation of the organization’s cyber liability insurance cover to ensure it is adequate will also help to highlight ways in which doing security better could deliver additional commercial benefits – like a lower premium. Gaining full visibility into the cyber health of the company and documenting the security measures and controls in place can help organizations identify where they need additional coverage for crucial areas. Armed with a digital resilience score, organizations will be well placed to cover more risks for less.

4. Get CISOs talking – CISOs need to capitalize on every opportunity to talk to business leaders and communicate the importance of prioritizing cyber risk and building robust internal controls. Rather than being viewed as a roadblock to potential innovation, closer collaboration with executive teams and peers across the business will foster open dialogue and problem solving that acts as a business catalyst for the enterprise.

5. Evaluate, check and review – Undertake regular risk audits to reassess the current state of play, evaluating the impact of any changes such as the implementation of new technologies, the introduction of new revenue lines or the incorporation of new units or company takeovers. This activity should be complemented by periodic testing of disaster recovery and business continuity plans to ensure everything is in place and works as expected, to mitigate the potential damage resulting from a cyber breach.

6. Take steps to protect against insider threats – Malicious insiders are the leading cause of data breaches, so putting in place programs to monitor users’ behavior is vital. Instituting good information management practices that include mobile device management, network monitoring and access control management will help eliminate the potential risk of negligence by naïve employees and contractors.

With business leaders focused on forging ahead with their digital business initiatives that enable new customer interactions and service delivery, getting everyone on board with managing security and risk exposure will be key to protecting the enterprise against malicious attack.

To succeed, organizations will need to take a proactive stance that incorporates risk-based decision making that ultimately improves business agility.

Zero Trust: Beyond access controls

As the Zero Trust approach to cybersecurity gains traction in the enterprise world, many people have come to recognize the term without fully understanding its meaning.

One common misconception: Zero Trust is all about access controls and additional authentication, such as multi-factor authentication.


While these two things help organizations get to a level of Zero Trust, there is more to it: a Zero Trust approach is really an organization-wide architecture. Things aren’t always as they seem, and access controls by themselves are meaningless without a comprehensive, centrally managed infrastructure to back them up.

Consider this: if an employee has their laptop stolen and their account becomes compromised, access controls on the device are the only protective measure the organization has left. Whoever is impersonating the employee can now access the infrastructure and anything the identity tied to that account had access to.

Zero Trust: A centralized approach to cybersecurity

Organizations can avoid problems like this by managing and enforcing policies for all identities, devices, and applications centrally and setting automated rules to require additional authentication as needed. With Zero Trust, every activity related to the enterprise must be authenticated and authorized, whether it’s undertaken by a person, a device, or a line of code that drives an internal process.

If a laptop is registered, the company can still require a software token or a fingerprint scan when someone uses it to access sensitive financial information. If the user wants to change or add data, it may be a good idea to add another authentication factor, or to monitor this activity in real time – especially if making changes is not something the person ordinarily does.

The same is true when someone who routinely uses just subsets of customer information tries to download the entire customer database, or if anyone tries to copy product development specifications.
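
A minimal sketch of this kind of context-aware policy logic; the actions, signals, and responses are illustrative assumptions, not any vendor’s rule set:

```python
SENSITIVE_ACTIONS = {"download_full_customer_db", "copy_product_specs",
                     "change_financial_data"}

def required_response(action: str, is_typical_for_user: bool) -> str:
    """Decide how much extra verification an activity warrants."""
    if action in SENSITIVE_ACTIONS and not is_typical_for_user:
        return "step_up_auth"   # e.g., software token or fingerprint scan
    if not is_typical_for_user:
        return "monitor"        # allow, but watch in real time
    return "allow"

print(required_response("download_full_customer_db", False))  # step_up_auth
print(required_response("read_customer_subset", True))        # allow
```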

Visibility and control

In today’s world, where devices and applications are expanding rapidly and people often change roles, eliminating every potential security gap is a quixotic ideal. The Zero Trust principle acknowledges that vulnerabilities will always exist, and posits that the best way of dealing with them is to provide visibility into activity across the enterprise ecosystem.

If an event seems out-of-place, an automated alarm is triggered. That may mean alerting a manager or shutting off someone’s access while the security team investigates. By understanding context and having the ability to intervene immediately, organizations can close inevitable gaps as they arise, preventing them from evolving into security breaches.

Perhaps you’re thinking additional authorization measures will frustrate your employees, or that event logs a mile long will drive your security team crazy. But when Zero Trust is managed properly, the system recognizes normal employee activity and becomes less intrusive, allowing you to offer workers convenient features like single sign-on and a range of choices for authorization.

Zero Trust’s contextual awareness also helps organize event logs, prioritizing real threats instead of forcing security teams to slog through endless lists of trivialities and false alarms.

Broad reach

The key aspect of Zero Trust is the breadth of its scope. It covers the entire organization, including:

  • People: Everyone who interacts with the organization—including vendors, contractors, and IT service accounts—is given an identity and conditional access rights. Conditional, because as we have seen, legitimate access may be used for nefarious purposes, so context and activity must always be considered. If an action seems out of line, additional authorization or monitoring is activated.
  • Devices: All endpoints are included, with changes and updates made as they occur to avoid accumulating security gaps.
  • Applications: Today’s enterprises operate in a multi-cloud environment, using a host of internal and external apps, many of which interconnect or connect to other outside apps. Zero Trust provides visibility into the dependencies within and among all applications and databases and uses automation to spot irregularities no human could ever keep up with. Enterprise security rules are enforced at all times, even if the apps themselves lack adequate protection. In this way, Zero Trust removes the burden of compliance from employees, devices, and applications and places it on the central automated system.
  • Data: With Zero Trust, almost all enterprise data is encrypted. If it ever ends up in the wrong hands, the unauthorized party will not be able to decipher it, even if the user’s access credentials are compromised.
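
As a small illustration of that last point, here is a sketch of symmetric, authenticated encryption using the cryptography library’s Fernet recipe; key handling is simplified, and in practice a central key management service would hold the key:

```python
from cryptography.fernet import Fernet  # pip install cryptography

key = Fernet.generate_key()     # in production, kept in a central KMS
vault = Fernet(key)

record = vault.encrypt(b"customer: Jane Doe, card ending 1111")
print(record)                   # ciphertext is useless without the key
print(vault.decrypt(record))    # only an authorized path recovers the data
```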

Though we have barely scratched the surface of Zero Trust here, it should be clear that it is a robust, comprehensive, and responsive security architecture extending well beyond access controls. It can be viewed as the evolution of the least privilege model: strong enough to keep bad actors out, yet flexible enough to accommodate user preferences and incorporate new people, devices, applications, and data as they flow into and out of the enterprise.

Review: Enzoic for Active Directory

Seemingly every day, news drops that a popular site with millions of users has been breached and its user database leaked online. Almost without fail, attackers then try those leaked user credentials on other sites, making credential stuffing one of the most common attacks today.

Users often use the same username/email and password combination for multiple accounts and, unfortunately, enterprise accounts are no exception. Attackers can, therefore, successfully use leaked credentials to access specific company resources.

For example: An attacker wants to target CompanyX and sees that 30 users that work in CompanyX also had their account credentials leaked following a recent breach (let’s say Zynga). Trying to enter those credentials into the company’s SharePoint, Exchange, VPN, and various web portals to see if they might gain access is a no-brainer for them.

This common occurrence has resulted in the launch of several commercial and free solutions that try to mitigate this specific risk. One of them is Enzoic for Active Directory.

About Enzoic for Active Directory and this review

“Enzoic for AD is a tool that integrates into Active Directory and enforces additional password rules to prevent users from using compromised credentials,” the product’s page says.

“Unlike products that only check passwords after they are saved, thus requiring subsequent reset by the user, Enzoic validates the password at the time it is being selected. Passwords are then continuously monitored to detect if they become compromised – with automated remediation and alerts. It helps organizations with NIST Password Guideline compliance in Active Directory.”

We tested the Enzoic for AD solution and this review will focus on the following main points:

1. Setup experience – The solution’s install process and setup process.
2. A cursory overview of the privacy implications of the solution – Since the solution has to query Enzoic’s cloud to verify if a password is contained in a breached set, we decided to check what is actually sent to the cloud.
3. Usefulness and coverage – The effectiveness of the solution when tested against multiple breached credentials lists.
4. Final thoughts and impressions.

Setup experience

The installer for Enzoic for AD is available in both EXE and MSI file formats. The software is a plugin for Microsoft Active Directory and needs to be installed on all AD servers in your organization to achieve full coverage.

The installation process begins with a standard Windows install.


Enzoic for Active Directory then needs to be configured: which users, groups, and containers should have their passwords checked for compromise? Should the entire AD be covered? (For this test, we left the default “All Users in Active Directory” option.)


After confirming coverage, monitoring options can be configured. The options are:

1. Reject common passwords found in cracking dictionaries (or not).
2. Check passwords during password resets (or not).
3. Use fuzzy password matching (or not).


In the next step we needed to select the remediation action. The solution allows for the following options:

1. User must change password on next login.
2. User must change password on next login (delayed).
3. Disable account.
4. Disable account (Delayed).
5. Notify only (via email to the user and to a number of other accounts). E-mail is sent by Enzoic (through Amazon SES) and you cannot configure a specific email server to use.


Installation and configuration are simple and easy, even for a beginner. Post-setup configuration options are also easy to understand and tweak.

They include the same options offered at setup time, plus two additional ones. One allows adding a custom password dictionary, which can include words or parts of words that should not appear in a password (e.g., the name of your business). Another allows password blocking based on similarity, according to a configurable distance value that defines how closely a new password can match a previous one.
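
The similarity check works along the lines of an edit-distance comparison. A minimal sketch of the idea (our own illustration, not Enzoic’s implementation):

```python
def levenshtein(a: str, b: str) -> int:
    """Count the single-character edits needed to turn a into b."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

def too_similar(old: str, new: str, min_distance: int = 3) -> bool:
    return levenshtein(old.lower(), new.lower()) < min_distance

print(too_similar("Winter2019!", "Winter2020!"))  # True: only two edits apart
```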


After a quick mandatory server restart, we proceeded to test the usability of the application.

A cursory overview of the privacy implications of the solution

“Trust but verify,” says an old proverb, so we decided to inject a CA certificate into our AD server so we could sniff the communications between it and Enzoic’s servers and see what actually gets shared. We entered a very common password (administrator) and tried to verify it.


That password was rejected, but let’s see what was shared on the wire.


In the request you can see that the application takes the input string “administrator”, hashes it with MD5, SHA1, and SHA256, and sends the first 40 bits of each hash to Enzoic’s cloud, which responds with the possible candidates to check. This is similar to the k-anonymity scheme used by HaveIBeenPwned’s API service, which shares only the first 20 bits of the SHA1 hash output.
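
Reproducing the partial-hash computation takes only a few lines; 40 bits correspond to the first 10 hex characters of each digest (this sketch covers only the hashing, not Enzoic’s API call):

```python
import hashlib

def partial_hashes(password: str, bits: int = 40) -> dict:
    """Return the leading bits of the MD5, SHA1, and SHA256 digests."""
    hex_chars = bits // 4                 # 4 bits per hex character
    data = password.encode("utf-8")
    return {
        "md5":    hashlib.md5(data).hexdigest()[:hex_chars],
        "sha1":   hashlib.sha1(data).hexdigest()[:hex_chars],
        "sha256": hashlib.sha256(data).hexdigest()[:hex_chars],
    }

print(partial_hashes("administrator"))
```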


We did not actually try to reverse engineer the application, since this was a cursory review just to make sure that the actual passwords are not being sent to Enzoic’s cloud.

We also left our domain controller (DC) connected to the internet for 48 hours to see what kind of data (if any) is sent to Enzoic. We found that the app shares some telemetry with the Enzoic cloud, namely the number of breached-password matches in the organization and the number of users, probably for licensing purposes.


Usefulness and coverage

Next, we wanted to see how Enzoic for AD handles leaked passwords, so we covered a few scenarios that might be interesting to our readers:

  • Verifying if the application correctly detects passwords from common wordlists used by attackers.
  • Verifying if the application correctly detects passwords from common large-scale breaches (LinkedIn, RockYou).
  • Verifying if the application correctly detects passwords from very recent leaks (Zynga).

We took a random sample from SecLists as well as from the LinkedIn and RockYou leaks; even fully random passwords that were part of a breached set (e.g., *23P%GWtUPST2jQ&auUB7j542) were correctly identified. We also ran a random sample of passwords from other leaks (e.g., the Hak5 leak), and they were correctly detected as well.

One thing that interested us was whether Enzoic for AD could detect passwords from recent leaks. (Un)fortunately, a week before this test the full user database from game company Zynga was leaked on the internet, so we decided to test Enzoic for AD with the newly available leaked passwords.

We sampled passwords randomly but also tried to find unique passwords that were contained in the Zynga breach but not in the sets we used previously. We found a couple of such passwords, and they were successfully detected as breached passwords by Enzoic for AD. Good job!

Looking to the future

We couldn’t test the breached password notification option, since that would require having users whose credentials are part of a breach as it occurs, which cannot be easily simulated.

There are a few things that could be changed, but none of them are deal-breakers from our perspective.

The first is the sharing of three types of hashes, with 40 bits of data per hash. One could argue this is excessive, since the reference implementation of k-anonymity shares only 20 bits of a single hash.

Enzoic tells us that they chose that partial hash length as a good balance between anonymity and performance. Keeping the number of candidate hashes returned reasonable – and thus reducing the latency of the call – is an important concern, since many of their customers are very sensitive to latency.

They view the additional data sent as presenting minimal risk (keep in mind no usernames are shared, and none of these requests are logged on their end). That said, making the partial hash match length configurable is on their roadmap – with the trade-off that some users might see longer latencies when attempting a password change if this length is significantly reduced.
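
Some back-of-the-envelope arithmetic shows why the prefix length matters for latency. The corpus size below is our assumption (Enzoic does not publish theirs), but the effect of the prefix length is clear:

    # Expected number of candidate hashes returned per query, assuming a
    # breach corpus of roughly one billion hashes (an illustrative figure).
    CORPUS_SIZE = 1_000_000_000

    for prefix_bits in (20, 40):
        candidates = CORPUS_SIZE / 2 ** prefix_bits
        print(f"{prefix_bits}-bit prefix: ~{candidates:,.4f} candidates per query")

    # 20-bit prefix: ~953.6743 candidates per query (more anonymity, more data)
    # 40-bit prefix: ~0.0009 candidates per query (less anonymity, lower latency)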

Secondly, when users are notified that a breached password was found, the notification could also state in which breached set the password was found. This would be of interest to both users and security personnel in an organization. We are aware that this information cannot be displayed through the standard Windows interface, but it could be sent via email or stored in the event logs.

Final verdict

Enzoic for Active Directory is a first-rate solution for ensuring that your users don’t select passwords that were part of a breach. Its coverage of leaked lists is very good: every list we could legally obtain was correctly flagged. Installation is simple, and configuration and maintenance are no hassle.

One excellent aspect of this tool is that even someone who is only marginally acquainted with Active Directory and has zero experience with Enzoic’s solution can install it and have it work out of the box. Definitely a 10/10 for user experience.

January 2020 Patch Tuesday forecast: Let’s start the new decade right

The holidays are over, and another Patch Tuesday is rapidly approaching. My New Year’s resolution was to stop procrastinating when it comes to getting organized. I have several locations in my house where I store things and every time I open a drawer or door, I think “I really could make better use of this space if I just took the time to get it organized.”

Over the holidays, I finally took the time to get started. I cleared out stuff I no longer needed, cleaned out the area, arranged what was left, and was amazed at the results. One less thing I had to worry about, and I felt better about myself too. Maybe there is a lesson here to be carried over to our security operations?

We all have those systems that always have issues during updates. We know they are there and dread working on them, just because they slow down our patch cycle. In the end, they are either the last to get patched or they don’t get patched at all and we just wait another month worrying about them being in a possible vulnerable state. Maybe we need a resolution to tackle these systems head-on so we don’t need to worry about them anymore.

Take the time to resolve the issues, or if they are old, consider a complete replacement of the hardware and software. We have enough stress in our lives so don’t prolong it worrying about these systems month after month. Take the time to fix the issues and you will be more efficient overall. Join me in this resolution and we can start the new decade right.

The January 2020 Patch Tuesday will provide us with the last free update of Windows 7 and Server 2008/2008 R2. We’ve talked about it for the last several months and it is finally here. Microsoft has released additional guidance for those planning to subscribe to Extended Security Updates; make sure your systems are prepared.

It’s challenging to forecast what we will see from Microsoft this month. I was expecting to finish out last year with a bang, but we really ended on a whimper. The OS updates contained minimal CVE fixes with only 16 for Windows 10 and the low teens across the remaining legacy systems.

Other than these OS updates, we had the usual Office releases but no Exchange, SharePoint, .NET, or other updates. It was one of the lightest Patch Tuesday releases in a long time. Microsoft may have ‘saved up’ other updates for January Patch Tuesday, but I suspect not.

January is typically a light month for releases, and I expect that trend to continue.

January 2020 Patch Tuesday Forecast

  • We are overdue, so expect a .NET update from Microsoft. Windows 7 and Server 2008/2008 R2 may get some special attention this month, since this is their final public security release.
  • Mozilla released a major update on Tuesday, so if we get anything next week it will only be a minor update.
  • Google released their last major updates back on December 10 and a minor update this week, so I don’t expect anything here.
  • We saw security updates for Acrobat, Reader, and Flash (after several months with none) last month. Be on the lookout for a possible Flash update, but no pre-announcements have been made for any of these products so far.
  • Apple released major security updates on December Patch Tuesday, so I don’t expect any this month.

With a light January 2020 Patch Tuesday forecast, give some thought to starting the decade right!

Why outsourcing your DPO role is an effective insurance policy

Organizations are starting to take a much more considered approach to data protection, as high-profile regulatory action over data mishandling has raised both the stakes and the interest in data privacy operations.

Since the EU General Data Protection Regulation (GDPR) came into force in May 2018, data protection has risen to the top of the news agenda. Simultaneously, the GDPR has raised the profile and highlighted the importance of the Data Protection Officer (DPO) internationally as, under this legislation, certain entities are under legal obligation to appoint a DPO.

Noncompliance with the GDPR carries hefty fines and is generally associated with a wave of negativity when public trust is compromised. Moreover, there is a growing global awareness that data protection matters, and people expect organizations to handle their personal data with care. It is for this reason that legislators around the world are actively seeking new ways to protect the security and privacy of personal data.

Organizations should strive for ethical handling of personal data

The global movement for the ethical handling of personal information is multidimensional. Investor activism and customer scrutiny – over the way data is collected, processed and used – are putting pressure on organizations to act ethically and on legislators to enact laws that effectively deal with rapid technological change. Issues of corporate governance and accountability are at the center of this movement.

Every day at HewardMills we speak with more and more organizations recognizing the value of in-depth knowledge and the need for total autonomy in this area. Businesses understand that their reputation is closely tied to the privacy and data protection processes they have in place. As a result, clearer lines are being drawn around departmental responsibilities to better operationalize data protection regulations.

As with other data specialist skill sets, demand for qualified and experienced DPOs is rising. This is a result of the role being legally required for certain entities and of organizations realizing the value of fostering a data protection culture.

The DPO role is a cornerstone

The DPO can be internal or external, but they must be allowed to function independently. They are the link between the organization, the supervisory authorities and the data subjects. Thus, it is important that the DPO strike a careful balance to meet their own obligations toward all parties involved.

DPOs play a pivotal role in an organization’s data management health and are required to report directly to the highest level of management. Some tasks that fall under the DPO role include advising on issues around data protection impact assessments (DPIAs), training, overseeing the accuracy of data mapping and responding to data subject access requests (DSARs). These things are all mandated under the GDPR.

Even the best intentions fall flat without the right execution

Organizations may have good intentions to achieve best practices and meet their legal obligations, but the data protection process does not stop there. Practical knowledge on how to operationalize legal obligations is the key to success. For example, if an organization is not adequately prepared to respond to DSARs, it may miss the one-month GDPR deadline or respond in an incomplete manner.

Since the GDPR came into effect, supervisory authorities have actively sought greater transparency. This means that there is a particular focus on accurate privacy notices, data protection impact assessments and legitimate interest assessments. Given the global trend toward accountability, it is safe to argue that investing in data protection and privacy will win the trust of individuals, be they customers or employees. Organizations that foster a culture of integrity have a competitive advantage in a world where privacy and data protection matter. For those that do not, the financial, legal and public-opinion risks can be significant.

Getting ahead of the risks

Being responsive to GDPR data subject requests helps to build trust with individuals and demonstrates a serious dedication to data protection obligations. The DPO is the contact point for data subjects who are exercising their rights. As such, DPOs must be easily accessible, be it by telephone, mail or other avenues. Lack of resources is not an excuse for neglecting legal obligations and denying data subjects their rights. A consultant or outsourced DPO role can provide a cost-effective way to fill this gap.

DPOs help organizations prioritize risks. While they must address the highest-risk activities first, they must also explain how DPIA conclusions are reached, so that controllers know which processing activities should be prioritized and understand the perceived risks attached to each. For instance, the DPO could flag the need for data protection audits or enhanced security measures, or point out gaps in staff training and resource allocation.

The insurance policy of an autonomous partner

Job security has been built into the DPO appointment to preserve the autonomy the role requires. The DPO can be disciplined or even terminated for legitimate reasons, but they cannot be dismissed or penalized by the controller or processor for carrying out their duties. In other words, the organization cannot direct the DPO or instruct them to reach a desired conclusion. The DPO must also be given the resources required to maintain this independence and carry out their duties – typically budget, equipment and staff.

One of the benefits of using an external DPO is that conflicts of interest are less likely. Organizations should strive to give the DPO the necessary autonomy to successfully act as a bridge between data subjects, the organization and the supervisory authorities. The DPO should not be assigned tasks that would put them in a position of “marking their own homework”. Used correctly, the DPO is a partner that helps navigate the organization toward an ethical handling of personal data.

Faced with meeting strict obligations under GDPR, organizations controlling and processing personal data must empower and embrace their DPOs and work closely with them. Organizations should view DPOs as a type of insurance policy for data risk and not think of them as the regulators’ undercover watchmen.