Five critical cloud security challenges and how to overcome them

Today’s organizations desire the accessibility and flexibility of the cloud, yet these benefits ultimately mean little if you’re not operating securely. One misconfigured server and your company may be looking at financial or reputational damage that takes years to overcome.

Fortunately, there’s no reason why cloud computing can’t be done securely. You need to recognize the most critical cloud security challenges and develop a strategy for minimizing these risks. By doing so, you can get ahead of problems before they start, and help ensure that your security posture is strong enough to keep your core assets safe in any environment.

With that in mind, let’s dive into the five most pressing cloud security challenges faced by modern organizations.

1. The perils of cloud migration

According to Gartner, the shift to cloud computing will generate roughly $1.3 trillion in IT spending by 2022. The vast majority of enterprise workloads are now run on public, private or hybrid cloud environments.

Yet if organizations heedlessly race to migrate without making security a primary consideration, critical assets can be left unprotected and exposed to potential compromise. To ensure that migration does not create unnecessary risks, it’s important to:

  • Migrate in stages, beginning with non-critical or redundant data. Mistakes are more likely to occur early in the process, so start with data whose corruption or loss would not seriously damage the enterprise.
  • Fully understand your cloud provider’s security practices. Go beyond “trust by reputation” and really dig into how your data is stored and protected.
  • Maintain operational continuity and data integrity. Once migration occurs, it’s important to ensure that controls are still functioning and there is no disruption to business operations.
  • Manage risk associated with the lack of visibility and control during migration. One effective way to manage risk during transition is to use breach and attack simulation software. These automated solutions launch continuous, simulated attacks to view your environment through the eyes of an adversary by identifying hidden vulnerabilities, misconfigurations and user activity that can be leveraged for malicious gain. This continuous monitoring provides a significant advantage during migration – a time when IT staff are often stretched thin, learning new concepts and operating with less visibility into key assets.

2. The need to master identity and access management (IAM)

Effectively managing and defining the roles, privileges and responsibilities of various network users is a critical objective for maintaining robust security. This means giving the right users the right access to the right assets in the appropriate context.

As workers come and go and roles change, this mandate can be quite a challenge, especially in the context of the cloud, where data can be accessed from anywhere. Fortunately, technology has improved our ability to track activities, adjust roles and enforce policies in a way that minimizes risk.

Today’s organizations have no shortage of end-to-end solutions for identity governance and management. Yet it’s important to understand that these tools alone are not the answer. No governance or management product can provide perfect protection as organizations are eternally at the mercy of human error. To help support smart identity and access management, it’s critical to have a layered and active approach to managing and mitigating security vulnerabilities that will inevitably arise.

Taking steps like practicing the principle of least privilege by permitting only the minimal amount of access necessary to perform tasks will greatly enhance your security posture.
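
To make "least privilege" concrete, here is a minimal sketch in Python (the role and permission names are hypothetical; in practice this would usually be expressed through your cloud provider's IAM policies rather than application code): each role is granted only the permissions its tasks require, and anything not explicitly granted is denied.

    # Minimal least-privilege sketch: roles map to the smallest set of
    # permissions needed for the job; anything not granted is denied.
    # Role and permission names are hypothetical examples.
    ROLE_PERMISSIONS = {
        "billing-analyst": {"invoices:read"},
        "storage-admin":   {"buckets:read", "buckets:write"},
        "auditor":         {"logs:read"},
    }

    def is_allowed(role: str, permission: str) -> bool:
        """Deny by default; allow only what the role explicitly grants."""
        return permission in ROLE_PERMISSIONS.get(role, set())

    if __name__ == "__main__":
        print(is_allowed("billing-analyst", "invoices:read"))   # True
        print(is_allowed("billing-analyst", "buckets:write"))   # False: never granted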

3. The risks posed by vendor relationships

The explosive growth of cloud computing has highlighted new and deeper relationships between businesses and vendors, as organizations seek to maximize efficiencies through outsourcing and vendors assume more important roles in business operations. Effectively managing vendor relations within the context of the cloud is a core challenge for businesses moving forward.

Why? Because integrating third-party vendors often substantially raises cybersecurity risk. A 2018 Ponemon Institute study noted that nearly 60% of companies surveyed had experienced a breach caused by a third party. APT groups have adopted a strategy of targeting large enterprises via such smaller partners, where security is often weaker. Adversaries know you’re only as strong as your weakest link and take the path of least resistance to compromise assets. It is therefore incumbent upon today’s organizations to vigorously and securely manage third-party vendor relations in the cloud. This means developing appropriate guidance for SaaS operations (including sourcing and procurement solutions) and undertaking periodic vendor security evaluations.

4. The problem of insecure APIs

APIs are the key to successful cloud integration and interoperability. Yet insecure APIs are also one of the most significant threats to cloud security. Adversaries can exploit an open line of communication and steal valuable private data by compromising APIs. How often does this really occur? Consider this: By 2022, Gartner predicts insecure APIs will be the vector most commonly used to target enterprise application data.

With APIs growing ever more critical, attackers will continue to use tactics such as exploiting inadequate authentication or planting vulnerabilities within open source code, creating the possibility of devastating supply chain attacks. To minimize the odds of this occurring, developers should design APIs with proper authentication and access control in mind and seek to maintain as much visibility as possible into the enterprise security environment. This will allow for the quick identification and remediation of such API risks.
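
As an illustration of designing "with proper authentication and access control in mind," the sketch below uses Flask purely as an example framework (the endpoint, header and scope names are hypothetical): every request must present a known API key, and that key must carry the scope required for the resource it requests.

    # Hypothetical sketch of an API that authenticates every request and
    # enforces per-key scopes before returning data (Flask used as an example).
    from flask import Flask, request, abort, jsonify

    app = Flask(__name__)

    # In practice keys and scopes would live in a secrets store, not in code.
    API_KEYS = {"demo-key-123": {"orders:read"}}

    def require_scope(scope: str):
        key = request.headers.get("X-API-Key", "")
        granted = API_KEYS.get(key)
        if granted is None:
            abort(401)          # unknown or missing key: not authenticated
        if scope not in granted:
            abort(403)          # authenticated but not authorized for this scope

    @app.route("/orders")
    def list_orders():
        require_scope("orders:read")
        return jsonify(orders=[])   # placeholder payload

    if __name__ == "__main__":
        app.run()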

5. Dealing with limited user visibility

We’ve mentioned visibility on multiple occasions in this article – and for good reason. It is one of the keys to operating securely in the cloud. The ability to tell friend from foe (or authorized user from unauthorized user) is a prerequisite for protecting the cloud. Unfortunately, that’s a challenging task as cloud environments grow larger, busier and more complex.

Controlling shadow IT and maintaining better user visibility via behavior analytics and other tools should be a top priority for organizations. Given the lack of visibility across many contexts within cloud environments, it’s a smart play to develop a security posture that is dedicated to continuous improvement and supported by continuous testing and monitoring.

Critical cloud security challenges: The takeaway

Cloud security is achievable as long as you understand, anticipate and address the most significant challenges posed by migration and operation. By following the ideas outlined above, your organization will be in a much stronger position to prevent and defeat even the most determined adversaries.

Qualys Multi-Vector EDR: Protection across the entire threat lifecycle

Traditional endpoint detection and response (EDR) solutions focus only on endpoint activity to detect attacks. As a result, they lack the context to analyze attacks accurately.

In this interview, Sumedh Thakar, President and Chief Product Officer, illustrates how Qualys fills the gaps by introducing a new multi-vector approach and the unifying power of its Cloud Platform to EDR, providing essential context and visibility to the entire attack chain.

How does Qualys Multi-Vector EDR differ from traditional EDR solutions?

Traditional EDR solutions focus only on endpoint activity, which lacks the context necessary to accurately analyze attacks and leads to a high rate of false positives. This can put an unnecessary burden on incident response teams and requires the use of multiple point solutions to make sense of it all.

Qualys Multi-Vector EDR leverages the strength of EDR while also extending the visibility and capabilities beyond the endpoint to provide a more comprehensive approach to protection. Multi-Vector EDR integrates with the Qualys Cloud Platform to deliver vital context and visibility into the entire attack chain while dramatically reducing the number of false positives and negatives as compared with traditional EDR.

This integration unifies multiple context vectors like asset discovery, rich normalized software inventory, end-of-life visibility, vulnerabilities and exploits, misconfigurations, in-depth endpoint telemetry and network reachability, all correlated for assessment, detection and response in a single app. It provides threat hunters and incident response teams with crucial, real-time insight into what is happening on the endpoint.

Vectors and attack surfaces have multiplied. How do we protect these systems?

Many attacks today are multi-faceted. The suspicious or malicious activity detected at the endpoint is often only one small part of a larger, more complex attack. Companies need visibility across the environment to fully understand the attack and its impact on the endpoint—as well as the potential consequences elsewhere on their network. This is where Qualys’ ability to gather and assess the contextual data on any asset via Qualys Global IT Asset Inventory becomes so important.

The goal of EDR is detection and response, but you need a holistic view to do it effectively. When a threat or suspicious activity is detected, you need to act quickly to understand what the information or indicator means, and how you can pivot to take action to prevent any further compromise.

How can security teams take advantage of Qualys Multi-Vector EDR?

Attack prevention and detection are two sides of the same coin for security teams. With current endpoint tools focusing solely on endpoint telemetry, security teams end up bringing in multiple point solutions and threat intelligence feeds to figure out what is happening in their environment.

On top of that, they need to invest their budget and time in integrating these solutions and correlating data for actionable insights. With Qualys EDR, security teams can continuously collate asset telemetry such as processes, files and hashes to detect malicious activity and correlate it with natively integrated threat intelligence for prioritized, score-based response actions.

Instead of reactively taking care of malicious events one endpoint at a time, security teams can easily pivot to inspect other endpoints across the hybrid infrastructure for exploitable vulnerabilities, MITRE-based misconfigurations, end-of-life or unapproved software and systems that lack critical patches.

Additionally, through native workflows that provide exact recommendations, security and IT teams can patch or remediate the endpoints for the security findings. This is an improvement over previous methods which require handshaking of data from one tool to another via complex integrations and manual workflows.

For example, Qualys EDR can help security teams not only detect MITRE-based attacks and malicious connections due to RDP (remote desktop) exploitation but can also provide visibility across the infrastructure. This highlights endpoints that can connect to the exploited endpoint and have RDP vulnerabilities or a MITRE-mapped configuration failure such as LSASS. Multi-Vector EDR then lets the user patch vulnerabilities and automatically remediate misconfigurations.

Thus, Qualys’ EDR solution is designed to equip security teams with advanced detections based on multiple vectors and rapid response and prevention capabilities, minimizing human intervention and simplifying the entire security investigation and analysis process for organizations of all sizes. Security practitioners can sign up for a free trial here.

What response strategies does Qualys Multi-Vector EDR use?

Qualys EDR, with its multi-layered, highly scalable cloud platform, retains telemetry data for active and historical views and natively correlates it with multiple external threat intelligence feeds. This eliminates the need to rely on a single malware database and provides a prioritized, risk-based threat view. It helps security teams hunt for threats proactively and reactively with unified context across all security vectors, reducing alert fatigue and letting them concentrate on what is critical.

Qualys EDR provides comprehensive response capabilities that go beyond traditional EDR options, like killing processes and network connections, quarantining files, and much more. In addition, it uniquely orchestrates responses such as preventing future attacks by automatically correlating vulnerabilities that are exploitable by malware, patching endpoints and software directly from the cloud, and downloading patches from the vendor’s website without consuming VPN bandwidth.

How to drive business value through balanced development automation

Aligning security and delivery at a strategic level is one of the most complex challenges for executives. It starts with an understanding that risk-based thinking should not be perceived as overhead or a tax, but as a value-added component of creating a high-quality product or service.

One solution is balanced development automation, which is about aligning automated DevOps (development and IT operations) pipelines with business risk and compliance. To attain this, alignment must be achieved between risk and business teams at two different levels:

1. Strategic level (CEO, COO, CFO, CRO, CIO, DPO)
2. Operational level (DevOps engineers, risk engineers)

The strategic level is more focused on delivery of business value, customer needs, risk, regulations, compliance, and so on. The operational level is focused on aligning to governance protocols like risk thresholds, delivery timelines, and automation during the build phases of business value creation.

Achieving alignment at the strategic level

At the executive level, both the business and risk sides need to concentrate on quality first – only then does it make sense to go about balancing risk and speed. Otherwise, risk and speed wind up as the only concerns, and poor quality is likely to show up in products and services at the end of the line.

The end of the line in any process is where the customer who receives the value of a product or service actually experiences the touchpoint with your portfolio. It is there that perceived value needs appropriate operational indicators. Some refer to these as customer-driven metrics; they are the ones that can measure Operational Key Results in alignment with operational risk metrics.

Once executive alignment is achieved on quality, the next step is to measure against key strategic customer metrics like attrition and satisfaction. This gives an indication of the value customers receive from a product or service. Organizations should think about appropriate high level metrics and measurements at the end of the development lifecycle, risk thresholds, and how these map to their customer. I consider these as the “parent” metrics.

After that, consider “child” metrics in the plan, delivery, and operation of DevOps – from here, governance and speed will come into play. A key problem today is the self-attestation audit activity at the end of the process, which is hard to validate. It just doesn’t integrate well with a DevOps process because the measurement is reactive and comes too far down the pipeline. Worse yet, going back and fixing risk issues later on gets perceived as getting in the way. What needs to happen is a shift to the left of the development process, where risk is measured early and often.

As organizations evolve into a more digital set of processes, this shift left is critical to understanding those key measurements from the beginning of the lifecycle. Otherwise, junk at the beginning will just automate junk faster all the way down the line. Eventually, there will be a higher price to pay for poor quality.

Achieving alignment at the operational level

Operationally, challenges stem from misalignment in understanding who the end customer really is. Companies often design products and services for themselves and not for the end customer. Once an organization focuses on the end user and how they are going to use that product and service, the shift in thinking occurs. Now it’s about looking at what activities need to be done to provide value to that end customer.

Thinking this way will surface features, functions, and processes that have never been done before. In the words of Stephen Covey, “Keep the main thing the main thing”. What is the main thing? The customer. What features and functionality do you need for each customer from a value perspective? And you need to add governance to that.

Effective governance ensures delivery of a quality product or service that meets your objectives without monetary or punitive pain. The end customer benefits from that product or service having effective and efficient governance.

That said, heavy governance is also waste. There has to be a tension and a flow or a balance between Hierarchical Governance and Self Governance where the role of every person in the organization is clearly aligned in their understanding of value contributed to the end customer. With that, employees and contractors alike feel empowered and purposeful in their work and contributions.

Once the customer value proposition is clearly identified, organizations can identify how day to day operations contribute value to that end customer in an efficient way. This is where lean thinking helps, looking for ways to reduce waste in the value creation process. If something is not a part of the value proposition, is it necessary? If something is missing that would add significant value, how can we add it? This will lead to an alignment that drives value creation.

Conclusion

Delivering on DevOps speed is no longer good enough. Organizations also need to balance the need for speed against regulatory, compliance, and security concerns—and we need to do this fast and first. If a firm can’t get there fast by restructuring its operating model and associated skills, it is best to have Scrum Masters trained in Lean and Six Sigma, TOGAF, and assorted cybersecurity GRC frameworks to help it through the iterations. I call that the big “Iterative, Fast and First” (IFF) principle of GRC by Design.

Are the activities an organization is conducting offering something of value to the business? Answering this question has implications for both strategic and operational teams. The business value context sets up alignment with the end customer and drives value at each stage through balanced development automation.

How do I select a password management solution for my business?

91 percent of people know that using the same password on multiple accounts is a security risk, yet 66 percent continue to use the same password anyway. IT security practitioners are aware of good habits when it comes to strong authentication and password management, yet often fail to implement them due to poor usability or inconvenience.

To select a suitable password management solution for your business, you need to think about a variety of factors. We’ve talked to several cybersecurity professionals to get their insight on the topic.

Simran Anand, Head of B2B Growth, Dashlane

An organization’s security chain is only as strong as its weakest link – so selecting a password manager should be a top priority for IT leaders. While most look to the obvious – security (high-grade encryption, 2FA, etc.), support, and price – it’s critical to also consider the end-user experience. Why? Because user adoption remains by far IT’s biggest challenge. Only 17 percent of IT leaders incorporate the end-UX when evaluating password management tools.

It’s not surprising, then, that those who have deployed a password manager in their company report only 23 percent adoption by employees. The end-UX has to be a priority for IT leaders who aim to guarantee secure processes for their companies.

Password management is too important a link in the security chain to be compromised by a lack of adoption (and simply telling employees to follow good password practices isn’t enough to ensure it actually happens). For organizations to leverage the benefits of next-generation password security, they need to ensure their password management solution is easy to use – and subsequently adopted by all employees.

Gerald Beuchelt, CISO, LogMeIn

As the world continues to navigate a long-term future of remote work, cybercriminals will continue to target users with poor security behaviors, given the increased time spent online due to COVID-19. Although organizations and people understand that passwords play a huge role in one’s overall security, many continue to neglect password best practices. For this reason, businesses should implement a password management solution.

It is essential to look for a password management solution that:

  • Monitors poor password hygiene and provides visibility to the improvements that could be made to encourage better password management.
  • Standardizes and enforces policies across the organization to support proper password protection.
  • Provides a secure password management portal for employees to access all account passwords conveniently.
  • Reports IT insights to provide a detailed security report of potential threats.
  • Equips IT to audit the access controls users have with the ability to change permissions and encourage the use of new passwords.
  • Integrates with previous and existing infrastructure to automate and accelerate workflows.
  • Oversees when users share accounts to maintain a sense of security and accountability.

Using a password management solution that is effective is crucial to protecting business information. Finding the right solution will not only help to improve employee password behaviors but also increase your organization’s overall online security.

Michael Crandell, CEO, Bitwarden

Employees, like everyone else, face the daily challenge of remembering passwords to work securely online. A password manager simplifies generating, storing, and sharing unique, complex passwords – a must-have for security.

There are a number of reputable password managers out there. Businesses should prioritize those that work cross-platform and offer affordable plans. They should also consider whether the solution can be deployed in the cloud or on-premises; some organizations prefer a self-hosting option for security and internal compliance reasons.

Password managers need to be easy-to-use for every level of user – from beginner to advanced. Any employee should be able to get up and running in minutes on the devices they use.

As of late, many businesses have shifted to a remote work model, which has highlighted the importance of online collaboration and the need to share work resources online. With this in mind, businesses should prioritize options that provide a secure way to share passwords across teams. Doing so keeps everyone’s access secure even when they’re spread out across many locations.

Finally, look for password managers built around an open source approach. Being open source means the source code can be vetted by experienced developers and security researchers who can identify potential security issues, and even contribute to resolving them.

Matt Davey, COO, 1Password

65% of people reuse passwords for some or all of their accounts. Often, this is because they don’t have the right tools to easily create and use strong passwords, which is why you need a password manager.

Opt for a password manager that gives you oversight over the things that matter most to your business: from who’s signed in from where, who last accessed certain items, or which email addresses on your domain have been included in a breach.

To keep the admin burden low, look for a password manager that allows you to manage access by groups, delegate admin powers, and manage users at scale. Depending on the structure of your business, it can be useful to grant access to information by project, location, or team.

You’ll also want to think about how a password manager will fit with your existing IAM/security stack. Some password managers integrate with identity providers, streamlining provisioning and administration.

Above all, if you want your employees to adopt your password manager of choice, make sure it’s easy to use: a password manager will only keep you secure if your employees actually use it.

Facing gender bias in facial recognition technology

In the 1960s, Woodrow W. Bledsoe created a secret program that manually identified points on a person’s face and compared the distances between these coordinates with other images.

Facial recognition technology has come a long way since then. The field has evolved quickly and software can now automatically process staggering amounts of facial data in real time, dramatically improving the results (and reliability) of matching across a variety of use cases.

Despite all of the advancements we’ve seen, many organizations still rely on the same algorithm used by Bledsoe’s database – known as “k-nearest neighbors” or k-NN. Since each face has multiple coordinates, a comparison of these distances over millions of facial images requires significant data processing. The k-NN algorithm simplifies this process and makes matching these points easier by considerably reducing the data set. But that’s only part of the equation. Facial recognition also involves finding the location of a feature on a face before evaluating it. This requires a different algorithm such as HOG (we’ll get to it later).

The problem

The algorithms used for facial recognition today rely heavily on machine learning (ML) models, which require significant training. Unfortunately, the training process can introduce biases into these technologies. If the training data doesn’t contain a representative sample of the population, the model will fail to correctly identify the under-represented groups.

While this may not be a significant problem when matching faces for social media platforms, it can be far more damaging when the facial recognition software from Amazon, Google, Clearview AI and others is used by government agencies and law enforcement.

Previous studies on this topic found that facial recognition software suffers from racial biases, but overall, the research on bias has been thin. The consequences of such biases can be dire for both people and companies. Further complicating matters is the fact that even small changes to one’s face, hair or makeup can impact a model’s ability to accurately match faces. If not accounted for, this can create distinct challenges when trying to leverage facial recognition technology to identify women, who generally tend to use beauty and self-care products more than men.

Understanding sexism in facial recognition software

Just how bad are gender-based misidentifications? To find out, our team at WatchGuard conducted additional facial recognition research looking solely at gender bias. The results were eye-opening: the solutions we evaluated misidentified women up to 18% more often than men.

You can imagine the terrible consequences this type of bias could generate. For example, a smartphone relying on face recognition could block access, a police officer using facial recognition software could mistakenly identify an innocent bystander as a criminal, or a government agency might call in the wrong person for questioning based on a false match. The list goes on. The reality is that the culprit behind these issues is bias within model training that creates biases in the results.

Let’s explore how we uncovered these results.

Our team performed two separate tests – the first using Amazon Rekognition and the second using Dlib. Unfortunately, with Amazon Rekognition we were unable to unpack just how its ML modeling and algorithm work due to transparency issues (although we assume it’s similar to Dlib). Dlib is a different story, and uses local resources to identify the faces provided to it. It comes pretrained to identify the location of a face, offering two face-location finders: HOG, a slower CPU-based algorithm, and CNN, a faster algorithm that makes use of the specialized processors found in graphics cards.
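
For readers who want to see the two Dlib detectors side by side, here is a minimal sketch (the image path is a placeholder, and the CNN detector additionally requires downloading Dlib's pretrained mmod_human_face_detector.dat model file):

    # Dlib's two face-location finders: the HOG-based detector (CPU) and the
    # CNN-based detector (requires the pretrained mmod model file).
    import dlib

    image = dlib.load_rgb_image("face.jpg")              # placeholder image path

    hog_detector = dlib.get_frontal_face_detector()      # HOG-based
    hog_faces = hog_detector(image, 1)                    # 1 = upsample the image once

    cnn_detector = dlib.cnn_face_detection_model_v1("mmod_human_face_detector.dat")
    cnn_faces = cnn_detector(image, 1)

    print(f"HOG found {len(hog_faces)} face(s), CNN found {len(cnn_faces)} face(s)")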

Both services provide match results with additional information. Besides the match found, a similarity score is given that shows how closely the face matches the known face. If the face isn’t on file, a similarity threshold set too low may produce an incorrect match. At the same time, a face can have a low similarity score and still be a genuine match when the image doesn’t show the face clearly.

For the data set, we used a database of faces called Labeled Faces in the Wild, and we only investigated faces that matched another face in the database. This allowed us to test matching faces and similarity scores at the same time.

Amazon Rekognition correctly identified all the pictures we provided. However, when we looked more closely at the data, our team saw a wider distribution of similarity scores for female faces than for male faces. We saw more female faces with higher similarities than men and more female faces with lower similarities than men (this matches a recent study performed around the same time).

What does this mean? Essentially, a female face not found in the database is more likely to produce a false match. Also, because of the lower similarity in female faces, our team was confident that, given enough images, we’d see more errors in identifying female faces than male ones.

Amazon Rekognition gave accurate results but lacked consistency and precision between male and female faces. Male faces were on average 99.06% similar, while female faces were on average 98.43% similar. This might not seem like a big variance, but the gap widened when we looked at the outliers – a standard deviation of 1.64 for males versus 2.83 for females. More female faces fall farther from the average than male faces, meaning a female false match is far more likely than the 0.6% difference in averages suggests.

Dlib didn’t perform as well. On average, Dlib misidentified female faces more often than male faces, at a rate 5% higher. When comparing faces using the slower HOG, the difference grew to 18%. Interestingly, our team found that on average female faces have higher similarity scores than male faces when using Dlib but, as with Amazon Rekognition, also show a wider spread of similarity scores, which explains the lower accuracy we found.

Tackling facial recognition bias

Unfortunately, facial recognition software providers struggle to be transparent when it comes to the efficacy of their solutions. For example, our team didn’t find any place in Amazon’s documentation in which users could review the processing results before the software made a positive or negative match.

Unfortunately, this assumption of accuracy (and lack of context from providers) will likely lead to more and more instances of unwarranted arrests, like this one. It’s highly unlikely that facial recognition models will reach 100% accuracy anytime soon, but industry participants must focus on improving their effectiveness nonetheless. Knowing that these programs contain biases today, law enforcement and other organizations should use them as one of many tools – not as a definitive resource.

But there is hope. If the industry can honestly acknowledge and address the biases in facial recognition software, we can work together to improve model training and outcomes, which can help reduce misidentifications not only based on gender, but race and other variables, too.

A 2020 approach to security: People matter

The information security industry frequently utilizes the phrase “people, processes and technology” (PPT) to describe a holistic model of securing the business.

But though this phrase is repeated ad nauseum, we seem to have forgotten one of those three primary pillars: people.

In an effort to secure things technically, we prioritize the protection of our processes and technology, while ignoring a critical element to both the success and security of organizations. While it is common sense to prioritize humans – our first line of defense against cyberattacks – too often we only focus on processes and technology, leaving a significant part of our environment dangerously exposed.

Forgetting the people of the PPT approach is like operating a car without airbags. Perhaps you cannot physically see the hazardous gap, but the drive will be incredibly unsafe.

How do we mitigate this gap? By recognizing that people matter. In the information security domain, we place an extensive premium on the technical, which leads us to neglect humanism, soft skills and the human capital of the business.

Avoid disempowering your staff

Security professionals often describe humans within the cybersecurity space as the weakest link of the system. Security staff often use this phrase to describe everyone but themselves, which does little to enable trust between internal teams or to encourage collaboration among cross-functional groups. In addition, it cultivates an “us versus them” mentality, which is damaging to professional relationships and the success of our information security programs.

Even if people are the element most susceptible to phishing attempts, or the link most likely to be negligent in security practices, it becomes incredibly difficult to foster a culture of security awareness if we demoralize or denigrate the individuals we need to help drive our security priorities.

How does a security team avoid disempowering fellow employees? The solution is quite simple: be aware of the words and phrases you use to describe the people of the PPT model. Develop trust by utilizing positive language during communication and approaching all staff with respect when informing them that security is the responsibility of all employees. You will more effectively keep the attention of staff when you demonstrate that you respect them and indicate that you view them as a primary element of keeping the organization secure.

Steer clear of “My way or the highway!”

The stress of constant security incidents and continuous fear of potential data breaches lead many security teams to operate with a rigid, iron-fist management approach. Instead of allowing security to better enable the business, ideas and programs are forced through and collaboration falls by the wayside in the name of making our environments more secure.

While this certainly does not make us popular within the workplace, it also contributes to a lack of trust between security and other business functions. Trust is critical to the success of our security paradigm, which means we must take every opportunity possible to ensure that security enables the business. Without trust, the people of our businesses will not follow our security policies, report suspicious activity, or see cybersecurity in the organization as something they are directly responsible for.

Is it possible for security teams to operate in a flexible and collaborative manner that guarantees the advancement of the security program while not hindering the day-to-day work of other staff?

Most definitely. And the solution, like the above, is free, and requires no processes or technology. Be open to opposing opinions regarding the implementation of your security project or program. Approach others cooperatively on how the integration of a new security tool or application should be managed. Asking others, candidly, if there is a “better” way to address a security problem is a wonderfully collaborative way to engage within a culture of teamwork.

Those outside the security team may have ingenious approaches to fixing security problems that we may never have thought of – solutions that both mitigate the security issue and don’t hinder the day-to-day work of employees. Acknowledging the skills and expertise of other non-security teams allows us to discover more innovative ways of approaching a security problem.

Continue to implement technical controls but consider implementing another element into your governance model: people matter. This value, though it sounds simple, is an effective way to not only manage security risk at an acceptable level, but also to ensure that we cultivate our security models holistically.

Three places for early warning of ransomware and breaches that aren’t the dark web

For better or worse, a lot of cybercrime sleuthing and forecasting tends to focus on various underground sites and forums across the deep and dark web corners of the Internet. Whenever a report cites passwords, contraband or fraud kits trafficked in these underground dens, it makes elusive fraudsters and extortion players sound tangible.

People instinctively want to infiltrate these spaces to see if their own company and data are up for sale. For time-strapped security professionals, however, the underground’s rapidly multiplying corridors are difficult to navigate and correlate at scale. Achieving the capability to sift through these domains productively, without wasting time – or getting in legal entanglements – is no small feat.

But there are three additional, sometimes overlooked sources of early warning clues of ransomware and breaches I have seen yield more direct, actionable insights in my years as an incident response leader.

1. Public sources and Good Samaritans

Sometimes the biggest risks and clues are hiding in plain sight, making it crucial not to overlook less-notorious places and people bringing important things to light. Today the forces of social media and cloud sync-and-share everywhere mean confidential slide decks, C-level cell phone numbers and sensitive databases can hit the public Web far too easily.

A few configuration swipes on a smartphone can be all that stands between sharing something with a work colleague and sharing it with anyone who has a search engine. In Verizon’s latest Data Breach Investigations Report (DBIR), “misconfiguration” and “misdelivery” errors jumped dramatically as breach factors, now second only to phishing and credential theft on the leaderboard.

Fortunately, the security community is full of Good Samaritans who reach out when they see personal or customer data in harm’s way. But are you making it easy for them to find you, and are you prepared to act on these discoveries? Many companies do not have clear, publicly available contact information and processes for handling security issues and vulnerabilities, which hobbles good-faith actors trying to make secure, responsible contact – sometimes until it is too late. Get ahead of any gaps here by establishing dedicated, continuously monitored channels to collect and vet inbound tips and concerns.

2. Subtle notes in the 24×7 concert of deployed security tools

Paradoxically, the more security and compliance tools an organization deploys – ostensibly to gain metrics and situational awareness – the more operators can feel blinded and overwhelmed by data growing faster than they can process it, make decisions and act. A strong “defense in depth” gut instinct assumes that for every new control introduced, the bull’s-eye visible to attackers must be shrinking. But the bigger assumption here is that we even know “what” and “where” the bull’s-eyes are in the first place. Too often, security tools alone provide data of diminished net value because they are deployed a step behind sprawling cloud systems, IoT devices, increasingly remote employees and other business shifts eclipsing defenders’ current understanding of assets.

At the same time, layered product fatigue promotes reliance on security tools’ pre-configured alert categories and arbitrary contextualizing, subtly tipping time-strapped administrators to look for reassuring “green light” indicators before darting to the next dashboard. What, exactly, was detected? Even if it was labeled “low” severity or nuisance activity, does that label change based on what else is being seen on the network? Driving interoperability between tools often trades depth of analysis for speed, burying clues in the process.

Ransomware attacks are a great example: a company typically calls in incident response once an attacker has detonated their ransomware payload and taken infected machines hostage. Yet the scrambling of data and locking of screens often happens only after a seasoned ransomware gang has held a foothold in the network for a while, first spending time mapping the size and composition of devices to make sure they hijack every visible device and backup mechanism.

This precursor activity can get lost in rush-hour noise on the network. Not every security product will classify anomalous indexing and casing of IT systems the same way, but treating this activity as critical behavior to recognize helps avert worst-case scenarios by buying time to back up files or initiate other precautionary measures.

Likewise, keeping an eye on privileged accounts is an invaluable early-warning investment. First, take stock of who has these accounts in your organization – whether IT administrators, C-suite leaders or their staff. Assume you have too many privileged users in the first place and that some might even be shared. Confirm whether any can be restricted or deleted based on employee turnover or consolidation. Then implement rigorous logging of those narrower accounts’ patterns of life.

Attackers rely on defenders having an incomplete understanding of dormant and other vulnerable accounts, which are too frequently weaponized before anyone knows a crime is in progress. Is the number of privileged accounts changing? Who uses the accounts? Do their logins and behavior match their role, time zones and workday routines? All things being equal, anomalies with privileged users demand urgent attention.
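
As a rough sketch of what "patterns of life" logging can look like in practice (the account names and working windows below are hypothetical; a real deployment would pull this data from your SIEM or directory logs), the check below flags privileged logins that fall outside an account's normal window:

    # Sketch: flag privileged-account logins outside the account's usual hours.
    # Account names and working windows are hypothetical examples.
    from datetime import datetime

    USUAL_HOURS = {               # local working window per privileged account
        "it-admin-01": (8, 18),
        "dba-backup":  (22, 6),   # overnight maintenance account
    }

    def is_anomalous(account: str, login_time: datetime) -> bool:
        start, end = USUAL_HOURS.get(account, (9, 17))
        hour = login_time.hour
        # Handle windows that wrap past midnight (e.g., 22:00 to 06:00).
        inside = start <= hour < end if start < end else (hour >= start or hour < end)
        return not inside

    print(is_anomalous("it-admin-01", datetime(2020, 9, 1, 3, 12)))  # True: 3 AM login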

3. Intersections of third-party risk

The rise and dynamism of third-party developers, resellers, smart building owners and other partners dramatically affects security and compliance inside and outside a company’s walls. According to recent Deloitte enterprise risk management research, “information security” and “cyber risk” topped respondents’ lists of issues driving budget for greater third-party oversight.

A company may integrate third-party code in its Web site or business applications – meaning when that code is compromised, intruders have an express lane into the network. Network and cloud access granted to remote contractors could be compromised, giving criminals the camouflage of previously approved devices and usernames for entry.

Pinpointing the specific roads business partners have into your environment yields invaluable awareness. Take stock of the partners your organization relies on, concentrating on those with the highest associated risk (e.g., close proximity to crown jewel data or everyday applications offering wide lateral movement if compromised). Confirm norms and roles for these third-party services and accounts, so logging and monitoring tools can flag deviations immediately, which are often crucial early signs that a third-party might be employed in an attack.

In addition to serving as a practical early warning outpost, monitoring of third parties yields awareness and influence cybersecurity leaders can use to force wider, strategic conversations in business about risk tolerance and the criticality of these relationships. In addition to weighing the criticality versus risk aspects of these relationships, those watching the third-party touch points are well positioned to advocate for security terms in partner relationships, such as requiring partners to meet thresholds like multi-factor authentication for accounts touching their customers.

Cybersecurity is a constant struggle of measure versus countermeasure, and the desire to peer into attackers’ next move is relentless. While exotic malware and infamous crime rings capture attention and deserve recognition, these threats must still discover and exploit the same vulnerabilities, business churn and network blind spots that any other attacker does.

Taking stock of a few underutilized, high-yield data sources already in your environment is a powerful way to keep perspective and view all risks on the same plane. It also helps frame effective decisions about where and how to prioritize finite resources and test incident response readiness.

ERP security: Dispelling common misconceptions

Enterprise resource planning (ERP) systems are an indispensable tool for most businesses. They allow them to track business resources and commitments in real time and to manage day-to-day business processes (e.g., procurement, project management, manufacturing, supply chain, human resources, sales, accounting, etc.).

The various applications integrated in ERP systems collect, store, manage, and interpret sensitive data from the many business activities, which allows organizations to improve their efficiency in the long run.

Needless to say, the security of such a crucial system and all the data it stores should be paramount for every organization.

Common misconceptions about ERP security

“Since ERP systems have a lot of moving parts, one of the biggest misconceptions is that the built-in security is enough. In reality, while you may not have given access to your company’s HR data to a technologist on your team, they may still be able to access the underlying database that stores this data,” Mike Rulf, CTO of Americas Region, Syntax, told Help Net Security.

“Another misconception is that your ERP system’s access security is robust enough that you can allow people to access their ERP from the internet.”

In actual fact, the technical complexity of ERP systems means that security researchers are constantly finding vulnerabilities in them, and businesses that make them internet-facing and don’t think through or prioritize protecting them create risks that they may not be aware of.

When securing your ERP systems you must think through all the different ways someone could potentially access sensitive data and deploy business policies and controls that address these potential vulnerabilities, Rulf says. Patching security flaws is extremely important, as it ensures a safe environment for company data.

Advice for CISOs

While patching is necessary, it’s true that business leaders can’t disrupt day-to-day business activity for every new patch.

“Businesses need some way to mitigate any threats between when patches are released and when they can be fully tested and deployed. An application firewall can act as a buffer to allow a secure way to access your proprietary technology and information during this gap. Additionally, an application firewall allows you to separate security and compliance management from ERP system management enabling the checks and balances required by most audit standards,” he advises.

He also urges CISOs to integrate the login process with their corporate directory service such as Active Directory, so they don’t have to remember to turn off an employee’s credentials in multiple systems when they leave the company.

To make mobile access to ERP systems safer for a remote workforce, CISOs should leverage multi-factor authentication, which forces employees to prove their identity before accessing sensitive company information.

“For example, Duo sends a text to an employee’s phone when logging in outside the office. This form of security ensures that only the people granted access can utilize those credentials,” he explained.

VPN technology should also be used to protect ERP data when employees access it from new devices and unfamiliar Wi-Fi networks.

“VPNs today can enable organizations to validate that these new/unfamiliar devices adhere to a minimum security posture: for example, allowing only devices with a firewall configured and appropriate malware detection tools installed to access the network. In general, businesses can’t really ever know where their employees are working and what network they’re on. So, using VPNs to encrypt the data being sent back and forth is crucial.”

On-premise vs. cloud ERP security?

The various SaaS applications in your ERP, such as Salesforce and Oracle Cloud Apps, leave you beholden to those service providers to manage your applications’ security.

“You need to ask your service providers about their audit compliance and documentation. Because they are providing services critical to your business, you will be asked about these third parties by auditors during a SOC audit. You’ll thus need to expand your audit and compliance process (and the time it takes) to include an audit of your external partners,” Rulf pointed out.

“Also, when you move to AWS or Azure, you’re essentially building a new virtual data center, which requires you to build and invest in new security and management tools. So, while the cloud has a lot of great savings, you need to think about the added and unexpected costs of things like expanded audit and compliance.”

Protect your organization in the age of Magecart

The continuing wave of attacks by cybercriminal groups known under the umbrella term Magecart perfectly illustrates just how unprepared many e-commerce operations are from a security point of view. It all really boils down to timing. If the e-commerce world was able to detect such Magecart attacks in a matter of seconds (rather than weeks or months), then we could see an end to Magecart stealing all of the cybercrime headlines.

What steps can organizations take then to mitigate against this method of cyber attack? Let’s delve deeper.

Assess your degree of client-side visibility

To avoid the hindsight is 20/20 syndrome, a key first step is understanding how aware you are of what your users are actually getting when they visit your e-commerce platform. You may think that every user will get an identical, safe-to-use version of your website when, in fact, some users may be interacting with compromised web pages and hijacked forms.

It might surprise you to learn that neither business owners nor security teams seem to have a definite answer here.

For far too long now, there has been a spotlight on server-side security. Consequently, just about everything that happens on the client-side (for example the browser and the environment where Magecart attacks thrive) is generally overlooked. Based on the information we have gleaned from previous Magecart attacks, it is obvious that there is no sure-fire way of preventing these types of attacks completely. However, a good place to start is to shift our focus and prioritize what is happening on the client-side.

The average Magecart attack remains undetected for 22 days and it only took 15 days for attackers to steal 380,000 credit cards during the British Airways breach. That’s 18 credit cards per minute. This is how you should look at this threat: each minute that goes by while there’s an undetected skimmer on your website means a growing critical business problem.

Third parties are your weakest link

Various Magecart groups use different strategies to breach e-commerce websites. However, most go after the weakest security link: they avoid breaching your servers and prefer delivering malicious code to your website through third parties.

Nearly all websites use one or more third-party solutions: a live chat widget, an analytics tool, or an accessibility service. By doing so, companies end up having almost no control over the security of this externally sourced code. When attackers breach one of these third parties and inject malicious code, this code very easily bypasses firewalls and browser security mechanisms because the attack originates from a source that is trusted by default – in this case, a legitimate third-party supplier.

It’s crucial to make sure that your business scrutinizes third-party code and also each supplier’s level of security. Sadly, though, this is not something companies prioritize, as they are focused on product development.
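
One practical way to scrutinize externally sourced code (offered here as an illustration; it is not a technique specifically named above) is Subresource Integrity (SRI): pin each third-party script to a known hash so the browser refuses to execute a tampered copy. The sketch below computes the integrity value for a locally vetted copy of a vendor script (the filename is a placeholder):

    # Sketch: compute a Subresource Integrity (SRI) hash for a third-party
    # script so it can be pinned in the page, e.g.
    #   <script src="https://vendor.example/chat.js"
    #           integrity="sha384-..." crossorigin="anonymous"></script>
    import base64
    import hashlib

    def sri_hash(script_bytes: bytes) -> str:
        digest = hashlib.sha384(script_bytes).digest()
        return "sha384-" + base64.b64encode(digest).decode()

    with open("chat.js", "rb") as f:        # local, vetted copy of the vendor script
        print(sri_hash(f.read()))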

And while many businesses are only just now learning about Magecart web skimmers, these skimmers are far from being the first iteration. Over time, skimmers have evolved to include obfuscation techniques to conceal their malicious code and even go as far as using defense mechanisms to avoid being detected by bots, rendering many detection options useless.

Taking decisive action to detect and control Magecart web skimmers

An ever-evolving security mindset is needed here. Businesses should find ways to quickly detect injected skimmers and swiftly block Magecart attacks. This is preferable to relying on solutions that promise to prevent these (largely unpreventable) code injections.

Whilst third-party management and validation play a good part, they alone are not enough. The key is to look for malicious behavior.

We know that a skimmer always displays at least one sign of malicious activity. For example, a known script like a live chat has no business interacting with a payment form (formjacking). If that happens, it’s an indicator that something may be wrong. Also, if we start seeing a new script appearing in some user sessions, that is also something that warrants further analysis. Sure, it could be harmless – but it could also be a skimmer. Similarly, a network request to a previously unknown domain may be an indication that attackers are trying to exfiltrate data to their drop servers.
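
To make the "previously unknown domain" signal concrete, here is a toy sketch (the domain names are hypothetical) that compares the network requests observed in a user session against an allowlist of expected destinations:

    # Toy sketch: flag outbound requests from the checkout page to domains
    # that are not on the expected allowlist (possible data exfiltration).
    from urllib.parse import urlparse

    EXPECTED_DOMAINS = {"shop.example.com", "cdn.example.com", "analytics.example.net"}

    def suspicious_requests(observed_urls):
        """Return requests whose destination is not an expected domain."""
        return [u for u in observed_urls
                if urlparse(u).hostname not in EXPECTED_DOMAINS]

    session = [
        "https://cdn.example.com/app.js",
        "https://skimmer-drop.example.org/collect",   # unknown destination
    ]
    print(suspicious_requests(session))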

It is precisely here that most businesses are deficient. Not only do companies lack client-side visibility, but they also lack proper detection and control capabilities. Taking decisive action against web skimming means being able to detect and control any malicious activity on the client side in real time. To this end, consider a web page monitoring solution, as it brings real-time visibility of malicious code and provides a more effective Magecart mitigation approach.

Know the threats to mobile security

Where there’s money, there’s also an opportunity for fraudulent actors to leverage security flaws and weak entry-points to access sensitive, personal consumer information.

This has caused a sizeable percentage of consumers to avoid adopting mobile banking completely and has become an issue for financial institutions who must figure out how to provide a full range of financial services through the mobile channel in a safe and secure way. However, with indisputable demand for a mobile-first experience, the pressure to adapt has become unavoidable.

In order to offer that seamless, omnichannel experience consumers crave, financial institutions have to understand the malicious actors and fraudulent tactics they are up against. Here are a few that have to be on the mobile banking channel’s radar.

1. Increased device usage sparks surge in mobile malware

Banking malware has become a very common mobile threat, even more so now as fraudsters leverage fear and uncertainty surrounding the global pandemic. According to a recent report by Malwarebytes, mobile banking malware has surged over recent months, focused on stealing personal information and using weakened remote connections and mobile devices in a work-from-home environment to gain access to more valuable corporate networks.

The financial burden of a data breach resulting from mobile malware could potentially set organizations back millions of dollars, as well as do some serious damage to customer trust and loyalty.

2. Sacrificing software quality and security through premature product rollouts

Securing mobile is a laborious task that requires mobile app developers to factor in several entities, including device manufacturers, mobile operating system developers, app developers, mobile carriers, and service providers. No two platforms or devices can be secured in the same way, meaning developers constantly have to overcome a unique set of challenges in order to reduce the risk of fraudulent activity.

The reality of such a complex ecosystem is that mobile app developers are not always qualified to understand all the risks at play, which leads to unsecured mobile data, connections, and transactions. Additionally, the speed at which the market moves thanks to emerging technologies and innovations creates an added layer of pressure for developers. Lacking the resources and time to properly protect consumers can lead to high-profile attacks where sensitive data is exploited.

3. Vulnerabilities in digital security protocols

At any given time, every entity in the ecosystem described above must have high confidence in the entity on the other side of the transaction to ensure its legitimacy. A lack of digital security protocols like secure sockets layer (SSL) and transport layer security (TLS) in mobile banking apps makes it difficult to establish encrypted links between every entity that ultimately help prevent phishing and man-in-the-middle attacks.

If we continue growing our ecosystem at the current rate, adding to its complexity and connecting more and more third-party services and networks, we can no longer avoid fixing the broken system we have for SSL certificate validation.
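As a small illustration of what doing TLS properly looks like on the client side, the hedged Python sketch below opens a connection that actually verifies the server certificate and hostname instead of disabling those checks. The bank hostname is a placeholder.

```python
# Minimal sketch: establish a TLS connection that validates the server
# certificate and hostname, rather than turning verification off.
import socket
import ssl

def open_verified_tls(host: str, port: int = 443):
    context = ssl.create_default_context()            # verifies chain + hostname
    context.minimum_version = ssl.TLSVersion.TLSv1_2  # refuse legacy protocols
    sock = socket.create_connection((host, port), timeout=5)
    tls = context.wrap_socket(sock, server_hostname=host)
    print("negotiated", tls.version(), "with", tls.getpeercert()["subject"])
    return tls

if __name__ == "__main__":
    conn = open_verified_tls("api.examplebank.com")   # placeholder host
    conn.close()
```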

4. Unreliable mobile device identification

Another issue at play is device identification. The only way other entities in the ecosystem can recognize a unique device is through device fingerprinting. This is a process through which certain unique attributes of a device – operating system, type and version of web browser, the device’s IP address, etc. – are combined for identification. This information can then be pulled from a database for future fraud prevention purposes and a range of other use-cases.
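A minimal sketch of the idea, assuming a handful of illustrative attributes (real products combine many more signals):

```python
# Minimal sketch of device fingerprinting: combine a few device attributes
# into a stable identifier. Attribute names are illustrative.
import hashlib
import json

def device_fingerprint(attributes: dict) -> str:
    # Sort keys so the same attributes always hash to the same value.
    canonical = json.dumps(attributes, sort_keys=True)
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

if __name__ == "__main__":
    fp = device_fingerprint({
        "os": "Android 13",
        "browser": "Chrome 118",
        "ip": "203.0.113.7",
        "screen": "1080x2400",
    })
    print(fp)  # look this value up against previously seen devices
```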

Data privacy concerns and limited data sharing on devices, however, have weakened the process and reliability of identification. If we do not have enough discrete data points to establish a reliable digital fingerprint, the whole system becomes ineffective.

5. Time to update authentication techniques

Fraudsters are always on the lookout for ways to intercept confidential login information that grants them access to protected accounts. Two-factor authentication (2FA) has become banks’ preferred security method for reliably authenticating users trying to access the mobile channel and staying ahead of cybercriminals.

More often than not, 2FA relies on one-time passwords (OTPs) delivered by SMS to the account holder upon attempted login. Unfortunately, with phishing – especially via SMS – on the rise, hackers can intercept SMS-delivered OTPs, gain access to accounts and authorize fraudulent transactions.

There are also a number of other tactics – e.g., SIM-swapping – attackers use to gain access to sensitive information and accounts.
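One widely used alternative to SMS-delivered codes is an app-generated time-based one-time password (TOTP, RFC 6238), which never crosses the carrier network. A minimal sketch, with a placeholder shared secret:

```python
# Minimal sketch of an app-generated time-based one-time password (TOTP,
# RFC 6238), a common alternative to SMS-delivered OTPs.
import base64
import hmac
import struct
import time

def totp(secret_b32: str, digits: int = 6, period: int = 30) -> str:
    key = base64.b32decode(secret_b32)
    counter = int(time.time()) // period
    msg = struct.pack(">Q", counter)
    digest = hmac.new(key, msg, "sha1").digest()
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

if __name__ == "__main__":
    print(totp("JBSWY3DPEHPK3PXP"))  # placeholder base32 secret
```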

6. Lack of industry regulation and standards

Without the establishment of rigorous standards and guidance on online banking security and protecting the end-user, low consumer trust will inhibit mass market acceptance. The Federal Financial Institutions Examination Council (FFIEC) has yet to issue ample guidance on the topic of authentication and identification on mobile devices. Mobile security standards need to be a top priority for regulators, especially as new technologies and mobile malware continue to disrupt the market.

The underlying theme for banks to keep in mind is that trust is a currency they cannot afford to lose in such a competitive financial services market. In the race to provide seamless, omnichannel banking experiences, integrating better security protocols without compromising usability can feel like a constant balancing act. Researching the latest tools and technology as well as building trusted partner relationships with third-party service providers is the only way banks can differentiate themselves in a dynamic security landscape.

What enterprises should consider when it comes to IoT security

Many enterprises have realized that the IoT presents tremendous business opportunities. The IoT can help businesses stay agile in changing situations and maintain a high level of visibility into operations, while positively impacting their bottom line. According to a BI Intelligence report, those who adopt IoT can experience increased productivity, reduced operating costs and expansion into new markets.


Yet despite this proven success, security concerns have historically been a barrier to IoT adoption for enterprises. In fact, more than 50% of organizations say that security is a main reason they have not taken advantage of IoT.

Fortunately, with new technology and new networks, enterprises don’t have to choose between valuable business insights and organizational security anymore.

Here are a few questions enterprises should ask themselves to ensure they’re maximizing the value of IoT while upholding security.

What information do we need and how often?

When thinking about potential IoT deployments, it’s important to assess what information would need to be collected by devices to deliver insights that can steer the business forward.

Not all IoT use cases are created equal, and not all information needs the same level of protection. While information such as the location of critical items (e.g., medicine or vaccines) requires high levels of security, other information, like the humidity levels of soil, may not. It’s unlikely a hacker would even care about low-impact information, and even if they did, it would be hard for them to abuse it in such a way that would be significantly detrimental to a brand.

Organizations should also consider how often they need information. If IoT devices are reporting critical information frequently – say, four or five times every hour – that poses a larger security risk than devices that only need to communicate information two or three times a day.

To constantly transmit data, devices will need to be continuously connected to a network. This constant connectivity makes it easier for hackers to get into the network, take over devices and gain access to data. Therefore, the more often data is transmitted, the more companies will need to put appropriate safeguards in place to protect that information.

Do we have a backup system in place?

If enterprises have a more complicated use case that requires lots of data and frequent collection and, therefore, need a device with an IP address, they should take extra precautions so they can shut down the IoT system in the event of a hack.

Network hacks occur when devices are compromised via the network to which they are connected. This type of breach enables the hacker to gain control of the device and use it. However, organizations can avoid network hacks by connecting IoT devices without an IP address to a 0G network.

A 0G network is a dedicated, low-power wireless network that is specifically designed to send small, critical messages from any IoT device to the internet. Because the network is created to save power, it does not rely on the traditional, constant and synchronized two-way communication protocol between the device and the receiver. Once the IoT device wakes up and sends the data asynchronously to the 0G network, it goes back into sleep-mode. This creates an extremely small window for hackers to break into the network and take control of the device.
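Conceptually, the device's duty cycle looks something like the sketch below; the radio object and its send() method are hypothetical stand-ins for a real 0G modem driver.

```python
# Illustrative sketch of the duty cycle described above: the device wakes,
# sends one small payload, then sleeps. The radio object and its send()
# method are hypothetical stand-ins for a real 0G/LPWAN modem driver.
import time

PAYLOAD_LIMIT_BYTES = 12   # low-power networks carry only tiny messages
SLEEP_SECONDS = 6 * 3600   # report a few times per day, not continuously

def read_sensor() -> bytes:
    return b"T=21.5C"      # placeholder reading

def duty_cycle(radio) -> None:
    while True:
        payload = read_sensor()[:PAYLOAD_LIMIT_BYTES]
        radio.send(payload)          # brief, asynchronous uplink
        time.sleep(SLEEP_SECONDS)    # device is unreachable while asleep
```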

Additionally, because a 0G network is difficult to hack or jam, many companies use it as a backup network for IoT devices susceptible to RF jamming. Being connected to this network allows devices to send a distress signal to shut down a system if jamming or hacking is detected, and the primary network is compromised.

Can we get by with an IoT device without an IP address?

To transmit large amounts of data frequently, organizations generally require IoT devices that have IP addresses and are constantly connected to the internet. Unfortunately, this makes them more vulnerable to attacks, requiring enterprises to put extra security measures in place.

However, other devices exist that do not require an IP address, therefore decreasing the security risk. For example, by operating on a lower frequency network, like 0G, devices can “sleep” in between uses. This means that enterprises can increase their security due to the lack of constant communication between devices and the receiver.

A 0G network is perfect for simple use cases – such as collecting soil temperature – that do not require constant updates or large amounts of data. Instead, the data may only be transmitted once or twice a day at random times. This is not to say that 0G can’t transfer more complicated messages – it certainly can. And in both cases, devices are not beholden to the network and therefore are not as susceptible to hacking.

While IoT security remains top of mind for many executives, there are several ways to decrease risk while still moving forward with deployments. With the proper safeguards in place, enterprise and industry organizations can unlock the limitless potential of IoT – without compromising security.

Five ways to maximize FIDO

Perform a quick Google search for “causes of data breaches”, and you will be inundated with reports of stolen credentials and weak passwords. Organizations can spend billions on technology to harden their systems against attack, but they are fighting a losing battle until they are able to confidently attribute a login with a valid user.



What is FIDO, and why does it matter?

FIDO stands for Fast Identity Online. It is a free and open set of standards and technologies that aims to reduce the world’s reliance on passwords. FIDO is designed to bolster authentication assurance by “protecting” and eliminating passwords.

FIDO-enabled advances in authentication are paving the way to this foundational paradigm shift. Unfortunately, authenticators are not quite there yet: even though the capabilities for incredibly strong authentication are available, implementations can vary, and it is up to implementers to determine how much of FIDO’s security will be integrated into their products.

A few examples: biometrics are supported, but not always implemented; authentication procedures are often cumbersome; passwords are still used as a primary credential. Further, as inherently secure as FIDO standards are, there is always room for improvement. Here are five ways to maximize FIDO.

Maximize FIDO: Use all three factors

More is better – most of the time. Thanks to smartphones, three-factor authentication – something you know, something you have, something you are – should be ubiquitous, but it is not. Many FIDO authenticators use only two factors, usually something you have and something you know.

While certainly better than just a password, this does not protect against instances such as a device being left open at a café. Using the built-in biometric capabilities inherently supported in all modern smartphones, FIDO-based authenticators can provide 3FA, bolstering security and eliminating such vulnerabilities, all while keeping user friction to a minimum.

Make it simple and secure

Many FIDO-based authenticators implement two-factor authentication (2FA) by interjecting an additional code/PIN from within their authenticator app. The user must remember the PIN and attempt to type it in before the timer runs out, or if the timer is already low, wait for it to be reset before attempting to enter it. Either way, this increases friction for the user and decreases security, and this PIN can still be extracted from the user through social engineering.

There are better ways. Apps should be designed from the ground up with simplicity in mind. An example of a simple and secure method could be a simple three-digit code paired with an image, and nothing for the user to enter. The user would simply ensure the code and image match on their device and portal, and then click “ok”.
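As a hypothetical sketch of that design, the server could generate the code-and-image pair and push it to both the portal and the authenticator app, leaving the user nothing to type. The image names below are placeholders.

```python
# Hypothetical sketch of the "code plus image" confirmation described above:
# the server generates a short code and picks an image, shows the pair on
# both the portal and the authenticator app, and the user only confirms
# that they match.
import secrets

IMAGES = ["anchor", "kite", "maple", "comet", "violin"]

def new_challenge() -> dict:
    return {
        "code": f"{secrets.randbelow(1000):03d}",   # e.g. "042"
        "image": secrets.choice(IMAGES),
    }

if __name__ == "__main__":
    challenge = new_challenge()
    # The same challenge is rendered in the portal and pushed to the app;
    # the user compares the two and taps "ok" - nothing to type or remember.
    print(challenge)
```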

Fully leverage existing MDM features

Smartphones, and smart devices for that matter, are everywhere. With the growing number of these devices permeating our planet, wise and insightful minds saw fit to develop technologies to monitor and protect these devices. Mobile device management (MDM) functions can bolster existing authentication paradigms through features such as “geofencing”.

FIDO-enabled authenticators can use geofencing to allow or prevent authentication based on the user’s physical location. Another key MDM feature that should be in place can prevent connections for devices that have been “rooted” or “jailbroken”. These devices present a much greater security threat and can be easily identified using existing technology.
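A minimal sketch of such a geofence check, with illustrative coordinates and radius, might look like this:

```python
# Minimal sketch of the geofencing idea above: allow authentication only when
# the device reports a location within a configured radius of an approved
# site. Coordinates and radius are illustrative.
from math import asin, cos, radians, sin, sqrt

def distance_km(lat1, lon1, lat2, lon2) -> float:
    # Haversine great-circle distance.
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))

def within_geofence(device_lat, device_lon, site=(40.7128, -74.0060), radius_km=25) -> bool:
    return distance_km(device_lat, device_lon, *site) <= radius_km

if __name__ == "__main__":
    print(within_geofence(40.73, -73.99))   # nearby -> True, allow
    print(within_geofence(48.85, 2.35))     # far away -> False, step up or deny
```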

Get rid of passwords

Who here is not guilty of reusing a password or two… or three? Passwords are a legacy security afterthought. Unfortunately, many FIDO-based authenticators are still relying on usernames and passwords as the primary authentication credential pair. But FIDO enables secure certificate-based authentication – we no longer need the password. Passwordless authentication also brings with it the added benefit of decentralized key stores, allowing organizations to get rid of the big red targets that are centralized password repositories.
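Under the hood, passwordless login boils down to key-based challenge-response. The simplified sketch below is not the actual FIDO2/WebAuthn protocol, just the underlying idea: the server stores only a public key and verifies a signed challenge.

```python
# Hedged sketch of password-less, key-based authentication in the spirit of
# FIDO: the device holds a private key, the server stores only the public
# key and verifies a signed challenge. A simplified stand-in, not WebAuthn.
import os
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

# Enrollment: key pair is generated on the device; server keeps the public key.
device_key = ec.generate_private_key(ec.SECP256R1())
server_stored_public_key = device_key.public_key()

# Login: server issues a random challenge, device signs it.
challenge = os.urandom(32)
signature = device_key.sign(challenge, ec.ECDSA(hashes.SHA256()))

try:
    server_stored_public_key.verify(signature, challenge, ec.ECDSA(hashes.SHA256()))
    print("authenticated without any password")
except InvalidSignature:
    print("authentication failed")
```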

Use bidirectional authentication

Last but not least, implementing bidirectional authentication can improve on FIDO’s already stellar authentication model. Bidirectional authentication takes the traditional FIDO authentication model and adds server-to-user authentication as well, so before the user sends their authentication information to the server, the server authenticates to the user. This provides an added degree of confidence to the end user and all but eliminates the possibility of a Man-in-the-Middle attack due to there being nothing for the end user to share.

The technology for simple and secure authentication is available and – thanks to FIDO standards and protocols – straightforward to implement. In the end, it comes down to the creativity and diligence of those designing current authenticators to completely leverage the available technology and integrate them in a well-thought-out manner that increases security and decreases user friction.

How do I select a risk assessment solution for my business?

One of the cornerstones of a security leader’s job is to successfully evaluate risk. A risk assessment is a thorough look at everything that can impact the security of an organization. When a CISO determines the potential issues and their severity, measures can be put in place to prevent harm from happening.

To select a suitable risk assessment solution for your business, you need to think about a variety of factors. We’ve talked to several cybersecurity professionals to get their insight on the topic.

Jaymin Desai, Offering Manager, OneTrust

First, consider what type of assessment or control content – such as frameworks, laws, and standards – is readily available for your business (e.g., NIST, ISO, CSA CAIQ, SIG, HIPAA, PCI DSS, NYDFS, GDPR, EBA, CCPA). This is an area where you can leverage templates to bypass building and updating your own custom records.

Second, consider the assessment formats. Look for a technology that can automate workflows to support consistency and streamline completion. This level of standardization helps businesses scale risk assessments to the line of business users. A by-product of workflow-based structured evaluations is the ability to improve your reporting with reliable and timely insights.

Another key consideration is whether the risk assessment solution can scale with your business. This is important for evaluating your efficiency over time. Are the assessments static exports to Excel, or can they be integrated into a live risk register? Can you map insights gathered from responses to adjust risk across your assets, processes, vendors, and more? Consider the core data structure and how you can model and adjust it as your business changes and your risk management program matures.

The solution should enable you to discover, remediate, and monitor granular risks in a single, easy-to-use dashboard while engaging with the first line of your business to keep risk data current and context-rich with today’s information.

Brenda Ferraro, VP of Third Party Risk, Prevalent

The right risk assessment solution will drive program maturity from compliance, to data breach avoidance, to third-party risk management.

There are seven key fundamentals that must be considered:

  • Network repository: Uses the ‘fill out once, use with many’ approach to rapidly obtain risk information awareness.
  • Vendor risk visibility: Harmonizes inside-out and outside-in vendor risk and proactively shares actionable insights to enhance decision-making on prioritization, remediation, and compliance.
  • Flexible automation: Helps the enterprise to place focus quickly and accurately on risk management, not administrative tasks, to reduce third-party risk management process costs.
  • Scalability: Adapts to changing processes, risks, and business needs.
  • Tangible ROI: Reduces time and costs associated with the vendor management lifecycle to justify cost.
  • Advisory and managed services: Has subject matter experts to assist with improving your program by leveraging the solution.
  • Reporting and dashboards: Provides real-time intelligence to drive more informed, risk-based decisions internally and externally at every business level.

The right risk assessment solution selection will enable dynamic evolution for you and your vendors by using real-time visibility into vendor risks, more automation and integration to speed your vendor assessments, and by applying an agile, process-driven approach to successfully adapt and scale your program to meet future demands.

Fred Kneip, CEO, CyberGRX

Organizations should look for a scalable risk assessment solution that can deliver informed, risk-reducing decision making. To be truly valuable, risk assessments need to go beyond lengthy questionnaires that serve as check-the-box exercises without providing insight, and beyond a simple outside-in rating that, alone, can be misleading.

Rather, risk assessments should help you collect accurate and validated risk data that enables decision making and, ultimately, allow you to identify and reduce risk across your ecosystem at the individual vendor level as well as the portfolio level.

Optimal solutions will help you identify which vendors pose the greatest risk and require immediate attention as well as the tools and data that you need to tell a complete story about an organization’s third-party cyber risk efforts. They should also help leadership understand whether risk management efforts are improving the organization’s risk posture and if the organization is more or less vulnerable to an adverse cyber incident than it was last month.

Jake Olcott, VP of Government Affairs, BitSight

Organizations are now being held accountable for the performance of their cybersecurity programs, and ensuring businesses have a strong risk assessment strategy in place can have a major impact. The best risk assessment solutions meet four specific criteria: they are automated, continuous, comprehensive and cost-effective.

Leveraging automation for risk assessments means that the technology is taking the brunt of the workload, giving security teams more time to focus on other tasks important to the business. Risk assessments should be continuous as well. Taking a point-in-time approach is inadequate and does not provide the full picture, so it’s important that assessments are delivered on an ongoing basis.

Risk assessments also need to be comprehensive and cover the full breadth of the business including third and fourth party risks, and address the expanding attack surface that comes with working from home.

Lastly, risk assessments need to be cost-effective. As budgets are being heavily scrutinized across the board, ensuring that a risk assessment solution does not require significant resources can make a major impact for the business and allow organizations to maximize their budgets to address other areas of security.

Mads Pærregaard, CEO, Human Risks

When you pick a risk assessment tool, you should look for three key elements to ensure a value-adding and effective risk management program:

1. Reduce reliance on manual processes
2. Reduce complexity for stakeholders
3. Improve communication

Tools that rely on constant manual data entry, remembering to make updates and a complicated risk methodology will likely lead to outdated information and errors, meaning valuable time is lost and decisions are made too late or on the wrong basis.

Tools that automate processes and data gathering give you awareness of critical incidents faster, reducing response times. They also reduce dependency on a few key individuals that might otherwise have responsibility for updating information, which can be a major point of vulnerability.

Often, non-risk management professionals are involved with or responsible for implementation of mitigating measures. Look for tools that are user-friendly and intuitive, so it takes little training time and teams can hit the ground running.

Critically, you must be able to communicate the value that risk management provides to the organization. The right tool will help you keep it simple, and communicate key information using up-to-date data.

Steve Schlarman, Portfolio Strategist, RSA Security

Given the complexity of risk, risk management programs must rely on a solid technology infrastructure, and a centralized platform is a key ingredient to success. Risk assessment processes need to share data and establish processes that promote a strong governance culture.

Choosing a risk management platform that can not only solve today’s tactical issues but also lay a foundation for long-term success is critical.

Business growth is interwoven with technology strategies and therefore risk assessments should connect both business and IT risk management processes. The technology solution should accelerate your strategy by providing elements such as data taxonomies, workflows and reports. Even with best practices within the technology, you will find areas where you need to modify the platform based on your unique needs.

The technology should make that easy. As you engage more front-line employees and cross-functional groups, you will need the flexibility to make adjustments. There are some common entry points to implement risk assessment strategies but you need the ability to pivot the technical infrastructure towards the direction your business needs.

You need a flexible platform to manage multiple dimensions of risk and choosing a solution provider with the right pedigree is a significant consideration. Today’s risks are too complex to be managed with a solution that’s just “good enough.”

Yair Solow, CEO, CyGov

The starting point for any business should be clarity on the frameworks they are looking to cover both from a risk and compliance perspective. You will want to be clear on what relevant use cases the platform can effectively address (internal risk, vendor risk, executive reporting and others).

Once this has been clarified, it is a question of weighing up a number of parameters. For a start, how quickly can you expect to see results? Will it take days, weeks, months or perhaps more? Businesses should also weigh up the quality of user experience, including how difficult the solution is to customize and deploy. In addition, it is worth considering the platform’s project management capabilities, such as efficient ticketing and workflow assignments.

Usability aside, there are of course several important factors when it comes to the output itself. Is the data produced by the solution in question automatically analyzed and visualized? Are the automatic workflows replacing manual processes? Ultimately, in order to assess the platform’s usefulness, businesses should also be asking to what extent the data is actionable, as that is the most important output.

This is not an exhaustive list, but these are certainly some of the fundamental questions any business should be asking when selecting a risk assessment solution.

Why do healthcare organizations have a target on their back?

Medical records command a high value on the dark web due to the large amount of personal information they hold. Cybercriminals can sell stolen healthcare data for a massive profit, up to $1,000 for each record, a fact that encourages them to continue hacking as the payoff is worth it.


While there has been an uptick of attacks on healthcare organizations due to coronavirus, a 2019 Healthcare Data Breach Report found more healthcare records were breached in 2019 than in the six years from 2009 to 2014, indicating that the rise of threats to healthcare records has been an ongoing trend.

Healthcare organizations need to understand the interconnected relationship between cybersecurity and patient care. Investing in cybersecurity ensures organizations have the appropriate controls in place to protect the patient, their data, the brand, and business, all while complying with HIPAA requirements.

Healthcare organizations will remain a top target beyond COVID-19

Healthcare data is of interest to nation-state threat actors looking to steal clinical trials and research data to solve concerns in their country and create economic and political advantage by being first to market on innovation or a critical vaccine.

A patient’s record can also hold insight that could be used to inflict harm to persons of interest. On top of personally identifiable data, patient records contain very sensitive and personal information, such as blood type, allergies, medications, medical devices in use, and past procedures. All of it can be used to commit identity theft, insurance fraud, blackmail, or to cause bodily harm.

Unfortunately, most of this data cannot be changed if stolen. In addition, this information is often stored in legacy systems that were not built with security in mind, spread across multiple disparate systems and locations, and the organization often cannot afford the cost of migrating the data to a modern, secure system.

Biomedical devices have historically been bad at implementing even the most basic security controls, such as encryption, authentication, and access controls. To complicate things further, these devices are often out of scope when it comes to implementing basic protections like anti-virus, endpoint detection and response, and other software that could be seen as intrusive to the system and impact or influence the operation of the device. Technically, the device is a sitting duck for any attacker who can get access to the same network. It has only been about five years since healthcare organizations started pressing device manufacturers to meet basic security standards and holding them accountable.

The unfortunate thing is that these biomedical devices, with their inherent lack of security controls and protections, have been moved directly to the consumer, whether in their homes or installed on their bodies. This has broadened the reach, capabilities, attack surface and potential damage that can be inflicted by a malicious threat actor.

Pacemakers are a prime example, as hackers have had success interfering with the device. Recently, patients, providers, and manufacturers were notified of a security vulnerability called SweynTooth. The vulnerability, associated with Bluetooth Low Energy, could be exploited to wirelessly target certain medical devices – crashing or deadlocking them, or bypassing their security protections to gain unauthorized access. This makes medical device security a real life-and-death issue, beyond what we would normally consider.

Patient infrastructure is also not up to par. Telehealth is more integral to healthcare than ever before as it reduces expenses, lowers exposure to illnesses and is more convenient. In March, telehealth visits surged 50% and analysts estimate coronavirus-related virtual visits could top 1 billion globally in 2020.

This surge further complicates the security landscape when hospital groups don’t have the infrastructure in place to protect consumer data: they are often leveraging telehealth platforms that were not built for healthcare, and have expanded their risk by inheriting the existing weaknesses and vulnerabilities of those platforms.

How can organizations combat these attacks?

Organizations should ensure their infrastructure is secure by using platforms that are designed for healthcare use, while meeting legal privacy requirements. The infrastructure systems should be configured according to security standards, with ample visibility and a strategy should be in place for patient owned devices and endpoints. It’s important the healthcare provider has visibility into what is happening across the environment to monitor for signs of suspicious activity in real-time so immediate action can be taken.

Furthermore, as 48% of threat actors in healthcare are internal, organizations must monitor for behavioral changes in users and their data, providing visibility to uncover user-based threats that might otherwise go undetected. There must also be security tools in place that automate common investigation tasks and streamline remediation to halt a breach immediately and in real-time. Detection and response early in the cyberattack lifecycle is key to protecting health records and the company from a large-scale impact.

Organizations that do not have the above security capabilities in place and suffer from a data breach can expect to face financial penalties under HIPAA for not effectively protecting confidential customer information. As a fine could range from $100 to $50,000 per violation (or per record), it is critical companies go beyond the minimal security requirements to avoid such a fate.

Furthermore, a significant incident or breach erodes patient trust and damages the brand, including reduced revenues. Given the current evolving threat landscape and increased focus on healthcare by cybercriminals, companies must commit to improving their security operations to protect patients and the organization.

3 tips to increase speed and minimize risk when making IT decisions

There is nothing like a crisis to create a sense of urgency and spawn actions. This is especially true for enterprise IT teams, who are tasked with new responsibilities and critical decisions. Speed matters in the heat of the moment, and many leaders may not take the necessary steps to assess the risk of their decisions in order to mitigate the crisis quickly. When processes are rushed, security concerns and other gaps in the system …


State-backed hacking, cyber deterrence, and the need for international norms

As time passes, state-backed hacking is becoming an increasingly bigger problem, with the attackers stealing money, information, credit card data, intellectual property, state secrets, and probing critical infrastructure.


While Chinese, Russian, North Korean and Iranian state-backed APT groups get most of the spotlight (at least in the Western world), other nations are beginning to join in the “fun.”

It’s a free-for-all, it seems, as the world has yet to decide on laws and norms regulating cyber attacks and cyber espionage in peacetime, and to find a way to make nation-states abide by them.

There is so far one international treaty on cybercrime (The Council of Europe Convention on Cybercrime) that is accepted by the nations of the European Union, United States, and other likeminded allies, notes Dr. Panayotis Yannakogeorgos, and it’s contested by Russia and China, so it is not global and only applies to the signatories.

Dr. Yannakogeorgos, who’s a professor and faculty lead for a graduate degree program in Global Security, Conflict, and Cybercrime at the NYU School of Professional Studies Center for Global Affairs, believes this treaty could be both a good model text on which nations around the world can harmonize their own domestic criminal codes, as well as the means to begin the lengthy diplomatic negotiations with Russia and China to develop an international criminal law for cyber.

Cyber deterrence strategies

In the meantime, states are left to their own devices when it comes to devising a cyber deterrence strategy.

The US has been publicly attributing cyber espionage campaigns to state-backed APTs and regularly releasing technical information related to those campaigns, its legislators have been introducing legislation that would lead to sanctions for foreign individuals engaging in hacking activity that compromises economic and national security or public health, and its Department of Justice has been steadily pushing out indictments against state-backed cyber attackers and spies.

But while, for example, indictments by the US Department of Justice cannot reasonably be expected to result in the extradition of a hacker who has been accused of stealing corporate or national security secrets, the indictments and other forms of public attribution of cyber enabled malicious activities serve several purposes beyond public optics, Dr. Yannakogeorgos told Help Net Security.

“First, they send a clear signal to China and the world on where the United States stands in terms of how governmental resources in cyberspace should be used by responsible state actors. That is, in order to maintain fair and free trade in a global competitive environment, a nation’s intelligence services should not be engaged in stealing corporate secrets and then handing those secrets over to companies for their competitive advantage in global trade,” he explained.

“Second, making clear attribution statements helps build a framework within which the United States can work with our partners and allies on countering threats. This includes joint declarations with allies or multilateral declarations where the sources of threats and the technical nature of the infrastructure used in cyber espionage are declared.”

Finally, when public attribution is made, technical indicators of compromise, toolsets used, and other aspects are typically released as well.

“These technical releases have a very practical impact in that they ‘burn’ the infrastructure that a threat actor took time, money, and talent to develop and requires them to rebuild or retool. Certainly, the malware and other infrastructure can still be used against targets that have not calibrated their cyber defenses to block known pathways for attack. Defense is hard, and there is a complex temporal dimension to going from public indicators of compromise in attribution reports; however, once the world knows it begins to also increase the cost on the attacker to successfully hack a target,” he added.

“In general, a strategy that is focused on shaping the behavior of a threat needs to include actively dismantling infrastructure where it is known. Within the US context, this has been articulated as persistently engaging adversaries through a strategy of ‘defending forward.’”

The problem of attack attribution

The issue of how cyber attack attribution should be handled and confirmed also deserves to be addressed.

Dr. Yannakogeorgos says that, while attribution of cyber attacks is definitely not as clear-cut as seeing smoke coming out of a gun in the real world, with the robust law enforcement, public private partnerships, cyber threat intelligence firms, and information sharing via ISACs, the US has come a long way in terms of not only figuring out who conducted criminal activity in cyberspace, but arresting global networks of cyber criminals as well.

Granted, things get trickier when these actors are working for or on behalf of a nation-state.

“If these activities are part of a covert operation, then by definition the government will have done all it can for its actions to be ‘plausibly deniable.’ This is true for activities outside of cyberspace as well. Nations can point fingers at each other, and present evidence. The accused can deny and say the accusations are based on fabrications,” he explained.

“However, at least within the United States, we’ve developed a very robust analytic framework for attribution that can eliminate reasonable doubt amongst friends and allies, and can send a clear signal to planners on the opposing side. Such analytic frameworks could become norms themselves to help raise the evidentiary standard for attribution of cyber activities to specific nation states.”

A few years ago, Paul Nicholas (at the time the director of Microsoft’s Global Security Strategy) and various researchers proposed the creation of an independent, global organization that would investigate and publicly attribute major cyber attacks – though they admitted that, in some cases, decisive attribution may be impossible.

More recently, Kristen Eichensehr, a Professor of Law at the University of Virginia School of Law with expertise in cybersecurity issues and cyber law, argued that “states should establish an international law requirement that public attributions must include sufficient evidence to enable crosschecking or corroboration of the accusations” – and not just by allies.

“In the realm of nation-state use of cyber, there have been dialogues within the United Nations for nearly two decades. The most recent manifestation is the UN Group of Governmental Experts that have discussed norms of responsible state behavior and issued non-binding statements to guide nations as they develop cyber capabilities,” Dr. Yannakogeorgos pointed out.

“Additionally, private sector actors, such as the coalition declaring the need for a Geneva Convention for cyberspace, also have a voice in the articulation of norms. Academic groups such as the group of individuals involved in the research, debating, and writing of the Tallinn Manuals 1.0 and 2.0 are also examples of scholars who are articulating norms.”

And while articulating and agreeing to specific norms will no doubt be a difficult task, he says that their implementation by signatories will be even harder.

“It’s one thing to say that ‘states will not target each other’s critical infrastructure in cyberspace during peacetime’ and another to not have a public reaction to states that are alleged to have not only targeted critical infrastructure but actually caused digital damage as a result of that targeting,” he concluded.

Surge in cyber attacks targeting open source software projects

There has been a massive 430% surge in next generation cyber attacks aimed at actively infiltrating open source software supply chains, Sonatype has found.


Rise of next-gen software supply chain attacks

According to the report, 929 next generation software supply chain attacks were recorded from July 2019 through May 2020. By comparison, 216 such attacks were recorded in the four years between February 2015 and June 2019.

The difference between “next generation” and “legacy” software supply chain attacks is simple but important: next generation attacks like ​Octopus Scanner​ and ​electron-native-notify​ are strategic and involve bad actors intentionally targeting and surreptitiously compromising “upstream” open source projects so they can subsequently exploit vulnerabilities when they inevitably flow “downstream” into the wild.

Conversely, legacy software supply chain attacks like ​Equifax​ are tactical and involve bad actors waiting for new zero day vulnerabilities to be publicly disclosed and then racing to take advantage in the wild before others can remediate.

“Following the notorious Equifax breach of 2017, enterprises significantly ramped investments to prevent similar attacks on open source software supply chains,” said Wayne Jackson, CEO at Sonatype.

“Our research shows that commercial engineering teams are getting faster in their ability to respond to new zero day vulnerabilities. Therefore, it should come as no surprise that next generation supply chain attacks have increased 430% as adversaries are shifting their activities ‘upstream’ where they can infect a single open source component that has the potential to be distributed ‘downstream’ where it can be strategically and covertly exploited.”

Speed remains critical when responding to legacy software supply chain attacks

According to the report, enterprise software development teams differ in their response times to vulnerabilities in open source software components:

  • 47% of organizations ​became aware of new open source vulnerabilities after a week, and
  • 51% of organizations​ took more than a week to remediate the open source vulnerabilities

The researchers also found that organizations do not have to improve risk management practices at the expense of developer productivity. This year’s report reveals that high performing development teams are 26x faster at detecting and remediating open source vulnerabilities, and deploy changes to code 15x more frequently than their peers.

High performers are also:

  • 59% more likely​ to be using automated software composition analysis (SCA) to detect and remediate known vulnerable OSS components across the SDLC
  • 51% more likely​ to centrally maintain a software bill of materials (SBOMs) for applications
  • 4.9x more likely​ to successfully update dependencies and fix vulnerabilities without breakage
  • 33x more likely​ to be confident that OSS dependencies are secure (i.e., no known vulnerabilities)
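Automated software composition analysis, mentioned in the list above, can start as simply as querying a public vulnerability database for each pinned dependency. The hedged sketch below uses the OSV query API; treat the endpoint and response fields as something to verify against current documentation.

```python
# Minimal sketch of automated dependency checking: query the public OSV
# vulnerability database for one pinned dependency. Endpoint and payload
# follow OSV's documented query API; verify details against current docs.
import json
import urllib.request

def check_package(name: str, version: str, ecosystem: str = "PyPI") -> list:
    payload = json.dumps({
        "version": version,
        "package": {"name": name, "ecosystem": ecosystem},
    }).encode("utf-8")
    req = urllib.request.Request(
        "https://api.osv.dev/v1/query",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req, timeout=10) as resp:
        return json.load(resp).get("vulns", [])

if __name__ == "__main__":
    for vuln in check_package("requests", "2.19.1"):
        print(vuln.get("id"), "-", vuln.get("summary", ""))
```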

Additional findings

  • 1.5 trillion component download requests​ projected in 2020 across all major open source ecosystems
  • 10% of Java OSS component downloads by developers had known security vulnerabilities
  • 11% of open source components​ developers build into their applications are known vulnerable, with 38 vulnerabilities discovered on average
  • 40% of npm packages ​contain dependencies with known vulnerabilities​
  • New open source zero-day vulnerabilities are exploited in the wild within​ 3 days of public disclosure​
  • The average enterprise sources code from 3,500 OSS projects including over 11,000 component releases.


“We found that high performers are able to simultaneously achieve security and productivity objectives,” said Gene Kim, DevOps researcher and author of The Unicorn Project. “It’s fantastic to gain a better understanding of the principles and practices of how this is achieved, as well as their measurable outcomes.”

“It was really exciting to find so much evidence that this much-discussed tradeoff between security and productivity is really a false dichotomy. With the right culture, workflow, and tools development teams can achieve great security and compliance outcomes together with class-leading productivity,” said Dr. Stephen Magill, Principal Scientist at Galois & CEO of MuseDev.

Maximizing data privacy: Making sensitive data secure by default

Maximizing data privacy should be on every organization’s priority list. We all know how important it is to keep data and applications secure, but what happens when access to private data is needed to save lives? Should privacy be sacrificed? Does it need to be?


Consider the case of contact tracing, which has become a key tool in the fight to control COVID-19. It’s a daunting task greatly facilitated by collecting and analyzing real-time identity and geo-location data gathered from mobile devices—sometimes voluntarily and sometimes not.

In many jurisdictions, such as the United States and the European Union, the use of location and proximity data by governments may be strictly regulated or even forbidden – implicitly impeding the ability to efficiently contain the spread of the virus. Where public health has been prioritized over data privacy, the use of automated tracing has contributed to the ability to quickly identify carriers and prevent disease spread. However, data overexposure remains a major concern for those using the application. They worry about the real threat that their sensitive location data may eventually be misused by bad actors, IT insiders, or governments.

What if it were possible to access the data needed to get contact tracing answers without actually exposing personal data to anyone anywhere? What if data and applications could be secure by default—so that data could be collected, stored, and results delivered without exposing the actual data to anyone except the people involved?

Unfortunately, current systems and software will never deliver the absolute level of data privacy required because of a fundamental hardware flaw: data cannot be simultaneously used and secured. Once data is put into memory, it must be decrypted and exposed to be processed. This means that once a bad actor or malicious insider gains access to a system, it’s fairly simple for that system’s memory and/or storage to be read, effectively exposing all data. It’s this data security flaw that’s at the foundation of virtually every data breach.

Academic and industry experts, including my co-founder Dr. Yan Michalevsky, have known for years that the ultimate, albeit theoretical, resolution of this flaw was to create a compute environment rooted in secure hardware. Such solutions have already been implemented in cell phones and some laptops to secure storage and payments, and they are working well, proving the concept works as expected.

It wasn’t until 2015 that Intel introduced Software Guard Extensions (SGX)—a set of security-related machine-level instruction codes built into their new CPUs. AMD has also added a similar proprietary instruction set called SEV technology into their CPUs. These new and proprietary silicon-level command sets enable the creation of encrypted and isolated parts of memory, and they establish a hardware root of trust that helps close the data security flaw. Such isolated and secured segments of memory are known as secure enclaves or, more generically, Trusted Execution Environments (TEEs).

A broad consortium of cloud and software vendors (called the Confidential Computing Consortium) is working to develop these hardware-level technologies by creating the tools and cloud ecosystems over which enclave-secured applications and data can run. Amazon Web Services announced its version of secure enclave technology, Nitro Enclaves, in late 2019. Most recently, both Microsoft (Azure confidential computing) and Google announced their support for secure enclaves as well.

These enclave technologies and secure clouds should enable applications, such as COVID-19 contact tracing, to be implemented without sacrificing user privacy. The data and application enclaves created using this technology enable sensitive data to be processed without ever exposing either the data or the computed results to anyone but the actual end user. This means public health organizations can have automated contact tracing that can identify, analyze, and provide needed alerts in real-time—while simultaneously maximizing data privacy.

Creating or shifting applications and data to the secure confines of an enclave can take a significant investment of time, knowledge, and tools. That’s changing quickly. New technologies are becoming available that will streamline the operation of moving existing applications and all data into secure enclaves without modification.

As this happens, all organizations will be able to secure all data by default. This will enable CISOs, security professionals—and public health officials—to sleep soundly, knowing that private data and applications in their care will be kept truly safe and secure.

Organizations knowingly ship vulnerable code despite using AppSec tools

Nearly half of organizations regularly and knowingly ship vulnerable code despite using AppSec tools, according to Veracode.


Among the top reasons cited for pushing vulnerable code were pressure to meet release deadlines (54%) and finding vulnerabilities too late in the software development lifecycle (45%).

Respondents said that the lack of developer knowledge to mitigate issues and lack of integration between AppSec tools were two of the top challenges they face with implementing DevSecOps. However, nearly nine out of ten companies said they would invest further in AppSec this year.

The software development landscape is evolving

The research sheds light on how AppSec practices and tools are intersecting with emerging development methods and creating new priorities such as reducing open source risk and API testing.

“The software development landscape today is evolving at light speed. Microservices-driven architecture, containers, and cloud-native applications are shifting the dynamics of how developers build, test, and deploy code. Without better testing, integration, and regular developer training, organizations will put themselves at jeopardy for a significant breach,” said Chris Wysopal, CTO at Veracode.

Key findings

  • 60% of organizations report having production applications exploited by OWASP Top 10 vulnerabilities in the past 12 months. Similarly, seven in 10 applications have a security flaw in an open source library on initial scan.
  • Developers’ lack of knowledge on how to mitigate issues is the biggest AppSec challenge – 53% of organizations only provide security training for developers once a year or less. Data shows that the top 1% of applications with the highest scan frequency carry about five times less security debt, or unresolved flaws, than the least frequently scanned applications, which means frequent scanning helps developers find and fix flaws to significantly lower their organization’s risk.
  • 43% cited DevOps integration as the most important aspect to improving their AppSec program.
  • 84% report challenges due to too many AppSec tools, making DevOps integration difficult. 43% of companies report that they have between 11-20 AppSec tools in use, while 22% said they use between 21-50.


According to ESG, the most effective AppSec programs report the following as some of the critical components of their program:

  • Application security is highly integrated into the CI/CD toolchain
  • Ongoing, customized AppSec training for developers
  • Tracking continuous improvement metrics within individual development teams
  • AppSec best practices are being shared by development managers
  • Using analytics to track progress of AppSec programs and to provide data to management

Securing human resources from cyber attack

As COVID-19 forced organizations to re-imagine how the workplace operates just to maintain basic operations, HR departments and their processes became key players in the game of keeping our economy afloat while keeping people alive.

Without a doubt, people form the core of any organization. The HR department must strike an increasingly delicate balance while fulfilling the myriad of needs of workers in this “new normal” and supporting organizational efficiency. As the tentative first steps of re-opening are being taken, many organizations remain remote, while others are transitioning back into the office environment.

Navigating the untested waters of managing HR through this shift to remote and back again is complex enough without taking cybercrime and data security into account, yet it is crucial that HR do exactly that. The data stored by HR is the easy payday cybercriminals are looking for and a nightmare keeping CISOs awake at night.

Why securing HR data is essential

If compromised, the data stored by HR can do a devastating amount of damage to both the company and the personal lives of its employees. HR data is one of the highest risk types of information stored by an organization given that it contains everything from basic contractor details and employee demographics to social security numbers and medical information.

Many state and federal laws and regulations govern the storage, transmission and use of this high value data. The sudden shift to a more distributed workforce due to COVID-19 increased risks because a large portion of the HR workforce being remote means more and higher access levels across cloud, VPN, and personal networks.

Steps to security

Any decent security practitioner will tell you that no security setup is foolproof, but there are steps that can be taken to significantly reduce risk in an ever-evolving environment. A multi-layer approach to security offers better protection than any single solution. Multiple layers of protection might seem redundant, but if one layer fails, the other layers work to fill in the gaps.

Securing HR-related data needs to be approached from both a technical and end user perspective. This includes controls designed to protect the end user or force them into making appropriate choices, and at the same time providing education and awareness so they understand how to be good stewards of their data.

Secure the identity

The first step to securing HR data is making sure that the ways in which users access data are both secure and easy to use. Each system housing HR data should be protected by a federated login of some variety. Federated logins use a primary source of identity for managing usernames and passwords such as Active Directory.

When a user signs in with a federated login, the software uses a system like LDAP, SAML, or OAuth to query the primary source of identity, validate the username and password, and ensure that the user has appropriate access rights. This means users only have to remember one username and password, and that the password complies with organizationally mandated complexity policies.
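For illustration, a federated check against a directory can be as small as an LDAP bind; the sketch below uses the ldap3 library, with the server address and DN layout as placeholder assumptions.

```python
# Hedged sketch of validating a user against a central directory (the
# "primary source of identity" above) with an LDAP bind. Server address
# and DN layout are placeholders for illustration.
from ldap3 import Connection, Server

def ldap_login(username: str, password: str) -> bool:
    server = Server("ldaps://dc.example.internal")          # placeholder
    user_dn = f"uid={username},ou=people,dc=example,dc=internal"
    conn = Connection(server, user=user_dn, password=password)
    ok = conn.bind()          # True only if the directory accepts the password
    conn.unbind()
    return ok

if __name__ == "__main__":
    print(ldap_login("jdoe", "correct horse battery staple"))
```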

The next step to credential security is to add a second factor of authentication on every system storing HR data. This is referred to as Multi-factor Authentication (MFA) and is a vital preventative measure when used well. The primary rule of MFA says that the second factor should be something “the user is or has” to be most effective.

This second factor of authentication can be anything from a PIN generated on a mobile device to a biometric check to ensure the person entering the password is, in fact, the actual owner. Both of these systems are easy for end users to use and add very little additional friction to the authentication effort, while significantly reducing the risk of credential theft, as it’s difficult for someone to compromise users’ credentials and steal their mobile device or a copy of their fingerprints.

Infrastructure

In today’s world, HR users working from somewhere other than the office is not unusual. With this freedom comes the need to secure the means by which they access data, regardless of the network they are using. The best way to accomplish this is to set up a VPN and ensure that all HR systems are only accessible either from inside of the corporate network or from IPs that are connected to the VPN.

A VPN creates an encrypted tunnel between the end user’s device and the internal network. The use of a VPN protects the user against snooping even if they are using an unsecured network like a public Wi-Fi at a coffee shop. Additionally, VPNs require authentication and, if that includes MFA, there are three layers of security to ensure that the person connecting in is a trusted user.

Tracking usage

Next, you have to ensure that access is being used appropriately or that no anomalous use is taking place. This is done through a combination of good logging and good analytics software. Solutions that leverage AI or ML to review how access is being utilized and identify usage trends further increase security. The logging solution verifies appropriate usage while the analysis portion helps to identify any questionable activity taking place. This functions as an early warning system in case of compromised accounts and insider threats.

Comprehensive analytics solutions will notice trends in behavior and flag an account if the user changes their normal routine. If odd activity occurs (e.g., going through every HR record), the system alerts an administrator to delve deeper into why this user is viewing so many files. If it notices access occurring from IP ranges coming in through the VPN from outside of the expected geographical areas, accounts can be automatically disabled while alerts are sent out and a deeper investigation takes place. These are ways to shrink the scope of an incident and reduce the damage should an attack occur.
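Even without a full analytics platform, the core idea can be sketched in a few lines: compare today's activity against the user's own baseline and flag large deviations. The thresholds and data source below are illustrative assumptions.

```python
# Minimal sketch of the anomaly idea above: flag a user whose record access
# count today is far outside their own historical baseline.
from statistics import mean, stdev

def is_anomalous(history: list[int], today: int, z_threshold: float = 3.0) -> bool:
    if len(history) < 5:
        return False                      # not enough baseline to judge
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return today > mu                 # any increase over a flat baseline
    return (today - mu) / sigma > z_threshold

if __name__ == "__main__":
    baseline = [12, 9, 15, 11, 10, 14, 13]    # records viewed per day
    print(is_anomalous(baseline, 240))        # True -> alert an administrator
    print(is_anomalous(baseline, 16))         # False
```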

Secure the user

Security awareness training for end users is one of the most essential components of infrastructure security. The end user is a highly valuable target because they already have access to internal resources. The human element is often considered a high-risk factor because humans are easier to “hack” than passwords or automatic security controls.

Social engineering attacks succeed when people aren’t educated to spot red flags indicating an attack is being attempted. Social engineering attacks are the easiest and least costly option for an attacker because any charismatic criminal with good social skills and a mediocre acting ability can be successful. The fact that this type of cyberattack requires no specialized technical skill expands the potential number of attackers.

The most important step of a solid layered security model is the one that prevents these attacks through education and awareness. By providing end users with engaging, thorough, and relevant training about types of attacks such as phishing and social engineering, organizations arm their staff with the tools they need to avoid malicious links, prevent malware or rootkit installation, and dodge credential theft.

No perfect security

No matter where the job gets done, HR needs to deliver effective services to employees while still taking steps to keep employee data safe. Even though an organization cannot control every aspect of how work gets done, the measures above will help keep sensitive HR data protected.

Controlling accounts, monitoring them, and knowing what they are accessing are important steps. Arming end users with the awareness needed to prevent their good intentions from being weaponized requires a combination of training and controls that together create a proactive system of prevention, early warning, and swift remediation. There is no perfect security solution for protecting HR data, but multiple, overlapping security layers can protect valuable HR assets without making it impossible for HR employees to do their work.

10-point plan for securing employee health data collected for COVID-19 prevention

The COVID-19 pandemic has dramatically changed the business landscape and, over the past few months, employers have found themselves in uncharted waters on more than one occasion. First, it was getting entire workforces up and running from home practically overnight. And now, as employees are welcomed back onsite, employers are required to follow new health and safety protocols to prevent the virus’ spread and maintain near-normal operations.

One health initiative causing confusion (and often tension) within many organizations is the use of contact-tracing applications. The Centers for Disease Control and Prevention (CDC) believes contact tracing is key to slowing the spread of COVID-19, putting business owners and managers under pressure to use these applications.

Many are also sensitive to how this measure might affect employee privacy rights. Contact-tracing applications require employers to collect all kinds of employee health data that they never had to worry about before – temperatures, health symptoms and travel history, for example – and they aren’t sure how to use and protect this data in a way that balances health and safety with privacy.

Data protection guidance

Employee health data is considered personally identifiable information (PII) and should be protected accordingly. This is easier said than done, though. In the U.S., there’s no single federal law that regulates the protection of PII or a certification body for compliance. Instead, there’s a mix of federal laws (e.g., the FTC Act and the Gramm-Leach-Bliley Act), state laws (e.g., the California Consumer Privacy Act), sector-specific regulations (e.g., the Health Insurance Portability and Accountability Act) and self-regulatory programs developed by industry groups. It’s up to individual organizations to become familiar with the federal, local and industry requirements applicable to their business and ensure they are in full compliance with all relevant policies.

For organizations that aren’t aware of these PII protection mandates and don’t have a documented data classification policy in place, protecting COVID-19-prompted employee health data can be an overwhelming concept. To help get you started on the right path, here is a 10-point plan for securing PII, including new employee health data collected through COVID-19 contact-tracing applications and other healthcare tracking systems.

1. Identify a single point of contact who will be responsible for the privacy and security of PII

This best practice is self-explanatory, but it’s worth taking a moment to discuss why it’s so important in the data protection process. There are many business departments involved in the collection and usage of PII, including security teams, compliance teams, the legal department, HR, business units, etc. Without a designated leader to define roles, responsibilities and processes, it’s likely that PII privacy and protection activity will be minimal, because each employee will assume someone else is taking care of it.

2. Determine your goal for collecting employee health data

Why are you collecting employee health data? Your answer will determine which data fields you need to collect and store. For example, if your goal is to prevent the spread of COVID-19, you might document an employee’s name, number, temperature or location data. It’s important to note here that, in the world of security, less is better – don’t collect data that is irrelevant to your goal.

3. Store only the minimum data necessary for the minimum amount of time necessary

To reiterate the point I just made: the less data you have on file, the less you have to secure and the lower the chance of a privacy or security breach. This is why it’s so important to keep only the data you absolutely need to achieve your health and safety goals for only the required length of time – and no longer.

4. Implement strict access controls based on job requirements

In addition to determining why you’re collecting employee health data, it’s also important to identify who will be accessing the information, so you can implement the proper security controls. Role-based access control (RBAC), as its name implies, can help you restrict access to PII based on employees’ roles within the company. Once access controls are put in place, it’s important to implement consistent monitoring to detect and respond to unauthorized access that could lead to privacy or security issues.
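A deny-by-default role map is the core of RBAC. The sketch below is purely illustrative – the role names and permissions are assumptions, not a prescribed scheme – but it shows how access to health records is granted only to roles that explicitly need it.

```python
# Hypothetical role-to-permission mapping for the employee health data store.
ROLE_PERMISSIONS = {
    "hr_admin":      {"read_health_data", "write_health_data", "delete_health_data"},
    "hr_generalist": {"read_health_data"},
    "facilities":    set(),  # no access to health records at all
}

def is_authorized(role, permission):
    """Deny by default: a role gets only the permissions explicitly granted to it."""
    return permission in ROLE_PERMISSIONS.get(role, set())

# Every access decision should also be logged to support the monitoring described above.
assert is_authorized("hr_admin", "delete_health_data") is True
assert is_authorized("facilities", "read_health_data") is False
```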

5. Only store PII in documented and approved locations within the network

Make sure employee health data is stored within the trusted internal network and not in a DMZ network (i.e., a demilitarized zone). Housing this data on external-facing systems that sit on untrusted networks, such as the internet, can greatly escalate security risk. Data flow charts are a great way to keep track of which applications are storing data and where they reside.

6. Vet your vendors and business partners to ensure they meet your organization’s security standards

Before partnering with a third-party vendor to manage employee health data and systems, assess their internal security and compliance processes and how they apply to their work with customers. Ensure contracts include:

  • Protocols for safeguarding your data.
  • Breach notification requirements.
  • A defined process for destroying or returning your data at the end of the contract.

7. Protect data by encrypting it at rest and during transit

According to data from nCipher Security, fewer than 50% of enterprises have an encryption strategy applied consistently across their organizations. Encryption is a basic best practice in any security program, and it plays a critical role in protecting PII from both insider and external threats: even if employee data falls into the wrong hands, an attacker cannot make use of information that is properly encrypted.
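As a small example of encryption at rest, the sketch below uses the Python cryptography library’s Fernet recipe to encrypt a single health-data field before it is written to storage. The field contents and key handling are simplified assumptions – in practice the key would live in a key management service, and encryption in transit would be handled separately by TLS.

```python
from cryptography.fernet import Fernet  # pip install cryptography

# In production the key comes from a KMS or secrets manager, never from source code.
key = Fernet.generate_key()
cipher = Fernet(key)

# Encrypt a hypothetical health-data field before persisting it.
plaintext = b"employee_id=1042;temperature=37.1;symptoms=none"
token = cipher.encrypt(plaintext)

# Only holders of the key can recover the original value.
assert cipher.decrypt(token) == plaintext
```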

8. Ensure data is regularly archived in accordance with your organization’s disaster recovery/business continuity (DR/BC) plan

In addition to storing and archiving data in accordance with compliance mandates, make sure your data archiving processes follow your DR/BC plan requirements as well. Additionally, if you need to add systems to your infrastructure for employee health data tracking, you must update your DR/BC plan accordingly. Given the rate at which environments change, it’s always a good idea to review DR/BC plans on a periodic basis to make sure they reflect your current IT estate.

9. Destroy PII when it’s no longer needed

Remember how I mentioned only storing data for the minimum amount of time necessary? Once you no longer need employee health data, you must eliminate it from your network to reduce security and privacy risk. To keep up with data hygiene, implement a process to ensure all unneeded employee health data is destroyed on an established schedule.
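One way to make that destruction schedule concrete is a periodic purge job. The sketch below is a simplified illustration – the 30-day retention period and the record layout are assumptions to be replaced by whatever your policy and systems actually dictate.

```python
from datetime import datetime, timedelta, timezone

RETENTION_DAYS = 30  # hypothetical retention period set by policy and legal counsel

def split_by_retention(records):
    """Separate records still inside the retention window from those due for destruction.
    Each record is assumed to carry a timezone-aware `collected_at` timestamp."""
    cutoff = datetime.now(timezone.utc) - timedelta(days=RETENTION_DAYS)
    kept, expired = [], []
    for record in records:
        (kept if record["collected_at"] >= cutoff else expired).append(record)
    return kept, expired

# A scheduled job (e.g., nightly) would securely delete `expired` from the database
# and its backups per the DR/BC plan, and log the purge so the schedule is auditable.
```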

10. Implement privacy principles

There are several privacy principles that should be included in any data classification program. These include:

  • Notice – Let your employees know what data is stored and why.
  • Consent – Offer employees an authorization form, so they can give their consent to the collection, use and disclosure of PII for specific purposes.
  • Withdrawal – Make sure employees understand that they have the right to withdraw consent at any time.
  • Policy – Create policies that lay out the collection, use and disclosure of PII.
  • Limited purpose – Only collect PII that is relevant, and do not exceed the stated business goal.
  • Accessibility – Give employees the right to access their data at any time.
  • Accuracy – Give employees the ability to request corrections.

Balancing the scales

Preventing the spread of COVID-19 is a top priority for companies around the world, but it must be done in a way that adheres to security requirements and maintains employee privacy. Hopefully, this 10-point roadmap will get you on your way to creating a data classification program that gives equal weight to health, safety and employee privacy considerations. Doing so will result in not only healthy employees, but happy employees as well.