At the recent KubeCon + CloudNativeCon North America 2020, I had the opportunity to take part in a keynote panel with a number of other cloud native security practitioners. We got questions on a wide range of cloud native security topics and through those and other talks at the conference, I’ve been able to identify some key concerns around container security and the wider cloud native ecosystem.
It’s not just Kubernetes
The Kubernetes project understandably gets a lot of focus – it’s a key part of most companies’ cloud native strategies – but if you look at real-world deployments, you’ll see a host of supporting tools, from package managers and container image repositories to tools to help scaling. The cloud native landscape shows an infamously complex picture of the number of options available.
From a security perspective this is a very important point: it’s not only Kubernetes security you need to consider when deploying cloud native technologies, but also the security of the surrounding tools. That’s why, when looking at a deployment, it’s worth asking what else is in the mix, and how those components handle security concerns.
And in addition to the supporting products, with every containerized deployment it’s also important to consider all the layers that make up a Kubernetes cluster. Besides Kubernetes itself, there will always be a container runtime (like Docker) and an underlying node operating system (usually Linux). In sum, it’s not enough to just look at the security of Kubernetes itself—Docker security hardening, container image vulnerability management and operating system security are all still concerns.
Secure defaults matter
A lot of cloud native tooling is relatively complex, with a large number of configuration options. As a result, for good reasons and bad, many companies will start from the default settings. In fact, examining the default security posture of projects should be an important part of the decision-making process when deploying cloud native solutions.
There is external help available: as part of its graduation process, the Cloud Native Computing Foundation (CNCF) requires a complete and independent security review. Going through these reviews can help companies determine whether the project’s approach to security matches their own requirements.
Detection and response
Along with the usual array of preventative controls that are deployed as part of a cloud native platform, companies need to focus on detection and response to breaches. It’s important to note that the usual toolsets that are put in place will need to be supplemented by cloud native tools that can provide targeted visibility into container-based workloads.
Projects like Falco, which can integrate with container workloads at a low level, are an important part of this. Additionally, companies should make sure to properly use the facilities that Kubernetes provides. For example, Kubernetes audit logging is rarely enabled by default, but it’s an important control for any production cluster.
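Kubernetes audit events are structured JSON, which makes them easy to act on once the logging is enabled. As a minimal sketch (the log lines and the helper below are invented for illustration, though the field names mirror the Kubernetes audit event format), a tool could flag every entry that reads a Secret:

```python
import json

def flag_secret_reads(audit_lines):
    """Return (user, verb, secret-name) for audit events that read Secrets."""
    flagged = []
    for line in audit_lines:
        event = json.loads(line)
        ref = event.get("objectRef", {})
        if ref.get("resource") == "secrets" and event.get("verb") in ("get", "list"):
            flagged.append((event["user"]["username"], event["verb"], ref.get("name")))
    return flagged

# Two invented audit entries: one Secret read, one unrelated Pod list.
sample = [
    json.dumps({"user": {"username": "dev-1"}, "verb": "get",
                "objectRef": {"resource": "secrets", "name": "db-creds"}}),
    json.dumps({"user": {"username": "dev-2"}, "verb": "list",
                "objectRef": {"resource": "pods"}}),
]
print(flag_secret_reads(sample))  # [('dev-1', 'get', 'db-creds')]
```

In a real cluster this filtering would typically be done by the audit policy itself or by a tool like Falco, but the principle stands: the control is only useful if the logging is switched on.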
Getting it right from the start
A key takeaway for container security deployments is the importance of getting security controls in place before workloads are placed into production. Ensuring that developers are making use of Kubernetes features like Security Contexts to harden their deployments will make the deployment of mandatory controls much easier.
Also ensuring that a “least privilege” initial approach is taken to network traffic in a cluster can help avoid the “hard shell, soft inside” approach to security that allows attackers to easily expand their access after an initial compromise has occurred.
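The “least privilege” idea for cluster traffic amounts to default deny: a connection is allowed only if an explicit rule permits it. The toy model below (service names and rules are invented) captures the logic that a Kubernetes NetworkPolicy enforces:

```python
# Default deny: only explicitly allow-listed (source, destination, port)
# combinations may talk to each other; everything else is blocked.
ALLOW_RULES = {
    ("frontend", "api", 443),
    ("api", "db", 5432),
}

def is_allowed(src, dst, port):
    return (src, dst, port) in ALLOW_RULES

assert is_allowed("frontend", "api", 443)      # explicitly permitted
assert not is_allowed("frontend", "db", 5432)  # no direct path to the database
```

An attacker who compromises the frontend still cannot reach the database directly, which is exactly the lateral movement the “soft inside” model fails to stop.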
Keeping up with developments
One constant of the cloud native world, which was reinforced at KubeCon, is that there are always new products and services on the horizon, and all of them need to be considered from a security perspective. This year, one of the themes of the cloud native landscape has been the rise of operators and using Kubernetes as a control plane for other services like databases instead of only to manage containers. Expanding the scope of services run by Kubernetes means that the impact of any security misconfiguration may be more widely felt across the IT environment.
This places more emphasis on the Kubernetes RBAC system, and requires ensuring that both users and services with access to it are working on a least-privilege basis. All too often, software installed on a cluster will request “cluster-admin” access, which provides blanket access to every resource managed by the Kubernetes API.
Bringing it all together
It’s clear from the continued success of containerization and cloud native computing that they’re going to be a long-lasting trend in the IT world, and while there are many new buzzwords to absorb, a lot of the security controls that need to be deployed are essentially the same as for more traditional environments. However, care needs to be taken to ensure that our existing security environments are updated to account for differences in how the cloud native world operates.
The global cloud security market is projected to reach $20.9 billion by 2027 and to grow at a 14.6% CAGR from 2020 to 2027, according to a report by Million Insights. Growing investment in cloud infrastructure and an increasing number of cyber attacks are expected to drive market growth.
Cloud infrastructure is gaining popularity due to several benefits such as scalability, flexibility, cost-effectiveness, and on-demand services.
Additionally, the emergence of hybrid cloud, alongside the ongoing tussle between private and public cloud, has given cloud users several frameworks and platforms to choose from. As cloud adoption has gained traction in recent years, security concerns among cloud users have increased accordingly.
What fuels demand for cloud security?
The demand for cloud security is expected to increase during the forecast period due to the rising number of cyber attacks and data breaches.
In addition, industry players are also playing an important role in implementing compliance laws and regulations according to industry-wide standards. Increasing policy implementation and demand for security services are expected to drive the cloud security market growth in the next few years.
Moreover, diverse threat vectors and the growing variety of data have led to security-as-a-service offerings. The sharing of responsibility for data security between cloud end users and cloud service providers is also expected to have a significant impact on market growth.
Further, technologies like convergence and virtualization, coupled with initiatives like computer emergency readiness teams (CERTs), are expected to support a high level of security for cloud infrastructure.
Growing sophistication in hacking techniques, as well as technological advancements in cyber espionage, are unleashing new attacks such as advanced persistent threats (APTs), ransomware, zero-day exploits, malicious insiders, and DDoS attacks. As a result, industry players are focusing on partnerships and collaborations to tackle such cyber attacks.
Further key findings
- Self-mutating code, evasion techniques, and polymorphic malware have challenged conventional endpoint protection mechanisms and security technologies.
- In the past few years, the number of data thefts has increased, including high-profile breaches at Anthem, Home Depot, and Ashley Madison.
- In 2019, North America accounted for the largest market share due to growing awareness about cyber attacks and corporate espionage.
- Several regions and countries, such as the European Union, have implemented cyber regulations to protect information and data. For instance, Germany is striving for greater data privacy, while other countries like France and the U.S. are looking for better visibility into internet traffic.
- Numerous industry-specific regulations like the Payment Card Industry Data Security Standard (PCI DSS) for the financial sector and the Health Insurance Portability and Accountability Act of 1996 (HIPAA) for the healthcare sector, and international laws such as the Safe Harbor Act and the European Union Data Protection Directive, are expected to drive cloud security market growth.
- Key players such as CA, Intel, IBM, Trend Micro and Symantec are concentrating on partnerships, collaborations, and alliances to strengthen their market position.
Bitglass released a report which uncovers whether organizations are properly equipped to defend themselves in the cloud. IT and security professionals were surveyed to understand their top security concerns and identify the actions that enterprises are taking to protect data in the cloud.
Orgs struggling to use cloud-based resources safely
93% of respondents were moderately to extremely concerned about the security of the public cloud. The report’s findings suggest that organizations are struggling to use cloud-based resources safely. For example, a mere 31% of organizations use cloud DLP, despite 66% citing data leakage as their top cloud security concern.
Similarly, organizations are unable to maintain visibility into file downloads (45%), file uploads (50%), DLP policy violations (50%), and external sharing (55%) in the cloud.
Many still using legacy tools
The report also found that many still try to use tools like firewalls (44%), network encryption (36%), and network monitoring (26%) to secure the use of the cloud, despite 82% of respondents recognizing that such legacy tools are poorly suited to do so and that they should instead use security capabilities designed for the cloud.
“To address modern cloud security needs, organizations should leverage multi-faceted security platforms that are capable of providing comprehensive and consistent security for any interaction between any device, app, web destination, on-premises resource, or infrastructure,” said Anurag Kahol, CTO at Bitglass.
“According to our research, 79% of organizations already believe it would be helpful to have such a consolidated security platform; now they just need to choose and implement the right one.”
Many companies tend to jump into the cloud before thinking about security. They may think they’ve thought about security, but when moving to the cloud, the whole concept of security changes. The security model must transform as well.
Moving to the cloud and staying secure
Most companies maintain a “castle, moat, and drawbridge” attitude to security. They put everything inside the “castle” (datacenter); establish a moat around it, with sharks and alligators, guns on turrets; and control access by raising the drawbridge. The access protocol involves a request for access, vetting through firewall rules where the access is granted or denied. That’s perimeter security.
When moving to the cloud, perimeter security is still important, but identity-based security is available to strengthen the security posture. That’s where a cloud partner skilled at explaining and operating a different security model is needed.
Anybody can grab a virtual machine, build it in the cloud, and be done, but establishing a VM and transforming it into a service with identity-based security is a different prospect. When identity is added to security, the model looks very different, resulting in cost savings and an improved security posture.
Advanced technology, cost of security, and lack of cybersecurity professionals place a strain on organizations. Cloud providers invest heavily in infrastructure, best-in-class tools, and a workforce uniquely focused on security. As a result, organizations win operationally, financially, and from a security perspective, when moving to the cloud. To be clear, moving applications and servers, as is, to the cloud does not make them secure.
Movement to the cloud should be a standardized process, guided by a Cloud Center of Excellence (CCoE) or Cloud Business Office (CBO); when that process is focused on security first, organizations can reap the security benefits.
Although security is marketed as a shared responsibility in the cloud, ultimately, the owner of the data (customer) is responsible and the responsibility is non-transferrable. In short, the customer must understand the responsibility matrix (RACI) involved to accomplish their end goals. Every cloud provider has a shared responsibility matrix, but organizations often misunderstand the responsibilities or the lines fall into a grey area. Regardless of responsibility models, the data owner has a responsibility to protect the information and systems. As a result, the enterprise must own an understanding of all stakeholders, their responsibilities, and their status.
When choosing a partner, it’s vital for companies to identify their exact needs, their weaknesses, and even their culture. No cloud vendor will cover it all from the beginning, so it’s essential that organizations take control and ask the right questions (see Cloud Security Alliance’s CAIQ), in order to place trust in any cloud provider. If it’s to be a managed service, for example, it’s crucial to ask detailed questions about how the cloud provider intends to execute the offering.
It’s important to develop a standard security questionnaire and probe multiple layers deep into the service model until the provider is unable to meet the need. Looking through a multilayer deep lens allows the customer and service provider to understand the exact lines of responsibility and the details around task accomplishment.
It might sound obvious, but it’s worth stressing: trust is a shared responsibility between the customer and cloud provider. Trust is also earned over time and is critical to the success of the customer-cloud provider relationship. That said, zero trust is a technical term that means, from a technology viewpoint, assume danger and breach. Organizations must trust their cloud provider but should avoid blind trust and validate. Trust as a Service (TaaS) is a newer acronym that refers to third-party endorsement of a provider’s security practices.
Key influencers of a customer’s trust in their cloud provider include:
- Data location
- Investigation status and location of data
- Data segregation (keeping cloud customers’ data separated from others)
- Privileged access
- Backup and recovery
- Regulatory compliance
- Long-term viability
A TaaS example: Google Cloud
Google has taken great strides to earn customer trust, designing the Google Cloud Platform with a keen eye on zero trust through its implementation of the BeyondCorp model. For example, Google has implemented two core concepts:
- Delivery of services and data: ensuring that people with the correct identity and the right purpose can access the required data every time
- Prioritization and focus: access and innovation are placed ahead of threats and risks, meaning that as products are innovated, security is built into the environment
Transparency is very important to the trust relationship. Google has enabled transparency through strong visibility and control of data. When evaluating cloud providers, understanding their transparency related to access and service status is crucial. Google ensures transparency by using specific controls including:
- Limited data center access from a physical standpoint, adhering to strict access controls
- Disclosing how and why customer data is accessed
- Incorporating a process of access approvals
Multi-layered security for a trusted infrastructure
Finally, cloud services must provide customers with an understanding of how each layer of infrastructure works and build rules into each. This includes operational and device security, encrypting data at rest, multiple layers of identity, and finally storage services that are multi-layered and secure by default.
Cloud native companies have a security-first approach and naturally have a higher security understanding and posture. That said, when choosing a cloud provider, enterprises should always understand, identify, and ensure that their cloud solution addresses each one of their security needs, and who’s responsible for what.
Essentially, every business must find a cloud partner that can answer all the key questions, provide transparency, and establish a trusted relationship in the zero trust world where we operate.
As evolution to the cloud is accelerated by digital transformation across industries, virtual appliance security has fallen behind, Orca Security reveals.
Virtual appliance security
The report illuminated major gaps in virtual appliance security, finding many are being distributed with known, exploitable and fixable vulnerabilities and on outdated or unsupported operating systems.
To help move the cloud security industry towards a safer future and reduce risks for customers, 2,218 virtual appliance images from 540 software vendors were analyzed for known vulnerabilities and other risks to provide an objective assessment score and ranking.
Virtual appliances are an inexpensive and relatively easy way for software vendors to distribute their wares for customers to deploy in public and private cloud environments.
“Customers assume virtual appliances are free from security risks, but we found a troubling combination of rampant vulnerabilities and unmaintained operating systems,” said Avi Shua, CEO, Orca Security.
“The Orca Security 2020 State of Virtual Appliance Security Report shows how organizations must be vigilant to test and close any vulnerability gaps, and that the software industry still has a long way to go in protecting its customers.”
Known vulnerabilities run rampant
Most software vendors are distributing virtual appliances with known vulnerabilities and exploitable and fixable security flaws.
- The research found that less than 8 percent of virtual appliances (177) were free of known vulnerabilities. In total, 401,571 vulnerabilities were discovered across the 2,218 virtual appliances from 540 software vendors.
- For this research, 17 critical vulnerabilities were identified, deemed to have serious implications if left unaddressed in a virtual appliance. Some of these well-known and easily exploitable vulnerabilities included EternalBlue, DejaBlue, BlueKeep, DirtyCOW, and Heartbleed.
- Meanwhile, 15 percent of virtual appliances received an F rating, deemed to have failed the research test.
- More than half of tested virtual appliances were below an average grade, with 56 percent obtaining a C rating or below (15.1 percent F; 16.1 percent D; 25 percent C).
- However, after software vendors made 287 updates in response to the findings, the average grade of the rescanned virtual appliances increased from a B to an A.
Outdated appliances increase risk
Multiple virtual appliances were at security risk from age and lack of updates. The research found that most vendors are not updating or discontinuing their outdated or end-of-life (EOL) products.
- The research found that only 14 percent (312) of the virtual appliance images had been updated within the last three months.
- Meanwhile, 47 percent (1,049) had not been updated within the last year; 5 percent (110) had been neglected for at least three years, and 11 percent (243) were running on out-of-date or EOL operating systems.
- Some outdated virtual appliances were, however, updated after initial testing. For example, a Redis Labs product that had scored an F due to an out-of-date operating system and many vulnerabilities scored an A+ after updates.
The silver lining
Under the principle of Coordinated Vulnerability Disclosure, researchers emailed each vendor directly, giving them the opportunity to fix their security issues. Fortunately, the tests have started to move the cloud security industry forward.
As a direct result of this research, vendors reported that 36,259 out of 401,571 vulnerabilities have been removed by patching or discontinuing their virtual appliances from distribution. Some of these key corrections or updates included:
- Dell EMC issued a critical security advisory for its CloudBoost Virtual Edition
- Cisco published fixes to 15 security issues found in one of its virtual appliances scanned in the research
- IBM updated or removed three of its virtual appliances within a week
- Symantec removed three poorly scoring products
- Splunk, Oracle, IBM, Kaspersky Labs and Cloudflare also removed products
- Zoho updated half of its most vulnerable products
- Qualys updated a 26-month-old virtual appliance that included a user enumeration vulnerability that Qualys itself had discovered and reported in 2018
Maintaining virtual appliances
For customers and software vendors concerned about the issues illuminated in the report, there are corrective and preventive actions that can be taken. Software suppliers should ensure their virtual appliances are well maintained and that new patches are provided as vulnerabilities are identified.
When vulnerabilities are discovered, the product should be patched or discontinued for use. Meanwhile, vulnerability management tools can also discover virtual appliances and scan them for known issues. Finally, companies should also use these tools to scan all virtual appliances for vulnerabilities before use as supplied by any software vendor.
DevSecOps tactics and tools are dramatically changing the way organizations bring their applications to fruition. Having a mindset that security must be incorporated into every stage of the software development lifecycle – and that everyone is responsible for security – can reduce the total cost of software development and ensure faster release of secure applications.
A common goal of any security strategy is to resolve issues quickly and safely before they can be exploited for a breach resulting in data loss. Application developers are not security specialists, and likely do not have the knowledge and skills to find and fix security issues in a timely manner. This is where security automation can help.
Security automation uses tools to continuously scan, detect, investigate, and remediate threats and vulnerabilities in code or the application environment, with or without human intervention. Tools scale the process of incorporating security into the DevSecOps process without requiring an increase in human skills or resources. They do this by automatically putting up safety rails around the issue whenever they find something that is a clear and obvious violation of security policy.
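The core loop is simple: for each finding, either apply a known-safe fix automatically or escalate to a human. A minimal sketch of that triage logic (the findings feed and policy names are invented):

```python
# Invented findings feed; a real tool would populate this from scans.
findings = [
    {"id": "s3-public-read", "auto_fixable": True,  "resource": "logs-bucket"},
    {"id": "weak-iam-key",   "auto_fixable": False, "resource": "svc-account"},
]

def triage(findings, remediate, alert):
    """Auto-remediate clear-cut violations; alert a human for the rest."""
    for finding in findings:
        (remediate if finding["auto_fixable"] else alert)(finding)

fixed, escalated = [], []
triage(findings, remediate=fixed.append, alert=escalated.append)
print([f["id"] for f in fixed], [f["id"] for f in escalated])
# ['s3-public-read'] ['weak-iam-key']
```

The division of labor matters: the tool handles the unambiguous violations at machine speed, while judgment calls still reach a person.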
The AWS cloud platform is ripe for security automation
Amazon claims to have more than a million customers on its cloud computing platform, mostly small and mid-size companies but also enterprise-scale users. Regardless of customer size, Amazon has always had a model of shared responsibility for security.
Amazon commits to securing every component under its control. Customers, however, are responsible for securing what they control, which includes configurations, code, applications, and most importantly, data. This leaves a lot of opportunity for misconfigurations, insecure code, vulnerable APIs, and poorly secured data that can all lead to a data breach.
A common security problem in AWS is an open S3 storage bucket where data is publicly readable on the Internet. Despite the default configuration of S3 buckets being private, it’s fairly easy for developers to change policies to be open and for that permission change to apply in a nested fashion. A security automation tool should be able to find and identify this insecure configuration and simply disable public access to the resource without requiring human intervention.
Amazon added such tools in 2017 and again in 2018, yet we keep seeing headlines of companies whose data has been breached due to open S3 buckets. The security tool should communicate to the appropriate teams, but in many situations based on the sensitive contents of the data, the tool should also auto-remediate the misconfigured access policies. Teams that embrace security automation can also use this type of alerting and auto-remediation to become more aware of issues in their code or environment and, hopefully, head them off before they occur again.
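The open-bucket check itself is mechanical, which is what makes it a good candidate for auto-remediation. The sketch below works on a bucket policy document rather than a live AWS account (the policy is invented, and a real tool would use the S3 API): any Allow statement with a wildcard principal makes the bucket public, and remediation strips those statements.

```python
PUBLIC_PRINCIPALS = ("*", {"AWS": "*"})

def is_public(policy):
    """True if any Allow statement grants access to everyone."""
    return any(
        stmt.get("Effect") == "Allow" and stmt.get("Principal") in PUBLIC_PRINCIPALS
        for stmt in policy.get("Statement", [])
    )

def remediate(policy):
    """Drop public Allow statements, mirroring 'block public access'."""
    policy["Statement"] = [
        stmt for stmt in policy["Statement"]
        if not (stmt.get("Effect") == "Allow"
                and stmt.get("Principal") in PUBLIC_PRINCIPALS)
    ]
    return policy

leaky = {"Statement": [{"Effect": "Allow", "Principal": "*", "Action": "s3:GetObject"}]}
assert is_public(leaky)
assert not is_public(remediate(leaky))
```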
What else can be auto-remediated? There are hundreds of vulnerabilities in AWS that can and should be fixed without human intervention. Here are just a few examples:
- AWS CloudTrail data-at-rest encryption levels
- AWS CloudFront Distribution logging access control
- AWS Elastic Block Store access control
- AWS S3 bucket access control
- AWS S3 bucket ransomware exposure
- AWS Simple Queue Service exposure
Essential features of a security automation tool
There are important categories of features of a security automation product for AWS. One category addresses data-in-motion with auto-remediation of API and queuing authentication and encryption. The other addresses data-at-rest with auto-remediation of database and storage encryption and backup. Security monitoring and enforcement are needed to automatically protect developers from making mistakes in how they are moving or storing data.
Here are four essential features to look for in a security automation tool.
1. Continuous discovery of shadow APIs within cloud, mobile, and web apps
APIs enable machine-to-machine data retrieval, essentially removing barriers and accelerating access to data. There is hardly a modern application today that doesn’t provide an API to integrate with other applications and data sources. A developer only needs to write a few lines of code to create an API. A shadow API is one that operates outside the purview of the security team. It’s a challenge to enforce security on code that is known only to the programmer. Thus, a security automation tool must have the ability to continuously scan for and discover APIs that may pose a security threat to prevent a data breach.
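Once traffic is being observed, shadow API detection reduces to a set difference: endpoints seen on the wire that never appear in the documented spec. A deliberately simple sketch (the endpoint paths are invented):

```python
# Endpoints declared in the API spec vs. endpoints observed in traffic.
documented = {"/v1/users", "/v1/orders"}
observed = {"/v1/users", "/v1/orders", "/v1/debug/dump"}

# Anything observed but undocumented is a shadow API worth investigating.
shadow = sorted(observed - documented)
print(shadow)  # ['/v1/debug/dump']
```

Real tools derive the two sets continuously, for example from API gateway logs and the published OpenAPI spec, but the comparison is the same.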
2. Full-stack security analysis of mobile and modern web apps
Before data gets taken up into the AWS cloud, it often starts at the client layer with a web or mobile app. Protecting user privacy and securing sensitive data is a continuous effort that requires vulnerability analysis from mobile to web to backend cloud services. Modern attackers often focus on exploiting the client layer to hijack user sessions and to harvest embedded passwords and toxic tokens left inside mobile apps or single-page applications.
3. Automation fully integrated into the CI/CD pipeline with support for auto-remediation
Most vulnerability assessment tools integrate into the CI/CD pipeline by reporting what they find to systems such as Jira, Bugzilla and Jenkins. This is table stakes for assessment tools. What’s more valuable, however, is to include auto-remediation of the issues in the CI/CD pipeline. Instead of waiting for a human to make and verify the fix for the vulnerability, the tool does it automatically and reports the results to the ticketing system. This frees developers from having to spend time resolving common issues.
4. Automated vulnerability hacking toolkits for scheduled pre-production assessments
Companies often hire white hat hackers to do, basically, a moment-in-time penetration test in their pre-production environment. A more modern approach is to deploy a toolkit that continuously performs the same hacking activities. Not only is using such a toolkit much more cost effective, but it also works non-stop to find and fix vulnerabilities.
When auto-remediation may not be appropriate
Automatic remediation of some security issues isn’t always appropriate. Rather, it’s better that the tool simply discovers the issue and raises an alert to allow a person to decide how to resolve it. For example, auto-remediation is generally unsuitable when an encryption key is required, such as for a database, and for configurations that require user interactions, such as selecting a VPC or an IAM rule. It’s also not appropriate when the fix requires changes to existing code logic within the customer’s proprietary code base.
Nonetheless, some tools do aid in dealing with insecure code. One helpful feature that isn’t found in all security automation tools is the recognition of faulty code and recommendations on how to fix it with secure code. Seeing the recommended code fix in the pre-production stage helps resolve issues quickly without wasting time doing research on why the code is troublesome. Developers get to focus on their applications while security teams ensure continuous security validation.
AWS is a complex environment with many opportunities for misconfigurations and other issues that can lead to a data breach. Security automation with auto-remediation takes pressure off developers to find and fix a wide variety of vulnerabilities in code and configurations to help keep their organizations’ data safe.
Today’s organizations desire the accessibility and flexibility of the cloud, yet these benefits ultimately mean little if you’re not operating securely. One misconfigured server and your company may be looking at financial or reputational damage that takes years to overcome.
Fortunately, there’s no reason why cloud computing can’t be done securely. You need to recognize the most critical cloud security challenges and develop a strategy for minimizing these risks. By doing so, you can get ahead of problems before they start, and help ensure that your security posture is strong enough to keep your core assets safe in any environment.
With that in mind, let’s dive into the five most pressing cloud security challenges faced by modern organizations.
1. The perils of cloud migration
According to Gartner, the shift to cloud computing will generate roughly $1.3 trillion in IT spending by 2022. The vast majority of enterprise workloads are now run on public, private or hybrid cloud environments.
Yet if organizations heedlessly race to migrate without making security a primary consideration, critical assets can be left unprotected and exposed to potential compromise. To ensure that migration does not create unnecessary risks, it’s important to:
- Migrate in stages, beginning with non-critical or redundant data. Mistakes are often more likely to occur earlier in the process. So begin by moving data that won’t lead to damaging consequences for the enterprise if it gets corrupted or erased.
- Fully understand your cloud provider’s security practices. Go beyond “trust by reputation” and really dig into how your data is stored and protected.
- Maintain operational continuity and data integrity. Once migration occurs, it’s important to ensure that controls are still functioning and there is no disruption to business operations.
- Manage risk associated with the lack of visibility and control during migration. One effective way to manage risk during transition is to use breach and attack simulation software. These automated solutions launch continuous, simulated attacks to view your environment through the eyes of an adversary by identifying hidden vulnerabilities, misconfigurations and user activity that can be leveraged for malicious gain. This continuous monitoring provides a significant advantage during migration – a time when IT staff are often stretched thin, learning new concepts and operating with less visibility into key assets.
2. The need to master identity and access management (IAM)
Effectively managing and defining the roles, privileges and responsibilities of various network users is a critical objective for maintaining robust security. This means giving the right users the right access to the right assets in the appropriate context.
As workers come and go and roles change, this mandate can be quite a challenge, especially in the context of the cloud, where data can be accessed from anywhere. Fortunately, technology has improved our ability to track activities, adjust roles and enforce policies in a way that minimizes risk.
Today’s organizations have no shortage of end-to-end solutions for identity governance and management. Yet it’s important to understand that these tools alone are not the answer. No governance or management product can provide perfect protection as organizations are eternally at the mercy of human error. To help support smart identity and access management, it’s critical to have a layered and active approach to managing and mitigating security vulnerabilities that will inevitably arise.
Practicing the principle of least privilege, permitting only the minimal amount of access necessary to perform each task, will greatly enhance your security posture.
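As a sketch of what least privilege looks like in practice, the hypothetical helper below builds a minimal IAM-style policy scoped to one explicitly named bucket and an explicit action list, with no wildcards. The bucket name and actions are illustrative, not taken from the article:

```python
import json

def least_privilege_policy(bucket: str, actions: list[str]) -> dict:
    """Build a minimal IAM-style policy granting only the given
    actions on a single, explicitly named bucket (no wildcards)."""
    return {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": sorted(actions),          # e.g. ["s3:GetObject"]
            "Resource": f"arn:aws:s3:::{bucket}/*",
        }],
    }

# A reporting job that only ever reads objects gets read-only access:
policy = least_privilege_policy("billing-reports", ["s3:GetObject"])
print(json.dumps(policy, indent=2))
```

The point of the sketch is that access is granted per role and per resource, rather than reaching for a broad `*` grant that a worker never actually needs.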
3. The risks posed by vendor relationships
The explosive growth of cloud computing has highlighted new and deeper relationships between businesses and vendors, as organizations seek to maximize efficiencies through outsourcing and vendors assume more important roles in business operations. Effectively managing vendor relations within the context of the cloud is a core challenge for businesses moving forward.
Why? Because integrating third-party vendors often substantially raises cybersecurity risk. A Ponemon Institute study in 2018 noted that nearly 60% of companies surveyed had encountered a breach due to a third party. APT groups have adopted a strategy of targeting large enterprises via such smaller partners, where security is often weaker. Adversaries know you’re only as strong as your weakest link and take the path of least resistance to compromise assets. It is therefore incumbent upon today’s organizations to vigorously and securely manage third-party vendor relations in the cloud. This means developing appropriate guidance for SaaS operations (including sourcing and procurement solutions) and undertaking periodic vendor security evaluations.
4. The problem of insecure APIs
APIs are the key to successful cloud integration and interoperability. Yet insecure APIs are also one of the most significant threats to cloud security. Adversaries can exploit an open line of communication and steal valuable private data by compromising APIs. How often does this really occur? Consider this: By 2022, Gartner predicts insecure APIs will be the vector most commonly used to target enterprise application data.
With APIs growing ever more critical, attackers will continue to use tactics such as exploiting inadequate authentications or planting vulnerabilities within open source code, creating the possibility of devastating supply chain attacks. To minimize the odds of this occurring, developers should design APIs with proper authentication and access control in mind and seek to maintain as much visibility as possible into the enterprise security environment. This will allow for the quick identification and remediation of such API risks.
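As one illustration of designing authentication in from the start, the sketch below uses Python's standard `hmac` module to sign and verify API request bodies with a constant-time comparison, which rejects tampered or unsigned requests. The secret and payload are hypothetical; in a real deployment the secret would come from a secrets manager, not source code:

```python
import hmac
import hashlib

# Hypothetical shared secret; in practice this would be provisioned
# from a secrets manager, never embedded in source code.
API_SECRET = b"demo-secret"

def sign(payload: bytes) -> str:
    """Return an HMAC-SHA256 signature for an API request body."""
    return hmac.new(API_SECRET, payload, hashlib.sha256).hexdigest()

def verify(payload: bytes, signature: str) -> bool:
    """Constant-time check that rejects tampered or unsigned requests."""
    return hmac.compare_digest(sign(payload), signature)

body = b'{"customer_id": 42}'
tag = sign(body)
assert verify(body, tag)                        # legitimate caller passes
assert not verify(b'{"customer_id": 43}', tag)  # tampered body is rejected
```

Using `hmac.compare_digest` rather than `==` avoids timing side channels when comparing signatures.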
5. Dealing with limited user visibility
We’ve mentioned visibility on multiple occasions in this article – and for good reason. It is one of the keys to operating securely in the cloud. The ability to tell friend from foe (or authorized user from unauthorized user) is a prerequisite for protecting the cloud. Unfortunately, that’s a challenging task as cloud environments grow larger, busier and more complex.
Controlling shadow IT and maintaining better user visibility via behavior analytics and other tools should be a top priority for organizations. Given the lack of visibility across many contexts within cloud environments, it’s a smart play to develop a security posture that is dedicated to continuous improvement and supported by continuous testing and monitoring.
Critical cloud security challenges: The takeaway
Cloud security is achievable as long as you understand, anticipate and address the most significant challenges posed by migration and operation. By following the ideas outlined above, your organization will be in a much stronger position to prevent and defeat even the most determined adversaries.
A number of organizations face shortcomings in monitoring and securing their cloud environments, according to a Tripwire survey of 310 security professionals.
76% of security professionals state they have difficulty maintaining security configurations in the cloud, and 37% said their risk management capabilities in the cloud are worse compared with other parts of their environment. 93% are concerned about human error accidentally exposing their cloud data.
Few organizations assessing overall cloud security posture in real time
Attackers are known to run automated searches to find sensitive data exposed in the cloud, making it critical for organizations to monitor their cloud security posture on a recurring basis and fix issues immediately.
However, the report found that only 21% of organizations assess their overall cloud security posture in real time or near real time. While 21% said they conduct weekly evaluations, 58% do so only monthly or less frequently. Despite widespread worry about human errors, 22% still assess their cloud security posture manually.
“Security teams are dealing with much more complex environments, and it can be extremely difficult to stay on top of the growing cloud footprint without having the right strategy and resources in place,” said Tim Erlin, VP of product management and strategy at Tripwire.
“Fortunately, there are well-established frameworks, such as CIS benchmarks, which provide prioritized recommendations for securing the cloud. However, the ongoing work of maintaining proper security controls often goes undone or puts too much strain on resources, leading to human error.”
Utilizing a framework to secure the cloud
Most organizations utilize a framework for securing their cloud environments – CIS and NIST being two of the most popular – but only 22% said they are able to maintain continuous cloud security compliance over time.
While 91% of organizations have implemented some level of automated enforcement in the cloud, 92% still want to increase their level of automated enforcement.
Additional survey findings show that automation levels varied across cloud security best practices:
- Only 51% have automated solutions that ensure proper encryption settings are enabled for databases or storage buckets.
- 45% automatically assess new cloud assets as they are added to the environment.
- 51% have automated alerts with context for suspicious behavior.
Maximizing data privacy should be on every organization’s priority list. We all know how important it is to keep data and applications secure, but what happens when access to private data is needed to save lives? Should privacy be sacrificed? Does it need to be?
Consider the case of contact tracing, which has become a key tool in the fight to control COVID-19. It’s a daunting task greatly facilitated by collecting and analyzing real-time identity and geo-location data gathered from mobile devices—sometimes voluntarily and sometimes not.
In most societies, such as the United States and the European Union, the use of location and proximity data by governments may be strictly regulated or even forbidden—implicitly impeding the ability to efficiently contain the spread of the virus. Where public health has been prioritized over data privacy, the use of automated tracing has contributed to the ability to quickly identify carriers and prevent disease spread. However, data overexposure remains a major concern for those using the application. They worry about the real threat that their sensitive location data may eventually be misused by bad actors, IT insiders, or governments.
What if it were possible to access the data needed to get contact tracing answers without actually exposing personal data to anyone anywhere? What if data and applications could be secure by default—so that data could be collected, stored, and results delivered without exposing the actual data to anyone except the people involved?
Unfortunately, current systems and software will never deliver the absolute level of data privacy required because of a fundamental hardware flaw: data cannot be simultaneously used and secured. Once data is put into memory, it must be decrypted and exposed to be processed. This means that once a bad actor or malicious insider gains access to a system, it’s fairly simple for that system’s memory and/or storage to be read, effectively exposing all data. It’s this data security flaw that’s at the foundation of virtually every data breach.
Academic and industry experts, including my co-founder Dr. Yan Michalevsky, have known for years that the ultimate, albeit theoretical, resolution of this flaw was to create a compute environment rooted in secure hardware. Such solutions have already been implemented in cell phones and some laptops to secure storage and payments, and they are working well, proving the concept performs as expected.
It wasn’t until 2015 that Intel introduced Software Guard Extensions (SGX)—a set of security-related machine-level instruction codes built into their new CPUs. AMD has also added a similar proprietary instruction set, Secure Encrypted Virtualization (SEV), into its CPUs. These silicon-level command sets enable the creation of encrypted and isolated parts of memory, and they establish a hardware root of trust that helps close the data security flaw. Such isolated and secured segments of memory are known as secure enclaves or, more generically, Trusted Execution Environments (TEEs).
A broad consortium of cloud and software vendors (called the Confidential Computing Consortium) is working to develop these hardware-level technologies by creating the tools and cloud ecosystems over which enclave-secured applications and data can run. Amazon Web Services announced its version of secure enclave technology, Nitro Enclaves, in late 2019. Most recently, both Microsoft (Azure confidential computing) and Google announced their support for secure enclaves as well.
These enclave technologies and secure clouds should enable applications, such as COVID-19 contact tracing, to be implemented without sacrificing user privacy. The data and application enclaves created using this technology enable sensitive data to be processed without ever exposing either the data or the computed results to anyone but the actual end user. This means public health organizations can have automated contact tracing that can identify, analyze, and provide needed alerts in real-time—while simultaneously maximizing data privacy.
Creating or shifting applications and data to the secure confines of an enclave can take a significant investment of time, knowledge, and tools. That’s changing quickly. New technologies are becoming available that will streamline the operation of moving existing applications and all data into secure enclaves without modification.
As this happens, all organizations will be able to secure all data by default. This will enable CISOs, security professionals—and public health officials—to sleep soundly, knowing that private data and applications in their care will be kept truly safe and secure.
Cloud breaches will likely increase in velocity and scale, according to Accurics, whose latest report highlights steps that can be taken to mitigate them.
“While the adoption of cloud native infrastructure such as containers, serverless, and service mesh is fueling innovation, misconfigurations are becoming commonplace and creating serious risk exposure for organizations,” said Om Moolchandani, CTO, Accurics. “As cloud infrastructure becomes increasingly programmable, we believe that the most effective defense is to codify security into development pipelines and enforce it throughout the lifecycle of the infrastructure. The receptiveness of the developer community toward assuming more security responsibility has been encouraging and a step in the right direction.”
Key report findings
Misconfigured cloud storage services are commonplace in a stunning 93% of the cloud deployments analyzed, and most also have at least one network exposure where a security group is left wide open. These issues will likely increase in both velocity and scale—and they’ve already contributed to more than 200 breaches over the past two years.
One emerging problem area is that despite the broad availability of tools like HashiCorp Vault and AWS Key Management Service (KMS), hardcoded private keys turned up in 72% of the deployments analyzed. Specifically, unprotected credentials stored in container configuration files were found in half of these deployments, which is an issue given that 84% of organizations use containers.
Going one level deeper, in 41% of the organizations the hardcoded keys carried high privileges and were used to provision compute resources; any breach involving these keys would expose all associated resources. Hardcoded keys have contributed to a number of cloud breaches.
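A minimal sketch of the alternative to hardcoding: resolve the credential at runtime from the deployment environment. In practice the lookup would typically go to a tool like HashiCorp Vault or AWS Secrets Manager; an environment variable injected by the orchestrator, as simulated below, is the minimal version of the same idea (the variable name is illustrative):

```python
import os

def load_api_key(name: str = "SERVICE_API_KEY") -> str:
    """Resolve a credential at deploy/run time instead of baking it
    into the container image or a configuration file. Fails loudly
    if the credential was never provisioned."""
    key = os.environ.get(name)
    if not key:
        raise RuntimeError(f"credential {name!r} not provisioned")
    return key

# Simulate the orchestrator injecting the secret at deploy time:
os.environ["SERVICE_API_KEY"] = "injected-at-deploy-time"
print(load_api_key())
```

Because the secret never appears in the image or the repository, rotating it becomes a deployment change rather than a code change.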
Network exposures resulting from misconfigured routing rules posed the greatest risk to all organizations. In 100% of deployments, an altered routing rule exposed a private subnet containing sensitive resources, such as databases, to the Internet.
Automated detection of risks paired with a manual approach to resolution is creating alert fatigue, and only 6% of issues are being addressed. An emerging practice known as Remediation as Code, in which the code to resolve the issue is automatically generated, is enabling organizations to address 80% of risks.
Automated threat modeling is also needed to determine whether changes such as privilege increases and route changes introduce breach paths in a cloud deployment. As organizations embrace Infrastructure as Code (IaC) to define and manage cloud-native infrastructure, codifying security into development pipelines becomes possible and can significantly reduce the attack surface before cloud infrastructure is provisioned.
The new report makes the case for establishing IaC as the baseline for maintaining risk posture after cloud infrastructure is provisioned. Continuous assessment of new cloud resources and configuration changes against the baseline will surface new risks. If a change is legitimate, update the IaC to reflect it; if it’s not, redeploy the cloud from the baseline.
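The baseline comparison the report describes can be sketched as a simple diff between the IaC-defined state and the live cloud state; anything that differs is drift to investigate. The resource names and values below are illustrative:

```python
def detect_drift(baseline: dict, live: dict) -> dict:
    """Compare the IaC-defined baseline with the live cloud state.
    Returns {setting: (baseline_value, live_value)} for every
    setting that has drifted from the baseline."""
    drift = {}
    for key in baseline.keys() | live.keys():
        if baseline.get(key) != live.get(key):
            drift[key] = (baseline.get(key), live.get(key))
    return drift

baseline = {"db_subnet": "private", "bucket_acl": "private"}
live     = {"db_subnet": "public",  "bucket_acl": "private"}
print(detect_drift(baseline, live))  # the exposed subnet surfaces as drift
```

In a real pipeline the "live" side would be pulled from the cloud provider's APIs and the "baseline" side parsed from the Terraform or CloudFormation source of truth.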
Dustin Rigg Hillard, CTO at eSentire, is responsible for leading product development and technology innovation. His vision is rooted in simplifying and accelerating the adoption of machine learning for new use cases.
In this interview Dustin talks about modern digital threats, the challenges cybersecurity teams face, cloud-native security platforms, and more.
What types of challenges do in-house cybersecurity teams face today?
The main challenges that in-house cybersecurity teams have to deal with today are largely due to ongoing security gaps. As a result, overwhelmed security teams don’t have the visibility, scalability or expertise to adapt to an evolving digital ecosystem.
Organizations are moving toward the adoption of modern and transformative IT initiatives that are outpacing the ability of their security teams to adapt. For security teams, this means constant change, disruptions with unknown consequences, increased risk, more data to decipher, more noise, more competing priorities, and a growing, disparate, and diverse IT ecosystem to protect. The challenge for cybersecurity teams is finding effective ways to deliver and maintain security at the speed of digital transformation, ensuring that every new technology, digital process, customer and partner interaction and innovation is protected.
Cybercrime is being conducted at scale, and threat actors are constantly changing techniques. What are the most significant threats at the moment?
Threat actors, showing their usual agility, have shifted efforts to target remote workers and take advantage of current events. We are seeing attackers exploiting user behavior by misleading users into opening and executing a malicious file, going to a malicious site or handing over information, typically using lures which create urgency (e.g., by masquerading as payment and invoice notifications) or leverage current crises and events.
What are the main benefits of cloud-native security platforms?
A cloud-native platform offers important advantages over legacy approaches—advantages that provide real, important benefits for cybersecurity providers and the clients who depend on them.
- A cloud-native architecture is more easily extensible, which means more features, sooner, to enable analysts and protect clients
- A cloud-native platform offers higher performance because the microservices inside it can maximally utilize the cloud’s vast compute, storage and network resources; this performance is necessary to ingest and process the vast streams of data which need to be processed to keep up with real-time threats
- A cloud-native platform can effortlessly scale to handle increased workloads without degradation to performance or client experience
Security platforms usually deliver a variety of metrics, but how does an analyst know which ones are meaningful?
The most important metrics are the ones that show how the platform delivers security outcomes:
- How many threats were stopped with active response?
- How many potentially malicious connections were blocked?
- How many malware executions were halted?
- How quickly was a threat contained after initial detection?
Modern security platforms help simplify data analytics by delivering capabilities that amplify threat detection, response and mitigation activities; deliver risk-management insights; and help organizations stay ahead of potential threats.
Cloud-native security platforms can output a wide range of data insights including information about threat actors, indicators of compromise, attack patterns, attacker motivations and capabilities, signatures, CVEs, tactics, and vulnerabilities.
How can security teams take advantage of the myriad of security tools that have been building in the organization’s IT ecosystem for many years?
Cloud-native security platforms ingest data from a wide variety of sources such as security devices, applications, databases, cloud systems, SaaS platforms, IoT devices, network traffic and endpoints. Modern security platforms can correlate and analyze data from all available sources, providing a complete picture of the organization’s environment and security posture for effective decision-making.
Twilio has confirmed that, for 8 or so hours on July 19, a malicious version of their TaskRouter JS SDK was being served from one of their AWS S3 buckets.
“Due to a misconfiguration in the S3 bucket that was hosting the library, a bad actor was able to inject code that made the user’s browser load an extraneous URL that has been associated with the Magecart group of attacks,” the company shared.
Who’s behind the attack?
Twilio is a cloud communications platform as a service (CPaaS) company, which provides web service APIs developers can use to add messaging, voice, and video in their web and mobile applications.
“The TaskRouter JS SDK is a library that allows customers to easily interact with Twilio TaskRouter, which provides an attribute-based routing engine that routes tasks to agents or processes,” Twilio explained.
The misconfigured AWS S3 bucket, which is used to serve public content from the domain twiliocdn.com, hosts copies of other SDKs, but only the TaskRouter SDK had been modified.
The misconfiguration allowed anybody on the Internet to read and write to the S3 bucket, and the opportunity was seized by the attacker(s).
“We do not believe this was an attack targeted at Twilio or any of our customers,” the company opined.
Jordan Herman, Threat Researcher at RiskIQ, which detailed previous threat campaigns that used the same malicious traffic redirector, told Help Net Security that misconfigured Amazon S3 buckets are easy to find and grant attackers a high level of access, which is why attacks like this are happening at an alarming rate.
Om Moolchandani, co-founder and CTO at code to cloud security company Accurics, noted that there are many similarities between waterhole attacks and the Twilio incident.
“Taking over a cloud hosted SDK allows attackers to ‘cloud waterhole’ into the victim environments by landing directly into the operation space of victims,” he said.
Following this incident, Twilio checked the permissions on all of their AWS S3 buckets and found others that were misconfigured, but those stored no production or customer data and had not been tampered with.
“During our incident review, we identified a number of systemic improvements that we can make to prevent similar issues from occurring in the future. Specifically, our teams will be engaging in efforts to restrict direct access to S3 buckets and deliver content only via our known CDNs, improve our monitoring of S3 bucket policy changes to quickly detect unsafe access policies, and determine the best way for us to provide integrity checking so customers can validate that they are using known good versions of our SDKs,” the company shared.
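Monitoring for unsafe bucket policies, as Twilio describes, largely amounts to scanning ACL grants for the well-known public grantee groups; world-writable grants are exactly the condition that enabled this attack. A minimal sketch, assuming S3-shaped ACL documents (the example ACL is illustrative, not Twilio's actual configuration):

```python
# The two grantee URIs that make an S3 ACL world-accessible.
PUBLIC_GROUPS = {
    "http://acs.amazonaws.com/groups/global/AllUsers",
    "http://acs.amazonaws.com/groups/global/AuthenticatedUsers",
}

def public_grants(acl: dict) -> list[str]:
    """Return the permissions an S3-style ACL hands to the public.
    Anything here ("READ", and especially "WRITE") means anyone
    on the Internet can act on the bucket."""
    findings = []
    for grant in acl.get("Grants", []):
        if grant.get("Grantee", {}).get("URI") in PUBLIC_GROUPS:
            findings.append(grant["Permission"])
    return findings

# An ACL shaped like the Twilio misconfiguration: world-writable.
acl = {"Grants": [
    {"Grantee": {"URI": "http://acs.amazonaws.com/groups/global/AllUsers"},
     "Permission": "WRITE"},
]}
print(public_grants(acl))  # ['WRITE']
```

Running a check like this on every bucket policy change, rather than on an occasional audit, is what turns the monitoring Twilio describes into fast detection.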
They say it’s difficult to gauge the impact of the attack on individual users, since the “links used in these attacks are deprecated and rotated and since the script itself doesn’t execute on all platforms.”
The company urges those who have downloaded a copy of the TaskRouter JS SDK between July 19th, 2020 1:12 PM and July 20th, 10:30 PM PDT (UTC-07:00) to re-download it, check its integrity and replace it.
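Checking the integrity of a downloaded SDK copy comes down to comparing its SHA-256 digest against a known-good value. A minimal sketch; the file contents and digest below are stand-ins, not Twilio's published values:

```python
import hashlib

def verify_sdk(blob: bytes, expected_sha256: str) -> bool:
    """Recompute the SDK's SHA-256 and compare it with the digest
    published by the vendor; a mismatch means the copy was altered."""
    return hashlib.sha256(blob).hexdigest() == expected_sha256

good = b"console.log('taskrouter');"            # stand-in for the real SDK file
known_digest = hashlib.sha256(good).hexdigest()  # would come from the vendor

assert verify_sdk(good, known_digest)
assert not verify_sdk(good + b"// injected", known_digest)
```

Browsers can enforce the same idea automatically via Subresource Integrity, by declaring the expected digest in the script tag that loads the SDK from the CDN.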
“If your application loads v1.20 of the TaskRouter JS SDK dynamically from our CDN, that software has already been updated and you do not need to do anything,” they pointed out.
70% of organizations experienced a public cloud security incident in the last year – including ransomware and other malware (50%), exposed data (29%), compromised accounts (25%), and cryptojacking (17%), according to Sophos.
Organizations running multi-cloud environments are more than 50% more likely to suffer a cloud security incident than those running a single cloud.
Europeans suffered the lowest percentage of security incidents in the cloud, an indicator that compliance with GDPR guidelines is helping to protect organizations from being compromised. India, on the other hand, fared the worst, with 93% of organizations being hit by an attack in the last year.
“Ransomware, not surprisingly, is one of the most widely reported cybercrimes in the public cloud. The most successful ransomware attacks include data in the public cloud, according to the State of Ransomware 2020 report, and attackers are shifting their methods to target cloud environments that cripple necessary infrastructure and increase the likelihood of payment,” said Chester Wisniewski, principal research scientist, Sophos.
“The recent increase in remote working provides extra motivation to disable cloud infrastructure that is being relied on more than ever, so it’s worrisome that many organizations still don’t understand their responsibility in securing cloud data and workloads. Cloud security is a shared responsibility, and organizations need to carefully manage and monitor cloud environments in order to stay one step ahead of determined attackers.”
The unintentional open door: How attackers break in
Accidental exposure continues to plague organizations, with misconfigurations exploited in 66% of reported attacks. Misconfigurations drive the majority of incidents and are all too common given cloud management complexities.
Additionally, 33% of organizations report that cybercriminals gained access through stolen cloud provider account credentials. Despite this, only a quarter of organizations say managing access to cloud accounts is a top area of concern.
Data further reveals that 91% of accounts have overprivileged identity and access management roles, and 98% have multi-factor authentication disabled on their cloud provider accounts.
Public cloud security incident: The silver lining
96% of respondents admit to concern about their current level of cloud security, an encouraging sign that it’s top of mind and important.
Appropriately, “data leaks” top the list of security concerns for nearly half of respondents (44%); identifying and responding to security incidents is a close second (41%). Notwithstanding this silver lining, only one in four respondents view lack of staff expertise as a top concern.
Even before lockdowns, there was a steady migration toward more flexible workforce arrangements. Given the new normal of so many more people working from home—on top of a pile of evidence showing that productivity and quality of life typically go up with remote work—it is inevitable that many more companies will continue to offer those arrangements even as stay-at-home orders are lifted.
Unfortunately, a boom in remote access goes hand-in-hand with an increased risk to sensitive information. Verizon reports that 30 percent of recent data breaches were a direct result of the move to web applications and services.
Data is much harder to track, govern, and protect when it lives inside a cloud. In large part, these threats are associated with internet-exposed storage.
Emerging threat matrix
Traditionally, system administrators rely on perimeter security to stop outside intruders, yet even the most conscientious are exposed after a single missed or delayed update. Beyond that, insiders are widely considered the biggest threat to data security.
Misconfiguration accounts for the vast majority of insider errors. It is usually the result of failure to properly secure cloud storage or firewall settings, and largely relates to unsecured databases or file storage that are directly exposed on a cloud service.
In many cases, employees mislabel private documents by setting storage privileges to public. According to the Verizon report, among financial services and insurance firms, this is now the second most common type of misconfiguration error.
Addressing this usually means getting open sharing under control, figuring out where sensitive data resides and who owns it, and running an access certification program to align data access with organizational needs.
Optimistically, companies hope that a combination of technological safeguards and diligence on the part of users—whether employees, partners, or customers—will eliminate, or at least minimize, costly mistakes.
Other internal threats come as a part of a cloud migration or backup process, where a system admin or DBA will often stand up an instance of data on a cloud platform but fail to put inconvenient but necessary access controls in place.
Consider the example of cloud data warehouses. Providers such as Amazon, Google, and Snowflake now make it simple to store vast quantities of data cheaply, to migrate data easily, and to scale up or down at will. Little wonder that these services are growing so quickly.
Yet even the best services need some help when it comes to tracking data access. Some tools make it easy to authenticate remote users before letting them inside the gate of the cloud data warehouse. After that, though, things often get murky. Who is accessing which data, how much of it, when, and from where?
These are issues that every company must confront. That data is ripe for exploitation by dishonest insiders, or by careless employees, with serious consequences. In more fortunate circumstances, it is discovered by security teams, or by management who make an irate call to the CISO.
Born in the cloud
More approaches to data security that are born in the cloud are now appearing, and the new normal means the enterprise is motivated to adapt. As most organizations turn to the cloud for what used to be on-premises IT deployments, the responsibility and techniques to secure the infrastructure and applications that hold data are also being moved to the cloud.
For instance, infrastructure-as-a-service (IaaS) provides virtualized computing resources like virtual firewalls and network security hardware, and virtual intrusion detection and prevention, but these are an intermediate step at best.
The idea is that IaaS can offer a set of defenses at scale for all of a cloud provider’s customers, built into the platform itself, which will relieve an individual cloud customer from having to do many of the things that used to be on-premises data-protection requirements.
But what has really changed? A top certification may be enough for a provider to claim “above average” data security, but in reality that security remains totally contingent on perimeter defenses, hardware appliances, and proper configurations by system administrators and DBAs. And it’s still only as good as the data hygiene of end users. There are a lot of “ifs” and “buts,” which is nothing new.
Data Security-as-a-Service (DSaaS) complements IaaS as it integrates data protection at the application layer. This places data access services in the path between users who want data and the data itself. It is also portable because it goes where the application goes.
Developers can embed data access governance and protection into applications through a thin layer of technology wrapped around database drivers or APIs, which all applications use to connect to their databases. An obvious advantage is that this is more easily maintained over time.
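A toy version of such a driver-level wrapper, using SQLite as a stand-in database, might look like the sketch below. The per-user table allow-list and the naive SQL parsing are illustrative only; a production implementation would hook the real driver and use a proper SQL parser:

```python
import sqlite3

class GovernedConnection:
    """A thin wrapper around a DB-API connection that enforces a
    per-user allow-list of tables before any query reaches the
    database: a toy version of driver-level data access governance."""

    def __init__(self, conn, allowed_tables):
        self._conn = conn
        self._allowed = {t.lower() for t in allowed_tables}

    def execute(self, sql, params=()):
        for table in self._tables_mentioned(sql.lower()):
            if table not in self._allowed:
                raise PermissionError(f"access to table {table!r} denied")
        return self._conn.execute(sql, params)

    @staticmethod
    def _tables_mentioned(sql):
        # Naive parse: the word following FROM/JOIN/INTO/UPDATE.
        words = sql.replace(",", " ").split()
        return {words[i + 1] for i, w in enumerate(words[:-1])
                if w in ("from", "join", "into", "update")}

raw = sqlite3.connect(":memory:")
raw.execute("CREATE TABLE orders (id INTEGER)")
raw.execute("CREATE TABLE salaries (id INTEGER)")

# This user's role permits orders data only:
conn = GovernedConnection(raw, allowed_tables=["orders"])
conn.execute("SELECT * FROM orders")           # permitted
try:
    conn.execute("SELECT * FROM salaries")     # blocked by policy
except PermissionError as e:
    print(e)
```

Because the enforcement lives in the layer every application already uses to reach the database, the policy travels with the application wherever it is deployed, which is the portability advantage the article describes.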
Data security is a shared responsibility among security pros, end users, and cloud providers. As the new normal becomes reality, shared responsibility means that a cloud provider handles the underlying network security such that the cloud infrastructure ensures basic, customer-level network isolation and secure physical routers and switches.
From here, under the DSaaS model the cloud service provider offers DSaaS—or else the customer provisions it through a third party—as a set of automated data security components that complete a secure cloud environment.
This makes it possible to govern each user at a granular level so that they access only the types of data they should, and perform only those actions with the data for which they are authorized. CISOs can implement and adapt rulesets to govern the flow of data by type and role. In terms of data protection, application-layer data security makes it possible to isolate and block bad traffic, including excessive data volumes, down to an individual user.
From this perspective, DSaaS can act as both an intrusion detection system (IDS) and intrusion prevention system (IPS). It can inspect data access and analyze it for intrusion attempts or vulnerabilities in workload components that could potentially exploit a cloud environment, and then automatically stop data access in progress until system admins can look into the situation.
At this level it is also feasible to log data activity such as what each user does with the data they access, satisfying both security and compliance—a notable accomplishment, considering that the two functions are often at odds with one another.
Incorporating security at the application layer also offers data protection capabilities that are similar to network intrusion appliances, or security agents that reside at the OS level on a virtual machine or at the hypervisor level.
Moreover, DSaaS governance and protection is so fine-grained that it does not inhibit traffic flow, data availability, and uptime even in the face of multiple sustained attacks.
Everyone is talking about how the “new normal” is impacting data security, but the enterprise was well on this path before the pandemic. It is tempting for vigilance to give rise to pessimism since data security has too often been a laggard, and an inventory of the cloud data-security bona fides of most companies is not encouraging.
However, data protection and governance can be assured should we adopt shared models for responsibility and finely tuned, application-level controls. It’s a new world and we can be ready for it.
The recent pandemic created a new normal that redefines the way business operates by eliminating security and physical work borders. An Avertium study found that having employees work from home during the pandemic saved U.S. employers more than $30 billion per day.
The study also predicts that 25-30% of the workforce will be working from home for multiple days per week by the end of 2021. For IT Security teams, this poses many new challenges.
“As we move forward with increasingly complex and fragmented business models, it’s crucial to fully assess and protect business assets from new and emerging cybercrimes,” says Paul Caiazzo, senior vice president, security and compliance at Avertium.
“The goal is to prevent a wide array of online threats and attacks, including data breaches, ransomware attacks, identity theft, hacking at home, business, cloud and hybrid cloud locations and online predators. Work with cybersecurity professionals who understand the increased threats in our new, post-COVID world, and can increase security to mitigate risk.”
Organizations losing visibility into their business network traffic
Many organizations’ security monitoring infrastructure is based on the assumption that most employees are connected directly to the corporate LAN. By collecting data from Active Directory domain controllers, the perimeter firewall, server and workstation event logs, endpoint protection logs and other key on-premises data sources, an organization can maintain a high level of visibility into activity within its network.
But since many employees have moved outside of the network perimeter, whether by using mobile devices or working from a home or remote environment, organizations have lost visibility into a large percentage of their business network traffic.
Cybercriminals have pounced on the chance to leverage the resulting distraction for their own gain by turning up the volume of their efforts. Bad actors have recently made news by stealing personal data from unemployment benefit applicants in several states, waging ongoing COVID-19-themed phishing campaigns, and creating a 238% surge in cyberattacks against banks.
With so much at stake, it’s important to establish ways of monitoring telework security in a world with disappearing network perimeters.
Telework redefines the network perimeter
With a fully remote workforce, many organizations have been forced to make choices between usability and security. Existing VPN infrastructure was not designed to support a fully remote workforce.
Adoption of split-tunnel VPNs has been widely recommended as a solution to the VPN scalability problem. However, while allowing Internet-bound traffic to flow directly to its destination, instead of over the corporate VPN, increases usability, it does so at the cost of security and network visibility.
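The split-tunnel trade-off comes down to a per-destination routing decision: corporate prefixes enter the tunnel, everything else goes straight to the Internet unmonitored. A minimal sketch, with illustrative placeholder prefixes:

```python
import ipaddress

# Hypothetical corporate prefixes; a real deployment would push these
# from the VPN concentrator's split-tunnel configuration.
CORPORATE_NETWORKS = [
    ipaddress.ip_network("10.0.0.0/8"),
    ipaddress.ip_network("172.16.0.0/12"),
]

def route_via_vpn(destination: str) -> bool:
    """Return True if traffic should enter the tunnel, False if it
    bypasses the VPN and goes directly to the Internet."""
    addr = ipaddress.ip_address(destination)
    return any(addr in net for net in CORPORATE_NETWORKS)

print(route_via_vpn("10.1.2.3"))       # True: corporate file server, tunneled
print(route_via_vpn("93.184.216.34"))  # False: public site, direct and unseen
```

Every `False` branch is traffic the corporate monitoring stack never sees, which is exactly the visibility gap the paragraph describes.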
Cybercriminals are capitalizing on this opportunity. The United States Department of Homeland Security (DHS) Cybersecurity and Infrastructure Security Agency (CISA) and the United Kingdom’s National Cyber Security Centre (NCSC) recently issued a joint alert noting an increase in cyberattacks exploiting VPN vulnerabilities.
With unmonitored connections to the public Internet, a remote workforce’s laptops can become compromised by malware or a cybercriminal without detection. These devices can then be used as a stepping stone to access the corporate environment via their VPN connection. For a remote workforce, employee devices and home networks are the new corporate network edge.
Securing the endpoint from the cloud
With the network perimeter shifted to teleworkers’ devices, securing the enterprise requires shifting security to these devices as well. Organizations require at least the same level of visibility into activity as they have on the corporate network.
By deploying agents onto the corporate-owned devices used by teleworkers, an organization can implement endpoint detection and response beyond the confines of the corporate network. This includes the ability to prevent and detect malware, viruses, ransomware, and other threats based upon signature analysis and behavioral analysis of potentially malicious processes.
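The signature-analysis half of that detection can be sketched as a digest lookup (behavioral analysis is far more involved and is omitted here). The "signature database" below is a stand-in, not real threat data:

```python
import hashlib

# Illustrative signature set: SHA-256 digests of known-bad files.
# Real EDR agents pair such signatures with behavioral analysis
# of running processes.
KNOWN_BAD_SHA256 = {
    hashlib.sha256(b"malicious payload").hexdigest(),
}

def matches_signature(file_bytes: bytes) -> bool:
    """Flag a file whose digest appears in the signature set."""
    return hashlib.sha256(file_bytes).hexdigest() in KNOWN_BAD_SHA256

print(matches_signature(b"malicious payload"))  # True
print(matches_signature(b"benign document"))    # False
```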
However, an organization also requires centralized visibility into the devices of their remote workforce. For this purpose, a centrally-managed cloud-based solution is the ideal choice.
By moving security to the cloud, an enterprise reduces load on the corporate network and VPN infrastructure, especially in a split-tunnel connectivity architecture. Cloud-based monitoring and threat management also can achieve a higher level of scalability and performance than an on-premises solution.
A cloud-based zero trust platform can also act as an access broker to resources both on the public internet and the corporate private network.
Zero trust agents installed on telecommuters’ devices can securely and dynamically route all traffic to a cloud-based gateway and then on to the target resource in a way that provides the same or better control and visibility than even a well-configured traditional full tunnel VPN solution. By uniquely identifying the user, device and context, zero trust provides fine-grained precision on access control for the enterprise.
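A minimal sketch of such a policy decision, assuming an invented user/resource allowlist and simplified posture flags (a real broker would evaluate far richer identity, device and context signals):

```python
from dataclasses import dataclass

@dataclass
class AccessRequest:
    user: str
    device_managed: bool   # corporate-owned, agent installed
    mfa_passed: bool
    resource: str

# Hypothetical least-privilege policy: each identity may reach only
# the resources explicitly granted to it.
ALLOWED = {("alice", "crm"), ("bob", "wiki")}

def authorize(req: AccessRequest) -> bool:
    """Decide on identity, device posture and context --
    never on network location alone."""
    if not (req.device_managed and req.mfa_passed):
        return False
    return (req.user, req.resource) in ALLOWED

print(authorize(AccessRequest("alice", True, True, "crm")))   # True
print(authorize(AccessRequest("alice", False, True, "crm")))  # False
```

Note that a request from an unmanaged device is denied even for a user who would otherwise be entitled to the resource.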
Data from the cloud-based ZTN gateway can additionally be used to perform behavioral analytics within a cloud-based SIEM platform, enhancing security visibility above and beyond traditional networking approaches.
Ensuring employee privacy while monitoring telework security
Monitoring telework security can be a thorny issue for an organization from a privacy and security perspective. On one hand, an organization must be able to secure the sensitive data employees use for daily work in order to meet regulatory requirements. On the other, deploying network monitoring solutions at employees’ homes presents significant privacy issues.
An agent-based solution, supported by cloud-based infrastructure, provides a workable answer to both concerns. For corporate-owned devices, company policy should include an explicit consent-to-monitor clause, which enables the organization to monitor activity on company devices.
Agents installed on these devices enable an organization to exercise these rights without inappropriately monitoring employee network activity on personal devices connected to the same home network.
Monitoring BYOD security
For personal devices used for remote work under a BYOD policy, the line between privacy and security becomes blurrier. Since devices are owned by the employee, it may seem more difficult to enforce installation of the software agent, and these dual-use devices may cause inadvertent corporate monitoring of personal traffic.
All organizations employing a BYOD model should document in policy the requirements for usage of personally owned devices, including cloud-based anti-malware and endpoint detection and response tools as described earlier.
The most secure way to enable BYOD is a combination of corporately managed cloud-based anti-malware/EDR, supplemented by a ZTN architecture. In such a model, traffic bound for public internet resources can be passed along to the destination without interference, but malicious activity can still be detected and prevented.
With more and more IT resources moving to the cloud and remote work becoming a ubiquitous business practice due to COVID-19, perimeter-based security is undeniably becoming a weak link, especially since attackers have repeatedly demonstrated they can bypass firewalls and spread laterally within enterprise networks.
It’s time for a different approach – one that centers on user identity and risk rather than binary network connectivity. In addition, security must be enforced closer to end users, rather than backhauling traffic across the Internet to a centralized data center for inspection. Two complementary concepts have emerged that address these challenges.
The first has existed for several years and is now gaining real traction to address the security gap created by the disintegration of the network perimeter. Zero trust architecture (ZTA) states that an organization should not trust anything inside or outside its borders by default and should instead verify anything and everything (users, IoT devices, bots, microservice processes) trying to connect to its systems and resources before granting access. ZTA enforces granular controls so that entities can only reach the resources they require.
In addition to ZTA, the second and emerging concept that helps close the holes that have been punched through the enterprise perimeter involves moving security to the network edge. Research firm Gartner has dubbed this approach the Secure Access Service Edge, or SASE (pronounced “Sassy”). According to the firm’s analysts, the SASE approach provides the agility to rapidly deliver security capabilities when and where they are needed without compromising on effectiveness or the user experience.
The SASE architecture is designed to eliminate the need for VPNs and backhauling traffic to a data center for inspection, relying instead on a fabric of security capabilities that are available throughout the Internet as a utility and can be provisioned wherever and whenever they are needed.
Organizations can use SASE technology to achieve a zero trust security posture. Here are a few key things to consider when moving an organization along this path:
Adopt an Identity Provider (IdP): Everything is moving to the cloud – applications, security controls and identities. Migrating to an Internet-based identity infrastructure simplifies integration with cloud resources and positions an organization for the future.
Enforce multi-factor authentication (MFA): In a recent study by Microsoft, 99.9% of all compromised accounts did not have MFA enabled. MFA can dramatically reduce an organization’s susceptibility to account takeover threats and lay the groundwork for all other zero trust principles.
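The one-time codes behind most MFA deployments follow RFC 4226 (HOTP); TOTP extends it by deriving the counter from the current time step. A minimal sketch, checked against the test vector in RFC 4226 Appendix D:

```python
import hashlib
import hmac
import struct

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """HOTP (RFC 4226): the counter-based core that TOTP builds on."""
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F  # dynamic truncation
    code = (struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF)
    return str(code % 10 ** digits).zfill(digits)

# RFC 4226 Appendix D test vector (ASCII secret "12345678901234567890"):
print(hotp(b"12345678901234567890", 0))  # "755224"
```

The server and the authenticator app compute the same code independently from a shared secret, so intercepting a password alone is no longer enough to take over the account.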
Reduce reliance on VPNs: With the increasing use of SaaS applications, the need for VPN access is shrinking. Furthermore, VPN access can pose a liability if it enables users to reach assets they should not be able to see or use. VDI, SaaS, access proxies, and other tools can be used to eliminate VPNs and migrate to an alternative remote access control and management architecture.
Protect remote workstations: SASE can be used to provide always-on network security protection to remote workstations without requiring a VPN. These users can get the benefit of the security and visibility afforded from being “behind the firewall” without having to perform an authentication action, without having their traffic backhauled across the Internet, and without the risk that a compromise of their devices can infect other corporate assets.
Use microsegmentation to narrow zones of trust: A key tenet of zero trust is reducing the avenues that attackers can use to move laterally within enterprise networks once they have achieved an initial point of compromise via a server, workstation, or remote worker’s personal device. No matter where the compromise starts, containing it is the key to stopping an incident from turning into a significant breach.
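Microsegmentation’s effect on lateral movement can be sketched as an explicit zone-pair allowlist; the hosts, zones and flows below are hypothetical:

```python
# Hypothetical zone map and allowlist: traffic is permitted only along
# explicitly declared zone pairs, so a compromised host in "user-lan"
# cannot reach "db" directly even if both sit on routable networks.
ZONE_OF = {"laptop-42": "user-lan", "web-1": "dmz", "db-1": "db"}
ALLOWED_FLOWS = {("user-lan", "dmz"), ("dmz", "db")}

def flow_allowed(src_host: str, dst_host: str) -> bool:
    return (ZONE_OF[src_host], ZONE_OF[dst_host]) in ALLOWED_FLOWS

print(flow_allowed("laptop-42", "web-1"))  # True: user-lan -> dmz
print(flow_allowed("laptop-42", "db-1"))   # False: no direct path to db
```

An attacker on the compromised laptop is forced through the dmz tier, where additional controls and monitoring apply, rather than reaching the database in one hop.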
Cloud adoption and remote work trends have irrevocably changed the enterprise security landscape, making many traditional perimeter controls obsolete. Zero trust and new network architectures like SASE promise to fill the void by eliminating attack vectors that are built-in to reactive, legacy security models while improving user experience and business agility.
The Cloud Security Alliance has released a report examining privacy and security of patient data in the cloud.
In the wake of COVID-19, health delivery organizations (HDOs) have quickly increased their utilization of telehealth capabilities (i.e., remote patient monitoring (RPM) and telemedicine) to treat patients in their homes. These technology solutions allow for the delivery of patient treatment, comply with COVID-19 mitigation best practices, and reduce the risk of exposure for healthcare providers.
Remote healthcare comes with security challenges
Going forward, telehealth solutions — which introduce high levels of patient data over the internet and in the cloud — can be used to remotely monitor and treat patients who have mild cases of the virus, as well as other health issues. However, this remote environment also comes with an array of privacy and security challenges.
“For health care systems, telehealth has emerged as a critical technology for safe and efficient communications between healthcare providers and patients, and accordingly, it’s vital to review the end-to-end architecture of a telehealth delivery system,” said Dr. Jim Angle, co-chair of CSA’s Health Information Management Working Group.
“A full analysis can help determine whether privacy and security vulnerabilities exist, what security controls are required for proper cybersecurity of the telehealth ecosystem, and if patient privacy protections are adequate.”
The HDO must understand regulations and technologies
With the increased use of telehealth in the cloud, HDOs must adequately and proactively address data, privacy, and security issues. The HDO cannot leave this up to the cloud service provider, as it is a shared responsibility. The HDO must understand regulatory requirements, as well as the technologies that support the system.
Regulatory mandates may span multiple jurisdictions, and requirements may include both the GDPR and HIPAA. Armed with the right information, the HDO can implement and maintain a secure and robust telehealth program.
With the dramatic shift toward remote workforces over the last three months, many organizations are relying more heavily on cloud tools and application suites. One of the most popular is Microsoft’s OneDrive.
While OneDrive may seem like a secure cloud storage solution for companies looking to use Microsoft’s suite of business tools, many glaring security issues can expose sensitive data and personally identifiable information (PII) if proper protection protocols are ignored. Data theft, data loss, ransomware, and compliance violations are just a few things that organizations need to watch for as their employees increasingly rely on this application to save more and more documents to the cloud.
While OneDrive does provide cloud storage, it doesn’t have cloud backup functionality, a critical distinction that must be made when choosing which information to upload and share. The data is accessible, but not protected. How can businesses ensure they’re mitigating security risks, while also enabling employee access? Below we’ll discuss some of the most significant security gaps associated with OneDrive and highlight the steps organizations can take to better protect their data.
One area that often breeds confusion for OneDrive users is who can access company files once they’re uploaded to the cloud. For employees saving documents on their personal accounts, all the files created or added outside of a “Shared with Me” folder are private until the user decides otherwise. At that point, files are accessible to no one but the creator and Microsoft personnel with administrative rights. For someone else to see your data, you have to share the folder or a separate file.
The same rule holds for files shared on a OneDrive for Business account, with one exception: a policy set by an administrator determines the visibility of the data you create in the “Shared” folder.
Are sensitive documents safe in OneDrive?
For purposes of this article, sensitive documents refer to materials that contain either personally identifiable information (PII), personal health information (PHI), financial information, or data covered under FISMA and GLBA compliance requirements. As we established above, these types of documents can be saved one of two ways – by an individual under a personal OneDrive account or uploaded under a Business account. Even if your business does not subscribe to a OneDrive business account, organizations should be aware that employees may be emailing themselves documents or sharing them to their personal OneDrive folders for easy access, especially over the past several months with most employees working from home.
For personal users, OneDrive has a feature called Personal Vault (PV). How secure is the OneDrive Personal Vault? It is a safe located in your Files folder explicitly designed for sensitive information.
When using PV, your files are encrypted until your identity is verified. It has several different verification methods that users can set up, whether it’s a fingerprint, a face ID, or a one-time code sent via email or SMS. The PV folder also has an idle-time screensaver that locks if you are inactive for 3 minutes on the mobile app, and 20 minutes on the web. To regain access, you need to verify yourself again.
Interestingly, the PV function isn’t available in the OneDrive for Business package. Therefore, if your organization has no other way to store sensitive data than on OneDrive, additional security measures must be taken.
OneDrive is not a backup solution
OneDrive is not a backup tool. OneDrive provides cloud storage, and there is a massive difference between cloud backup and cloud storage. They have a few things in common, like storing your files on remote hardware. But it’s not enough to make them interchangeable.
In short, cloud storage is a place in the cloud where you upload (manually or automatically) and keep all your files. Cloud storage allows you to reach files from any device at any time, making it an attractive option for workers on the go and those that work from different locations. It also allows you to manually restore files from storage in case of unwanted deletion and scale storage for your needs. While “restoring files” sounds eerily similar to backup protection, it has some fundamental faults. For example, if you accidentally delete a file in storage, or it was hit by ransomware and encrypted, you can consider the file lost. This makes OneDrive storage alone a weak solution for businesses. If disaster strikes and information is compromised, the organization will have no way to restore high volumes of data.
Cloud backup, on the other hand, is a service that uses cloud storage to save files, but its functionality doesn’t end there. Cloud backup services automatically copy your data to the storage area and restore your data relatively quickly after a disaster. You can also restore multiple versions of a backed-up file, search for specific files, and it protects data from most of the widespread threats, including accidental deletion, brute-force attacks, and ransomware.
In summary: cloud storage provides access, cloud backup provides protection.
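The storage-versus-backup distinction can be sketched with a toy versioned store. Plain cloud storage keeps only the latest bytes; a backup service retains history, so an encrypted file can be rolled back:

```python
from collections import defaultdict

class VersionedBackup:
    """Toy cloud backup: every save appends a version, so a file that
    ransomware overwrites can be rolled back to an earlier copy."""
    def __init__(self):
        self._versions = defaultdict(list)

    def save(self, name: str, data: bytes) -> None:
        self._versions[name].append(data)

    def restore(self, name: str, version: int = -1) -> bytes:
        """Default: latest copy; pass an index to roll back."""
        return self._versions[name][version]

store = VersionedBackup()
store.save("report.docx", b"original contents")
store.save("report.docx", b"\x00encrypted-by-ransomware\x00")
print(store.restore("report.docx"))     # latest (encrypted) copy
print(store.restore("report.docx", 0))  # rolled back to the original
```

With storage alone, only the last `save` would survive, which is why ransomware that reaches synced folders defeats storage but not backup.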
What are the most common OneDrive risks?
All the security issues tied with using OneDrive are common for most cloud storage services. Both individual OneDrive and OneDrive for Business have multiple risks, including data theft, data loss, corrupted data, and the inadvertent sharing of critical information. Given the ease of access to documents in OneDrive, compliance violations are also a top concern for organizations that deal with sensitive data.
How can you maximize OneDrive security?
To minimize the above security issues, organizations need to follow a set of strict protocols, including:
1. Device security protocols – Several general security protocols should be implemented with devices using OneDrive. Some of the most basic include mandatory downloading of antivirus software and ensuring it is current on all employee devices. Other steps include using a firewall, which will block all questionable inbound traffic, and activating idle-time screensaver passwords. As employees return from remote work locations and bring their devices back on-premises, it’s crucial to ensure all devices have updated security and meet the latest compliance requirements.
2. Network security protocols – In addition to using protected devices, employees should be especially cautious when connecting to any unsecured networks. Before connecting to a hotspot, instruct employees to make sure the connection is encrypted and never open OneDrive if the link is unfamiliar. Turning off the functionality that allows your computer to connect to in-range networks automatically is one easy way to add a layer of protection.
3. Protocols for secure sharing – Make sure to terminate OneDrive for Business access for any users who are no longer with the company. Having an employee offboarding process that includes this step lessens the risk of a former employee stealing documents or information. Make sure to allow access to only invited viewers on OneDrive. If you share a file or folder with “Everyone” or enable access with the link, it opens up new risks as anyone on the internet can find and access your document. It’s also helpful to have outlined rules for downloading and sharing documents inside, and outside, the corporation.
4. Secure sensitive data – Avoid storing any payment data in any Office 365 products. For other confidential documents, individual users can use PV. Organizations should store sensitive data only in a secure on-premises system or an encrypted third-party cloud backup service that complies with the data regulations mandatory for the organization.
5. Use a cloud backup solution – To best protect your company from all sides, it’s essential to use a cloud backup solution when saving valuable information to OneDrive. Make sure any backup solution you choose has cloud-to-cloud capabilities with automatic daily backup. In addition, a ransomware protection service that scans OneDrive and other Office 365 services for ransomware and automatically blocks attacks is your best defense against costly takeovers.
Whether it’s preparing for upcoming mandatory regulations or dealing with the sudden management of employees working offsite, the security landscape is ever-changing. Keeping up with the latest methods to keep your company both protected and compliant is a challenge that needs constant attention. With a few critical steps and the utilization of new technology, business users can protect themselves and lessen the risk to their data.
Nearly 80% of companies have experienced at least one cloud data breach in the past 18 months, and 43% reported 10 or more breaches, a new Ermetic survey reveals.
According to the 300 CISOs that participated in the survey, security misconfiguration (67%), lack of adequate visibility into access settings and activities (64%) and identity and access management (IAM) permission errors (61%) were their top concerns associated with cloud production environments.
Meanwhile, 80% reported they are unable to identify excessive access to sensitive data in IaaS/PaaS environments. Only hacking ranked higher than misconfiguration errors as a source of data breaches.
“Even though most of the companies surveyed are already using IAM, data loss prevention, data classification and privileged account management products, more than half claimed these were not adequate for protecting cloud environments,” said Shai Morag, CEO of Ermetic.
“In fact, two thirds cited cloud native capabilities for authorization and permission management, and security configuration as either a high or an essential priority.”
Excessive access permissions may go unnoticed
Driven by the dynamic and on-demand nature of public cloud infrastructure deployments, users and applications often accumulate access permissions beyond what is necessary for their legitimate needs.
Excessive permissions may go unnoticed as they are often granted by default when a new resource or service is added to the cloud environment. These are a primary target for attackers as they can be used for malicious activities such as stealing sensitive data, delivering malware or causing damage such as disrupting critical processes and business operations.
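An entitlement review for excessive permissions reduces to a set difference between what an identity is granted and what audit logs show it actually used. The permission strings below are illustrative AWS-style names, not from the survey:

```python
# Illustrative entitlement review: compare granted permissions with
# those observed in use, and flag the excess -- the unused grants are
# the attack surface that accumulates unnoticed.
granted = {"s3:GetObject", "s3:PutObject", "iam:PassRole", "kms:Decrypt"}
used = {"s3:GetObject"}

excessive = granted - used
print(sorted(excessive))  # ['iam:PassRole', 'kms:Decrypt', 's3:PutObject']
```

In practice the hard part is building the `used` set reliably from access logs over a long enough window; the arithmetic itself is this simple.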
As part of the study, IDC surveyed 300 senior IT decision makers in the US across the Banking (12%), Insurance (10%), Healthcare (11%), Government (8%), Utilities (9%), Manufacturing (10%), Retail (9%), Media (11%), Software (10%) and Pharmaceutical (10%) sectors. Organizations ranged in size from 1,500 to more than 20,000 employees.
Some of the report’s key findings include:
- 79% of companies experienced at least one cloud data breach in the past 18 months, and 43% said they had 10 or more
- Top three cloud security threats are security misconfiguration of production environments (67%), lack of visibility into access in production environments (64%) and improper IAM and permission configurations (61%)
- Top three cloud security priorities are compliance monitoring (78%), authorization and permission management (75%), and security configuration management (73%)
- Top cloud access security priorities are maintaining confidentiality of sensitive data (67%), regulatory compliance (61%) and providing the right level of access (53%)
- Top cloud access security challenges are insufficient personnel/expertise (66%), integrating disparate security solutions (52%) and lack of solutions that can meet their needs (39%)
A code injection vulnerability (CVE-2020-3956) affecting VMware vCloud Director could be exploited to take over the infrastructure of cloud services, Citadelo researchers have discovered.
About VMware vCloud Director and CVE-2020-3956
VMware Cloud Director (formerly known as vCloud Director) is a cloud service delivery platform used by public and private cloud providers to operate and manage cloud infrastructure.
CVE-2020-3956 was discovered by Citadelo penetration testers during a security audit of a customer’s VMware Cloud Director-based cloud infrastructure.
“An authenticated actor may be able to send malicious traffic to VMware Cloud Director which may lead to arbitrary remote code execution. This vulnerability can be exploited through the HTML5- and Flex-based UIs, the API Explorer interface and API access,” VMware explained in a security advisory published on May 19, after the company finished releasing patches for several versions of vCloud Director.
The researchers have provided more details about the vulnerability, explained how it can be exploited, and shared an exploit.
The damage attackers can do after exploiting the flaw is substantial. They can:
- View content of the internal system database, including password hashes of any customers allocated to this infrastructure
- Modify the system database to steal virtual machines (VMs) belonging to other organizations within Cloud Director
- Escalate privileges from “Organization Administrator” (normally a customer account) to “System Administrator” with access to all cloud accounts (organizations), as an attacker can change the hash for this account
- Modify the login page to Cloud Director, which allows the attacker to capture passwords of another customer in plaintext, including System Administrator accounts
- Read other sensitive data related to customers.
The vulnerability has been patched
The vulnerability was privately reported to VMware and was addressed in patches released in April and May.
VMware considers the flaw to be “important” and not “critical”, since an attacker must be authenticated in order to exploit CVE-2020-3956. But, as the researchers noted, “cloud providers offering a free trial to potential new customers using VMware Cloud Director are at high risk because an untrusted actor can quickly take advantage.”
Admins are advised to upgrade to vCloud Director versions 10.0.0.2, 9.7.0.5, 9.5.0.6 or 9.1.0.4 to plug the security hole. A workaround is also available for those that can’t upgrade to a recommended version (temporarily or ever).
VMware Cloud Director v10.1.0 and vCloud Director versions 9.0.x and 8.x are not affected by the flaw.
Currently, organizations are struggling to adjust to the new normal amidst the COVID-19 pandemic, a Bitglass survey reveals. 41% have not taken any steps to expand secure access for the remote workforce, and 50% cite a lack of proper equipment as the biggest impediment to doing so. Consequently, 65% of organizations now enable personal devices to access managed applications.